During my current Exadata course in Munich, we discussed Hybrid Columnar Compression (HCC) and how to estimate the size of tables after compression. As a follow-up to my previous posting that introduced HCC, I have prepared this little script that calls DBMS_COMPRESSION. Usually the documentation gives an example for such a call, but not in this case, and a brief web search did not turn up a simple example either.

Please note that you can use this script on a non-Exadata Oracle Database (from 11.2 onwards) to get an estimate not only for BASIC and OLTP compression but also for HCC. The estimate is quite accurate, but it temporarily needs a lot of space: roughly as much as the tables you are estimating. It may therefore be advisable to create a scratch tablespace just for this purpose and drop it afterwards.
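Creating such a dedicated scratch tablespace might look like the following sketch. The tablespace name, datafile path, and size are placeholder assumptions; adjust them to your environment.

-- Hypothetical scratch tablespace for the Compression Advisor;
-- size it at least as large as the table you want to estimate.
CREATE TABLESPACE scratchtbs
  DATAFILE '/u01/app/oracle/oradata/orcl/scratchtbs01.dbf' SIZE 10G;

-- ... run the estimation script below, passing SCRATCHTBS when prompted ...

-- Drop it again, including its datafile, when you are done.
DROP TABLESPACE scratchtbs INCLUDING CONTENTS AND DATAFILES;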

SQL> set serveroutput on
SQL> declare
 v_blkcnt_cmp     pls_integer;
 v_blkcnt_uncmp   pls_integer;
 v_row_cmp        pls_integer;
 v_row_uncmp      pls_integer;
 v_cmp_ratio      number;
 v_comptype_str   varchar2(60);
begin
 dbms_compression.get_compression_ratio (
  scratchtbsname   => upper('&ScratchTBS'),
  ownname          => user,
  tabname          => upper('&TableName'),
  partname         => NULL,
  comptype         => dbms_compression.comp_for_query_high,
  blkcnt_cmp       => v_blkcnt_cmp,
  blkcnt_uncmp     => v_blkcnt_uncmp,
  row_cmp          => v_row_cmp,
  row_uncmp        => v_row_uncmp,
  cmp_ratio        => v_cmp_ratio,
  comptype_str     => v_comptype_str);
 dbms_output.put_line('Estimated Compression Ratio: '||to_char(v_cmp_ratio));
 dbms_output.put_line('Blocks used by compressed sample: '||to_char(v_blkcnt_cmp));
 dbms_output.put_line('Blocks used by uncompressed sample: '||to_char(v_blkcnt_uncmp));
end;
/
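If you want to estimate for a different compression type, you can replace comp_for_query_high in the call above with one of the other DBMS_COMPRESSION constants, for example:

 comptype => dbms_compression.comp_for_oltp,          -- OLTP compression
 comptype => dbms_compression.comp_for_query_low,     -- HCC query low
 comptype => dbms_compression.comp_for_archive_low,   -- HCC archive low
 comptype => dbms_compression.comp_for_archive_high,  -- HCC archive high

Everything else in the script stays the same; comptype_str will report the type you chose.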

Addendum: See Jonathan Lewis’ nice trick for keeping the HCC-compressed table that gets created under the covers during the evaluation, even if you are NOT running DBMS_COMPRESSION on Exadata!

Second addendum: This posting was published in the current Oracle University Newsletter.



  1. #1 by Bhavik Desai on September 13, 2011 - 06:50

    Thanks for the article, Uwe.
    I used this PL/SQL block for one of my non-compressed OBIA objects on an Exadata platform, a partitioned table sized at 67 GB:


    W_AR_XACT_F YES 8713869 623831659 623831659

    It ran for almost 45 minutes and gave me the output below.
    Compression Advisor self-check validation successful. select count(*) on both Uncompressed and EHCC Compressed format = 1000001 rows
    Estimated Compression Ratio: 12.6
    Blocks used by compressed sample: 6345
    Blocks used by uncompressed sample: 80395

    PL/SQL procedure successfully completed.

    Can you help me understand what exactly it means?

  2. #2 by Uwe Hesse on September 13, 2011 - 08:54

    It means that with HCC (Compress For Query High, if you used the exact same call as in the posting), your table will become about 12 times smaller, about 6 GB then. That ratio was derived from the number of blocks in the uncompressed sample (80395) divided by the number of blocks the sampled rows took after compression (6345): 80395 / 6345 ≈ 12.6. This is a typical ratio, by the way.

  3. #3 by baskardba on May 8, 2012 - 11:16

    1. Compression Ratio :20.1
    2. Block Count :3696
    3. Compression Type :”Compress For Archive High”
    4. Blk Count Compressed :3696
    5. Blk Count Un-compressed:74334
    6. Row Count Compressed :271
    7. Row Count Un-Compressed:157

    The table size is 35 GB. The above compression ratio of 20.1 means the table will become about 20 times smaller when compressed. Is that right? From 35 GB to around 3 GB?

  4. #4 by Uwe Hesse on May 8, 2012 - 11:52

    Yes, that is true. An impressive ratio, but not so unlikely when your table has a lot of redundancy in it, like many similar strings, and Archive High is the option that compresses the strongest.

  5. #5 by Horia Berca on August 23, 2013 - 18:26

    It is worth mentioning that when specifying DBMS_COMPRESSION.comp_for_query_high (or *_low) in the get_compression_ratio procedure on a NON-Exadata system, you may hit ORA-20000: Compression Advisor must have at least 1000000 rows in this table/partition segment (segment has … rows).
    So in this case, we should use another type, for example COMP_FOR_OLTP, as the compression type to avoid the error.
    This is documented in Support Note 1506214.1.
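    [Blog author's note] Under that restriction, the workaround sketched in the comment amounts to changing only one line in the script from the posting, leaving everything else as is:

     comptype => dbms_compression.comp_for_oltp,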


  6. #6 by Uwe Hesse on August 23, 2013 - 18:47

    Thank you for your engagement! It seems that MOS Note 1506214.1 is a bit outdated in this respect, though, because (if I recall correctly) HCC compression types ARE meanwhile supported by the get_compression_ratio procedure even on a non-Exadata (or Pillar or ZFS) system.

  7. #7 by Edmilson on February 26, 2014 - 14:58

    Hi Mr. Uwe, thanks for your article. I have been using dbms_compression.get_compression_ratio until today and I never had problems, but today I got this error message:
    ORA-00933: command not properly ended
    ORA-06512: at "SYS.PRVT_COMPRESSION", line 776
    ORA-06512: at "SYS.DBMS_COMPRESSION", line 214

    Have you ever seen this message? Could you help me?

