During my current Exadata course in Munich, we discussed Hybrid Columnar Compression (HCC) and how to estimate the size of tables after compression. As a follow-up to my previous posting that introduced HCC, I have prepared this little script that calls DBMS_COMPRESSION. The documentation typically provides an example, but not in this case, and my brief web research did not turn up a (simple) example either.
Please note that you can use this script on a non-Exadata Oracle Database (from 11.2 onward) to get an estimate not only for BASIC and OLTP compression but also for HCC. The estimate is quite good, but it consumes a lot of space – roughly as much as the tables you are estimating. It may therefore be advisable to use a scratch tablespace created only for that purpose and drop it afterwards.
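The scratch tablespace mentioned above could be set up and torn down like this (a minimal sketch; the tablespace name, datafile path, and size are examples, not from the posting):

```sql
-- Create a throwaway tablespace for the Compression Advisor to work in
create tablespace scratch_tbs
  datafile '/u01/oradata/ORCL/scratch_tbs01.dbf' size 10g autoextend on;

-- ... run the DBMS_COMPRESSION script, supplying SCRATCH_TBS when prompted ...

-- Remove the tablespace and its datafile when the estimation is done
drop tablespace scratch_tbs including contents and datafiles;
```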
SQL> set serveroutput on
SQL> declare
       v_blkcnt_cmp    pls_integer;
       v_blkcnt_uncmp  pls_integer;
       v_row_cmp       pls_integer;
       v_row_uncmp     pls_integer;
       v_cmp_ratio     number;
       v_comptype_str  varchar2(60);
     begin
       dbms_compression.get_compression_ratio(
         scratchtbsname => upper('&ScratchTBS'),
         ownname        => user,
         tabname        => upper('&TableName'),
         partname       => NULL,
         comptype       => dbms_compression.comp_for_query_high,
         blkcnt_cmp     => v_blkcnt_cmp,
         blkcnt_uncmp   => v_blkcnt_uncmp,
         row_cmp        => v_row_cmp,
         row_uncmp      => v_row_uncmp,
         cmp_ratio      => v_cmp_ratio,
         comptype_str   => v_comptype_str);
       dbms_output.put_line('Estimated Compression Ratio: '||to_char(v_cmp_ratio));
       dbms_output.put_line('Blocks used by compressed sample: '||to_char(v_blkcnt_cmp));
       dbms_output.put_line('Blocks used by uncompressed sample: '||to_char(v_blkcnt_uncmp));
     end;
     /
Addendum: See Jonathan Lewis' nice cheat to keep the HCC compressed table that gets created under the covers during the evaluation – even if you are NOT running DBMS_COMPRESSION on Exadata!
Second addendum: This posting was published in the current Oracle University Newsletter.
#1 by Bhavik Desai on September 13, 2011 - 06:50
Thanks for the article Uwe..
I used this PL/SQL block on one of my non-compressed OBIA objects on an Exadata platform, a partitioned table sized at 67 GB.
SQL> SELECT TABLE_NAME,PARTITIONED,COMPRESSION,COMPRESS_FOR,blocks,num_rows,sample_size FROM DBA_TABLES WHERE TABLE_NAME='W_AR_XACT_F' AND OWNER='OBIA_PRD';
TABLE_NAME   PAR COMPRESS COMPRESS_FOR     BLOCKS   NUM_ROWS SAMPLE_SIZE
------------ --- -------- ------------ ---------- ---------- -----------
W_AR_XACT_F  YES                          8713869  623831659   623831659
It ran for almost 45 minutes and gave me the output below.
Compression Advisor self-check validation successful. select count(*) on both Uncompressed and EHCC Compressed format = 1000001 rows
Estimated Compression Ratio: 12.6
Blocks used by compressed sample: 6345
Blocks used by uncompressed sample: 80395
PL/SQL procedure successfully completed.
Can you help me understand what exactly it means ?
#2 by Uwe Hesse on September 13, 2011 - 08:54
It means that with HCC (compress for query high, if you used the exact same call as in the posting), your table will be about 12 times smaller – about 6 GB then. That ratio was derived by dividing the number of blocks in the uncompressed sample (80395) by the number of blocks the sampled data occupied after compression (6345). This is a typical ratio, by the way.
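The size projection above can be done directly in SQL (a sketch; it applies the advisor's reported ratio to the current segment size, using the owner and table name from the comment above):

```sql
-- Divide the current segment size by the estimated compression ratio
select round(sum(bytes)/1024/1024/1024 / 12.6, 2) as est_gb_after_hcc
  from dba_segments
 where owner = 'OBIA_PRD'
   and segment_name = 'W_AR_XACT_F';
```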
#3 by baskardba on May 8, 2012 - 11:16
1. Compression Ratio: 20.1
2. Block Count: 3696
3. Compression Type: "Compress For Archive High"
4. Blk Count Compressed: 3696
5. Blk Count Un-compressed: 74334
6. Row Count Compressed: 271
7. Row Count Un-Compressed: 157
Table size is 35 GB. Does the above compression ratio of 20.1 mean the table will become 20 times smaller when compressed? Is that right – from 35 GB to around 3 GB?
#4 by Uwe Hesse on May 8, 2012 - 11:52
Yes, that is true. An impressive ratio, but not so unlikely when your table contains a lot of redundancy, such as many similar strings – and Archive High is the option that compresses most strongly.
#5 by Horia Berca on August 23, 2013 - 18:26
It is worth mentioning that when specifying DBMS_COMPRESSION.comp_for_query_high (or *_low) in the get_compression_ratio procedure in a NON-Exadata system, you may hit ORA-20000: Compression Advisor must have at least 1000000 rows in this table/partition segment (segment has … rows).
So in this case, we should use another type, for example COMP_FOR_OLTP, as the compression type to avoid the error.
This is documented in Support Note 1506214.1.
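The workaround above amounts to a one-parameter change to the script from the posting (a sketch; only the comptype differs):

```sql
declare
  v_blkcnt_cmp    pls_integer;
  v_blkcnt_uncmp  pls_integer;
  v_row_cmp       pls_integer;
  v_row_uncmp     pls_integer;
  v_cmp_ratio     number;
  v_comptype_str  varchar2(60);
begin
  dbms_compression.get_compression_ratio(
    scratchtbsname => upper('&ScratchTBS'),
    ownname        => user,
    tabname        => upper('&TableName'),
    partname       => NULL,
    comptype       => dbms_compression.comp_for_oltp,  -- instead of comp_for_query_high
    blkcnt_cmp     => v_blkcnt_cmp,
    blkcnt_uncmp   => v_blkcnt_uncmp,
    row_cmp        => v_row_cmp,
    row_uncmp      => v_row_uncmp,
    cmp_ratio      => v_cmp_ratio,
    comptype_str   => v_comptype_str);
  dbms_output.put_line('Estimated Compression Ratio: '||to_char(v_cmp_ratio));
end;
/
```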
#6 by Uwe Hesse on August 23, 2013 - 18:47
Thank you for your input! It seems that MOS Note 1506214.1 is a bit outdated in this respect, though, because since 18.104.22.168 (if I recall correctly), HCC compression types ARE supported by the get_compression_ratio procedure even on a non-Exadata (or Pillar or ZFS) system.
#7 by Edmilson on February 26, 2014 - 14:58
Hi Mr. Uwe, thanks for your article. I have been using dbms_compression.get_compression_ratio until today and I had never had problems, but today I got this error message:
ORA-00933: command not properly ended
ORA-06512: at "SYS.PRVT_COMPRESSION", line 776
ORA-06512: at "SYS.DBMS_COMPRESSION", line 214
Have you ever seen this message? Could you help me?
#8 by Amit on September 17, 2014 - 15:32
Is there any script/package that can estimate the decompression ratio for already compressed tables? I didn't find any dbms_compression.get_decompression_ratio procedure.
What would be the best way to get an estimate for decompression?
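There is no such advisor procedure, but a rough estimate could be made by hand (a hypothetical sketch, not a supplied package: copy a sample of the compressed table into an uncompressed table and compare segment sizes; all names and the sample percentage are examples):

```sql
-- Uncompressed copy of a 5 % sample of the compressed table
create table scratch_uncmp nocompress as
  select * from compressed_tab sample (5);

-- Compare the sample copy (scaled up by 20x) with the compressed original
select segment_name, round(bytes/1024/1024) as mb
  from user_segments
 where segment_name in ('COMPRESSED_TAB', 'SCRATCH_UNCMP');

drop table scratch_uncmp purge;
```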
#9 by Chetan Varma on May 15, 2019 - 10:54
Can you please let me know how I can get this output in HTML format, or else store this PL/SQL output in a table?
Thanks in advance.
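One way to store the results in a table is to replace the dbms_output calls with an INSERT (a sketch; the table name and columns are examples, not from the posting):

```sql
create table compression_estimates (
  table_name    varchar2(128),
  cmp_ratio     number,
  blkcnt_cmp    number,
  blkcnt_uncmp  number,
  run_date      date default sysdate
);

declare
  v_blkcnt_cmp    pls_integer;
  v_blkcnt_uncmp  pls_integer;
  v_row_cmp       pls_integer;
  v_row_uncmp     pls_integer;
  v_cmp_ratio     number;
  v_comptype_str  varchar2(60);
begin
  dbms_compression.get_compression_ratio(
    scratchtbsname => upper('&ScratchTBS'),
    ownname        => user,
    tabname        => upper('&TableName'),
    partname       => NULL,
    comptype       => dbms_compression.comp_for_query_high,
    blkcnt_cmp     => v_blkcnt_cmp,
    blkcnt_uncmp   => v_blkcnt_uncmp,
    row_cmp        => v_row_cmp,
    row_uncmp      => v_row_uncmp,
    cmp_ratio      => v_cmp_ratio,
    comptype_str   => v_comptype_str);
  insert into compression_estimates
    (table_name, cmp_ratio, blkcnt_cmp, blkcnt_uncmp)
  values
    (upper('&TableName'), v_cmp_ratio, v_blkcnt_cmp, v_blkcnt_uncmp);
  commit;
end;
/
```

For HTML output, SQL*Plus can render query results as HTML via `set markup html on spool on` before spooling, which would then apply to a SELECT against such a results table.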