Let's consider a hypothetical use case:
– Limited undo availability: constrained either by disk space or by an undo quota
– A large table with many rows to remove or update
There are different ways to handle this situation, and your particulars may vary. Considerations may include creating a new table containing only the rows you want to keep, using parallel DML, and so on (AskTom covers several of these approaches).
My addition to these is a simple routine for purging a known set of data when there is a limit on the amount of undo that can be used.
Let's create a large table and prepare our test case.
-- Create large test table
create table my_large_table as (select * from all_objects);

-- run a few times to get a good size
insert into my_large_table (select * from my_large_table);
commit;

SQL> select count(*) from my_large_table;

  COUNT(*)
----------
   3699200
To view the undo used by the current transaction, we will use this query:
SELECT USED_UBLK,
       ( SELECT value/1024/1024
           FROM v$parameter
          WHERE name = 'db_block_size' ) * USED_UBLK as undoMB
  FROM sys.V$TRANSACTION;

-- to show usage details and undo space, this role may need to be granted
GRANT "SELECT_CATALOG_ROLE" TO "DB_USERNAME";
Delete Testing – All at once
This is an example of deleting a large number of rows and, before the commit, measuring the undo usage for that transaction.
SQL> delete from my_large_table where owner='PUBLIC';

1,949,696 rows deleted.

-- view undo currently reserved
SQL> @show_undo

 USED_UBLK     UNDOMB
---------- ----------
     55728    435.375
About 435 MB of undo is held for this single uncommitted transaction.
Because this has not been committed yet, we can roll back the delete and put those rows back into our table.
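A quick sanity check confirms the rollback restores the table to its original row count from the test setup above:

SQL> rollback;

Rollback complete.

SQL> select count(*) from my_large_table;

  COUNT(*)
----------
   3699200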
Delete Testing – In Chunks
Understanding your data is key. Assuming I want to delete all of those rows, and I don't care whether they are deleted all at once or in chunks, I can use a different style of removal that keeps the undo for each transaction small.
set serveroutput on;
declare
  l_undo_blocks number;
  l_undo_mb     number;
  l_row_limit   number := 10000;
begin
  -- seed sql%rowcount, which holds the number of rows acted upon by the last statement
  delete from my_large_table where owner='PUBLIC' and rownum <= 1;
  while sql%rowcount > 0
  loop
    -- show the amount of undo consumed - before the commit
    select used_ublk,
           ( select value/1024/1024
               from v$parameter
              where name = 'db_block_size' ) * used_ublk as undoMB
      into l_undo_blocks, l_undo_mb
      from v$transaction;
    dbms_output.put_line('Undo Blocks ='||l_undo_blocks||' Undo MB ='||l_undo_mb);
    commit;
    -- delete a chunk of rows using rownum as the limiter
    delete from my_large_table where owner='PUBLIC' and rownum <= l_row_limit;
  end loop;
end;
/
When you execute this block, you should see something similar to:
...
Undo Blocks =286 Undo MB =2.234375
Undo Blocks =288 Undo MB =2.25
Undo Blocks =287 Undo MB =2.2421875
...
In this case, each delete chunk uses about 2.2 MB of undo per transaction. This can be tuned by adjusting the row limit or the delete logic.
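Since the measured undo scales roughly with the number of rows per chunk, the chunk size can be resized toward a per-transaction undo budget. A minimal sketch; the target budget and the simple linear scaling are my assumptions, not part of the original routine:

declare
  l_undo_mb   number;
  l_row_limit number := 10000;
  l_target_mb number := 20;  -- assumed per-transaction undo budget (hypothetical)
begin
  -- delete one trial chunk and measure its undo before committing
  delete from my_large_table where owner='PUBLIC' and rownum <= l_row_limit;
  select ( select value/1024/1024
             from v$parameter
            where name = 'db_block_size' ) * used_ublk
    into l_undo_mb
    from v$transaction;
  -- undo per row is roughly l_undo_mb / l_row_limit; scale the next chunk to the budget
  l_row_limit := trunc(l_row_limit * l_target_mb / l_undo_mb);
  dbms_output.put_line('Suggested chunk size: '||l_row_limit);
  commit;
end;
/

For example, at ~2.2 MB per 10,000 rows, a 20 MB budget suggests chunks of roughly 90,000 rows.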
Does this mean that undo in the tablespace will not be used? No. Undo also supports flashback, so large deletes or updates will still impact flashback usage. But this approach does reduce the amount of undo held by each individual transaction.
If you happen to be on a system that limits your quota usage of undo, you may run into an error as follows:
insert into my_list (select * from my_large_table)

Error report -
ORA-30027: Undo quota violation - failed to get 172 (bytes)
This quota is managed through Database Resource Manager plans.
Contact your system administrator to discuss how those limits are decided and managed.
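For context, an undo quota is set with the UNDO_POOL directive of DBMS_RESOURCE_MANAGER. A minimal sketch of what a DBA might run; the plan and consumer group names are hypothetical and assumed to already exist:

begin
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.create_plan_directive(
    plan             => 'BATCH_PLAN',    -- hypothetical plan name
    group_or_subplan => 'BATCH_GROUP',   -- hypothetical consumer group
    comment          => 'cap undo for batch sessions',
    undo_pool        => 102400);         -- quota in KB (here, 100 MB)
  dbms_resource_manager.submit_pending_area();
end;
/

Sessions in that consumer group that exceed the pool will receive ORA-30027, as shown above.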