This section provides some complete examples of where you might use autonomous transactions in your applications.
A very common requirement in applications is to keep a log of errors that occur during transaction processing. The most convenient repository for this log is a database table; with a table, all the information is retained in the database and you can use SQL to retrieve and analyze the log.
One problem with a database table log, however, is that entries in the log become a part of your transaction. If you perform (or have performed to you) a ROLLBACK, you can easily erase your log. How frustrating! Now, it is true that you can get fancy and use savepoints to preserve your log entries while cleaning up your transaction, but that approach is not only fancy, it is complicated. With autonomous transactions, however, logging becomes simpler, more manageable, and less error prone.
Suppose I have a log table defined as follows:
/* Filename on companion disk: log81.pkg */
CREATE TABLE log81tab (
   code INTEGER,
   text VARCHAR2(4000),
   created_on DATE,
   created_by VARCHAR2(100),
   changed_on DATE,
   changed_by VARCHAR2(100),
   machine VARCHAR2(100),
   program VARCHAR2(100)
   );
I can use it to store errors (SQLCODE and SQLERRM) that have occurred, or even use it for non-error-related logging. The machine and program columns record information available from the virtual V$SESSION table, as you will see.
So I have my table. Now, how should I write to my log? Here's what you should not do:
EXCEPTION
   WHEN OTHERS
   THEN
      v_code := SQLCODE;
      v_msg := SQLERRM;
      INSERT INTO log81tab VALUES (
         v_code, v_msg, SYSDATE, USER,
         SYSDATE, USER, NULL, NULL);
END;
In other words, never expose your underlying logging mechanism by explicitly inserting into it in your exception sections and other locations. Instead, you should build a layer of code around the table (this is known as encapsulation). Why do this? Two reasons:
If you ever change your table's structure, all those uses of the log table will not be disrupted.
People will be able to use the log table in a much easier, more consistent manner.
So here is my very simple logging package. It consists of two procedures:
CREATE OR REPLACE PACKAGE log81
IS
   PROCEDURE putline (
      code_in IN INTEGER, text_in IN VARCHAR2);

   PROCEDURE saveline (
      code_in IN INTEGER, text_in IN VARCHAR2);
END;
/
What is the difference between putline and saveline? The log81.saveline procedure (as you will see in the package body) is an autonomous transaction routine, whereas log81.putline simply performs the insert. Here is the package body:
/* Filename on companion disk: log81.pkg */
CREATE OR REPLACE PACKAGE BODY log81
IS
   CURSOR sess IS
      SELECT MACHINE, PROGRAM
        FROM V$SESSION
       WHERE AUDSID = USERENV ('SESSIONID');
   rec sess%ROWTYPE;

   PROCEDURE putline (
      code_in IN INTEGER, text_in IN VARCHAR2)
   IS
   BEGIN
      INSERT INTO log81tab VALUES (
         code_in, text_in,
         SYSDATE, USER, SYSDATE, USER,
         rec.machine, rec.program);
   END;

   PROCEDURE saveline (
      code_in IN INTEGER, text_in IN VARCHAR2)
   IS
      PRAGMA AUTONOMOUS_TRANSACTION;
   BEGIN
      putline (code_in, text_in);
      COMMIT;
   EXCEPTION
      WHEN OTHERS THEN ROLLBACK;
   END;

BEGIN
   OPEN sess;
   FETCH sess INTO rec;
   CLOSE sess;
END;
/
Here are some comments on this implementation that you might find helpful:
I obtain some useful information from V$SESSION when the package is initialized (the values are not going to change during my session, so I should only query it once) and incorporate that into the log.
The putline procedure performs the straight insert. You would probably want to add some exception handling to this program if you applied this idea in your production application (one possible approach is sketched after these notes).
The saveline procedure calls the putline procedure (I don't want any redundant code), but does so from within the context of an autonomous transaction.
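For example, here is one way putline (inside the package body, where it can still see the rec record) might guard itself. This is just a sketch of one approach, not the code shipped in log81.pkg:

PROCEDURE putline (
   code_in IN INTEGER, text_in IN VARCHAR2)
IS
BEGIN
   INSERT INTO log81tab VALUES (
      code_in,
      SUBSTR (text_in, 1, 4000),  -- guard against text too long for the column
      SYSDATE, USER, SYSDATE, USER,
      rec.machine, rec.program);
EXCEPTION
   WHEN OTHERS
   THEN
      /* Sketch only: swallow (or route elsewhere) a logging failure so
         it never masks the application error that is being logged. */
      NULL;
END;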
With this package in place, my error handler shown earlier can be as simple as this:
EXCEPTION
   WHEN OTHERS
   THEN
      log81.saveline (SQLCODE, SQLERRM);
END;
No muss, no fuss; developers don't have to concern themselves with the structure of the log table. They don't even have to know they are writing to a database table. And because I have used an autonomous transaction, they can rest assured that no matter what happens in their application, the log entry has been saved.
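To see that guarantee in action, here is a small test block. It is only a sketch: the employee table is a placeholder for whatever "real" application DML your transaction performs.

BEGIN
   -- Hypothetical application work; "employee" is a placeholder table.
   UPDATE employee SET salary = salary * 1.1;

   -- Simulate a failure partway through the transaction.
   RAISE PROGRAM_ERROR;
EXCEPTION
   WHEN OTHERS
   THEN
      log81.saveline (SQLCODE, SQLERRM);
      ROLLBACK;   -- undoes the UPDATE, but not the log entry
END;
/

-- saveline committed its INSERT in an autonomous transaction,
-- so the error is still recorded:
SELECT code, text FROM log81tab;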
The grand new benefit of autonomous transactions for database triggers is that inside those triggers you can now issue COMMITs and ROLLBACKs, statements that are otherwise not allowed in database triggers. The changes you commit and roll back will not, however, affect the main transaction that caused the database trigger to fire. They will only apply to DML activity taking place inside the trigger itself (or through stored program units called within the trigger).
Why would this be of value to you? You may want to take an action in the database trigger that is not affected by the ultimate disposition of the transaction that caused the trigger to fire. For example, suppose that you want to keep track of each action against a table, whether or not the action completed. You might even want to be able to detect which actions failed. Let's see how you can use autonomous transactions to do this.
First, let's construct a simple autonomous transaction trigger on the ceo_compensation table that writes a simple message to the following ceo_comp_history table. Here are the two table definitions:
/* Filename on companion disk: autontrigger.sql */
CREATE TABLE ceo_compensation (
   company VARCHAR2(100),
   name VARCHAR2(100),
   compensation NUMBER,
   layoffs NUMBER);

CREATE TABLE ceo_comp_history (
   name VARCHAR2(100),
   description VARCHAR2(255),
   occurred_on DATE);
Here is the before-insert trigger that writes to the history table:
CREATE OR REPLACE TRIGGER bef_ins_ceo_comp
BEFORE INSERT ON ceo_compensation FOR EACH ROW
DECLARE
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   INSERT INTO ceo_comp_history VALUES (
      :new.name, 'BEFORE INSERT', SYSDATE);
   COMMIT;
END;
/
With this trigger in place, I can now be certain to track every insert attempt, as shown below:
BEGIN
   INSERT INTO ceo_compensation VALUES (
      'Mattel', 'Jill Barad', 9100000, 2700);

   INSERT INTO ceo_compensation VALUES (
      'American Express Company', 'Harvey Golub', 33200000, 3300);

   INSERT INTO ceo_compensation VALUES (
      'Eastman Kodak', 'George Fisher', 10700000, 20100);

   ROLLBACK; --I wish!
END;
/

SELECT name, description,
       TO_CHAR (occurred_on, 'MM/DD/YYYY HH:MI:SS') occurred_on
  FROM ceo_comp_history;

NAME                 DESCRIPTION           OCCURRED_ON
-------------------- --------------------- -------------------
Jill Barad           BEFORE INSERT         03/17/1999 04:00:56
Harvey Golub         BEFORE INSERT         03/17/1999 04:00:56
George Fisher        BEFORE INSERT         03/17/1999 04:00:56
You will find in the autontrigger.sql script all the statements needed to create these objects and run your own test. You can even add your CEO's name to the series of INSERTs if he or she fits the bill.
But there is something of a problem with the trigger I just defined. I defined the trigger as an autonomous transaction because I performed the audit INSERT (and its COMMIT) in the body of the trigger. But what if I want to perform some additional DML for the main transaction here in the trigger? It won't be rolled back with the rest of the transaction (if a rollback occurs). That won't do at all, from the perspective of data integrity.
Generally, I would recommend that you not make a database trigger itself the autonomous transaction. Instead, push all of the independent DML activity (such as writing to the audit or history table) into its own procedure. Make that procedure the autonomous transaction. Have the trigger call the procedure.
The autontrigger2.sql script contains the following reworking of the database trigger. First, I create the audit procedure:
/* Filename on companion disk: autontrigger2.sql */
CREATE OR REPLACE PROCEDURE audit_ceo_comp (
   name IN VARCHAR2,
   description IN VARCHAR2,
   occurred_on IN DATE
   )
IS
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   INSERT INTO ceo_comp_history VALUES (
      audit_ceo_comp.name,
      audit_ceo_comp.description,
      audit_ceo_comp.occurred_on);
   COMMIT;
END;
/
Then I change the trigger to the following:
CREATE OR REPLACE TRIGGER aft_ins_ceo_comp
AFTER INSERT ON ceo_compensation FOR EACH ROW
DECLARE
   ok BOOLEAN := is_valid_comp_info (:NEW.name);
BEGIN
   IF ok
   THEN
      audit_ceo_comp (
         :new.name, 'AFTER INSERT', SYSDATE);
   ELSE
      RAISE VALUE_ERROR;
   END IF;
END;
/
Note the following differences:
The trigger is now an after-insert trigger, rather than a before-insert trigger. I want to wait until after the INSERT to the compensation table takes place. Then I will perform my audit.
When the is_valid_comp_info function returns FALSE, I will not even perform an audit. Instead, I will stop the transaction by raising an error. This demonstrates the other reason you don't want the trigger itself to be autonomous. In some situations, I always want to perform my audit. Under other circumstances, however, I may want to stop my main transaction by raising an exception. I can't have both of those events happen if the exception is raised in the same block and transaction as the audit DML.
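The is_valid_comp_info function itself is not shown here. A minimal sketch of such a validation function might look like the following; the specific rule it applies is purely hypothetical:

CREATE OR REPLACE FUNCTION is_valid_comp_info (name IN VARCHAR2)
   RETURN BOOLEAN
IS
BEGIN
   /* Hypothetical rule for this sketch: a non-NULL name of a
      reasonable length is considered valid. */
   RETURN name IS NOT NULL
      AND LENGTH (name) <= 100;
END;
/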
As you take advantage of the new autonomous transaction pragma, plan out how you will be using these new code elements. You will almost always be better off hiding the details of your new, independent transactions behind a procedural interface.
Suppose that you want to let a user try to get access to a resource (a file, a row of data, etc.) N times before an outright rejection. You also want to keep track of attempts between connections to the database. An autonomous transaction is a perfect fit: each attempt must be committed as soon as it is recorded, independently of whatever the main transaction ends up doing.
You will find a simple prototype of a retry mechanism in the retry.pkg file on the companion disk. This mechanism allows you to specify the "item" on which you are placing a limit and keeping track of attempts. These limits are maintained for each unique username. Here is the specification of this package:
/* Filename on companion disk: retry.pkg */
CREATE OR REPLACE PACKAGE retry
IS
   PROCEDURE incr_attempts (item IN VARCHAR2);

   PROCEDURE set_limit (item IN VARCHAR2, limit IN INTEGER);

   FUNCTION limit (item IN VARCHAR2) RETURN INTEGER;

   FUNCTION limit_reached (item IN VARCHAR2) RETURN BOOLEAN;

   PROCEDURE clear_attempts (item IN VARCHAR2);

   FUNCTION attempts (item IN VARCHAR2) RETURN INTEGER;

   FUNCTION attempts_left (item IN VARCHAR2) RETURN INTEGER;

   FUNCTION attempted_at (item IN VARCHAR2) RETURN DATE;

   PROCEDURE show_retries (item IN VARCHAR2 := '%');
END retry;
/
The programs are self-explanatory; the implementations are also very straightforward. Here, for example, is the implementation of the procedure that lets you increment the number of attempts. Notice the COMMITs and ROLLBACKs; these are required, since I have used the autonomous transaction pragma.
PROCEDURE incr_attempts (item IN VARCHAR2)
IS
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   INSERT INTO retry_counter VALUES (
      USER, incr_attempts.item, SYSDATE, 1);
   COMMIT;
EXCEPTION
   WHEN DUP_VAL_ON_INDEX
   THEN
      UPDATE retry_counter
         SET last_attempt = SYSDATE,
             tries = tries + 1
       WHERE username = USER
         AND item = incr_attempts.item;
      COMMIT;

   WHEN OTHERS
   THEN
      ROLLBACK;
      RAISE;
END;
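The retry_counter table on which this procedure relies is created elsewhere in retry.pkg and is not shown above. Based on the INSERT and UPDATE in incr_attempts, a definition along these lines would work; this is only a sketch, and the exact datatypes and constraint name in the file may differ:

CREATE TABLE retry_counter (
   username     VARCHAR2(100),
   item         VARCHAR2(100),
   last_attempt DATE,
   tries        INTEGER,
   CONSTRAINT retry_counter_pk
      PRIMARY KEY (username, item)  -- needed for DUP_VAL_ON_INDEX to fire
   );

The per-item limits recorded by set_limit would presumably live in a similar structure; see retry.pkg for the actual definitions.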
Here is a script based on the "three strikes" law in California that exercises the package:
/* Filename on companion disk: retry.tst */
DECLARE
   law_and_order VARCHAR2(20) := 'law_and_order';

   TYPE string_t IS TABLE OF VARCHAR2(50);

   crime string_t := string_t (
      'Steal car at age 14',
      'Caught with a joint at 17',
      'Steal pack of cigarettes at age 42'
      );
BEGIN
   retry.set_limit (law_and_order, 2);

   FOR indx IN crime.FIRST .. crime.LAST
   LOOP
      DBMS_OUTPUT.PUT_LINE (crime(indx));

      IF retry.limit_reached (law_and_order)
      THEN
         DBMS_OUTPUT.PUT_LINE (
            '...Spend rest of life in prison');
      ELSE
         DBMS_OUTPUT.PUT_LINE (
            '...Receive punishment that fits the crime');
         retry.incr_attempts (law_and_order);
      END IF;
   END LOOP;
END;
/
Here is the output in SQL*Plus:
SQL> @retry.tst
Steal car at age 14
...Receive punishment that fits the crime
Caught with a joint at 17
...Receive punishment that fits the crime
Steal pack of cigarettes at age 42
...Spend rest of life in prison
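Because each call to incr_attempts commits in its own autonomous transaction, the counts persist across sessions. A short follow-up block like this sketch, using only programs declared in the package specification shown earlier, lets you inspect and then reset that state:

BEGIN
   -- Report the attempts recorded so far for this item.
   retry.show_retries ('law_and_order');

   DBMS_OUTPUT.PUT_LINE (
      'Attempts left: ' || retry.attempts_left ('law_and_order'));

   -- Start the count over for this item.
   retry.clear_attempts ('law_and_order');
END;
/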