Histograms give the optimizer accurate estimates of the distribution of column data. A histogram partitions the values in the column into bands, so that all column values in a band fall within the same range. Histograms provide improved selectivity estimates in the presence of data skew, resulting in better execution plans for nonuniform data distributions (an excellent summary from the product documentation). Here is a test to demonstrate that.
test@9iR2> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
PL/SQL Release 9.2.0.8.0 - Production
CORE 9.2.0.8.0 Production
TNS for 32-bit Windows: Version 9.2.0.8.0 - Production
NLSRTL Version 9.2.0.8.0 - Production
test@9iR2> create table t
2 as
3 select case when rownum = 1 then 1 else 99 end as id,
4 a.*
5 from all_objects a;
Table created.
test@9iR2> create index t_ind on t(id);
Index created.
We created a table T with data skewed considerably on column ID.
test@9iR2> begin
2 dbms_stats.gather_table_stats(ownname=>USER,
3 tabname=>'T',
4 cascade=>true,
5 method_opt=>null);
6 end;
7 /
PL/SQL procedure successfully completed.
Statistics are gathered on table T and index T_IND, but no histograms (method_opt => null). So, looking at the plans:
test@9iR2> set autotrace traceonly explain;
test@9iR2> select *
2 from t
3 where id = 1;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=46 Card=16598 Bytes=1576810)
1 0 TABLE ACCESS (FULL) OF 'T' (Cost=46 Card=16598 Bytes=1576810)
test@9iR2> select *
2 from t
3 where id = 99;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=46 Card=16598 Bytes=1576810)
1 0 TABLE ACCESS (FULL) OF 'T' (Cost=46 Card=16598 Bytes=1576810)
test@9iR2> set autotrace off;
test@9iR2> begin
2 dbms_stats.gather_table_stats(ownname=>USER,
3 tabname=>'T',
4 cascade=>true,
5 method_opt=>' for all indexed columns size auto');
6 end;
7 /
PL/SQL procedure successfully completed.
Now, after gathering histograms on the indexed (skewed) column, the plans change.
test@9iR2> set autotrace traceonly explain;
test@9iR2> select *
2 from t
3 where id = 1;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=95)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=2 Card=1 Bytes=95)
2 1 INDEX (RANGE SCAN) OF 'T_IND' (NON-UNIQUE) (Cost=1 Card=1)
test@9iR2> select *
2 from t
3 where id = 99;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=46 Card=33195 Bytes=3153525)
1 0 TABLE ACCESS (FULL) OF 'T' (Cost=46 Card=33195 Bytes=3153525)
test@9iR2> set autotrace off;
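A quick way to confirm that the second gather actually created histograms is to count the bucket endpoints recorded for each column; a minimal sketch against the standard dictionary view:

select column_name, count(*) as buckets
  from user_tab_histograms
 where table_name = 'T'
 group by column_name;

Columns without a histogram show only the two min/max endpoint rows, while the skewed ID column should show a frequency histogram after the second gather.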
Friday, April 23, 2010
Triggers Vs Procedures.
The SQL you code in packages and procedures will be cached for your entire session, thus reducing the number of parse calls. The SQL inside a trigger is cached only for the duration of the triggering statement, causing additional soft parses whenever the trigger is executed. The bottom line here is to put the logic into packaged PL/SQL procedures and call them from the trigger.
opstar@OPSTAR> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
opstar@OPSTAR> create table t (x number);
Table created.
opstar@OPSTAR> create or replace procedure p
2 as
3 l_count number := 0;
4 begin
5
6 select count(*)
7 into l_count
8 from dual in_procedure;
9
10 end p;
11 /
Procedure created.
opstar@OPSTAR> create or replace trigger t_trig
2 before insert on t
3 for each row
4 declare
5 l_count number := 0;
6 begin
7 select count(*)
8 into l_count
9 from dual in_trigger;
10
11 p; -- Calling procedure inside Trigger;
12 end;
13 /
Trigger created.
opstar@OPSTAR> alter session set timed_statistics=true;
Session altered.
opstar@OPSTAR> alter session set events '10046 trace name context forever, level 12';
Session altered.
opstar@OPSTAR> insert into t values (1);
1 row created.
opstar@OPSTAR> insert into t values (1);
1 row created.
opstar@OPSTAR> insert into t values (1);
1 row created.
opstar@OPSTAR> insert into t values (1);
1 row created.
opstar@OPSTAR> insert into t values (1);
1 row created.
opstar@OPSTAR> insert into t
2 select rownum
3 from all_users;
25 rows created.
Now, when I review the resulting TKPROF report, I observe the following:
insert into t
values
(1)
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 5 0.00 0.00 0 0 0 0
Execute 5 0.04 3.57 2 8 35 5
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 10 0.04 3.57 2 8 35 5
SELECT COUNT(*)
FROM
DUAL IN_TRIGGER
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 6 0.00 0.00 0 0 0 0
Execute 30 0.00 0.06 0 0 0 0
Fetch 30 0.00 0.00 0 0 0 30
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 66 0.00 0.06 0 0 0 30
SELECT COUNT(*)
FROM
DUAL IN_PROCEDURE
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 30 0.00 0.14 0 0 0 0
Fetch 30 0.00 0.00 0 0 0 30
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 61 0.00 0.14 0 0 0 30
Now the parse count for the trigger SQL is 6: 5 for the five single-row inserts and 1 for the one multi-row insert. The SQL statement tagged IN_PROCEDURE, however, did not exhibit that behavior. It was parsed once for my session, and PL/SQL keeps it cached as long as possible, reusing that cached cursor over and over. Since excessive parsing (even soft parsing) not only consumes resources such as CPU time but also requires latches on the shared pool, it leads to performance and scalability issues over time.
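In practice the bottom line above means keeping the trigger body down to a single call into a package. A minimal sketch of that pattern (the package name audit_pkg is hypothetical):

create or replace package audit_pkg as
  procedure on_insert(p_x in number);
end audit_pkg;
/
create or replace package body audit_pkg as
  procedure on_insert(p_x in number)
  as
    l_count number;
  begin
    -- SQL inside the package is cached by PL/SQL for the whole session
    select count(*) into l_count from dual;
  end on_insert;
end audit_pkg;
/
create or replace trigger t_trig
  before insert on t
  for each row
begin
  audit_pkg.on_insert(:new.x); -- the trigger body is just a one-line call
end;
/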
This behavior changed in Oracle 11g.
scott@ORCL> select * from v$version;
BANNER
-----------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
PL/SQL Release 11.1.0.6.0 - Production
CORE 11.1.0.6.0 Production
TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production
After repeating the same scenario, when I review the resulting TKPROF report, I observe the following.
insert into t
values
(1)
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 5 0.00 0.00 0 0 0 0
Execute 5 0.01 0.02 2 8 35 5
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 10 0.01 0.02 2 8 35 5
SELECT COUNT(*)
FROM
DUAL IN_TRIGGER
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 42 0.00 0.00 0 0 0 0
Fetch 42 0.01 0.00 0 0 0 42
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 85 0.01 0.00 0 0 0 42
SELECT COUNT(*)
FROM
DUAL IN_PROCEDURE
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 42 0.00 0.00 0 0 0 0
Fetch 42 0.00 0.00 0 0 0 42
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 85 0.00 0.00 0 0 0 42
Now the parse count for the trigger SQL is reduced from 6 to 1.
Interval Partition in 11g
So, something else I learned recently: Interval Partitioning in 11g.
Interval partitioning is an extension of range partitioning that instructs the database to automatically create partitions of a specified interval when data inserted into the table exceeds all of the existing range partitions.
scott@ORCL> CREATE TABLE t
2 (
3 object_id number,
4 object_name varchar2(50),
5 created_dt date
6 )
7 partition by range(created_dt) interval(numtoyminterval(1,'MONTH'))
8 (
9 partition p1 values less than (to_date('01/01/2006','mm/dd/yyyy'))
10 ) set store in (USERS,EXAMPLE) ;
Table created.
scott@ORCL> SELECT SEGMENT_NAME,PARTITION_NAME,TABLESPACE_NAME
2 FROM USER_SEGMENTS
3 WHERE SEGMENT_NAME ='T'
4 /
SEGMENT_NAME PARTITION_NAME TABLESPACE_NAME
------------------------------ ------------------------------ --------------------
T P1 USERS
So, what happens when data inserted into table T exceeds 01-Jan-2006? The database creates interval partitions for data beyond that transition point.
scott@ORCL> INSERT INTO T
2 SELECT object_id,object_name,created
3 FROM all_objects;
68107 rows created.
scott@ORCL> SELECT SEGMENT_NAME,PARTITION_NAME,TABLESPACE_NAME
2 FROM USER_SEGMENTS
3 WHERE SEGMENT_NAME ='T'
4 /
SEGMENT_NAME PARTITION_NAME TABLESPACE_NAME
------------------------------ ------------------------------ --------------------
T P1 USERS
T SYS_P163 USERS
T SYS_P164 USERS
Well, what happens if the transaction rolls back? The newly created partitions are never dropped:
scott@ORCL> rollback;
Rollback complete.
scott@ORCL> select count(*) from T;
COUNT(*)
----------
0
scott@ORCL> SELECT SEGMENT_NAME,PARTITION_NAME,TABLESPACE_NAME
2 FROM USER_SEGMENTS
3 WHERE SEGMENT_NAME ='T'
4 /
SEGMENT_NAME PARTITION_NAME TABLESPACE_NAME
------------------------------ ------------------------------ ------------------------
T P1 USERS
T SYS_P163 USERS
T SYS_P164 USERS
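Since the automatically created partitions survive the rollback, they have to be dropped manually if they are not wanted. A minimal sketch (SYS_P163 and SYS_P164 are the system-generated names from the listing above; they will differ on your system):

alter table t drop partition sys_p163;
alter table t drop partition sys_p164;

Note that the original range partition P1 cannot be dropped this way: an interval-partitioned table must keep at least one range partition before the transition point.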
Thursday, April 22, 2010
When Object Statistics Become Stale...
So, something else I learned recently...
test@PWTK52> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
PL/SQL Release 9.2.0.8.0 - Production
CORE 9.2.0.8.0 Production
TNS for 32-bit Windows: Version 9.2.0.8.0 - Production
NLSRTL Version 9.2.0.8.0 - Production
test@PWTK52> create table t as select * from all_objects ;
Table created.
test@PWTK52> exec dbms_stats.gather_table_stats(ownname=>USER,tabname=>'T',estimate_percent=>dbms_stats.auto_sample_size);
PL/SQL procedure successfully completed.
test@PWTK52> alter table t monitoring;
Table altered.
test@PWTK52> select count(*) from T;
COUNT(*)
----------
25982
test@PWTK52> declare
2 l_ObjectTab dbms_stats.ObjectTab;
3 begin
4 dbms_stats.gather_database_stats(options=>'LIST STALE',objlist=>l_ObjectTab);
5 if l_ObjectTab.count > 0 then
6 for i in l_ObjectTab.first..l_ObjectTab.last
7 loop
8 if l_objecttab(i).ownname = USER then
9 dbms_output.put_line ( l_objecttab(i).ownname||'-'|| l_objecttab(i).objtype||'-'|| l_objecttab(i).objname||'-'|| l_objecttab(i).partname );
10 end if;
11 end loop;
12 end if;
13 l_ObjectTab.delete;
14 end;
15 /
PL/SQL procedure successfully completed.
So, now let's update 10% of the records in the table (10% of 25982 ≈ 2598; we round up to 2600):
test@PWTK52> update t
2 set object_name = initcap(object_name)
3 where rownum <= 2600;
2600 rows updated.
test@PWTK52> commit;
Commit complete.
Now the table has become stale, since 10% of its rows have changed.
test@PWTK52> SELECT table_name, inserts, updates, deletes
2 FROM all_tab_modifications
3 WHERE table_owner ='TEST';
TABLE_NAME INSERTS UPDATES DELETES
------------------------------ ---------- ---------- ----------
T 0 2600 0
test@PWTK52> declare
2 l_ObjectTab dbms_stats.ObjectTab;
3 begin
4 dbms_stats.gather_database_stats(options=>'LIST STALE',objlist=>l_ObjectTab);
5 if l_ObjectTab.count > 0 then
6 for i in l_ObjectTab.first..l_ObjectTab.last
7 loop
8 if l_objecttab(i).ownname = USER then
9 dbms_output.put_line ( l_objecttab(i).ownname||'-'|| l_objecttab(i).objtype||'-'|| l_objecttab(i).objname||'-'|| l_objecttab(i).partname );
10 end if;
11 end loop;
12 end if;
13 l_ObjectTab.delete;
14 end;
15 /
TEST-TABLE-T-
PL/SQL procedure successfully completed.
A script to gather statistics on objects having empty (missing) statistics or stale statistics:
scott@ORCL> DECLARE
2 l_ObjectTab dbms_stats.ObjectTab;
3 BEGIN
4 -- Empty Statistics
5 dbms_stats.gather_schema_stats(ownname=>USER,OPTIONS=>'list empty',objlist=>l_ObjectTab);
6 IF l_ObjectTab.count > 0 THEN
7 FOR i IN 1..l_ObjectTab.count
8 LOOP
9 IF l_objecttab(i).objtype = 'TABLE' THEN
10 dbms_stats.gather_table_stats(ownname=>l_objecttab(i).ownname,
11 tabname=>l_objecttab(i).objname,
12 estimate_percent=>dbms_stats.auto_sample_size,
13 partname=>l_objecttab(i).partname);
14 ELSIF l_objecttab(i).objtype = 'INDEX' THEN
15 dbms_stats.gather_index_stats(ownname=>l_objecttab(i).ownname,
16 indname=>l_objecttab(i).objname,
17 estimate_percent=>dbms_stats.auto_sample_size,
18 partname=>l_objecttab(i).partname);
19 END IF;
20 END LOOP;
21 END IF;
22 -- Stale Statistics
23 dbms_stats.gather_schema_stats(ownname=>USER,OPTIONS=>'list stale',objlist=>l_ObjectTab);
24 IF l_ObjectTab.count > 0 THEN
25 FOR i IN 1..l_ObjectTab.count
26 LOOP
27 IF l_objecttab(i).objtype = 'TABLE' THEN
28 dbms_stats.gather_table_stats(ownname=>l_objecttab(i).ownname,
29 tabname=>l_objecttab(i).objname,
30 estimate_percent=>dbms_stats.auto_sample_size,
31 partname=>l_objecttab(i).partname);
32 ELSIF l_objecttab(i).objtype = 'INDEX' THEN
33 dbms_stats.gather_index_stats(ownname=>l_objecttab(i).ownname,
34 indname=>l_objecttab(i).objname,
35 estimate_percent=>dbms_stats.auto_sample_size,
36 partname=>l_objecttab(i).partname);
37 END IF;
38 END LOOP;
39 END IF;
40 END;
41 /
PL/SQL procedure successfully completed.
Objects are considered stale when 10% of the total rows have been changed. When you run GATHER_SCHEMA_STATS or GATHER_DATABASE_STATS with OPTIONS => 'GATHER STALE', the procedure checks the USER_TAB_MODIFICATIONS view. If a monitored table has been modified by more than 10%, its statistics are gathered again.
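Having identified the stale objects, regathering is simply a matter of switching the OPTIONS value from 'LIST STALE' to 'GATHER STALE'; a minimal sketch:

exec dbms_stats.gather_schema_stats(ownname => USER, options => 'GATHER STALE');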
Wednesday, April 21, 2010
NO_DATA_NEEDED
What if you have a pipelined function that does something like:
a) open file
b) read line - pipe row
c) when no more data, close file and return
It works perfectly - if you read all records from the file.
scott@ORCL> create or replace function read_data(
2 p_dir in varchar2,
3 p_file in varchar2) return sys.odcivarchar2list
4 pipelined as
5 l_filetype utl_file.file_type;
6 l_data varchar2(32767);
7 begin
8 dbms_output.put_line (' Begin process ');
9 l_filetype := utl_file.fopen(p_dir,p_file,'r');
10 loop
11 dbms_output.put_line (' pipe rows ');
12 utl_file.get_line(l_filetype,l_data,32767);
13 pipe row(l_data);
14 end loop;
15
16 dbms_output.put_line (' End process ');
17 utl_file.fclose(l_filetype);
18 return;
19 exception
20 when no_data_found then
21 dbms_output.put_line (' End process ');
22 utl_file.fclose(l_filetype);
23 return;
24 end read_data;
25 /
Function created.
scott@ORCL> select *
2 from
3 table(read_data('DATA_PUMP_DIR','SAMPLE.TXT'))
COLUMN_VALUE
---------------------------------------------------
A
B
C
D
E
Begin process
pipe rows
pipe rows
pipe rows
pipe rows
pipe rows
pipe rows
End process
However, if you call it from a query such as
SELECT * FROM TABLE(read_data('DATA_PUMP_DIR','SAMPLE.TXT')) where rownum <= 1;
you would leave a file handle open, and if you called this function over and over, you would leak an open file each time and eventually run out of file handles. So you would have an error that "sometimes happens" and "sometimes doesn't happen".
scott@ORCL> SELECT * FROM TABLE(read_data('DATA_PUMP_DIR','SAMPLE.TXT')) where rownum <= 1;
COLUMN_VALUE
------------------------------
A
Begin process
pipe rows
Enter NO_DATA_NEEDED - an exception that doesn't behave like any other exception. An exception that is raised - but does not cause failure. An exception that can be caught, but if it isn't - everything is still OK.
scott@ORCL> create or replace function read_data(
2 p_dir in varchar2,
3 p_file in varchar2) return sys.odcivarchar2list
4 pipelined as
5 l_filetype utl_file.file_type;
6 l_data varchar2(32767);
7 begin
8 dbms_output.put_line (' Begin process ');
9 l_filetype := utl_file.fopen(p_dir,p_file,'r');
10 loop
11 dbms_output.put_line (' pipe rows ');
12 utl_file.get_line(l_filetype,l_data,32767);
13 pipe row(l_data);
14 end loop;
15
16 dbms_output.put_line (' End process ');
17
18 utl_file.fclose(l_filetype);
19 return;
20 exception
21 when no_data_needed then
22 dbms_output.put_line (' End process in exception');
23 utl_file.fclose(l_filetype);
24 return;
25 end read_data;
26 /
Function created.
scott@ORCL> SELECT *
2 FROM
3 TABLE (read_data('DATA_PUMP_DIR','SAMPLE.TXT'))
4 WHERE rownum <= 1;
COLUMN_VALUE
------------------------------------------------------
A
Begin process
pipe rows
End process in exception
As you can see, our special cleanup code ("End process in exception") was executed and we could clean up any resources we allocated.
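The same protection applies when the function is consumed from PL/SQL and the cursor is closed before all rows are fetched: closing the cursor raises NO_DATA_NEEDED inside the producer. A minimal sketch:

declare
  cursor c is
    select column_value
      from table(read_data('DATA_PUMP_DIR','SAMPLE.TXT'));
  l_data varchar2(32767);
begin
  open c;
  fetch c into l_data; -- consume only the first row
  close c;             -- raises NO_DATA_NEEDED in read_data, which closes the file
end;
/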
Monday, April 19, 2010
Exporting & Importing Statistics
I learned something new today: exporting and importing statistics across schemas. Here is a quick demo.
Exporting Statistics
a@10GR2> create table t1 as select * from all_users;
Table created.
a@10GR2> alter table t1 add constraint t1_pk primary key(user_id);
Table altered.
a@10GR2> create table t2 as select * from all_objects;
Table created.
a@10GR2> alter table t2 add constraint t2_pk primary key(object_id);
Table altered.
a@10GR2> exec dbms_stats.gather_schema_stats(ownname=>USER,estimate_percent=>dbms_stats.auto_sample_size);
PL/SQL procedure successfully completed.
a@10GR2> exec dbms_stats.create_stat_table(ownname=>USER,stattab=>'mystat');
PL/SQL procedure successfully completed.
a@10GR2> exec dbms_stats.export_schema_stats(ownname=>USER,stattab=>'mystat');
PL/SQL procedure successfully completed.
a@10GR2> select 'Table', table_name, num_rows from user_tables
2 union all
3 select 'index',index_name,num_rows from user_indexes
4 union all
5 select 'col', table_name||'.'|| column_name,count(*) from user_tab_histograms
6 group by table_name||'.'|| column_name;
'TABL TABLE_NAME NUM_ROWS
----- ------------------------------ ---------
Table T1 40
Table T2 40734
Table MYSTAT
index T1_PK 40
index MYSTAT
index T2_PK 40734
col T2.OBJECT_TYPE 2
col T2.OBJECT_ID 2
col T1.USERNAME 2
col T2.STATUS 2
col T2.CREATED 2
col T1.CREATED 2
col T2.SECONDARY 2
col T1.USER_ID 2
col T2.OWNER 2
col T2.OBJECT_NAME 2
col T2.TEMPORARY 2
col T2.GENERATED 2
col T2.TIMESTAMP 2
col T2.DATA_OBJECT_ID 2
col T2.LAST_DDL_TIME 2
21 rows selected.
The C5 column of the statistics table holds the owner name, so updating it to 'B' allows the statistics to be imported into schema B later.
a@10GR2> update mystat set c5='B';
20 rows updated.
a@10GR2> commit;
Commit complete.
D:\>exp userid=a/a@10.239.61.210:1521/10GR2 tables=mystat file=d:\a_dmp.dmp log=d:\a_log.txt
Export: Release 10.1.0.2.0 - Production on Mon Apr 19 19:39:06 2010
Copyright (c) 1982, 2004, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
About to export specified tables via Conventional Path ...
. . exporting table MYSTAT 20 rows exported
Export terminated successfully without warnings.
Importing Statistics
b@10GR2> create table t1 as select * from all_users;
Table created.
b@10GR2> alter table t1 add constraint t1_pk primary key(user_id);
Table altered.
b@10GR2> create table t2 as select * from all_objects;
Table created.
b@10GR2> alter table t2 add constraint t2_pk primary key(object_id);
Table altered.
D:\>imp userid=b/b@10.239.61.210:1521/10GR2 tables=mystat file=d:\a_dmp.dmp log=d:\b_log.txt fromuser=a touser=b
Import: Release 10.1.0.2.0 - Production on Mon Apr 19 19:40:43 2010
Copyright (c) 1982, 2004, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export file created by EXPORT:V10.01.00 via conventional path
Warning: the objects were exported by A, not by you
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. . importing table "MYSTAT" 20 rows imported
Import terminated successfully without warnings.
b@10GR2> exec dbms_stats.import_schema_stats(ownname=>USER,stattab=>'MYSTAT');
PL/SQL procedure successfully completed.
b@10GR2> select 'Table', table_name, num_rows from user_tables
2 union all
3 select 'index',index_name,num_rows from user_indexes
4 union all
5 select 'col', table_name||'.'|| column_name,count(*) from user_tab_histograms
6 group by table_name||'.'|| column_name;
'TABL TABLE_NAME NUM_ROWS
----- ------------------------------ ----------
Table MYSTAT
Table T1 40
Table T2 40734
index MYSTAT 20
index T1_PK 40
index T2_PK 40734
col T2.OBJECT_TYPE 2
col T2.OBJECT_ID 2
col T1.USERNAME 2
col T2.STATUS 2
col T2.CREATED 2
col T1.CREATED 2
col T2.SECONDARY 2
col T1.USER_ID 2
col T2.OWNER 2
col T2.OBJECT_NAME 2
col T2.TEMPORARY 2
col T2.GENERATED 2
col T2.TIMESTAMP 2
col T2.DATA_OBJECT_ID 2
col T2.LAST_DDL_TIME 2
21 rows selected.
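If only a single table's statistics need to move, the table-level variants work the same way; a minimal sketch (export run as A, import run as B, reusing the same MYSTAT table):

exec dbms_stats.export_table_stats(ownname => 'A', tabname => 'T1', stattab => 'MYSTAT');
exec dbms_stats.import_table_stats(ownname => 'B', tabname => 'T1', stattab => 'MYSTAT');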
Sunday, April 18, 2010
Evaluating an expression
Today I learned a new trick, available in 11g Release 1 and above, that I wasn't aware of. This is pretty cool: evaluating an arithmetic expression held in a bind variable.
scott@ORCL> select * from v$version;
BANNER
-------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
PL/SQL Release 11.1.0.6.0 - Production
CORE 11.1.0.6.0 Production
TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production
scott@ORCL> variable x varchar2(30);
scott@ORCL> exec :x := '(1+2)/3';
PL/SQL procedure successfully completed.
scott@ORCL> SELECT xmlquery( REPLACE(:x,'/','div') returning content).getnumberval() AS result
2 FROM dual;
RESULT
----------
1
XMLQuery exists in Oracle 10g as well, but there the query argument must be a string literal, so an expression like the REPLACE above is rejected:
scott@RASDEV> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Solaris: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
scott@RASDEV> variable x varchar2(30);
scott@RASDEV> exec :x := '(1+2)/3';
PL/SQL procedure successfully completed.
scott@RASDEV> SELECT xmlquery( REPLACE(:x,'/','div') returning content).getnumberval() AS result
2 FROM dual;
SELECT xmlquery( REPLACE(:x,'/','div') returning content).getnumberval() AS result
*
ERROR at line 1:
ORA-19102: XQuery string literal expected
http://www.xquery.com/tutorials/guided-tour/xquery-operators.html
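In 10g, a native dynamic SQL workaround gets the same answer; a minimal sketch (the expression is concatenated into the statement, so it must come from a trusted source, and SET SERVEROUTPUT ON is assumed):

declare
  l_result number;
begin
  execute immediate 'select ' || :x || ' from dual' into l_result;
  dbms_output.put_line(l_result);
end;
/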
Trouble With Triggers
The first reason for disliking triggers is incorrect implementation. Can you immediately see the huge bug in this trigger?
scott@ORCL> create or replace trigger send_email
2 after insert on emp
3 for each row
4 DECLARE
5 c utl_smtp.connection;
6 PROCEDURE send_header(name IN VARCHAR2, header IN VARCHAR2) AS
7 BEGIN
8 utl_smtp.write_data(c, name || ': ' || header || utl_tcp.CRLF);
9 END;
10 begin
11 c := utl_smtp.open_connection('smtp-server.acme.com');
12 utl_smtp.helo(c, 'foo.com');
13 utl_smtp.mail(c, 'sender@foo.com');
14 utl_smtp.rcpt(c, 'recipient@foo.com');
15 utl_smtp.open_data(c);
16 send_header('From', '"Sender"');
17 send_header('To', '"Recipient"');
18 send_header('Subject', 'Newly Employee with Employee id'||:new.empno||' Joined Organization ');
19 utl_smtp.write_data(c, utl_tcp.CRLF || 'Newly Employee with Employee id'||:new.empno||' Joined Organization ');
20 utl_smtp.close_data(c);
21 utl_smtp.quit(c);
22 EXCEPTION
23 WHEN utl_smtp.transient_error OR utl_smtp.permanent_error THEN
24 BEGIN
25 utl_smtp.quit(c);
26 EXCEPTION
27 WHEN utl_smtp.transient_error OR utl_smtp.permanent_error THEN
28 NULL; -- When the SMTP server is down or unavailable, we don't
29 -- have a connection to the server. The quit call will
30 -- raise an exception that we can ignore.
31 END;
32 raise_application_error(-20000,
33 'Failed to send mail due to the following error: ' || sqlerrm);
34 end;
35 /
Trigger created.
Obviously it is syntactically correct: it compiles, and if you insert a record the trigger fires flawlessly. But it is very wrong; the implementation has a huge mistake. What happens if you insert 100 records into the EMP table and then decide to roll back? The SMTP protocol does not participate in a distributed transaction with the Oracle database, so sending the email will not roll back. You will have sent 100 emails announcing 100 new employees who never really joined the organization. This is perhaps the most frequent error made in the use of triggers: performing an operation that cannot be rolled back.
The first rule of trigger coding is: do not perform any operation that cannot be rolled back. Consider what happens if your trigger fires, the code executes, but the transaction rolls back.
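A transactional alternative is to defer the email with DBMS_JOB: the job submission is part of the transaction, so a rollback discards it and no mail goes out. A minimal sketch, where send_mail is a hypothetical procedure wrapping the UTL_SMTP calls above:

create or replace trigger send_email
  after insert on emp
  for each row
declare
  l_job binary_integer;
begin
  -- the job becomes runnable only after COMMIT; a ROLLBACK removes it
  dbms_job.submit(l_job, 'send_mail(' || :new.empno || ');');
end;
/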
The next implementation problem with triggers stems from the fact that developers don't understand all the nuances of concurrency control and isolation levels. One of Oracle's greatest features is that reads don't block writes and writes don't block reads. However, that single feature, when not fully understood, turns into a liability when the developer turns to triggers, especially triggers that enforce some "rule".
Suppose we have a table like the one below, holding country/currency combinations along with a primary-currency indicator.
create table currency
(
country varchar2(5),
currency varchar2(5),
primary_ind varchar2(1)
);
insert into currency values ('US','USD','Y');
insert into currency values ('US','USN','N');
insert into currency values ('US','USS','N');
commit;
We need to enforce the rule that at most one currency can be primary for a given country, so we put a before-update row trigger on the table to check whether the country already has a primary currency.
scott@ORCL> create or replace trigger currency_trig
2 before update on currency
3 for each row
4 declare
5 pragma autonomous_transaction;
6 l_count number;
7 begin
8 select count(*)
9 into l_count
10 from currency
11 where primary_ind ='Y'
12 and country = :new.country;
13
14 if l_count > 1 then
15 raise_application_error (-20458,'More than one primary currency ');
16 end if;
17 end;
18 /
Trigger created.
Now, there are many things wrong with this trigger, but the first obvious clue that something is seriously wrong is the autonomous transaction. Without it, an update would produce an error.
scott@ORCL> update currency set primary_ind = 'Y' where country='US' and currency='USN';
update currency set primary_ind = 'Y' where country='US' and currency='USN'
*
ERROR at line 1:
ORA-04091: table SCOTT.CURRENCY is mutating, trigger/function may not see it
ORA-06512: at "SCOTT.CURRENCY_TRIG", line 4
ORA-04088: error during execution of trigger 'SCOTT.CURRENCY_TRIG'
That is not really an error but more of a warning. Basically it is saying, “You are doing something so fundamentally wrong in your trigger that Oracle Database is just not going to permit you to do that.”
The autonomous transaction “feature” permits developers to query the table the trigger is firing on, yet query it as if they were in another session/transaction altogether. The trigger will not see its own modifications to the table, and that is the huge flaw in this thinking: the trigger is attempting to validate the very modifications it cannot see. The trigger’s sole purpose is to validate the modifications to the data, but it reads the data as it was before the modifications took place. It therefore cannot work!
scott@ORCL> select * from currency;
COUNT CURRE P
----- ----- -
US USD Y
US USN N
US USS N
scott@ORCL> update currency set primary_ind = 'Y' where country='US' and currency='USN';
1 row updated.
scott@ORCL> select * from currency;
COUNT CURRE P
----- ----- -
US USD Y
US USN Y
US USS N
Now we end up with two primary currencies, and that is the flaw in using triggers to enforce entity integrity.
The Correct Answer:
Fix the data model: a table for the primary currency and a table for the other currencies.
scott@ORCL> create table primary_currency
2 (
3 country varchar2(5),
4 currency varchar2(5),
5 constraint primary_currency_pk primary key(country)
6 );
Table created.
scott@ORCL> create table other_currency
2 (
3 country varchar2(5),
4 currency varchar2(5),
5 constraint other_currency_pk primary key(country,currency)
6 );
Table created.
We are almost done: it is now impossible to have two primary currencies for a country, because the primary key enforces at most one. A country listed in OTHER_CURRENCY should also have a primary currency, and a simple foreign key constraint can enforce that.
scott@ORCL> alter table other_currency
2 add constraint must_have_primary_currency
3 foreign key (country) references primary_currency(country);
Table altered.
Now we are probably done with the rules, except for one real-world problem: a country's primary currency should not also appear as an other currency. That is, if USD is the primary currency for US in PRIMARY_CURRENCY, then USD cannot appear for US in OTHER_CURRENCY. This is like an "anti-foreign key", a feature that does not exist, but we can implement it as a database rule: if we join PRIMARY_CURRENCY and OTHER_CURRENCY on country and currency, the result set must always be empty. That can be achieved as below.
scott@ORCL> create materialized view log on PRIMARY_CURRENCY with rowid;
Materialized view log created.
scott@ORCL> create materialized view log on OTHER_CURRENCY with rowid;
Materialized view log created.
scott@ORCL> create materialized view currency_mv
2 refresh fast on commit
3 as
4 select p.rowid as p_rid, o.rowid as o_rid
5 from PRIMARY_CURRENCY p,
6 OTHER_CURRENCY o
7 where p.country = o.country
8 and p.currency = o.currency;
Materialized view created.
scott@ORCL> alter table currency_mv add constraint currency_mv_chk check (p_rid is null and o_rid is null);
Table altered.
So now we have a materialized view that will refresh on COMMIT and ensure that no data can be joined between the two tables. This materialized view will always be empty. Again, it is scalable (the only opportunity for any serialization would be at COMMIT time) and correct. The database is enforcing this constraint for us.
scott@ORCL> select * from primary_currency;
COUNT CURRE
----- -----
US USD
scott@ORCL> select * from other_currency;
COUNT CURRE
----- -----
US USN
US USS
scott@ORCL> insert into other_currency values ('US','USD');
1 row created.
scott@ORCL> commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-02290: check constraint (SCOTT.CURRENCY_MV_CHK) violated
Triggers should be viewed with a skeptic’s eye. If the trigger is in place to enforce entity integrity, be very suspicious of it. Think about multiuser conditions. Think about what happens when two or three people operate on similar data at, or at about, the same time. Run all the combinations in your head or on the whiteboard. Play with your schema, using multiple sessions. See what happens when you have concurrent access.
Friday, April 16, 2010
Optimizer Plan Stability
Using optimizer plan stability we can preserve our existing execution plans and isolate our applications from plan changes. It should be noted that in most cases it is desirable for plans to change over time as the distribution of the data changes. A quick example to demonstrate optimizer plan stability:
scott@9iR2> select * from v$version;
BANNER
-------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
PL/SQL Release 9.2.0.8.0 - Production
CORE 9.2.0.8.0 Production
TNS for 32-bit Windows: Version 9.2.0.8.0 - Production
NLSRTL Version 9.2.0.8.0 - Production
scott@9iR2> create table emp as select * from scott.emp;
Table created.
scott@9iR2> alter table emp add constraint emp_pk primary key(empno);
Table altered.
scott@9iR2> set autotrace traceonly explain;
scott@9iR2> select * from emp where empno > 0;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP'
2 1 INDEX (RANGE SCAN) OF 'EMP_PK' (UNIQUE)
scott@9iR2> set autotrace off;
In the absence of statistics, the Rule Based Optimizer (RBO) is invoked. Let's assume this query comes from an application where the end user would like to get some data back very quickly, and the index access does this nicely for us. We are happy with this plan and would like to always use it, so the next thing to do is to create an outline for it.
scott@9iR2> create or replace outline myoutline for category mycategory
2 on
3 select * from emp where empno > 0;
Outline created.
This creates the query outline and captures our required execution plan. Now let's simply analyze our table.
scott@9iR2> begin
2 dbms_stats.gather_table_stats(ownname=>USER,tabname=>'EMP');
3 end;
4 /
PL/SQL procedure successfully completed.
scott@9iR2> set autotrace traceonly explain;
scott@9iR2> select * from emp where empno > 0;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=14 Bytes=518)
1 0 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=14 Bytes=518)
Instead of using the index as with the RBO, we now have the statistics to allow the CBO to be invoked, and it chooses a full table scan. The CBO in fact chooses the correct plan: there are only 14 rows, and it understands that all of them satisfy the predicate in this case. In order to get back to our preferred plan, we need to use the optimizer plan stability feature, and to do that we issue the command below.
scott@9iR2> alter session set use_stored_outlines=mycategory;
Session altered.
This enforces use of our MYCATEGORY stored outline. If we take a look at our execution plan:
scott@9iR2> select * from emp where empno > 0 ;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=826 Card=14 Bytes=518)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=826 Card=14 Bytes=518)
2 1 INDEX (RANGE SCAN) OF 'EMP_PK' (UNIQUE) (Cost=26 Card=14)
scott@9iR2> set autotrace off;
We find that we are back to using the original plan with the index. That is the goal of optimizer plan stability.
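To see which outlines exist and whether Oracle has actually applied them, we can query the data dictionary (a quick sketch against the USER_OUTLINES view):
-- which outlines do we have, and has the optimizer used them?
select name, category, used, sql_text
from user_outlines
where category = 'MYCATEGORY';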
Multiset Operators - 10g New Features
Multiset operators combine the results of two nested tables into a single nested table. Here is a quick demonstration.
scott@ORCL> declare
2 type t1 is table of number;
3 list1 t1 := t1(1,2,3,4,5);
4 list2 t1 := t1(1,2,3);
5 list3 t1;
6 begin
7 list3 := list1 multiset except list2;
8 dbms_output.put_line (' Results of Multiset Except');
9 dbms_output.put_line (' ');
10 for i in list3.first..list3.last
11 loop
12 dbms_output.put_line (list3(i));
13 end loop;
14 dbms_output.put_line (' ');
15
16 list3.delete;
17 list3 := list1 multiset intersect list2;
18 dbms_output.put_line (' Results of Multiset Intersect');
19 dbms_output.put_line (' ');
20 for i in list3.first..list3.last
21 loop
22 dbms_output.put_line ( list3(i));
23 end loop;
24 dbms_output.put_line (' ');
25
26 list3.delete;
27 list3 := list1 multiset union list2;
28 dbms_output.put_line (' Results of Multiset Union');
29 dbms_output.put_line (' ');
30 for i in list3.first..list3.last
31 loop
32 dbms_output.put_line (list3(i));
33 end loop;
34 dbms_output.put_line (' ');
35
36 list3.delete;
37 end;
38 /
Results of Multiset Except
4
5
Results of Multiset Intersect
1
2
3
Results of Multiset Union
1
2
3
4
5
1
2
3
PL/SQL procedure successfully completed.
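One thing worth noting in the output: MULTISET UNION keeps duplicates (it behaves like MULTISET UNION ALL), which is why 1, 2 and 3 appear twice above. If set semantics are wanted, the DISTINCT keyword removes the duplicates; a minimal sketch:
declare
type t1 is table of number;
list1 t1 := t1(1,2,3,4,5);
list2 t1 := t1(1,2,3);
list3 t1;
begin
-- DISTINCT collapses the duplicates that plain MULTISET UNION keeps
list3 := list1 multiset union distinct list2;
for i in list3.first..list3.last
loop
dbms_output.put_line (list3(i));
end loop;
-- prints 1 2 3 4 5, each value once
end;
/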
Saturday, April 10, 2010
Where did I Leave my Keys - PLSCOPE_SETTINGS in 11g
PL/Scope is a compiler-driven tool that collects data about user-defined identifiers from PL/SQL source code at program-unit compilation time and makes it available in static data dictionary views. The collected data includes information about identifier types, usages (declaration, definition, reference, call, and assignment), and the location of each usage in the source code.
That is a fancy way of saying that all of your variables, procedures, functions, and so on are extracted from the code and made visible in a database table for you (or some tool) to query. I believe that seeing an example is the easiest way to understand something, so a quick demonstration is called for. First, here is the procedure we will examine; afterward I enable the feature via the session-settable PLSCOPE_SETTINGS parameter and recompile it:
create or replace procedure emp_update_sal
as
l_start_time number := 0;
l_end_time number := 0;
begin
l_start_time := dbms_utility.get_time;
for r in (select empno,ename,sal,deptno from emp)
loop
if r.deptno = 10 then
update emp
set sal = sal + (r.sal * 0.1)
where empno = r.empno;
elsif r.deptno = 20 then
update emp
set sal = sal + (r.sal * 0.2)
where empno = r.empno;
elsif r.deptno = 30 then
update emp
set sal = sal + (r.sal * 0.3)
where empno = r.empno;
end if;
end loop;
l_end_time := dbms_utility.get_time;
dbms_output.put_line ( ' Total Time taken by Looping construct is ' || (l_end_time - l_start_time) );
l_start_time := dbms_utility.get_time;
update emp
set sal = sal + (sal * (deptno/100));
l_end_time := dbms_utility.get_time;
dbms_output.put_line ( ' Total Time taken by SQL construct is ' || (l_end_time - l_start_time) );
end;
scott@ORCL> select * from v$version;
BANNER
------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
PL/SQL Release 11.1.0.6.0 - Production
CORE 11.1.0.6.0 Production
TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production
scott@ORCL> show parameter plscope;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
plscope_settings string IDENTIFIERS:NONE
scott@ORCL> alter session set plscope_settings='identifiers:all';
Session altered.
scott@ORCL> alter procedure EMP_UPDATE_SAL compile;
Procedure altered.
scott@ORCL> SELECT name,
2 type,
3 USAGE,
4 line,
5 COL
6 FROM user_identifiers
7 WHERE object_type = 'PROCEDURE'
8 AND object_name = 'EMP_UPDATE_SAL'
9 ORDER BY name,type,line;
NAME TYPE USAGE LINE COL
------------------------------ ------------------ ----------- ---------- ----------
DBMS_OUTPUT SYNONYM REFERENCE 26 2
DBMS_OUTPUT SYNONYM REFERENCE 33 2
DBMS_UTILITY SYNONYM REFERENCE 7 18
DBMS_UTILITY SYNONYM REFERENCE 24 16
DBMS_UTILITY SYNONYM REFERENCE 28 18
DBMS_UTILITY SYNONYM REFERENCE 31 16
EMP_UPDATE_SAL PROCEDURE DEFINITION 1 11
EMP_UPDATE_SAL PROCEDURE DECLARATION 1 11
L_END_TIME VARIABLE DECLARATION 4 2
L_END_TIME VARIABLE ASSIGNMENT 4 2
L_END_TIME VARIABLE ASSIGNMENT 24 2
L_END_TIME VARIABLE REFERENCE 26 75
L_END_TIME VARIABLE ASSIGNMENT 31 2
L_END_TIME VARIABLE REFERENCE 33 71
L_START_TIME VARIABLE DECLARATION 3 2
L_START_TIME VARIABLE ASSIGNMENT 3 2
L_START_TIME VARIABLE ASSIGNMENT 7 2
L_START_TIME VARIABLE REFERENCE 26 88
L_START_TIME VARIABLE ASSIGNMENT 28 2
L_START_TIME VARIABLE REFERENCE 33 84
R ITERATOR DECLARATION 8 6
R ITERATOR REFERENCE 10 6
R ITERATOR REFERENCE 12 21
R ITERATOR REFERENCE 13 18
R ITERATOR REFERENCE 14 9
R ITERATOR REFERENCE 16 21
R ITERATOR REFERENCE 17 18
R ITERATOR REFERENCE 18 9
R ITERATOR REFERENCE 20 21
R ITERATOR REFERENCE 21 19
30 rows selected.
The listing shows me all referenced identifiers (the package DBMS_OUTPUT that I invoke many times, for example) as well as all local, global, and parameter variables in my code. It shows me not only the variables but also how they are used, where they are defined, where I reference them, and where I assign to them.
scott@ORCL> SELECT iden.line,
2 iden.USAGE,
3 src.text
4 FROM user_source src,
5 user_identifiers iden
6 WHERE src.name = 'EMP_UPDATE_SAL'
7 AND src.type = 'PROCEDURE'
8 AND src.name = iden.object_name
9 AND src.type = iden.object_type
10 AND src.line = iden.line
11 AND iden.name = 'L_END_TIME'
12 ORDER BY iden.line;
LINE USAGE TEXT
---- --------------- ---------------------------------------------------
4 DECLARATION l_end_time number := 0;
4 ASSIGNMENT l_end_time number := 0;
24 ASSIGNMENT l_end_time := dbms_utility.get_time;
26 REFERENCE dbms_output.put_line ( ' Total Time taken by Looping construct is ' || (l_end_time - l_start_time) );
31 ASSIGNMENT l_end_time := dbms_utility.get_time;
33 REFERENCE dbms_output.put_line ( ' Total Time taken by SQL construct is ' || (l_end_time - l_start_time) );
6 rows selected.
Note how I can focus right in on where and how the L_END_TIME variable is used.
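One practical use of this dictionary data (a sketch of my own, not from the demonstration above): finding variables that are declared but never used anywhere else. Matching on SIGNATURE keeps identically named identifiers from different scopes apart, and the same-line filter skips the assignment that is part of the declaration's initializer:
-- variables in EMP_UPDATE_SAL with no reference or assignment
-- outside their own declaration line
select d.name, d.line
from user_identifiers d
where d.object_name = 'EMP_UPDATE_SAL'
and d.object_type = 'PROCEDURE'
and d.type = 'VARIABLE'
and d.usage = 'DECLARATION'
and not exists (select null
from user_identifiers u
where u.signature = d.signature
and u.usage in ('REFERENCE','ASSIGNMENT')
and u.line <> d.line);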