Prior to Oracle 11g, when you added a column with a NOT NULL constraint and a default value, Oracle actually populated that value into every row of the table. All the rows, ouch! Imagine a multimillion-row table: the data would be updated several million times, generating an enormous amount of redo and undo. In addition, the operation locked the table for its entire duration, preventing other DDL. This caused a lot of consternation among users. In Oracle 11g, this is handled differently.
I ran the script below in 9iR2, 10gR2 and 11gR1; Tkprof shows the following results.
create table T nologging
as
select *
from all_objects;
alter session set timed_statistics=true;
alter session set events '10046 trace name context forever, level 16';
alter table T add grade varchar2(1) default 'x' not null;
Oracle - 9iR2
********************************************************************************
alter table t add grade varchar2(1) default 'x' not null
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.01 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 0.01 0 0 0 0
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 107
********************************************************************************
update "T" set "GRADE"='x'
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.22 0 0 0 0
Execute 1 1.35 1.95 452 459 33914 33243
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 1.35 2.18 452 459 33914 33243
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 107 (recursive depth: 1)
Rows Row Source Operation
------- ---------------------------------------------------
0 UPDATE
33243 TABLE ACCESS FULL T
********************************************************************************
Oracle -10gR2
********************************************************************************
alter table t add grade varchar2(1) default 'x' not null
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.04 0 0 0 0
Execute 1 0.01 0.30 0 1 2 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.03 0.35 0 1 2 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 54
********************************************************************************
update "T" set "GRADE"='x'
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 1 0 0
Execute 1 3.21 5.51 548 2228 304319 56172
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 3.21 5.51 548 2229 304319 56172
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 54 (recursive depth: 1)
Rows Row Source Operation
------- ---------------------------------------------------
0 UPDATE T (cr=1598 pr=95 pw=0 time=3176654 us)
159118 TABLE ACCESS FULL T (cr=2220 pr=548 pw=0 time=1113963 us)
********************************************************************************
Oracle - 11gR1
********************************************************************************
SQL ID : dpj1atuacv8zx
alter table t add grade varchar2(1) default 'x' not null
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 1 25 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.01 0 1 25 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 81
********************************************************************************
Looking at the 11g trace file, I could not see any reference to the recursive UPDATE T... statement. This behavior results in significantly less redo and undo, and the DDL also completes much faster.
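A quick sanity check (a sketch, not part of the original traces) confirms that in 11g the default is still visible to queries, even though no physical UPDATE of the rows took place:

```sql
select grade, count(*)
  from t
 group by grade;
-- every row reports 'x', served from the stored column default
-- rather than from physically updated row data
```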
Monday, July 26, 2010
Sunday, July 25, 2010
Masking Sensitive Data (using Data Pump in 11g)
One of the several requirements in today’s regulated environments is to mask sensitive data such as credit card numbers, and so on, when moving production data to test systems. This is a hard requirement to satisfy, especially if you have a large database. It’s about time Oracle came up with a facility to support it.
Oracle Data Pump now offers the ability to obfuscate data during an export or import operation. This functionality is supported through the new remap_data parameter: you define the schema.table.column object(s) to be remapped, along with an associated function that will be called to remap (or obfuscate) the column data.
rajesh@ORA11G> create or replace package mypkg is
2 function foo(p_x in varchar2)
3 return varchar2;
4 end;
5 /
Package created.
Elapsed: 00:00:00.20
rajesh@ORA11G>
rajesh@ORA11G> create or replace package body mypkg is
2 function foo(p_x in varchar2)
3 return varchar2
4 as
5 begin
6 return( translate(p_x,
7 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
8 'bcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZa') );
9 end;
10 end;
11 /
Package body created.
rajesh@ORA11G> create table t as
2 select username x
3 from all_users
4 where rownum <= 5;
Table created.
Elapsed: 00:00:00.14
rajesh@ORA11G> select * from t;
X
------------------------------
SYS
SYSTEM
OUTLN
DIP
ORACLE_OCM
5 rows selected.
Now export the table T using the expdp (Data Pump export) utility:
D:>expdp rajesh/oracle@ora11g tables=RAJESH.T directory=ETL_DATA_DIR dumpfile=t.dmp logfile=log.txt remap_data=RAJESH.T.X:RAJESH.MYPKG.FOO
Export: Release 11.2.0.1.0 - Production on Wed Jul 3 17:59:09 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "RAJESH"."SYS_EXPORT_TABLE_01": rajesh/********@ora11g tables=RAJESH.T directory=ETL_DATA_DIR dumpfile=t.dmp logfile=log.txt remap_data=RAJESH.T.X:RAJESH.MYPKG.FOO
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "RAJESH"."T" 5.054 KB 5 rows
Master table "RAJESH"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for RAJESH.SYS_EXPORT_TABLE_01 is:
D:\APP\ETL_DATA\T.DMP
Job "RAJESH"."SYS_EXPORT_TABLE_01" successfully completed at 17:59:36
Now import the data from this dump file into another schema. Note that the masking function was invoked during the export, so the dump file itself already contains only the masked values:
D:>impdp scott/tiger@ora11g directory=ETL_DATA_DIR dumpfile=t.dmp logfile=implog.txt remap_schema=rajesh:scott
Import: Release 11.2.0.1.0 - Production on Wed Jul 3 17:59:47 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SCOTT"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_IMPORT_FULL_01": scott/********@ora11g directory=ETL_DATA_DIR dumpfile=t.dmp logfile=implog.txt remap_schema=rajesh:scott
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT"."T" 5.054 KB 5 rows
Job "SCOTT"."SYS_IMPORT_FULL_01" successfully completed at 17:59:51
scott@ORA11G> select * from t;
X
------------------------------
TZT
TZTUFN
PVUMO
EJQ
PSBDMF_PDN
5 rows selected.
The sensitive data is now masked, courtesy of the new remap_data parameter available in the Oracle 11g Data Pump utilities.
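For reference, the TRANSLATE call inside MYPKG.FOO is just a one-position rotation through the combined lower- and upper-case alphabets, which is why SYS became TZT. You can see the mapping directly with a standalone query (a sketch, not part of the original run):

```sql
-- Same TRANSLATE arguments as MYPKG.FOO: each letter maps to the
-- next one in the 52-character sequence, wrapping Z back to a.
select translate('SYS',
                 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
                 'bcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZa') as masked
  from dual;
-- MASKED
-- ------
-- TZT
```

Being a simple substitution, this is reversible and suitable only as a demonstration; a production masking function would use something stronger.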
Flashback Data Archive (Oracle Total Recall)
I learn something new about Oracle Database pretty much every day. Recently I learned something about Flashback Data Archive (Oracle Total Recall) in 11gR1.
Oracle Flashback Data Archives provides the ability to track changes that occur on a table over the lifetime of the table. Until now many applications had to build in logging for this kind of transaction monitoring. You can use the Flashback Data Archives feature to satisfy security and compliance requirements. You can also use the data for data analysis and within a decision support system (DSS) environment.
The user who will be defining any Flashback Data Archives will need to have the privilege flashback archive administer granted to them.
sys@11GR1> grant flashback archive administer to scott;
Grant succeeded.
To create the archive we use the create flashback archive SQL DDL command. When you create the archive, you will assign the archive to a tablespace and assign it a space quota and a retention period.
scott@11GR1> create flashback archive archive_one_year
2 tablespace EXAMPLE retention 1 year;
Flashback archive created.
By default, archiving is disabled. You can enable archiving on specific tables when you create the table or you can alter the table to enable archiving.
scott@11GR1> create table t_flash (x number) tablespace EXAMPLE flashback archive archive_one_year;
Table created.
scott@11GR1> insert into t_flash values (1);
1 row created.
scott@11GR1> insert into t_flash values (2);
1 row created.
scott@11GR1> insert into t_flash values (3);
1 row created.
scott@11GR1> insert into t_flash values (4);
1 row created.
scott@11GR1> insert into t_flash values (5);
1 row created.
Oracle provides views that you can use to administer Flashback Data Archives. These views include
DBA_FLASHBACK_ARCHIVE - Provides information on all flashback archives contained in the database.
scott@11GR1> SELECT *
2 from DBA_FLASHBACK_ARCHIVE
3 /
FLASHBACK_ARCHIVE_NAME FLASHBACK_ARCHIVE# RETENTION_IN_DAYS CREATE_TIME LAST_PURGE_TIME STATUS
------------------------------ ------------------ ----------------- ----------------------------------- ----------------------------------- ---------
ARCHIVE_ONE_YEAR 1 365 25-JUL-10 05.28.01.000000000 PM 25-JUL-10 05.28.01.000000000 PM
DBA_FLASHBACK_ARCHIVE_TS - Provides information on all tablespaces that contain flashback archives
scott@11GR1> SELECT *
2 FROM DBA_FLASHBACK_ARCHIVE_TS
3 /
FLASHBACK_ARCHIVE_NAME FLASHBACK_ARCHIVE# TABLESPACE_NAME QUOTA_IN_MB
------------------------------ ------------------ ------------------------------ ----------------------------------------
ARCHIVE_ONE_YEAR 1 EXAMPLE
DBA_FLASHBACK_ARCHIVE_TABLES - This view indicates which flashback archive a given table is assigned to. If a table is not assigned to a flashback archive, it will not appear in this view.
scott@11GR1> SELECT *
2 FROM DBA_FLASHBACK_ARCHIVE_TABLES
3 /
TABLE_NAME OWNER_NAME FLASHBACK_ARCHIVE_NAME ARCHIVE_TABLE_NAME
------------------------------ ------------------------------ ------------------------------ -----------------------------------------------------
T_FLASH SCOTT ARCHIVE_ONE_YEAR SYS_FBA_HIST_73124
So, now that we have configured archiving, how do we use it? It’s really quite simple, as the Flashback Data Archives feature is an extension of the existing Flashback technologies already in place. Before, we were subject to the availability of undo and its retention: if the undo was available, we could flash back a table easily and see how it looked back in time; if it was not, the flashback attempt would fail. Guaranteed undo retention helped stabilize the availability of undo, but it was a global parameter and could cause the database-wide undo structure to become large and difficult to manage. Oracle’s Flashback Data Archives feature allows you to preserve undo at the table level for as long as required. Because this feature is granular to the table rather than the database, the space impact of long-term undo storage is reduced. Now, SQL queries using the AS OF TIMESTAMP clause are no longer subject to limited undo storage, so queries such as the following are possible:
scott@11GR1> select * from T_FLASH as of timestamp (systimestamp - interval '280' second);
X
----------
1
2
Note that when a table is being archived, certain DDL commands are disabled for that object. These include certain ALTER TABLE commands as well as DROP TABLE, RENAME TABLE, and TRUNCATE TABLE. Commands that modify a column are also not allowed.
scott@11GR1> drop table t_flash;
drop table t_flash
*
ERROR at line 1:
ORA-55610: Invalid DDL statement on history-tracked table
Elapsed: 00:00:00.03
scott@11GR1> truncate table t_flash;
truncate table t_flash
*
ERROR at line 1:
ORA-55610: Invalid DDL statement on history-tracked table
Elapsed: 00:00:00.03
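If you genuinely need to drop or truncate such a table, one option is to disassociate it from the archive first (a sketch based on the documented ALTER TABLE syntax, not shown in the session above). Be aware that this discards the tracked history for the table:

```sql
-- Stop tracking; the archived history for T_FLASH is purged.
alter table t_flash no flashback archive;

-- The previously blocked DDL is now permitted.
drop table t_flash;
```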
Friday, July 23, 2010
Multi column Statistics - 11g New Features
Starting with 11gR1, Oracle can even gather statistics on a group of columns within a table, using the extended statistics facility in the DBMS_STATS package. When multiple columns from a single table are used together in a query (a multi-column predicate), and information about these combined columns is available to the optimizer, it can produce a better execution plan for optimal performance.
scott@11GR1> create table t
2 nologging parallel 4
3 as
4 select level as x,
5 case when mod(level,2) = 0 then 'Y' else 'N' end as flag_1,
6 case when mod(level,2) = 0 then 'N' else 'Y' end as flag_2
7 from dual
8 connect by level <= 1000000
9 /
Table created.
scott@11GR1> create index t_ind on t(flag_1,flag_2) nologging;
Index created.
scott@11GR1> begin
2 dbms_stats.gather_table_stats(ownname=>USER,tabname=>'T',cascade=>true);
3 end;
4 /
PL/SQL procedure successfully completed.
No record in table T has the same value in both FLAG_1 and FLAG_2.
scott@11GR1> set autotrace traceonly explain statistics;
scott@11GR1> select *
2 from t
3 where flag_1 ='Y'
4 and flag_2 ='Y';
no rows selected
Elapsed: 00:00:01.68
Execution Plan
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 500K| 3906K| 159 (5)| 00:00:02 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM)| :TQ10000 | 500K| 3906K| 159 (5)| 00:00:02 | Q1,00 | P->S | QC (RAND) |
| 3 | PX BLOCK ITERATOR | | 500K| 3906K| 159 (5)| 00:00:02 | Q1,00 | PCWC | |
|* 4 | TABLE ACCESS FULL| T | 500K| 3906K| 159 (5)| 00:00:02 | Q1,00 | PCWP | |
--------------------------------------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
12 recursive calls
0 db block gets
2107 consistent gets
1948 physical reads
0 redo size
389 bytes sent via SQL*Net to client
405 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed
scott@11GR1> set autotrace off;
It took roughly 4,000 I/Os (2,107 consistent gets plus 1,948 physical reads) to answer our query when the optimizer had no information about these combined columns. That information can be supplied to the optimizer using the extended statistics available in Oracle 11gR1.
scott@11GR1> declare
2 l_name varchar2(200);
3 begin
4 l_name := dbms_stats.create_extended_stats(ownname=>null,tabname=>'T',extension=>'(FLAG_1,FLAG_2)');
5 end;
6 /
PL/SQL procedure successfully completed.
Now that we have defined our column group, let's collect statistics for it.
scott@11GR1> begin
2 dbms_stats.gather_table_stats(ownname=>USER,tabname=>'T',method_opt=>'for all columns size skewonly for columns (FLAG_1,FLAG_2) ');
3 end;
4 /
PL/SQL procedure successfully completed.
Now we have collected the multi-column statistics, and they are available to the optimizer.
scott@11GR1> set autotrace traceonly explain statistics;
scott@11GR1> select *
2 from t
3 where flag_1 ='Y'
4 and flag_2 ='Y';
no rows selected
Elapsed: 00:00:00.03
Execution Plan
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 9 | 4 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 | 9 | 4 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | T_IND | 1 | | 3 (0)| 00:00:01 |
-------------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
3 consistent gets
0 physical reads
0 redo size
389 bytes sent via SQL*Net to client
405 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
scott@11GR1> set autotrace off;
Now the plan changes drastically: it took only 3 logical I/Os, scanning the index segment rather than the table segment, which is a far better way to answer this query.
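To see the column group that DBMS_STATS created, you can query the USER_STAT_EXTENSIONS dictionary view (a sketch; the system-generated extension name shown in the comment is illustrative and will differ in each database):

```sql
select extension_name, extension
  from user_stat_extensions
 where table_name = 'T';
-- EXTENSION_NAME                 EXTENSION
-- ------------------------------ ---------------------
-- SYS_STU...  (system-generated) ("FLAG_1","FLAG_2")
```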
Sequence CACHE or NOCACHE?
Sequences created with the CACHE setting perform much better, in both single-user and multi-user environments.
scott@11GR1> create sequence s_cache CACHE 1000;
Sequence created.
scott@11GR1> create sequence s_no_cache NOCACHE ;
Sequence created.
scott@11GR1> declare
2 l_val number;
3 l_start_time number;
4 l_end_time number;
5 begin
6 runstats_pkg.rs_start;
7
8 l_start_time := dbms_utility.get_time;
9
10 for i in 1..1000000
11 loop
12 select s_cache.nextval
13 into l_val
14 from dual;
15 end loop;
16
17 l_end_time := dbms_utility.get_time;
18 dbms_output.put_line (' Total Time taken = '|| ( l_end_time - l_start_time) );
19
20 runstats_pkg.rs_middle;
21
22 l_start_time := dbms_utility.get_time;
23 for i in 1..1000000
24 loop
25 select s_no_cache.nextval
26 into l_val
27 from dual;
28 end loop;
29
30 l_end_time := dbms_utility.get_time;
31 dbms_output.put_line (' Total Time taken = '|| ( l_end_time - l_start_time) );
32 end;
33 /
Total Time taken = 2488
Total Time taken = 20886
PL/SQL procedure successfully completed.
Elapsed: 00:03:53.84
The elapsed times are hugely different: the cached sequence is obviously much faster.
scott@11GR1> exec runstats_pkg.rs_stop(1000000);
Run1 ran in 2488 hsecs
Run2 ran in 27870 hsecs
run 1 ran in 8.93% of the time
Name Run1 Run2 Diff
LATCH.Change Notification Hash 1,291 1,001,329 1,000,038
LATCH.session idle bit 1,345 1,001,645 1,000,300
LATCH.sequence cache 3,002,018 2,000,030 -1,001,988
STAT...consistent gets - exami 1,001 1,020,450 1,019,449
STAT...consistent gets from ca 1,018 1,023,803 1,022,785
STAT...consistent gets 1,018 1,023,803 1,022,785
STAT...calls to kcmgas 1,041 1,035,784 1,034,743
LATCH.MinActiveScn Latch 2,173 2,001,865 1,999,692
STAT...enqueue requests 3,010 2,006,178 2,003,168
STAT...enqueue releases 3,009 2,006,178 2,003,169
STAT...redo entries 2,012 2,008,815 2,006,803
LATCH.session allocation 2,259 2,013,396 2,011,137
STAT...calls to get snapshot s 1,002,008 3,020,923 2,018,915
LATCH.DML lock allocation 12,771 2,054,305 2,041,534
STAT...session pga memory max 131,072 3,137,420 3,006,348
STAT...db block gets 3,090 3,055,841 3,052,751
STAT...db block gets from cach 3,090 3,055,841 3,052,751
STAT...db block changes 4,069 4,009,523 4,005,454
LATCH.enqueue hash chains 6,749 4,023,192 4,016,443
STAT...session logical reads 4,108 4,079,644 4,075,536
STAT...physical read total byt 40,960 4,857,856 4,816,896
STAT...calls to kcmgrs 5,030 5,002,112 4,997,082
LATCH.undo global data 3,096 5,482,793 5,479,697
LATCH.row cache objects 8,238 7,016,514 7,008,276
STAT...recursive calls 1,014,042 15,006,097 13,992,055
LATCH.cache buffers chains 20,510 18,290,102 18,269,592
STAT...physical write total by 0 151,044,096 151,044,096
STAT...undo change vector size 287,024 284,172,572 283,885,548
STAT...redo size 891,976 888,577,148 887,685,172
Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
6,068,802 47,504,769 41,435,967 12.78%
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.23
Remember -- LATCHES are locks, locks imply serialization, serialization implies less scalable -- longer waits. Also only 891K redo for "cache" but 888M redo for "nocache".
scott@11GR1> create sequence s_cache CACHE 1000;
Sequence created.
scott@11GR1> create sequence s_no_cache NOCACHE ;
Sequence created.
scott@11GR1> declare
2 l_val number;
3 l_start_time number;
4 l_end_time number;
5 begin
6 runstats_pkg.rs_start;
7
8 l_start_time := dbms_utility.get_time;
9
10 for i in 1..1000000
11 loop
12 select s_cache.nextval
13 into l_val
14 from dual;
15 end loop;
16
17 l_end_time := dbms_utility.get_time;
18 dbms_output.put_line (' Total Time taken = '|| ( l_end_time - l_start_time) );
19
20 runstats_pkg.rs_middle;
21
22 l_start_time := dbms_utility.get_time;
23 for i in 1..1000000
24 loop
25 select s_no_cache.nextval
26 into l_val
27 from dual;
28 end loop;
29
30 l_end_time := dbms_utility.get_time;
31 dbms_output.put_line (' Total Time taken = '|| ( l_end_time - l_start_time) );
32 end;
33 /
Total Time taken = 2488
Total Time taken = 20886
PL/SQL procedure successfully completed.
Elapsed: 00:03:53.84
The elapsed times are hugely different -- the cached sequence is obviously faster.
scott@11GR1> exec runstats_pkg.rs_stop(1000000);
Run1 ran in 2488 hsecs
Run2 ran in 27870 hsecs
run 1 ran in 8.93% of the time
Name Run1 Run2 Diff
LATCH.Change Notification Hash 1,291 1,001,329 1,000,038
LATCH.session idle bit 1,345 1,001,645 1,000,300
LATCH.sequence cache 3,002,018 2,000,030 -1,001,988
STAT...consistent gets - exami 1,001 1,020,450 1,019,449
STAT...consistent gets from ca 1,018 1,023,803 1,022,785
STAT...consistent gets 1,018 1,023,803 1,022,785
STAT...calls to kcmgas 1,041 1,035,784 1,034,743
LATCH.MinActiveScn Latch 2,173 2,001,865 1,999,692
STAT...enqueue requests 3,010 2,006,178 2,003,168
STAT...enqueue releases 3,009 2,006,178 2,003,169
STAT...redo entries 2,012 2,008,815 2,006,803
LATCH.session allocation 2,259 2,013,396 2,011,137
STAT...calls to get snapshot s 1,002,008 3,020,923 2,018,915
LATCH.DML lock allocation 12,771 2,054,305 2,041,534
STAT...session pga memory max 131,072 3,137,420 3,006,348
STAT...db block gets 3,090 3,055,841 3,052,751
STAT...db block gets from cach 3,090 3,055,841 3,052,751
STAT...db block changes 4,069 4,009,523 4,005,454
LATCH.enqueue hash chains 6,749 4,023,192 4,016,443
STAT...session logical reads 4,108 4,079,644 4,075,536
STAT...physical read total byt 40,960 4,857,856 4,816,896
STAT...calls to kcmgrs 5,030 5,002,112 4,997,082
LATCH.undo global data 3,096 5,482,793 5,479,697
LATCH.row cache objects 8,238 7,016,514 7,008,276
STAT...recursive calls 1,014,042 15,006,097 13,992,055
LATCH.cache buffers chains 20,510 18,290,102 18,269,592
STAT...physical write total by 0 151,044,096 151,044,096
STAT...undo change vector size 287,024 284,172,572 283,885,548
STAT...redo size 891,976 888,577,148 887,685,172
Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
6,068,802 47,504,769 41,435,967 12.78%
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.23
Remember -- latches are locks, locks imply serialization, and serialization implies less scalability -- longer waits. Note also the redo: only 891,976 bytes for "cache" but 888,577,148 bytes for "nocache".
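If you find a NOCACHE sequence in a busy system, the fix is a one-line ALTER. A small sketch (using the sequence names from the test above) to check and change the cache size:

```sql
-- CACHE_SIZE is 0 for a NOCACHE sequence.
select sequence_name, cache_size
  from user_sequences
 where sequence_name in ('S_CACHE', 'S_NO_CACHE');

-- Raise the cache; subsequent NEXTVAL calls use the new setting.
alter sequence s_no_cache cache 1000;
```

The trade-off is that up to CACHE unfetched values can be lost on instance shutdown -- usually acceptable for surrogate keys, where gaps do not matter.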
Friday, July 16, 2010
Secondary Indexes on IOT - Limitations of IOT
Secondary indexes on an IOT can be an issue. Rows in an IOT don't have a true rowid; instead they are addressed by logical row identifiers (logical rowids) that are based on the table's primary key. A logical rowid optionally includes a physical guess, which identifies the block location of the row.
So, if you create, populate and then index an IOT, the secondary index will contain:
a) the columns you indexed
b) a rowid HINT - a rowid GUESS, the rowid observed when the row was first seen by the index; this is the rowid of the leaf block, not a true rowid
c) the logical rowid (in actuality, the primary key of the IOT row being indexed)
As you modify the IOT, rows will move due to block splits, and the secondary indexes will become less efficient as the rowid GUESS goes stale.
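One way to quantify how stale the guesses have become (a sketch using the T_IND index created in the test below, not part of the original test) is to validate the secondary index and read PCT_DIRECT_ACCESS from INDEX_STATS:

```sql
analyze index t_ind validate structure;

-- PCT_DIRECT_ACCESS: percentage of rows whose physical guess
-- still points at the right block; 100 means all guesses valid.
select pct_direct_access
  from index_stats
 where name = 'T_IND';
```

INDEX_STATS holds the result of the most recent VALIDATE STRUCTURE in the current session only, so query it immediately after the ANALYZE.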
Running the test scripts below in Oracle 9iR2, 10gR2 and 11gR1 produced the observations that follow.
CREATE TABLE T
(
owner,
object_name,
object_type,
object_id,
created,
last_ddl_time,
CONSTRAINT t_pk PRIMARY KEY(owner,object_name,object_type,object_id)
)
organization INDEX AS
SELECT owner,
object_name,
object_type,
object_id,
created,
last_ddl_time
FROM all_objects;
create index t_ind on t(object_type,created) nologging;
begin dbms_stats.gather_table_stats(ownname=>USER,tabname=>'T',cascade=>true,method_opt=>'for all indexed columns size 254');
end;
/
select /* first_query */ /*+ INDEX(t,t_ind) */ *
from t
where created between to_date('01/01/2006','mm/dd/yyyy') and sysdate
and object_type = 'INDEX';
update t
set object_name = lower(object_name);
commit;
select /* second_query */ /*+ INDEX(t,t_ind) */ *
from t
where created between to_date('01/01/2006','mm/dd/yyyy') and sysdate
and object_type = 'INDEX';
alter index t_ind rebuild;
select /* third_query */ /*+ INDEX(t,t_ind) */ *
from t
where created between to_date('01/01/2006','mm/dd/yyyy') and sysdate
and object_type = 'INDEX';
test@9iR2> select * from v$version;
BANNER
--------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
PL/SQL Release 9.2.0.8.0 - Production
CORE 9.2.0.8.0 Production
TNS for 32-bit Windows: Version 9.2.0.8.0 - Production
NLSRTL Version 9.2.0.8.0 - Production
test@9iR2> select /* first_query */ /*+ INDEX(t,t_ind) */ *
2 from t
3 where created between to_date('01/01/2006','mm/dd/yyyy') and sysdate
4 and object_type = 'INDEX';
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=958 Card=635 Bytes=36830)
1 0 FILTER
2 1 INDEX (UNIQUE SCAN) OF 'T_PK' (UNIQUE) (Cost=9 Card=635 Bytes=36830)
3 2 INDEX (RANGE SCAN) OF 'T_IND' (NON-UNIQUE) (Cost=9 Card=635)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
696 consistent gets
0 physical reads
0 redo size
33669 bytes sent via SQL*Net to client
427 bytes received via SQL*Net from client
6 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
682 rows processed
As you modify the IOT, rows move due to block splits, and the secondary index becomes less efficient as the rowid GUESS goes stale:
test@9iR2> select /* second_query */ /*+ INDEX(t,t_ind) */ *
2 from t
3 where created between to_date('01/01/2006','mm/dd/yyyy') and sysdate
4 and object_type = 'INDEX';
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=958 Card=635 Bytes=36830)
1 0 FILTER
2 1 INDEX (UNIQUE SCAN) OF 'T_PK' (UNIQUE) (Cost=9 Card=635 Bytes=36830)
3 2 INDEX (RANGE SCAN) OF 'T_IND' (NON-UNIQUE) (Cost=9 Card=635)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
819 consistent gets
33 physical reads
3888 redo size
33669 bytes sent via SQL*Net to client
427 bytes received via SQL*Net from client
6 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
682 rows processed
So, in a read/write system (e.g., not a warehouse), the rowid guesses go stale and we end up doing N range scans - one to find the entries in the secondary index and then N-1 more to retrieve the N-1 rows from the IOT. Then you rebuild the secondary index and all is well.
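A full rebuild is not the only cure. Assuming all you need is to refresh the stale guesses, Oracle also supports repairing them in place:

```sql
-- Recompute the physical guesses stored in the secondary index
-- without rebuilding the index itself.
alter index t_ind update block references;
```

This is typically cheaper than a rebuild, since the index keys themselves are left untouched.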
test@9iR2> select /* third_query */ /*+ INDEX(t,t_ind) */ *
2 from t
3 where created between to_date('01/01/2006','mm/dd/yyyy') and sysdate
4 and object_type = 'INDEX';
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=958 Card=635 Bytes=36830)
1 0 FILTER
2 1 INDEX (UNIQUE SCAN) OF 'T_PK' (UNIQUE) (Cost=9 Card=635 Bytes=36830)
3 2 INDEX (RANGE SCAN) OF 'T_IND' (NON-UNIQUE) (Cost=9 Card=635)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
696 consistent gets
0 physical reads
0 redo size
33669 bytes sent via SQL*Net to client
427 bytes received via SQL*Net from client
6 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
682 rows processed
scott@10GR2> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
scott@10GR2> select /* first_query */ /*+ INDEX(t,t_ind) */ *
2 from t
3 where created between to_date('01/01/2006','mm/dd/yyyy') and sysdate
4 and object_type = 'INDEX';
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 372 | 21576 | 935 (1)| 00:00:12 |
|* 1 | FILTER | | | | | |
|* 2 | INDEX UNIQUE SCAN| T_PK | 372 | 21576 | 935 (1)| 00:00:12 |
|* 3 | INDEX RANGE SCAN| T_IND | 372 | | 6 (0)| 00:00:01 |
----------------------------------------------------------------------------
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1067 consistent gets
0 physical reads
0 redo size
52440 bytes sent via SQL*Net to client
462 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1049 rows processed
scott@10GR2> select /* second_query */ /*+ INDEX(t,t_ind) */ *
2 from t
3 where created between to_date('01/01/2006','mm/dd/yyyy') and sysdate
4 and object_type = 'INDEX';
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 372 | 21576 | 935 (1)| 00:00:12 |
|* 1 | FILTER | | | | | |
|* 2 | INDEX UNIQUE SCAN| T_PK | 372 | 21576 | 935 (1)| 00:00:12 |
|* 3 | INDEX RANGE SCAN| T_IND | 372 | | 6 (0)| 00:00:01 |
----------------------------------------------------------------------------
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
3682 consistent gets
0 physical reads
0 redo size
52440 bytes sent via SQL*Net to client
462 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1049 rows processed
scott@10GR2> select /* third_query */ /*+ INDEX(t,t_ind) */ *
2 from t
3 where created between to_date('01/01/2006','mm/dd/yyyy') and sysdate
4 and object_type = 'INDEX';
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 372 | 21576 | 960 (1)| 00:00:12 |
|* 1 | FILTER | | | | | |
|* 2 | INDEX UNIQUE SCAN| T_PK | 372 | 21576 | 960 (1)| 00:00:12 |
|* 3 | INDEX RANGE SCAN| T_IND | 372 | | 6 (0)| 00:00:01 |
----------------------------------------------------------------------------
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1067 consistent gets
0 physical reads
0 redo size
52440 bytes sent via SQL*Net to client
462 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1049 rows processed
scott@11GR1> select * from v$version;
BANNER
--------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
PL/SQL Release 11.1.0.6.0 - Production
CORE 11.1.0.6.0 Production
TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production
scott@11GR1> select /* first_query */ /*+ INDEX(t,t_ind) */ *
2 from t
3 where created between to_date('01/01/2006','mm/dd/yyyy') and sysdate
4 and object_type = 'INDEX';
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3507 | 202K| 8700 (1)| 00:01:45 |
|* 1 | FILTER | | | | | |
|* 2 | INDEX UNIQUE SCAN| T_PK | 3507 | 202K| 8700 (1)| 00:01:45 |
|* 3 | INDEX RANGE SCAN| T_IND | 3507 | | 34 (0)| 00:00:01 |
----------------------------------------------------------------------------
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
3344 consistent gets
0 physical reads
0 redo size
150077 bytes sent via SQL*Net to client
647 bytes received via SQL*Net from client
23 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3292 rows processed
scott@11GR1> select /* second_query */ /*+ INDEX(t,t_ind) */ *
2 from t
3 where created between to_date('01/01/2006','mm/dd/yyyy') and sysdate
4 and object_type = 'INDEX';
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3507 | 202K| 8700 (1)| 00:01:45 |
|* 1 | FILTER | | | | | |
|* 2 | INDEX UNIQUE SCAN| T_PK | 3507 | 202K| 8700 (1)| 00:01:45 |
|* 3 | INDEX RANGE SCAN| T_IND | 3507 | | 34 (0)| 00:00:01 |
----------------------------------------------------------------------------
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
10920 consistent gets
0 physical reads
0 redo size
150093 bytes sent via SQL*Net to client
647 bytes received via SQL*Net from client
23 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3292 rows processed
scott@11GR1> select /* third_query */ /*+ INDEX(t,t_ind) */ *
2 from t
3 where created between to_date('01/01/2006','mm/dd/yyyy') and sysdate
4 and object_type = 'INDEX';
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3507 | 202K| 9008 (1)| 00:01:49 |
|* 1 | FILTER | | | | | |
|* 2 | INDEX UNIQUE SCAN| T_PK | 3507 | 202K| 9008 (1)| 00:01:49 |
|* 3 | INDEX RANGE SCAN| T_IND | 3507 | | 34 (0)| 00:00:01 |
----------------------------------------------------------------------------
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
3344 consistent gets
0 physical reads
0 redo size
150093 bytes sent via SQL*Net to client
647 bytes received via SQL*Net from client
23 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3292 rows processed
Hence secondary indexes can be an issue with IOTs. Still, in many cases secondary indexes provide fast and efficient access to an IOT through columns other than the primary key.
rajesh@11GR2> create table t(
2 owner,
3 object_name,
4 object_type,
5 object_id,
6 created,
7 constraint t_pk primary key(object_id)
8 )organization index
9 nologging
10 as
11 select owner,object_name,object_type,object_id,created
12 from all_objects;
Table created.
Elapsed: 00:00:02.70
rajesh@11GR2>
rajesh@11GR2> create index t_ind on t(created) nologging;
Index created.
Elapsed: 00:00:00.65
rajesh@11GR2>
rajesh@11GR2> begin
2 dbms_stats.gather_table_stats(
3 ownname=>user,
4 tabname=>'T',
5 cascade=>true,
6 method_opt=>'for all columns size 254',
7 estimate_percent=>100);
8 end;
9 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:01.64
rajesh@11GR2>
rajesh@11GR2>
rajesh@11GR2> set autotrace traceonly explain statistics;
rajesh@11GR2>
rajesh@11GR2> select * from t
2 where created = to_date('01/07/2010','dd/mm/yyyy')
3 /
no rows selected
Elapsed: 00:00:00.00
Execution Plan
----------------------------------------------------------
Plan hash value: 2448192542
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 44 | 2288 | 1 (0)| 00:00:01 |
|* 1 | INDEX UNIQUE SCAN| T_PK | 44 | 2288 | 1 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN| T_IND | 44 | | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("CREATED"=TO_DATE(' 2010-07-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
2 - access("CREATED"=TO_DATE(' 2010-07-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
2 consistent gets
0 physical reads
0 redo size
539 bytes sent via SQL*Net to client
404 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
rajesh@11GR2>
Thursday, July 15, 2010
PLS-00801: internal error [*** ASSERT at file pdw4.c - Bug in Oracle 10g
Learnt something accidentally while working on a project: a bug (Bug 5212759) in Oracle 10g that was fixed in 11gR1.
scott@10GR2> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
scott@10GR2> create type foo is object( x number);
2 /
Type created.
scott@10GR2> create type too is table of foo;
2 /
Type created.
scott@10GR2> declare
2 t1 too := new too();
3 t2 too ;
4 begin
5 select foo(level)
6 bulk collect into t1
7 from dual
8 connect by level <= 5;
9
10 t2 := t1 multiset union t2;
11 end;
12 /
t2 := t1 multiset union t2;
*
ERROR at line 10:
ORA-06550: line 10, column 2:
PLS-00801: internal error [*** ASSERT at file pdw4.c, line 2076; Type 0x0AE3E5F4 has no MAP method.; _anon__2F37FA0C__AB[10, 2]]
scott@11GR1> select * from v$version;
BANNER
--------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
PL/SQL Release 11.1.0.6.0 - Production
CORE 11.1.0.6.0 Production
TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production
scott@11GR1> create type foo is object( x number);
2 /
Type created.
scott@11GR1> create type too is table of foo;
2 /
Type created.
scott@11GR1> declare
2 t1 too := new too();
3 t2 too ;
4 begin
5 select foo(level)
6 bulk collect into t1
7 from dual
8 connect by level <= 5;
9
10 t2 := t1 multiset union t2;
11 end;
12 /
PL/SQL procedure successfully completed.
A workaround in Oracle 10g is to add a dummy MAP method to the object type foo.
scott@10GR2> drop type too;
Type dropped.
scott@10GR2> drop type foo;
Type dropped.
scott@10GR2> create type foo is object( x number,
2 map member function mem return number);
3 /
Type created.
scott@10GR2> create type too is table of foo;
2 /
Type created.
scott@10GR2> declare
2 t1 too := new too();
3 t2 too ;
4 begin
5 select foo(level)
6 bulk collect into t1
7 from dual
8 connect by level <= 5;
9
10 t2 := t1 multiset union t2;
11 end;
12 /
PL/SQL procedure successfully completed.
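Note that the MAP function is only declared above, never implemented; MULTISET UNION does not actually invoke it, so the bare declaration is enough to get past the assert. If the comparison semantics were ever genuinely needed (for example with MULTISET UNION DISTINCT, or when sorting the collection), the type would also need a body. A minimal sketch (this body is my illustration, not part of the original test case) simply returns the attribute:

```sql
create or replace type body foo as
  map member function mem return number is
  begin
    return x;  -- order/compare foo instances by their x attribute
  end;
end;
/
```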
Thursday, July 8, 2010
Dynamic Sampling
Dynamic sampling first became available in Oracle9i Database Release 2. It is the ability of the cost-based optimizer (CBO) to sample the tables a query references during a hard parse, to determine better default statistics for unanalyzed segments, and to verify its “guesses.” This sampling takes place only at hard parse time and is used to dynamically generate better statistics for the optimizer to use, hence the name dynamic sampling.
The optimizer uses a variety of inputs to come up with a plan. It uses any and all constraints defined on the table; system statistics—information about your server’s I/O speeds, CPU speed, and the like; and statistics gathered from the segments involved in the query. The optimizer uses statistics to estimate cardinalities—the number of rows each step in a given plan is expected to return—and those cardinalities are a major variable in computing the cost of a query. When cardinalities are incorrectly estimated, the optimizer may choose an inefficient query plan. The No. 1, and some might say only, reason for an inefficient plan’s being generated by the optimizer is inaccurate cardinality estimation: “right cardinality equals right plan; wrong cardinality equals wrong plan.”
scott@10GR2> create table t
2 as
3 select object_name,object_id
4 from all_objects
5 /
Table created.
scott@10GR2> select TABLE_NAME,NUM_ROWS,BLOCKS,EMPTY_BLOCKS,AVG_SPACE
2 from user_tab_statistics
3 WHERE TABLE_NAME ='T'
4 /
TABLE_NAME NUM_ROWS BLOCKS EMPTY_BLOCKS AVG_SPACE
------------------------------ ---------- ---------- ------------ ----------
T
scott@10GR2> select count(*) from T;
COUNT(*)
----------
55859
scott@10GR2> show parameter optimizer_dynamic;
NAME TYPE VALUE
------------------------------------ ----------- --------
optimizer_dynamic_sampling integer 2
Disabling dynamic sampling shows the default cardinality estimate:
scott@10GR2> select /*+ dynamic_sampling(t,0) */ * from t;
Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 22136 | 648K| 61 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| T | 22136 | 648K| 61 (0)| 00:00:01 |
--------------------------------------------------------------------------
The estimated cardinality is 22136, which is very far from the real cardinality of 55859. If I permit dynamic sampling, I get a much more realistic cardinality estimate.
scott@10GR2> select * from t;
Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 58007 | 1699K| 62 (2)| 00:00:01 |
| 1 | TABLE ACCESS FULL| T | 58007 | 1699K| 62 (2)| 00:00:01 |
--------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement
So, where would dynamic sampling be useful? First and foremost, dynamic sampling is useful when you are accessing any table that has been created and loaded but not yet analyzed (as I did above).
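One way to check whether a segment has ever been analyzed (consistent with the empty USER_TAB_STATISTICS row above) is the LAST_ANALYZED column; a quick sketch:

```sql
select table_name, num_rows, last_analyzed
from user_tables
where table_name = 'T';
-- LAST_ANALYZED is NULL for a table that has never been analyzed
```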
In addition to providing the optimizer with necessary statistics when it is accessing unanalyzed segments, dynamic sampling may also help the optimizer to validate any of its guesses. To demonstrate this, I’ll create a table with some very specific data. Note that in this table, if the FLAG1 column has a value of Y, the FLAG2 column will have a value of N, and vice versa. All the data is either Y, N or N, Y—there are no Y, Y records and no N, N records.
scott@10GR2> create table t
2 as
3 select decode(mod(rownum,2),0,'N','Y') as flag1,
4 decode(mod(rownum,2),0,'Y','N') as flag2,
5 a.*
6 from all_objects a
7 /
Table created.
scott@10GR2> create index t_ind on t(flag1,flag2);
Index created.
scott@10GR2> begin
2 dbms_stats.gather_table_stats(ownname=>USER,tabname=>'T',cascade=>true,method_opt=>'for all indexed columns size 254');
3 end;
4 /
PL/SQL procedure successfully completed.
scott@10GR2> select count(*), count(*)/2,count(*)/2/2 from t;
COUNT(*) COUNT(*)/2 COUNT(*)/2/2
---------- ---------- ------------
55859 27929.5 13964.75
scott@10GR2> select * from t where flag1='N';
Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 27357 | 2618K| 184 (2)| 00:00:03 |
|* 1 | TABLE ACCESS FULL| T | 27357 | 2618K| 184 (2)| 00:00:03 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("FLAG1"='N')
A good cardinality estimate: about half of the data.
scott@10GR2> select * from t where flag2='N';
Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 28502 | 2727K| 184 (2)| 00:00:03 |
|* 1 | TABLE ACCESS FULL| T | 28502 | 2727K| 184 (2)| 00:00:03 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("FLAG2"='N')
Again, a good cardinality estimate: about half of the data.
scott@10GR2> select * from t where flag1='N' and flag2='N';
Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 13959 | 1335K| 184 (2)| 00:00:03 |
|* 1 | TABLE ACCESS FULL| T | 13959 | 1335K| 184 (2)| 00:00:03 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("FLAG1"='N' AND "FLAG2"='N')
A poor cardinality estimate: the optimizer expects more than a quarter of the data, although the query actually returns no rows, so it makes a wrong decision and chooses a poor access path. When the same query is run with dynamic sampling at level 3, the optimizer gets a far better cardinality estimate and finds the optimal execution plan.
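The underlying cause is the optimizer's independence assumption: without extra information it multiplies the individual selectivities, sel(FLAG1='N') * sel(FLAG2='N') ~ 0.5 * 0.5 = 0.25 of the rows, but the two columns are perfectly correlated. A quick way to see the correlation in the data itself:

```sql
select flag1, flag2, count(*)
from t
group by flag1, flag2;
-- only the (Y,N) and (N,Y) combinations exist; (N,N) never occurs,
-- which is why the real cardinality of the combined predicate is zero
```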
scott@10GR2> select /*+ dynamic_sampling(t,3) */ * from t where flag1='N' and flag2='N';
Execution Plan
----------------------------------------------------------
Plan hash value: 1376202287
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 5 | 490 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| T | 5 | 490 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | T_IND | 5 | | 1 (0)| 00:00:01 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("FLAG1"='N' AND "FLAG2"='N')
Note
-----
- dynamic sampling used for this statement
Detailed information about the levels of dynamic sampling is available in the product documentation:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#i43032
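Dynamic sampling can be controlled per session as well as per statement (levels range from 0, which disables it, to 10); a sketch of both forms:

```sql
-- raise the sampling level for every statement in the session
alter session set optimizer_dynamic_sampling = 3;

-- or target a single table in a single statement via the hint
select /*+ dynamic_sampling(t 3) */ count(*)
from t
where flag1 = 'N' and flag2 = 'N';
```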
Tuesday, July 6, 2010
XML Type Vs Clobs
Before the XMLType datatype was released in Oracle 9i Release 1, people stored their XML data in VARCHAR2 columns or character large objects (CLOBs). Because free-form text data was traditionally stored in VARCHAR2s or CLOBs, it seemed a logical fit for XML as well.
With the first release of Oracle9i Database, when you stored an instance of the XMLType datatype, the underlying storage model was a CLOB value. The entire document was read and stored as-is in a CLOB object in the database.
CREATE TABLE "SCOTT"."T"
( "X" "SCOTT"."XMLTYPE"
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS NOLOGGING
STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "TOOLS"
XMLTYPE COLUMN "X" STORE AS CLOB (
TABLESPACE "TOOLS" ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 10
NOCACHE NOLOGGING
STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT))
scott@10GR2> create table t_xmlclobs(x clob);
Table created.
scott@10GR2> create table t_xmltype (x xmltype);
Table created.
scott@10GR2> declare
2 l_start_time number;
3 begin
4 l_start_time := dbms_utility.get_time;
5 for i in 1..10000
6 loop
7 insert into t_xmlclobs(x) values ('');
8 end loop;
9 commit;
10
11 dbms_output.put_line(' Total Time for Clobs='|| ( dbms_utility.get_time - l_start_time) );
12
13 l_start_time := dbms_utility.get_time;
14 for i in 1..10000
15 loop
16 insert into t_xmltype(x) values ('');
17 end loop;
18 commit;
19
20 dbms_output.put_line(' Total Time for Xmltype ='|| ( dbms_utility.get_time - l_start_time) );
21 end;
22 /
Total Time for Clobs = 42
Total Time for Xmltype = 1641
PL/SQL procedure successfully completed.
When an XMLType instance is created, an XML syntax check is performed—regardless of the storage architecture—to ensure that the XML document is well formed.
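The syntax check is easy to observe with the two tables above: the CLOB column accepts any text, while constructing an XMLType from text that is not well formed raises a parse error (the exact ORA- message varies by release); a sketch:

```sql
-- accepted: a CLOB is just text, no parsing happens
insert into t_xmlclobs(x) values ('<a><b></a>');

-- rejected: the XMLType constructor parses the document and
-- raises an error because <b> is never closed
insert into t_xmltype(x) values (xmltype('<a><b></a>'));
```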
Sunday, July 4, 2010
PLS-00201: identifier 'x' must be declared - Roles and Procedures
Roles are never enabled during the execution of a definer's-rights procedure; the exception is invoker's rights, a feature introduced in Oracle8i, release 8.1.
This fact is documented in the Application Developer's Guide:
To create a stand-alone procedure or function, or package specification or body, you must meet the following prerequisites:
You must have the CREATE PROCEDURE system privilege to create a procedure or package in your schema, or the CREATE ANY PROCEDURE system privilege to create a procedure or package in
another user's schema.
Attention: To create without errors, that is, to compile the procedure or package successfully, requires the following additional privileges: The owner of the procedure or package must have been explicitly granted the necessary object privileges for all objects referenced within the body of the code; the owner cannot have obtained required privileges through roles.
scott@10GR2> drop table t purge;
Table dropped.
scott@10GR2> create table t(x number);
Table created.
scott@10GR2> create role r1;
Role created.
scott@10GR2> grant select on t to r1;
Grant succeeded.
scott@10GR2> grant r1 to hr;
Grant succeeded.
scott@10GR2> connect hr/hr
Connected.
hr@10GR2> create synonym t for scott.t;
Synonym created.
hr@10GR2> create or replace procedure p1 as
2 l_val scott.t%rowtype;
3 begin
4 dbms_output.put_line (' procedure p1 invoked ');
5 end;
6 /
Warning: Procedure created with compilation errors.
hr@10GR2> show err;
Errors for PROCEDURE P1:
LINE/COL ERROR
-------- -----------------------------------------------------------------
2/8 PL/SQL: Item ignored
2/8 PLS-00201: identifier 'SCOTT.T' must be declared
hr@10GR2> create or replace procedure p2 as
2 l_val t%rowtype;
3 begin
4 dbms_output.put_line (' procedure p2 invoked ');
5 end;
6 /
Warning: Procedure created with compilation errors.
hr@10GR2> show err;
Errors for PROCEDURE P2:
LINE/COL ERROR
-------- -----------------------------------------------------------------
2/8 PL/SQL: Item ignored
2/8 PLS-00201: identifier 'T' must be declared
As stated in the product documentation, this is solved by granting the necessary privileges directly to the owner rather than through roles.
hr@10GR2> connect scott/tiger
Connected.
scott@10GR2> grant select on t to hr;
Grant succeeded.
scott@10GR2> connect hr/hr
Connected.
hr@10GR2> alter procedure p1 compile;
Procedure altered.
hr@10GR2> alter procedure p2 compile;
Procedure altered.
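The invoker's-rights exception mentioned above is another option: with AUTHID CURRENT_USER, roles are enabled at run time, but name resolution also moves to run time, so the reference has to go through dynamic SQL. A sketch under that assumption (the procedure name p3 is illustrative, not from the original test):

```sql
create or replace procedure p3 authid current_user as
  l_cnt number;
begin
  -- resolved and privilege-checked at run time, with roles enabled,
  -- so the SELECT granted via role r1 is sufficient here
  execute immediate 'select count(*) from scott.t' into l_cnt;
  dbms_output.put_line(l_cnt);
end;
/
```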
Thursday, July 1, 2010
MAXTRANS - Deprecated in 10g
Learned something new from the Oracle product documentation: the physical storage parameter MAXTRANS is deprecated in Oracle 10g and defaults to 255. Here is a test case to demonstrate that.
test@9iR2> create table t(x number) maxtrans 2;
Table created.
test@9iR2> select dbms_metadata.get_ddl('TABLE','T') from dual;
DBMS_METADATA.GET_DDL('TABLE','T')
--------------------------------------------------------------------------
CREATE TABLE "TEST"."T"
( "X" NUMBER
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 2 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "SYSTEM"
test@9iR2> SELECT tablespace_name, segment_space_management
2 FROM dba_tablespaces
3 where tablespace_name ='SYSTEM';
TABLESPACE_NAME SEGMEN
------------------------------ ------
SYSTEM MANUAL
test@9iR2> exec show_space(USER,'T');
l_free_blks******************* 0
l_total_blocks**************** 8
l_total_bytes***************** 65536
l_unused_blocks*************** 7
l_unused_bytes**************** 57344
l_last_used_extent_file_id**** 1
l_last_used_extent_block_id*** 55793
l_last_used_block************* 1
PL/SQL procedure successfully completed.
The unused block count is now 7, since one block is allocated as the header block.
test@9iR2> insert into t
2 select level
3 from dual
4 connect by level <= 3;
3 rows created.
test@9iR2> commit;
Commit complete.
test@9iR2> exec show_space(USER,'T');
l_free_blks******************* 1
l_total_blocks**************** 8
l_total_bytes***************** 65536
l_unused_blocks*************** 6
l_unused_bytes**************** 49152
l_last_used_extent_file_id**** 1
l_last_used_extent_block_id*** 55793
l_last_used_block************* 2
PL/SQL procedure successfully completed.
Now one block is filled with table data and one block is allocated as the header block.
--Session 1
test@9iR2> update t set x = x+1 where x = 1;
1 row updated.
--Session 2
test@9iR2> update t set x = x + 1 where x = 2;
1 row updated.
--Session 3
test@9iR2> update t set x = x+1 where x = 3; <<<=== This session is blocked because MAXTRANS is set to 2.
scott@10GR2> create table t(x number) maxtrans 2;
Table created.
scott@10GR2> select dbms_metadata.get_ddl('TABLE','T') from dual;
DBMS_METADATA.GET_DDL('TABLE','T')
-----------------------------------------------------------------------------
CREATE TABLE "SCOTT"."T"
( "X" NUMBER
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS NOLOGGING
STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "TOOLS"
MAXTRANS defaults to 255 in Oracle 10g, even though 2 was explicitly specified in the CREATE TABLE statement.
scott@10GR2> SELECT tablespace_name, segment_space_management
2 FROM dba_tablespaces
3 where tablespace_name ='TOOLS';
TABLESPACE_NAME SEGMEN
------------------------------ ------
TOOLS AUTO
scott@10GR2> insert into t
2 select level
3 from dual
4 connect by level <= 3;
3 rows created.
scott@10GR2> commit;
Commit complete.
--Session 1
scott@10GR2> update t set x = x+1 where x = 1;
1 row updated.
--Session 2
scott@10GR2> update t set x = x + 1 where x = 2;
1 row updated.
--Session 3
scott@10GR2> update t set x = x+1 where x = 3;
1 row updated.
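The data dictionary tells the same story (MAXTRANS is exposed as the MAX_TRANS column); a sketch:

```sql
select table_name, ini_trans, max_trans
from user_tables
where table_name = 'T';
-- in 10g, MAX_TRANS is expected to show 255 regardless of the value
-- requested, matching the DBMS_METADATA output above
```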
test@9iR2> create table t(x number) maxtrans 2;
Table created.
test@9iR2> select dbms_metadata.get_ddl('TABLE','T') from dual;
DBMS_METADATA.GET_DDL('TABLE','T')
--------------------------------------------------------------------------
CREATE TABLE "TEST"."T"
( "X" NUMBER
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 2 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "SYSTEM"
test@9iR2> SELECT tablespace_name, segment_space_management
2 FROM dba_tablespaces
3 where tablespace_name ='SYSTEM';
TABLESPACE_NAME SEGMEN
------------------------------ ------
SYSTEM MANUAL
test@9iR2> exec show_space(USER,'T');
l_free_blks******************* 0
l_total_blocks**************** 8
l_total_bytes***************** 65536
l_unused_blocks*************** 7
l_unused_bytes**************** 57344
l_last_used_extent_file_id**** 1
l_last_used_extent_block_id*** 55793
l_last_used_block************* 1
PL/SQL procedure successfully completed.
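The show_space procedure used here is Tom Kyte's well-known utility, which wraps the supplied DBMS_SPACE package. A minimal sketch of the same measurement using DBMS_SPACE directly (assuming the table belongs to the current schema and serveroutput is on) might look like:

```sql
-- Rough equivalent of show_space using the supplied DBMS_SPACE package.
-- Assumes the segment belongs to the current user; adjust the owner as needed.
DECLARE
  l_total_blocks   NUMBER;
  l_total_bytes    NUMBER;
  l_unused_blocks  NUMBER;
  l_unused_bytes   NUMBER;
  l_file_id        NUMBER;
  l_block_id       NUMBER;
  l_last_block     NUMBER;
BEGIN
  DBMS_SPACE.UNUSED_SPACE(
    segment_owner             => USER,
    segment_name              => 'T',
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);
  DBMS_OUTPUT.PUT_LINE('total blocks  = ' || l_total_blocks);
  DBMS_OUTPUT.PUT_LINE('unused blocks = ' || l_unused_blocks);
END;
/
```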
Notice that the unused block count is 7: of the 8 blocks in the initial extent, one is taken by the segment header.
test@9iR2> insert into t
2 select level
3 from dual
4 connect by level <= 3;
3 rows created.
test@9iR2> commit;
Commit complete.
test@9iR2> exec show_space(USER,'T');
l_free_blks******************* 1
l_total_blocks**************** 8
l_total_bytes***************** 65536
l_unused_blocks*************** 6
l_unused_bytes**************** 49152
l_last_used_extent_file_id**** 1
l_last_used_extent_block_id*** 55793
l_last_used_block************* 2
PL/SQL procedure successfully completed.
Now one block holds the table data and one block is still taken by the segment header.
--Session 1
test@9iR2> update t set x = x+1 where x = 1;
1 row updated.
--Session 2
test@9iR2> update t set x = x + 1 where x = 2;
1 row updated.
--Session 3
test@9iR2> update t set x = x+1 where x = 3; <<<=== This session hangs: with MAXTRANS 2, the block's interested transaction list (ITL) can hold only two concurrent transactions, so the third must wait.
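While session 3 hangs, a fourth session can look for the ITL wait. A diagnostic sketch (in 9iR2 the wait surfaces as a generic 'enqueue' on a TX lock held in share mode; from 10g onward it has the dedicated name 'enq: TX - allocate ITL entry'):

```sql
-- Spot the session waiting for an ITL slot in the hot block.
SELECT sid, event, seconds_in_wait
FROM   v$session_wait
WHERE  event IN ('enqueue', 'enq: TX - allocate ITL entry');
```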
scott@10GR2> create table t(x number) maxtrans 2;
Table created.
scott@10GR2> select dbms_metadata.get_ddl('TABLE','T') from dual;
DBMS_METADATA.GET_DDL('TABLE','T')
-----------------------------------------------------------------------------
CREATE TABLE "SCOTT"."T"
( "X" NUMBER
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS NOLOGGING
STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "TOOLS"
MAXTRANS shows as 255 in Oracle 10g even though it was explicitly specified as 2 in the CREATE TABLE statement: 10g deprecates the MAXTRANS clause and silently ignores it, always allowing up to 255 concurrent transactions per block.
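The same can be seen in the dictionary; for the table just created, a quick check would be:

```sql
-- Confirm what the dictionary recorded for the table we just created.
-- In 10g, MAX_TRANS comes back as 255 regardless of the clause supplied.
SELECT table_name, ini_trans, max_trans
FROM   user_tables
WHERE  table_name = 'T';
```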
scott@10GR2> SELECT tablespace_name, segment_space_management
2 FROM dba_tablespaces
3 where tablespace_name ='TOOLS';
TABLESPACE_NAME SEGMEN
------------------------------ ------
TOOLS AUTO
scott@10GR2> insert into t
2 select level
3 from dual
4 connect by level <= 3;
3 rows created.
scott@10GR2> commit;
Commit complete.
--Session 1
scott@10GR2> update t set x = x+1 where x = 1;
1 row updated.
--Session 2
scott@10GR2> update t set x = x + 1 where x = 2;
1 row updated.
--Session 3
scott@10GR2> update t set x = x+1 where x = 3;
1 row updated.
Unlike 9iR2, all three updates succeed: since 10g ignores MAXTRANS, the block's ITL can grow on demand (up to 255 entries, space permitting) and the third transaction is no longer blocked.
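Since MAXTRANS can no longer cap concurrency in 10g, INITRANS is the ITL knob that still matters: it pre-allocates ITL slots so that hot blocks do not have to grow the list (or wait for free block space) under load. A hedged sketch, using a hypothetical table name:

```sql
-- INITRANS pre-allocates ITL entries in each new block.
CREATE TABLE t_hot (x NUMBER) INITRANS 5;

-- For an existing table, blocks formatted after the change
-- pick up the new setting.
ALTER TABLE t_hot INITRANS 10;
```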