Thursday, May 13, 2010

Export and Import (Data Pump) Over the Network - Schema Refresh.

Here's how to set up an Oracle Data Pump import using a network-based approach:
  • Set up a database link from a satellite database (database 2) to the master database (database 1).
  • Create the import as a SQL script that runs weekly on the satellite database, importing selected tables from the source system and filtering and transforming them as needed.

First, create a user, TEST_DPUMP, to receive the data pulled from the master database schema (LONGDESC).
 
system@ORCL> create user test_dpump identified by test_dpump;

User created.

system@ORCL> grant connect, resource, dba to test_dpump;

Grant succeeded.

system@ORCL>
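Granting DBA, as above, is the quick route. A leaner alternative is to grant only what a network-mode import with REMAP_SCHEMA needs; this is a sketch, and the exact role required can vary by Oracle version:

```sql
-- Hypothetical minimal grants instead of DBA, for the same TEST_DPUMP user.
grant create session, create table, create database link to test_dpump;

-- REMAP_SCHEMA from another user's schema generally requires the
-- import-full role on the importing side.
grant imp_full_database to test_dpump;

-- The user also needs tablespace quota to receive the rows.
alter user test_dpump quota unlimited on users;
```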

test_dpump@ORCL> create database link test_dpump_to_longdes connect to LONGDESC identified by password using 'orcl';

Database link created.

test_dpump@ORCL> select * from dual@test_dpump_to_longdes;

D
-
X

test_dpump@ORCL>
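The link above relies on a tnsnames entry named 'orcl' on the satellite database. If no such entry exists, the link can embed the full connect descriptor instead; a sketch with hypothetical host, port, and service values:

```sql
-- Substitute your master database's host, port, and service name.
create database link test_dpump_to_longdes
  connect to longdesc identified by password
  using '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=master-host)(PORT=1521))
          (CONNECT_DATA=(SERVICE_NAME=orcl)))';
```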




longdesc@ORCL> select count(*) from user_tables;

  COUNT(*)
----------
       136



  
Script for Schema Refresh from Master Database.

DECLARE
    ind NUMBER;                 -- Loop index number
    job_handle number;            -- Data Pump job handle
    percent_done number := 0;    -- Percentage of job complete
    job_state  varchar2(100) := 'UNDEFINED';    --    Keeps track of job state
    le ku$_LogEntry;            -- work-in-progress and error messages
    js ku$_JobStatus;           -- Job status from get_status
    jd ku$_JobDesc;             -- Job description from get_status
    sts ku$_Status;             -- Status object returned by get_status
BEGIN
   
    -- Open a network-mode import job that pulls data over the database link
    -- from the source database into this (destination) database.
    job_handle := dbms_datapump.open ( operation   => 'IMPORT',
                                       job_mode    => 'TABLE',
                                       remote_link => 'TEST_DPUMP_TO_LONGDES',
                                       job_name    => 'TEST_DPUMP_REFRESH');
   
    -- Set the degree of parallelism to 1 for this refresh.
    dbms_datapump.set_parallel ( handle => job_handle,
                                 degree => 1 );
   
    -- Remap objects from the source schema LONGDESC to the destination schema TEST_DPUMP.
    dbms_datapump.metadata_remap ( handle    => job_handle,
                                   name      => 'REMAP_SCHEMA',
                                   old_value => 'LONGDESC',
                                   value     => 'TEST_DPUMP' );
   
    -- If a table already exists in the destination schema, skip it
    -- (SKIP is the default action).
    dbms_datapump.set_parameter( handle => job_handle,
                                 name   => 'TABLE_EXISTS_ACTION',
                                 value  => 'SKIP' );
   
    -- Use optimizer statistics (rather than block counts) to estimate the job size.
    dbms_datapump.set_parameter( handle => job_handle,
                                 name   => 'ESTIMATE',
                                 value  => 'STATISTICS' );
   
    -- Start the job. An exception is returned if something is not set up properly.
    dbms_datapump.start_job (handle => job_handle);
   
    -- The import job should now be running. This loop monitors the job until it
    -- completes, displaying progress and any work-in-progress or error messages.

        while ( job_state <> 'COMPLETED' and job_state <> 'STOPPED')
        loop
            dbms_datapump.get_status(  job_handle,
                                       DBMS_DATAPUMP.ku$_status_job_error +
                                       DBMS_DATAPUMP.ku$_status_job_status +
                                       DBMS_DATAPUMP.ku$_status_wip ,
                                       -1,
                                       job_state ,
                                       sts );
          
            js := sts.job_status;
          
            -- Display the new percent-done value whenever it changes.
            if js.percent_done != percent_done then
                DBMS_OUTPUT.PUT_LINE('*** Job percent done = ' || to_char(js.percent_done));
                percent_done := js.percent_done;
            end if;
          
            -- Displays any work-in-progress or error messages received for the job.
            if (bitand(sts.mask, dbms_datapump.ku$_status_wip) != 0) then
                le := sts.wip;
            elsif (bitand(sts.mask, dbms_datapump.ku$_status_job_error) != 0) then
                le := sts.error;
            else
                le := null;
            end if;
          
            if le is not null then
                ind := le.FIRST;
                while ind is not null
                loop
                    DBMS_OUTPUT.PUT_LINE(le(ind).LogText);
                    ind := le.NEXT(ind);
                end loop;
            end if;  
          
        end loop;  -- end of while loop
      
        -- When the job finishes, display status before detaching from job.
        DBMS_OUTPUT.PUT_LINE('Job has completed');
        DBMS_OUTPUT.PUT_LINE('Final job state = ' || job_state);
        DBMS_DATAPUMP.DETACH(job_handle);      
END;
/
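Since the second bullet above calls for a weekly run, the block can be wrapped in a stored procedure and scheduled with DBMS_SCHEDULER. This is a minimal sketch, assuming the block has been saved as a procedure named REFRESH_FROM_MASTER (a hypothetical name):

```sql
BEGIN
    DBMS_SCHEDULER.CREATE_JOB (
        job_name        => 'WEEKLY_SCHEMA_REFRESH',           -- hypothetical name
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'TEST_DPUMP.REFRESH_FROM_MASTER',
        repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=2', -- Sundays, 2 AM
        enabled         => TRUE,
        comments        => 'Weekly Data Pump network refresh from LONGDESC');
END;
/
```

Note that dbms_datapump.open raises an error if a job named TEST_DPUMP_REFRESH is still attached from a previous run, so the weekly schedule assumes each refresh completes (and detaches) before the next one starts.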






test_dpump@ORCL> select count(*) from user_tables;

  COUNT(*)
----------
       136
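While the refresh runs, progress can also be watched from a separate session through the Data Pump dictionary views; a sketch (DBA privileges assumed):

```sql
-- Running Data Pump jobs and their states.
select owner_name, job_name, operation, job_mode, state, attached_sessions
from   dba_datapump_jobs;

-- Percent-done style progress for the job's long-running operations.
select opname, sofar, totalwork, round(sofar/totalwork*100, 1) pct_done
from   v$session_longops
where  opname like 'TEST_DPUMP_REFRESH%'
and    sofar <> totalwork;
```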





Oracle Data Pump imports are typically 15 to 40 times faster than the traditional import utility, even with parallelism set to 1, in part because Data Pump uses direct path loads when possible.
 

5 comments:

  1. Hi,
    Where is the expdp dump file?

    ReplyDelete
  2. @Anonymous:

    There is no dump file here; we use a DB link to refresh from the source DB to the destination DB.

    ReplyDelete
  3. Thank you very much, it's been very helpful, especially the monitoring loop. I think the documentation is very confusing when explaining the constants used, most of all for non-English speakers.

    ReplyDelete
  4. Hi Rajesh,

    "job_mode=>'TABLE',"

    Should that not be SCHEMA?
    Thank you very much for the info.

    ReplyDelete
  5. Also, can we push the data?
    In this case you are pulling the data. Thanks.

    ReplyDelete