Friday, November 10, 2023

Non blocking updates - Part I

The database uses locks to ensure that, at most, one transaction is modifying a given piece of data at any given time. Basically, locks are the mechanism that allows for concurrency. Blocking occurs when one session holds a lock on a resource that another session is requesting. As a result, the requesting session will be blocked—it will hang until the holding session gives up the locked resource.
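Blocking is not unique to databases; any mutual-exclusion primitive behaves the same way. A minimal Python sketch (nothing Oracle-specific, the names are purely illustrative) of one holder blocking a second requester until release:

```python
import threading
import time

lock = threading.Lock()   # stands in for a row/transaction lock
events = []               # ordered record of what happened

def session(name, hold_seconds):
    with lock:            # a second acquirer blocks here until release
        events.append(f"{name} acquired")
        time.sleep(hold_seconds)
        events.append(f"{name} releasing")

t1 = threading.Thread(target=session, args=("session-1", 0.2))
t2 = threading.Thread(target=session, args=("session-2", 0.0))
t1.start()
time.sleep(0.05)          # ensure session-1 holds the lock first
t2.start()
t1.join(); t2.join()

# session-2 only proceeds after session-1 has released the lock
print(events)
```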
 
In this blog post series, we will see how this blocking behavior changed in Oracle 23c, and what kinds of business applications and needs can benefit from the change.
 
On an Oracle 21c database, I created a table and executed DML like this:

 
demo@PDB1> create table t( x int primary key, y number );
demo@PDB1> insert into t(x,y) values(1,1);
demo@PDB1> commit;
demo@PDB1> update t set y = y+1 where x =1;
 
1 row updated.
 
The above transaction has not yet been committed.
 
demo@PDB1> select k.*, u.object_name, l.xidusn,l.xidslot,l.xidsqn
  2  from v$lock k,
  3      user_objects u , v$locked_object l
  4  where k.id1 = u.object_id (+)
  5  and u.object_id = l.object_id (+)
  6  and k.sid = l.session_id (+)
  7  and k.sid = 620 ;
 
SID TY     ID1        ID2  LMODE REQUEST   BLOCK CON_ID OBJECT_NAM  XIDUSN    XIDSLOT     XIDSQN
--- -- ------- ---------- ------ ------- ------- ------ ---------- ------- ---------- ----------
620 TM  102210          0      3       0       0      3 T                5         21       7879
620 AE     134 3319336658      4       0       0      0
620 UL       1          0      4       0       0      3
620 TX  327701       7879      6       0       0      3
 
So we ended up with a transaction lock (TX) and a DML enqueue (TM). A TX lock is acquired when a transaction initiates its first change and is held until the transaction performs a commit or rollback. TM locks are used to ensure that the structure of a table is not altered while you are modifying its contents. We get only one TX lock per transaction, but as many TM locks as the number of objects we modify. The interesting thing here is that the ID1 column for the TM lock is the object ID of the DML-locked object, so it is easy to find the object on which the lock is being held.
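As a side note, the TX lock's ID1/ID2 columns encode the transaction ID itself: ID1 packs XIDUSN into the high-order 16 bits and XIDSLOT into the low-order 16 bits, while ID2 is XIDSQN. A quick Python sketch against the values in the output above:

```python
def decode_tx_lock(id1, id2):
    """Split a TX enqueue's ID1/ID2 into (undo segment, slot, sequence).

    ID1 = XIDUSN * 65536 + XIDSLOT; ID2 = XIDSQN.
    """
    return id1 >> 16, id1 & 0xFFFF, id2

# TX row from the v$lock output above: ID1=327701, ID2=7879
print(decode_tx_lock(327701, 7879))  # -> (5, 21, 7879) = XIDUSN, XIDSLOT, XIDSQN
```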
 
AE is an edition lock, available in Oracle 11g and above; it is part of the Edition-Based Redefinition feature.
 
Now, running the same update from a second session – updating the same row of the same table – gets blocked like this:
 
demo@PDB1> @sid
 
INST_ SESSION_ID
----- ----------
1             14
 
demo@PDB1> update t set y = y+1 where x =1;
 
A quick query on v$lock shows this:
 
demo@PDB1> select * from v$lock where sid in (14,620) order by sid;
 
  SID TY      ID1        ID2   LMODE  REQUEST    BLOCK  CON_ID
----- -- -------- ---------- ------- -------- -------- -------
   14 UL        1          0       4        0        0       3
   14 AE      134 3319336658       4        0        0       0
   14 TX   327701       7879       0        6        0       3
   14 TM   102210          0       3        0        0       3
  620 AE      134 3319336658       4        0        0       0
  620 UL        1          0       4        0        0       3
  620 TM   102210          0       3        0        0       3
  620 TX   327701       7879       6        0        1       3
 
8 rows selected.
 
We can see that the second session (session id = 14) requested the transaction lock held by the first session (session id = 620), and that is what blocked it.
 
However, running the same script on an Oracle 23c database goes like this:

 
demo@FREEPDB1> /* From session-1 did this */
demo@FREEPDB1> @sid
 
INST_ SESSION_ID
----- ----------
1            295
 
demo@FREEPDB1> select * from t;
 
         X          Y
---------- ----------
         1          1
 
demo@FREEPDB1> update t set y = y+1 where x =1;
 
1 row updated.
 
demo@FREEPDB1> select * from v$lock where sid = 295 ;
 
   SID TY        ID1        ID2      LMODE    REQUEST   BLOCK     CON_ID
------ -- ---------- ---------- ---------- ---------- ------- ----------
   295 TX     262163       1360          6          0       0          3
   295 TM     117538          0          3          0       0          3
   295 TM     117537          0          3          0       0          3
   295 AE        138 1423580300          4          0       0          0
   295 UL          1          0          4          0       0          3
  
 
demo@FREEPDB1> /* From session-2 did this */
demo@FREEPDB1> @sid
 
INST_ SESSION_ID
----- ----------
1             60
 
demo@FREEPDB1> update t set y = y+1 where x =1;
 
1 row updated.
 
demo@FREEPDB1>
demo@FREEPDB1> select * from v$lock where sid = 60;
 
 SID TY        ID1        ID2  LMODE    REQUEST  BLOCK CON_ID
---- -- ---------- ---------- ------ ---------- ------ ------
  60 TX     524317       1560      6          0      0      3
  60 TM     117538          0      3          0      0      3
  60 TM     117537          0      3          0      0      3
  60 AE        138 1423580300      4          0      0      0
  60 UL          1          0      4          0      0      3 
 
The update from the second session was not blocked – even though it was updating the same row that the first session had updated – how is this even remotely possible? It seems to break the whole concept of database locking.
 
Now let us commit from the second session and go back to the first session:

 
demo@FREEPDB1>  --
demo@FREEPDB1>  -- Session-1
demo@FREEPDB1>  --
demo@FREEPDB1> select * from t;
 
         X          Y
---------- ----------
         1          2
 
Even though we still hold a lock in the first session, it can now see the changes committed by the second session (session id = 60) – which is really good. If that did not happen, this session would still assume the value of y to be 1, the expression y = y+1 would set y = 2 upon execution, and we would end up with a lost update (the change from the second session would be overwritten).
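The lost-update hazard described above can be sketched in a few lines of plain Python (purely illustrative, not Oracle code): two sessions that blindly write back their stale read lose an increment, while applying each increment to the current committed value keeps both.

```python
# Lost update: both "sessions" read y before either writes.
y = 1
read_s1 = y            # session 1 reads 1
read_s2 = y            # session 2 reads 1
y = read_s2 + 1        # session 2 commits y = 2
y = read_s1 + 1        # session 1 overwrites with y = 2; session 2's change is lost
assert y == 2

# No lost update: each increment is applied to the current value.
y = 1
y = y + 1              # session 2 commits: y = 2
y = y + 1              # session 1 applies its increment on top: y = 3
print(y)               # -> 3, both updates preserved
```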
 
Now, when we commit from the first session, it goes like this:

 
demo@FREEPDB1>  --
demo@FREEPDB1>  -- Session-1
demo@FREEPDB1>  --
demo@FREEPDB1> commit;
 
Commit complete.
 
demo@FREEPDB1> select * from t;
 
         X          Y
---------- ----------
         1          3
 
demo@FREEPDB1>
 
and it worked well: the same row was updated from two different sessions – which was impossible in prior versions of the Oracle database.
 
So what is happening in Oracle Database 23c, and how has the database become so efficient at handling these non-blocking updates? We will see in the next blog post.

Tuesday, October 31, 2023

Convert a partitioned table into a non-partitioned table

Today’s blogpost is about how to convert a partitioned table into a non-partitioned table. Yes, with Oracle 12c we got a single DDL command to convert a non-partitioned table into a partitioned one; using that DDL we:
 
  • Can compress the partitions and put them in different tablespaces if needed.
  • Can update indexes to have them partitioned, compressed, and placed in different tablespaces if needed.
  • Can pick up a subset of rows from the table and archive them.
  • Can do all of this online (without downtime).
 
But there is no single option to turn a partitioned table into a non-partitioned table. In this demo we will see how to get that done efficiently using partition maintenance operations.
 
First, we will create a partitioned table with a few indexes:
 
demo@PDB1> create table t
  2  partition by range( created )
  3  interval ( numtoyminterval(1,'month') )
  4  ( partition pJan2023 values less than
  5     (to_date('01-feb-2023','dd-mon-yyyy')) )
  6  as
  7  select * from all_objects;
 
Table created.
 
demo@PDB1> create index t_idx1 on t(object_id) local;
 
Index created.
 
demo@PDB1> create index t_idx2 on t(object_name);
 
Index created.
 
demo@PDB1> create index t_idx3 on t(object_id,owner)
  2  global partition by hash(object_id)
  3  ( partition p1, partition p2);
 
Index created.
 
demo@PDB1> select partition_name , high_value
  2  from user_tab_partitions
  3  where table_name ='T';
 
PARTITION_NAME  HIGH_VALUE
--------------- ------------------------------
PJAN2023        TO_DATE(' 2023-02-01 00:00:00'
SYS_P12137      TO_DATE(' 2023-03-01 00:00:00'
SYS_P12138      TO_DATE(' 2023-11-01 00:00:00'
SYS_P12139      TO_DATE(' 2023-05-01 00:00:00'
SYS_P12140      TO_DATE(' 2023-04-01 00:00:00'
SYS_P12141      TO_DATE(' 2023-06-01 00:00:00'
SYS_P12142      TO_DATE(' 2023-07-01 00:00:00'
SYS_P12143      TO_DATE(' 2023-08-01 00:00:00'
SYS_P12144      TO_DATE(' 2023-09-01 00:00:00'
SYS_P12145      TO_DATE(' 2023-10-01 00:00:00'
 
 
One of the abilities of the Oracle database is to go from one partitioning approach to another easily with a single DDL command. Using that, we can modify the table to have a single partition with the entire data set in it.
 
demo@PDB1> alter table t
  2  modify partition by range( created )
  3  ( partition pmax values less than(maxvalue) )
  4  update indexes (
  5  t_idx1 local, t_idx2 local, t_idx3 local );
 
Table altered.
demo@PDB1> select partition_name , high_value
  2  from user_tab_partitions
  3  where table_name ='T';
 
PARTITION_NAME  HIGH_VALUE
--------------- ------------------------------
PMAX            MAXVALUE
 
demo@PDB1>
 
Now all the partitions have been merged into a single partition, but it is still a partitioned table.
 
Finally, all we need to do is exchange this partition with a non-partitioned table.

 
demo@PDB1> create table t1
  2  for exchange with table t;
 
Table created.
 
demo@PDB1>
demo@PDB1> create index t1_idx1 on t1(object_id);
 
Index created.
 
demo@PDB1> create index t1_idx2 on t1(object_name);
 
Index created.
 
demo@PDB1> create index t1_idx3 on t1(object_id,owner);
 
Index created.
 
demo@PDB1> alter table t
  2  exchange partition pmax
  3  with table t1
  4  including indexes;
 
Table altered.
demo@PDB1> select count(*) from t;
 
  COUNT(*)
----------
         0
demo@PDB1> select count(*) from t1;
 
  COUNT(*)
----------
     71007
demo@PDB1>

Then finally drop and rename the tables as needed. So, within a couple of steps, we can turn a partitioned table into a non-partitioned table using the capabilities of Oracle's partitioning techniques.

demo@PDB1> drop table t purge;
 
Table dropped.
demo@PDB1> rename t1 to t;
 
Table renamed.
demo@PDB1>
 
 

Tuesday, October 24, 2023

Load data from files in cloud storage using API Keys

 
In this post we will see how to use API keys for an OCI user to authenticate to cloud resources.
 
Step#1 – generate an API signing key
 
Generate a private key and a PEM-format public key.
 
In this example the public and private key are stored in %HOME%\TESTKEY2

 
Rajeshwaran@rajeyaba-3WH3DK3 MINGW64 /c/users/Rajeshwaran
$ cd /c/users/rajeshwaran/
 
Rajeshwaran@rajeyaba-3WH3DK3 MINGW64 /c/users/rajeshwaran
$ mkdir TESTKEY2
 
Rajeshwaran@rajeyaba-3WH3DK3 MINGW64 /c/users/rajeshwaran
$ cd TESTKEY2
 
Generating the private key and ensuring that only we can read the private key file:
 
Rajeshwaran@rajeyaba-3WH3DK3 MINGW64 /c/users/rajeshwaran/TESTKEY2
$ openssl genrsa -out oci_api_key.pem 2048
Generating RSA private key, 2048 bit long modulus (2 primes)
..............................................+++++
.....................................+++++
e is 65537 (0x010001)
 
Rajeshwaran@rajeyaba-3WH3DK3 MINGW64 /c/users/rajeshwaran/TESTKEY2
$ ls -ltr
total 4
-rw-r--r-- 1 Rajeshwaran 197121 1702 Oct 21 19:05 oci_api_key.pem
 
Rajeshwaran@rajeyaba-3WH3DK3 MINGW64 /c/users/rajeshwaran/TESTKEY2
$ chmod go-rwx oci_api_key.pem
 
Rajeshwaran@rajeyaba-3WH3DK3 MINGW64 /c/users/rajeshwaran/TESTKEY2
$ ls -ltr
total 4
-rw-r--r-- 1 Rajeshwaran 197121 1702 Oct 21 19:05 oci_api_key.pem
 
Generating the public key
 
Rajeshwaran@rajeyaba-3WH3DK3 MINGW64 /c/users/rajeshwaran/TESTKEY2
$ openssl rsa -pubout -in oci_api_key.pem -out oci_api_key_public.pem
writing RSA key
 
Rajeshwaran@rajeyaba-3WH3DK3 MINGW64 /c/users/rajeshwaran/TESTKEY2
$ ls -ltr
total 5
-rw-r--r-- 1 Rajeshwaran 197121 1702 Oct 21 19:05 oci_api_key.pem
-rw-r--r-- 1 Rajeshwaran 197121  460 Oct 21 19:06 oci_api_key_public.pem
 
Generating the key’s fingerprint
 
Rajeshwaran@rajeyaba-3WH3DK3 MINGW64 /c/users/rajeshwaran/TESTKEY2
$ openssl rsa -pubout -outform DER -in oci_api_key.pem | openssl md5 -c
writing RSA key
(stdin)= d8:b3:74:c1:5c:76:51:5a:0b:5d:89:ce:25:46:ad:35
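For reference, the `-c` flag simply formats the MD5 digest as colon-separated hex byte pairs; the same formatting is easy to reproduce in Python (the input below is a dummy byte string for illustration, not the real DER-encoded key):

```python
import hashlib

def colon_fingerprint(der_bytes):
    """Format the MD5 of the DER-encoded public key as colon-separated
    hex pairs, mimicking the output of `openssl md5 -c`."""
    digest = hashlib.md5(der_bytes).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Dummy input for illustration only.
print(colon_fingerprint(b"not-a-real-key"))
```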
 
Step#2 – creating an OCI user and adding the public key for the user.
 
Click on the OCI hamburger menu > Identity > Users > Create User.
Then click on “API Keys” in the Resources section to add the public key contents ( oci_api_key_public.pem ).

 
 


 
Once the public key has been dragged and dropped there, the API key fingerprint will be generated like this.
 


 
 
Step#3 – connect to the database and create credentials using the private key like this
 
demo-user@ATP19C> begin
  2     dbms_cloud.create_credential(
  3             credential_name =>'api_key_cred',
  4             user_ocid => 'ocid1.user.oc1..aaaaaaaambmzgftlbzv2krf2cm7dismh35jmdzhrdh3rx4z4434atun7dlsa',
  5             tenancy_ocid => 'ocid1.tenancy.oc1..aaaaaaaacogbjgqcpcgk4vtnwhi2binpz5buav3sidham4evvfu2fjixucaa',
  6             private_key =>'MIIEowIBAAKCAQEA0zUiWFszop1WIjWiKS3PzstuNbeSKffB3oyWuBID/xpQREKS
  7  0it1+YoXo8jFAee9Ep6idM97AMkOp1L1/FoSyHBzs4qHqE4srRAnSwNUzaox/ATZ
       ........
       ........
       ........
 30  khS6loArMh6vQOrpjmDjm3Mbu7kt4ROJHa2dwpSVKUiDmKXjMtb6',
 31             fingerprint =>'d8:b3:74:c1:5c:76:51:5a:0b:5d:89:ce:25:46:ad:35' );
 32  end;
 33  /
 
PL/SQL procedure successfully completed.
 
Step#4 – accessing the object storage file contents
 
demo-user@ATP19C> exec :filename := 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/idcglquusbz6/b/MY_DEMO_BUCKET/o/mycsvdemo.txt';
 
PL/SQL procedure successfully completed.
 
demo-user@ATP19C> exec :uri := 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/';
 
PL/SQL procedure successfully completed.
 
demo-user@ATP19C> exec :namespace := '********';
 
PL/SQL procedure successfully completed.
 
demo-user@ATP19C> select object_name, bytes
  2  from dbms_cloud.list_objects(
  3     'api_key_cred',
  4     :uri||:namespace||'/b/MY_DEMO_BUCKET/o/');
 
OBJECT_NAME                         BYTES
------------------------------ ----------
E1.DMP                              12288
E2.DMP                              12288
E3.DMP                              12288
File1.json                          55878
File2.json                          45992
File3.json                         484455
File4.json                         113873
File5.json                           2757
File6.json                          10612
mycsvdemo.txt                         933
 
10 rows selected.
 
All of the above steps are required to access the object storage contents using API keys; if step (2) is missing, the call will return an error like this:
 
ORA-20401: Authorization failed for URI - https://objectstorage.us-ashburn-1.oraclecloud.com/n/********/b/MY_DEMO_BUCKET/o/
ORA-06512: at "C##CLOUD$SERVICE.DBMS_CLOUD", line 1679
ORA-06512: at "C##CLOUD$SERVICE.DBMS_CLOUD", line 9318
ORA-06512: at line 1


Wednesday, October 18, 2023

Load data from files in the cloud storage

The DBMS_CLOUD package was introduced in the Oracle Autonomous Database to provide a simple way to interact with an object store from the database. It can be used to load data from various cloud service providers such as OCI, Azure Blob Storage, Amazon S3, GitHub repositories, and Google Cloud Storage.
 
In the following example I will be using my Oracle Free Tier account to demonstrate the step-by-step configuration for copying data from a delimited file located in object storage into an existing table in the database.
 
Log into Oracle Cloud and, from the top-left hamburger menu, select the “Storage > Object Storage & Archive Storage > Buckets” option. Then click the “Create Bucket” button on the “Object Storage” dashboard, enter the required details (name of the bucket, storage tier, and key management), and click the “Create” button.

 


 
Once the bucket has been provisioned, drill into it by clicking the bucket name.
 


 
Click the “Upload” button to upload the required file into the object storage bucket. Once the file has been uploaded, click the “kebab” menu on the right-hand side of the uploaded item and select the “View Object Details” option; the resulting screen will display details about the uploaded object, including the URI path, which will be needed to access the object.
 


 
To grant access to the bucket and its contents, we must create an auth token. Using the top-left hamburger menu, select the “Identity & Security > Identity > Users” option and click on the user that needs access to the bucket; then click the “Auth Tokens” link in the “Resources” menu at the bottom left of the screen, and click the “Generate Token” button.
 


 
Enter a description for the token and click the “Generate Token” button. Click the “Copy” link to capture the token on the clipboard; it will be used shortly to create the credentials in the database.
 


 
Now we will see how to use this generated token to access the contents of the object storage from SQL statements.
 
First we will create a schema user account with the required set of privileges.

 
admin@ATP19C> create user demo_user identified by "********";
 
User created.
 
admin@ATP19C> grant connect,resource,unlimited tablespace to demo_user;
 
Grant succeeded.
 
admin@ATP19C> grant execute on dbms_cloud to demo_user;
 
Grant succeeded.
 
Then connect as the demo_user account and create a credential for the object storage using the create_credential procedure.
 
admin@ATP19C> conn demo_user/"********"@atp19c
Connected.
demo_user@ATP19C> begin
  2     dbms_cloud.create_credential(
  3             credential_name =>'my_demo_cred',
  4             username => 'your_oci_login_email_account',
  5             password => 'the_above_copied_userAuth_Token_Here');
  6  end;
  7  /
 
PL/SQL procedure successfully completed.
 
To access the objects in the object storage, all we need is their URL location.
 
demo_user@ATP19C> variable filename varchar2(800)
demo_user@ATP19C> variable namespace varchar2(80)
demo_user@ATP19C> variable uri varchar2(80)
demo_user@ATP19C>
demo_user@ATP19C> exec :filename := 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/*******/b/MY_DEMO_BUCKET/o/mycsvdemo.txt';
 
PL/SQL procedure successfully completed.
 
demo_user@ATP19C> exec :uri := 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/';
 
PL/SQL procedure successfully completed.
 
demo_user@ATP19C> exec :namespace := '********';
 
PL/SQL procedure successfully completed.
 
demo_user@ATP19C> select object_name, bytes
  2  from dbms_cloud.list_objects(
  3     'my_demo_cred',
  4     :uri||:namespace||'/b/MY_DEMO_BUCKET/o/');
 
OBJECT_NAME                         BYTES
------------------------------ ----------
File1.json                          55878
File2.json                          45992
File3.json                         484455
File4.json                         113873
File5.json                           2757
File6.json                          10612
mycsvdemo.txt                         933
 
7 rows selected.
 
The dbms_cloud package has several methods related to file handling in the object storage; the “list_objects” method lists all the files in a specific location. Likewise, to read the contents of a specific file we can use the “get_object” method, which returns the binary contents of the data.
 
demo_user@ATP19C> select dbms_cloud.get_object('my_demo_cred',:filename) from dual;
 
DBMS_CLOUD.GET_OBJECT('MY_DEMO_CRED',:FILENAME)
----------------------------------------------------------------------------------------------------------------------------------------------------------------
22454D504E4F222C22454E414D45222C224A4F42222C224D4752222C224849524544415445222C2253414C222C22434F4D4D222C22444550544E4F220D0A373336392C22534D495448222C22434C4552
4B222C373930322C31372D4445432D313938302031323A30303A303020414D2C3830302C2C32300D0A373439392C22414C4C454E222C2253414C45534D414E222C373639382C32302D4645422D313938
312031323A30303A303020414D2C313630302C3330302C33300D0A373532312C2257415244222C2253414C45534D414E222C373639382C32322D4645422D313938312031323A30303A303020414D2C31
3235302C3530302C33300D0A373536362C224A4F4E4553222C224D414E41474552222C373833392C30322D4150522D313938312031323A30303A303020414D2C323937352C2C32300D0A373635342C22
4D415254494E222C2253414C45534D414E222C373639382C32382D5345502D313938312031323A30303A303020414D2C313235302C313430302C33300D0A373639382C22424C414B45222C224D414E41
474552222C373833392C30312D4D41592D313938312031323A30303A303020414D2C323835302C2C33300D0A373738322C22434C41524B222C224D414E41474552222C373833392C30392D4A554E2D31
3938312031323A30303A303020414D2C323435302C2C31300D0A373738382C2253434F5454222C22414E414C595354222C373536362C31392D4150522D303038372031323A30303A303020414D2C3330
30302C2C32300D0A373833392C224B494E47222C22505245534944454E54222C2C31372D4E4F562D313938312031323A30303A303020414D2C353030302C2C31300D0A373834342C225455524E455222
2C2253414C45534D414E222C373639382C30382D5345502D313938312031323A30303A303020414D2C313530302C302C33300D0A373837362C224144414D53222C22434C45524B222C373738382C3233
2D4D41592D303038372031323A30303A303020414D2C313130302C2C32300D0A373930302C224A414D4553222C22434C45524B222C373639382C30332D4445432D313938312031323A30303A30302041
4D2C3935302C2C33300D0A373930322C22464F5244222C22414E414C595354222C373536362C30332D4445432D313938312031323A30303A303020414D2C333030302C2C32300D0A373933342C224D49
4C4C4552222C22434C45524B222C373738322C32332D4A414E2D313938322031323A30303A303020414D2C313330302C2C31300D0A
 
To return the data in text format, we need to cast this raw data to text using the UTL_RAW API.
 
demo_user@ATP19C> select utl_raw.cast_to_varchar2(dbms_cloud.get_object('my_demo_cred',:filename)) from dual;
 
UTL_RAW.CAST_TO_VARCHAR2(DBMS_CLOUD.GET_OBJECT('MY_DEMO_CRED',:FILENAME))
-------------------------------------------------------------------------
"EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO"
7369,"SMITH","CLERK",7902,17-DEC-1980 12:00:00 AM,800,,20
7499,"ALLEN","SALESMAN",7698,20-FEB-1981 12:00:00 AM,1600,300,30
7521,"WARD","SALESMAN",7698,22-FEB-1981 12:00:00 AM,1250,500,30
7566,"JONES","MANAGER",7839,02-APR-1981 12:00:00 AM,2975,,20
7654,"MARTIN","SALESMAN",7698,28-SEP-1981 12:00:00 AM,1250,1400,30
7698,"BLAKE","MANAGER",7839,01-MAY-1981 12:00:00 AM,2850,,30
7782,"CLARK","MANAGER",7839,09-JUN-1981 12:00:00 AM,2450,,10
7788,"SCOTT","ANALYST",7566,19-APR-0087 12:00:00 AM,3000,,20
7839,"KING","PRESIDENT",,17-NOV-1981 12:00:00 AM,5000,,10
7844,"TURNER","SALESMAN",7698,08-SEP-1981 12:00:00 AM,1500,0,30
7876,"ADAMS","CLERK",7788,23-MAY-0087 12:00:00 AM,1100,,20
7900,"JAMES","CLERK",7698,03-DEC-1981 12:00:00 AM,950,,30
7902,"FORD","ANALYST",7566,03-DEC-1981 12:00:00 AM,3000,,20
7934,"MILLER","CLERK",7782,23-JAN-1982 12:00:00 AM,1300,,10
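The UTL_RAW cast above is nothing more than a hex-to-text decode; the same conversion on the first few bytes of the RAW dump, in Python:

```python
# First 15 bytes of the RAW output shown earlier, as a hex string.
raw_hex = "22454D504E4F222C22454E414D4522"

# utl_raw.cast_to_varchar2 is equivalent to decoding the raw bytes as text.
text = bytes.fromhex(raw_hex).decode("ascii")
print(text)  # -> "EMPNO","ENAME"
```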
 
Likewise, to copy the data into another table, we have a method called “copy_data” to copy its contents.
 
demo_user@ATP19C> create table emp_demo as
  2  select *
  3  from demo.emp
  4  where 1 = 0 ;
 
Table created.
 
demo_user@ATP19C> begin
  2     dbms_cloud.copy_data(
  3             table_name =>'emp_demo',
  4             credential_name =>'my_demo_cred',
  5             file_uri_list => :filename,
  6             format => json_object('delimiter' value ',',
  7                             'dateformat' value 'dd-mon-yyyy hh12:mi:ss am',
  8                             'quote' value '"','enablelogs' value false,
  9                             'skipheaders' value '1') );
 10  end;
 11  /
begin
*
ERROR at line 1:
ORA-20000: ORA-06564: object DATA_PUMP_DIR does not exist
ORA-06512: at "C##CLOUD$SERVICE.DBMS_CLOUD", line 1693
ORA-06512: at "C##CLOUD$SERVICE.DBMS_CLOUD", line 8286
ORA-06512: at "C##CLOUD$SERVICE.DBMS_CLOUD", line 8309
ORA-06512: at line 2
 
The above error is due to the logs being written to the default directory object “data_pump_dir”; since the schema owner did not have any privileges on it, the call errored out.
 
Once we grant the appropriate privileges:

 
admin@ATP19C> grant write on directory data_pump_dir to demo_user;
 
Grant succeeded.
 
admin@ATP19C> grant read on directory data_pump_dir to demo_user ;
 
Grant succeeded.
 
the load succeeds:
 
demo_user@ATP19C> begin
  2     dbms_cloud.copy_data(
  3             table_name =>'emp_demo',
  4             credential_name =>'my_demo_cred',
  5             file_uri_list => :filename,
  6             format => json_object('delimiter' value ',',
  7                             'dateformat' value 'dd-mon-yyyy hh12:mi:ss am',
  8                             'quote' value '"','enablelogs' value false,
  9                             'skipheaders' value '1') );
 10  end;
 11  /
 
PL/SQL procedure successfully completed.
 
demo_user@ATP19C> select * from emp_demo;
 
     EMPNO ENAME      JOB              MGR HIREDATE           SAL       COMM     DEPTNO
---------- ---------- --------- ---------- ----------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-1980        800                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-1981       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-1981       1250        500         30
      7566 JONES      MANAGER         7839 02-APR-1981       2975                    20
      7654 MARTIN     SALESMAN        7698 28-SEP-1981       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-1981       2850                    30
      7782 CLARK      MANAGER         7839 09-JUN-1981       2450                    10
      7788 SCOTT      ANALYST         7566 19-APR-0087       3000                    20
      7839 KING       PRESIDENT            17-NOV-1981       5000                    10
      7844 TURNER     SALESMAN        7698 08-SEP-1981       1500          0         30
      7876 ADAMS      CLERK           7788 23-MAY-0087       1100                    20
      7900 JAMES      CLERK           7698 03-DEC-1981        950                    30
      7902 FORD       ANALYST         7566 03-DEC-1981       3000                    20
      7934 MILLER     CLERK           7782 23-JAN-1982       1300                    10
 
14 rows selected.
 
demo_user@ATP19C>