Tuesday, August 27, 2024

Data share - Part III

In this blog post we will look at how to create, populate, and publish a data share. A data share is a named entity in the provider instance: a group of datasets shared as a single unit. The share is a logical container holding objects such as tables and views that we want to share with data recipients, and an authorized data share recipient can access the share and all the tables in it.
 
 
As the share provider, create a named data share DEMO_SHARE using DATA_SHARE_STORAGE_LINK, the storage link created in the previous blog post.
 
share-provider@ATP19C> begin
  2     dbms_share.create_share(
  3             share_name => 'DEMO_SHARE',
  4             share_type => 'VERSIONED',
  5             storage_link_name => 'DATA_SHARE_STORAGE_LINK');
  6  end;
  7  /
 
PL/SQL procedure successfully completed.
 
share-provider@ATP19C> select share_name, current_version
  2  from user_shares
  3  where share_name ='DEMO_SHARE';
 
SHARE_NAME           CURRENT_VERSION
-------------------- ---------------
DEMO_SHARE
 
Before we publish the share, CURRENT_VERSION is null; after we publish it, CURRENT_VERSION holds an incrementing version number.
 
Next, add a data source to the data share.
 
share-provider@ATP19C> begin
  2     dbms_share.add_to_share(
  3             share_name => 'DEMO_SHARE',
  4             table_name => 'BIG_TABLE',
  5             owner => user,
  6             share_table_name => 'BIG_TABLE');
  7  end;
  8  /
 
PL/SQL procedure successfully completed.
 
share-provider@ATP19C> select share_table_name, table_name, table_owner
  2  from user_share_tables
  3  where share_name ='DEMO_SHARE';
 
SHARE_TABLE_NAME     TABLE_NAME           TABLE_OWNER
-------------------- -------------------- --------------------
BIG_TABLE            BIG_TABLE            SHARE_PROVIDER
 
Up to this point the share and its table exist only inside the database. The call to the PUBLISH_SHARE API offloads the data to the OCI object storage bucket and makes it accessible to recipients.
 
admin@ATP19C> exec dbms_share.update_default_share_property('JOB_TYPE','DBMS_CLOUD');
 
PL/SQL procedure successfully completed.
 
share-provider@ATP19C> exec dbms_share.publish_share( share_name => 'DEMO_SHARE' );
 
PL/SQL procedure successfully completed.
 
The PUBLISH_SHARE call offloads the data to the cloud store and makes it accessible to the recipients that you define and authorize later.
 
share-provider@ATP19C> select share_version, status
  2  from user_share_versions
  3  where share_name ='DEMO_SHARE';
 
SHARE_VERSION STATUS
------------- -----------
            1 EXPORTING
 
If the status shows EXPORTING, the publishing process is not yet complete and we need to wait a few minutes. When we publish a "VERSIONED" share, the tool generates and stores the data share as Parquet files in the specified object storage bucket, and any authenticated data share recipient can access the share directly from the bucket. Do not proceed further until the status changes from EXPORTING to CURRENT.
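
Rather than re-running that query by hand, a small PL/SQL block can poll until no version of the share is still exporting. A minimal sketch (the 30-second sleep interval is an arbitrary choice):

declare
   l_exporting number;
begin
   loop
      select count(*)
        into l_exporting
        from user_share_versions
       where share_name = 'DEMO_SHARE'
         and status = 'EXPORTING';
      exit when l_exporting = 0;
      dbms_session.sleep(30);   -- wait 30 seconds before checking again
   end loop;
end;
/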
 
share-provider@ATP19C> select share_version, status
  2  from user_share_versions
  3  where share_name ='DEMO_SHARE';
 
SHARE_VERSION STATUS
------------- -----------
            1 CURRENT
 
share-provider@ATP19C> variable uri varchar2(95)
share-provider@ATP19C> exec :uri := 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/idcglquusbz6/b/DATA_SHARE_BUCKET/o/';
 
PL/SQL procedure successfully completed.
 
share-provider@ATP19C>
share-provider@ATP19C> col object_name for a85
share-provider@ATP19C> select object_name, bytes
  2      from dbms_cloud.list_objects(
  3                 credential_name => 'SHARE_BUCKET_CREDENTIAL',
  4                 location_uri => :uri );
 
OBJECT_NAME                                                                                BYTES
------------------------------------------------------------------------------------- ----------
DEMO_SHARE_528631/SHARE_PROVIDER/BIG_TABLE/V1_1S_1_20240408T043228502295Z.parquet       24899814
 
The share tool has generated and stored the data share as Parquet files in the specified bucket, and any authenticated data share recipient can access the share directly from there.
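
As an optional sanity check (not part of the share workflow itself), the exported Parquet files can be read back with DBMS_CLOUD.CREATE_EXTERNAL_TABLE and the row count compared with the source table. A rough sketch, assuming the object path listed above; the table name DEMO_SHARE_CHECK is just a placeholder:

begin
   dbms_cloud.create_external_table(
      table_name      => 'DEMO_SHARE_CHECK',
      credential_name => 'SHARE_BUCKET_CREDENTIAL',
      file_uri_list   => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/idcglquusbz6/b/DATA_SHARE_BUCKET/o/DEMO_SHARE_528631/SHARE_PROVIDER/BIG_TABLE/*.parquet',
      format          => '{"type":"parquet", "schema":"first"}');
end;
/

-- the count should match the rows exported from BIG_TABLE
select count(*) from demo_share_check;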
 

Tuesday, August 20, 2024

Data share - Part II

In this blog post we will look at how to set up the share provider, along with the necessary roles and privileges.
 
As the ADMIN user, create a user called SHARE_PROVIDER and grant the necessary privileges.
 
admin@ATP19C> create user share_provider identified by "Good2go1234!";
 
User created.
 
admin@ATP19C> grant connect, resource, unlimited tablespace, dwrole to share_provider ;
 
Grant succeeded.
 
admin@ATP19C>
 
Then REST-enable the schema using the ORDS_ADMIN API.
 
admin@ATP19C> begin
  2     ords_admin.enable_schema(
  3             p_schema =>'SHARE_PROVIDER',
  4             p_enabled => true,
  5             p_url_mapping_type => 'BASE_PATH',
  6             p_url_mapping_pattern => 'share_provider',
  7             p_auto_rest_auth => true);
  8  end;
  9  /
 
PL/SQL procedure successfully completed.
 
Then enable data sharing access for the schema.
 
admin@ATP19C> begin
  2     dbms_share.enable_schema(
  3             schema_name =>'SHARE_PROVIDER',
  4             enabled=>true );
  5  end;
  6  /
 
PL/SQL procedure successfully completed.
 
Now log in as the newly created user and run the query below to verify that it has the privileges required to share objects.
 
share-provider@ATP19C> select dbms_share.can_create_share(user) from dual;
 
DBMS_SHARE.CAN_CREATE_SHARE(USER)
---------------------------------
                                1
 
A value of 0 returned by the above query indicates that the user does not have the privileges to share objects; a value of 1 indicates the user has the required privileges.
 
Create an object storage bucket named DATA_SHARE_BUCKET to hold the data share data, then create a link to the bucket and associate the access credentials with it. Follow the steps listed here.
To access data in the object store, the database user must be able to authenticate itself with the object store using your OCI object storage account and credentials.
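
For reference, such a credential can be created with DBMS_CLOUD.CREATE_CREDENTIAL; a minimal sketch, where the user name and auth token below are placeholders for your own OCI values:

begin
   dbms_cloud.create_credential(
      credential_name => 'SHARE_BUCKET_CREDENTIAL',
      username        => 'oci_user@example.com',   -- your OCI user name
      password        => '<auth token>' );         -- an OCI auth token
end;
/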
 
share-provider@ATP19C> variable uri varchar2(95)
share-provider@ATP19C> exec :uri := 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/idcglquusbz6/b/DATA_SHARE_BUCKET/o/';
 
PL/SQL procedure successfully completed.
 
share-provider@ATP19C>
share-provider@ATP19C> select object_name, bytes
  2  from dbms_cloud.list_objects(
  3             credential_name => 'SHARE_BUCKET_CREDENTIAL',
  4             location_uri => :uri );
 
no rows selected
 
Create a named storage link that points to the object storage bucket's URI.
 
share-provider@ATP19C> print uri
 
URI
------------------------------------------------------------------------------------------
https://objectstorage.us-ashburn-1.oraclecloud.com/n/idcglquusbz6/b/DATA_SHARE_BUCKET/o/
 
share-provider@ATP19C> begin
  2     dbms_share.create_cloud_storage_link(
  3             storage_link_name => 'data_share_storage_link',
  4             uri => :uri );
  5  end;
  6  /
 
PL/SQL procedure successfully completed.
 
share-provider@ATP19C>
share-provider@ATP19C> @printtbl ' select * from user_lineage_cloud_storage_links '
STORAGE_LINK_NAME             : "DATA_SHARE_STORAGE_LINK"
STORAGE_LINK_ID               : "173"
URI                           : "https://objectstorage.us-ashburn-1.oraclecloud.com/n/idcglquusbz6/b/DATA_SHARE_BUCKET/o/"
METADATA_PATH                 : ""STORAGE_LINK"."DATA_SHARE_STORAGE_LINK""
CREDENTIAL_NAME               : ""
METADATA                      : ""
APPLICATION_ID                : "6"
CREATED                       : "07-APR-24 05.20.25.028435 PM +05:30"
UPDATED                       : "07-APR-24 05.20.25.028435 PM +05:30"
-----------------
 
PL/SQL procedure successfully completed.
 
Associate the storage link created above with the OCI native credential.
 
share-provider@ATP19C> begin
  2     dbms_share.set_storage_credential(
  3             storage_link_name => 'data_share_storage_link',
  4             credential_name => 'SHARE_BUCKET_CREDENTIAL' );
  5  end;
  6  /
 
PL/SQL procedure successfully completed.
 
share-provider@ATP19C> @printtbl ' select * from user_lineage_cloud_storage_links '
STORAGE_LINK_NAME             : "DATA_SHARE_STORAGE_LINK"
STORAGE_LINK_ID               : "173"
URI                           : "https://objectstorage.us-ashburn-1.oraclecloud.com/n/idcglquusbz6/b/DATA_SHARE_BUCKET/o/"
METADATA_PATH                 : ""STORAGE_LINK"."DATA_SHARE_STORAGE_LINK""
CREDENTIAL_NAME               : "SHARE_BUCKET_CREDENTIAL"
METADATA                      : ""
APPLICATION_ID                : "6"
CREATED                       : "07-APR-24 05.20.25.028435 PM +05:30"
UPDATED                       : "07-APR-24 05.20.25.028435 PM +05:30"
-----------------
 
PL/SQL procedure successfully completed.
 
 

Wednesday, August 14, 2024

Data share - Part I

Data sharing enables you to share data with one or more consumers. Oracle Autonomous Database lets you create shares using the data sharing tool (a mix of managed ORDS services and the DBMS_SHARE API). Data sharing consists of two steps:
 
  •          The provider makes the data share available for access.
  •          The consumer receives access to the published shares.
 
Oracle data sharing is based on the open Delta Sharing protocol, which provides a simple REST-based API to share data in Parquet format. Data is made accessible by a data share provider (such as Oracle Autonomous Database) to a data share recipient (such as Power BI, Tableau, Apache Spark, or Java).
 
With traditional methods of data sharing, users typically follow one of these approaches:
 
  •          Send data via email.
  •          Share data through an FTP server.
  •          Use application-specific APIs for data extraction.
  •          Utilize vendor-specific tools to copy the required data.
 
While the traditional methods generally work, they come with their own drawbacks:
 
  •          Managing a separate process for data extraction is a labour-intensive operation.
  •          Extracted and duplicated data is prone to staleness.
  •          The architecture and process are difficult to maintain and hard to scale.
  •          Redundant data extraction can introduce format compatibility issues.
 
The modern way of data sharing must be open, secure, real-time, vendor-neutral, and avoid the pitfalls of extracting and duplicating data for each individual consumer in a collaborative environment. Delta Sharing is an open protocol for secure, real-time exchange of large datasets that satisfies all these criteria; it is supported by multiple clients and programming languages and is vendor agnostic. The protocol aims to solve the following problems:
 
  •          Share data without copying it to another system.
  •          Producer controls the state of data (version of data)
  •          Be an open cross-platform solution.
  •          Support a wide range of clients such as Power BI, Tableau, Apache Spark, pandas and Java
  •          Provide flexibility to consume data using the tools of choice for BI, machine learning and AI use cases.
  •          Provide strong security, auditing, and governance.
  •          Scale to massive data sets
 
At a high level, the Delta Sharing protocol works as follows (a rough provider-side sketch appears after the list):
 
  •          The share provider user creates and publishes a data share that can be shared with one or more recipients.
  •          The share provider user creates and authorizes recipients.
  •           Every recipient will get a personal activation link to download their own .JSON profile with the necessary information to access their data share.
  •          The recipient subscribes to the data share provider by using the .JSON configuration profile.
  •          The recipient retrieves data from the share.
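On the provider side, that flow boils down to a handful of DBMS_SHARE calls. The sketch below is only an outline of what the later posts cover; the recipient-related subprogram names and parameters used here (CREATE_SHARE_RECIPIENT, GRANT_TO_RECIPIENT, GET_ACTIVATION_LINK) and the e-mail address are assumptions, so check the package documentation before relying on them:

begin
   -- register a recipient and authorize it on a published share
   dbms_share.create_share_recipient(
      recipient_name => 'DEMO_RECIPIENT',
      email          => 'recipient@example.com');

   dbms_share.grant_to_recipient(
      share_name     => 'DEMO_SHARE',
      recipient_name => 'DEMO_RECIPIENT');
end;
/

-- the activation link lets the recipient download their .JSON profile
select dbms_share.get_activation_link('DEMO_RECIPIENT') from dual;
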
The overall architectural approach for the Delta Sharing protocol looks like this:
 
 


 
In the next posts in this series we will walk through a step-by-step approach to setting up the Delta Sharing protocol using Autonomous Database and managed ORDS.
 
 
 


Monday, August 5, 2024

Column level auditing - Oracle 23ai new feature

Prior to Oracle Database 23ai we did not have the ability to audit selected columns in a table; we had to enable auditing for the entire table even though we might be interested in only a few columns.
 
A new security feature introduced in Oracle 23ai is column-level auditing, where we can enable auditing for just one or a few columns in a table, without enabling auditing at the table level, which would include all the columns.
 
In Oracle Database 23ai we can create unified audit policies that target individual columns in tables and views. This enables finer-grained, focused audit policies, ensuring that auditing is selective enough to reduce unnecessary audit records while still meeting all security requirements.
 
In this demo we will see how to enable auditing for a single column: the policy is initially created for UPDATE statements and then altered to include SELECT statements, but only for a specific column in the table. The other columns in the table are not audited.
 
Create the schemas and grant the necessary privileges.
 
admin@ATP23ai> grant db_developer_role
  2     , select_catalog_role
  3     , audit system
  4     , unlimited tablespace
  5  to demo_user identified by "Good2go1234!";
 
Grant succeeded.
 
admin@ATP23ai> grant create session to app_user identified by "Good2go1234!";
 
Grant succeeded.
 
admin@ATP23ai> grant read on audsys.UNIFIED_AUDIT_TRAIL to demo_user,app_user;
 
Grant succeeded.
 
admin@ATP23ai>
 
Create the table and insert some sample rows.
 
admin@ATP23ai> conn demo_user/"Good2go1234!"@atp23_vpn
Connected.
demo-user@ATP23ai> create table if not exists
  2  emp( empno number
  3     , ename varchar2(10)
  4     , hire_date date );
 
Table created.
 
demo-user@ATP23ai>
demo-user@ATP23ai> insert into emp(empno,ename,hire_date)
  2  values (1,'Tom',sysdate),
  3    (2,'Kyte',sysdate+1) ;
 
2 rows created.
 
demo-user@ATP23ai>
demo-user@ATP23ai> select * from emp;
 
     EMPNO ENAME      HIRE_DATE
---------- ---------- -----------
         1 Tom        05-AUG-2024
         2 Kyte       06-AUG-2024
 
Create the audit policy; it will audit any UPDATE statement on the HIRE_DATE column. This is followed by the necessary grants to the application schema (APP_USER).
 
demo-user@ATP23ai> create audit policy emp_update_policy
  2  actions update(hire_date)
  3  on emp;
 
Audit policy created.
 
demo-user@ATP23ai> audit policy emp_update_policy;
 
Audit succeeded.
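
Before moving on, we can confirm what the policy covers and that it is enabled by querying the audit dictionary views; a quick check (the exact columns exposed may vary slightly between releases):

select audit_option, object_schema, object_name
  from audit_unified_policies
 where policy_name = 'EMP_UPDATE_POLICY';

select *
  from audit_unified_enabled_policies
 where policy_name = 'EMP_UPDATE_POLICY';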
 
demo-user@ATP23ai> grant select on emp to app_user;
 
Grant succeeded.
 
demo-user@ATP23ai> grant update(hire_date) on emp to app_user;
 
Grant succeeded.
 
Connect as the application schema (APP_USER), execute a SELECT statement, and check the unified audit trail.
 
app-user@ATP23ai> select * from demo_user.emp;
 
     EMPNO ENAME      HIRE_DATE
---------- ---------- -----------
         1 Tom        05-AUG-2024
         2 Kyte       06-AUG-2024
 
app-user@ATP23ai> col dbusername for a15
app-user@ATP23ai> col action_name for a20
app-user@ATP23ai> col event_timestamp for a35
app-user@ATP23ai> col sql_text for a50
app-user@ATP23ai>
app-user@ATP23ai> select dbusername
  2     , action_name
  3     , event_timestamp
  4     , sql_text
  5  from unified_audit_trail
  6  where object_name ='EMP'
  7  and object_schema = 'DEMO_USER';
 
no rows selected
 
 
No audit trail exists for the SELECT statement issued by APP_USER, since the policy in place audits only UPDATE statements.
 
Now execute an UPDATE statement that modifies the HIRE_DATE column of the EMP table.
 
app-user@ATP23ai> update demo_user.emp
  2  set hire_date = sysdate+ 1
  3  where ename ='Tom';
update demo_user.emp
                 *
ERROR at line 1:
ORA-41900: missing SELECT privilege on "DEMO_USER"."EMP"
Help: https://docs.oracle.com/error-help/db/ora-41900/
 
The predicate in the above SQL command makes the database read and lock the matching rows in the target table before performing the update; because we have only the READ privilege on the target table (which does not allow row locking), we get the "missing SELECT privilege" error above. Note that even the failed UPDATE statement gets audited.
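
The same READ versus SELECT distinction can be shown with an explicit row lock; this snippet is only an illustration and is not from the original session:

-- with only the READ privilege on demo_user.emp this raises an error,
-- because READ allows querying but not locking rows
select empno, hire_date
  from demo_user.emp
 where ename = 'Tom'
   for update;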
 
app-user@ATP23ai> select dbusername
  2     , action_name
  3     , event_timestamp
  4     , sql_text
  5  from unified_audit_trail
  6  where object_name ='EMP'
  7  and object_schema = 'DEMO_USER';
 
DBUSERNAME      ACTION_NAME          EVENT_TIMESTAMP                     SQL_TEXT
--------------- -------------------- ----------------------------------- --------------------------------------------------
APP_USER        UPDATE               05-AUG-24 11.36.07.947411 AM        update demo_user.emp
                                                                         set hire_date = sysdate+ 1
                                                                         where ename ='Tom'
 
 
Once we grant the required privilege, the UPDATE statement works fine and it gets audited as well.
 
app-user@ATP23ai> conn demo_user/"Good2go1234!"@atp23_vpn
Connected.
demo-user@ATP23ai> grant select on emp to app_user;
 
Grant succeeded.
 
demo-user@ATP23ai> conn app_user/"Good2go1234!"@atp23_vpn
Connected.
app-user@ATP23ai> update demo_user.emp
  2  set hire_date = sysdate+ 1
  3  where ename ='Tom';
 
1 row updated.
 
app-user@ATP23ai> commit;
 
Commit complete.
 
app-user@ATP23ai>
app-user@ATP23ai> select dbusername
  2     , action_name
  3     , event_timestamp
  4     , sql_text
  5  from unified_audit_trail
  6  where object_name ='EMP'
  7  and object_schema = 'DEMO_USER';
 
DBUSERNAME      ACTION_NAME          EVENT_TIMESTAMP                     SQL_TEXT
--------------- -------------------- ----------------------------------- --------------------------------------------------
APP_USER        UPDATE               05-AUG-24 11.37.31.241909 AM        update demo_user.emp
                                                                         set hire_date = sysdate+ 1
                                                                         where ename ='Tom'
 
APP_USER        UPDATE               05-AUG-24 11.36.07.947411 AM        update demo_user.emp
                                                                         set hire_date = sysdate+ 1
                                                                         where ename ='Tom'
 
Now let's alter the audit policy to also audit SELECT statements on the HIRE_DATE column.
 
demo-user@ATP23ai> alter audit policy emp_update_policy
  2  add actions select(hire_date)
  3  on emp;
 
Audit policy altered.
 
demo-user@ATP23ai> conn app_user/"Good2go1234!"@atp23_vpn
Connected.
app-user@ATP23ai> select dbusername
  2     , action_name
  3     , event_timestamp
  4     , sql_text
  5  from unified_audit_trail
  6  where object_name ='EMP'
  7  and object_schema = 'DEMO_USER';
 
DBUSERNAME      ACTION_NAME          EVENT_TIMESTAMP                     SQL_TEXT
--------------- -------------------- ----------------------------------- --------------------------------------------------
APP_USER        UPDATE               05-AUG-24 11.37.31.241909 AM        update demo_user.emp
                                                                         set hire_date = sysdate+ 1
                                                                         where ename ='Tom'
 
APP_USER        UPDATE               05-AUG-24 11.36.07.947411 AM        update demo_user.emp
                                                                         set hire_date = sysdate+ 1
                                                                         where ename ='Tom'
 
 
app-user@ATP23ai> select ename,hire_date from demo_user.emp;
 
ENAME      HIRE_DATE
---------- -----------
Tom        06-AUG-2024
Kyte       06-AUG-2024
 

app-user@ATP23ai> select dbusername
  2     , action_name
  3     , event_timestamp
  4     , sql_text
  5  from unified_audit_trail
  6  where object_name ='EMP'
  7  and object_schema = 'DEMO_USER';

DBUSERNAME  ACTION_NAME  EVENT_TIMESTAMP                SQL_TEXT
----------- ------------ ------------------------------ ------------------------------------------
APP_USER    SELECT       05-AUG-24 11.40.32.429573 AM   select ename,hire_date from demo_user.emp
APP_USER    UPDATE       05-AUG-24 11.37.31.241909 AM   update demo_user.emp
                                                        set hire_date = sysdate+ 1
                                                        where ename ='Tom'
APP_USER    UPDATE       05-AUG-24 11.36.07.947411 AM   update demo_user.emp
                                                        set hire_date = sysdate+ 1
                                                        where ename ='Tom'
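
To clean up after the demo, the policy can be disabled and dropped as DEMO_USER; a minimal sketch:

noaudit policy emp_update_policy;
drop audit policy emp_update_policy;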

Thursday, August 1, 2024

Oracle ADB Pre-Authenticated URL - Part IV

One of the many enhancements introduced in Oracle 23ai is the ability to stream external JSON sources into the database using the ORACLE_BIGDATA adapter in (inline) external tables. This enhancement can be combined with a PAR URL to provide SQL access to JSON documents. In this blog post we will see how that can be achieved.
 
We will start by generating a PAR URL for the data source.
 
rajesh@ATP23ai> declare
  2     l_output clob;
  3  begin
  4     dbms_data_access.get_preauthenticated_url(
  5             schema_name => user,
  6             schema_object_name => 'EMP',
  7             expiration_minutes => 120 ,
  8             result => l_output );
  9
 10     dbms_output.put_line( l_output );
 11  end;
 12  /
{"status":"SUCCESS","id":"-L-SeZAK9oMOUudM10xVhXRyuUjeZOuWhYHCM7KbHpv886zK8HeWhYz3uR6xJnXb","preauth_url":"https://dataaccess.adb.us-ashburn-1.oraclecloudapps.com/adb/p/L4XY28c6hZnC
7cy_-eEl-2KsMAbmbH36ZPCPO0F7m8sU4nPbzuM4nFt-JnMqNGi57xl8TtOZcC0/data","expiration_ts":"2024-08-01T15:14:59.164Z","expiration_count":null}
 
PL/SQL procedure successfully completed.
 
Here is a sample of the data returned by the above URL endpoint (to confirm it works).
 
rajesh@ATP23ai> $ curl -i --location https://dataaccess.adb.us-ashburn-1.oraclecloudapps.com/adb/p/L4XY28c6hZnC7cy_-eEl-2KsMAbmbH36ZPCPO0F7m8sU4nPbzuM4nFt-JnMqNGi57xl8TtOZcC0/data
HTTP/1.1 200 OK
Date: Thu, 01 Aug 2024 13:15:41 GMT
Content-Type: application/json
Content-Length: 1862
Connection: keep-alive
Strict-Transport-Security: max-age=31536000;includeSubDomains
X-Content-Type-Options: nosniff
Request-Id: defaultCTKlCQLumkwA9YYjohdQQqrgmbSHOMX1oKoSqOfppIAjopr2Q5eDyA4r
 
{"items":[{"EMPNO":7369,"ENAME":"SMITH","JOB":"CLERK","MGR":7902,"HIREDATE":"1980-12-17T00:00:00","SAL":800,"DEPTNO":20},{"EMPNO":7499,"ENAME":"ALLEN","JOB":"SALESMAN","MGR":7698,"HIREDATE":"1981-02-20T00:00:00","SAL":1600,"COMM":300,"DEPTNO":30},{"EMPNO":7521,"ENAME":"WARD","JOB":"SALESMAN","MGR":7698,"HIREDATE":"1981-02-22T00:00:00","SAL":1250,"COMM":500,"DEPTNO":30},{"EMPNO":7566,"ENAME":"JONES","JOB":"MANAGER","MGR":7839,"HIREDATE":"1981-04-02T00:00:00","SAL":2975,"DEPTNO":20},{"EMPNO":7654,"ENAME":"MARTIN","JOB":"SALESMAN","MGR":7698,"HIREDATE":"1981-09-28T00:00:00","SAL":1250,"COMM":1400,"DEPTNO":30},{"EMPNO":7698,"ENAME":"BLAKE","JOB":"MANAGER","MGR":7839,"HIREDATE":"1981-05-01T00:00:00","SAL":2850,"DEPTNO":30},{"EMPNO":7782,"ENAME":"CLARK","JOB":"MANAGER","MGR":7839,"HIREDATE":"1981-06-09T00:00:00","SAL":2450,"DEPTNO":10},{"EMPNO":7788,"ENAME":"SCOTT","JOB":"ANALYST","MGR":7566,"HIREDATE":"0087-04-19T00:00:00","SAL":3000,"DEPTNO":20},{"EMPNO":7839,"ENAME":"KING","JOB":"PRESIDENT","HIREDATE":"1981-11-17T00:00:00","SAL":5000,"DEPTNO":10},{"EMPNO":7844,"ENAME":"TURNER","JOB":"SALESMAN","MGR":7698,"HIREDATE":"1981-09-08T00:00:00","SAL":1500,"COMM":0,"DEPTNO":30},{"EMPNO":7876,"ENAME":"ADAMS","JOB":"CLERK","MGR":7788,"HIREDATE":"0087-05-23T00:00:00","SAL":1100,"DEPTNO":20},{"EMPNO":7900,"ENAME":"JAMES","JOB":"CLERK","MGR":7698,"HIREDATE":"1981-12-03T00:00:00","SAL":950,"DEPTNO":30},{"EMPNO":7902,"ENAME":"FORD","JOB":"ANALYST","MGR":7566,"HIREDATE":"1981-12-03T00:00:00","SAL":3000,"DEPTNO":20},{"EMPNO":7934,"ENAME":"MILLER","JOB":"CLERK","MGR":7782,"HIREDATE":"1982-01-23T00:00:00","SAL":1300,"DEPTNO":10}],"hasMore":false,"limit":100,"offset":0,"count":14,"links":[{"rel":"self","href":"https://dataaccess.adb.us-ashburn-1.oraclecloudapps.com/adb/p/L4XY28c6hZnC7cy_-eEl-2KsMAbmbH36ZPCPO0F7m8sU4nPbzuM4nFt-JnMqNGi57xl8TtOZcC0/data"}]}
rajesh@ATP23ai>
 
 
Then we use the inline external table feature to access the PAR URL through the ORACLE_BIGDATA adapter.
 
rajesh@ATP23ai> variable n varchar2(180)
rajesh@ATP23ai> exec :n := 'https://dataaccess.adb.us-ashburn-1.oraclecloudapps.com/adb/p/L4XY28c6hZnC7cy_-eEl-2KsMAbmbH36ZPCPO0F7m8sU4nPbzuM4nFt-JnMqNGi57xl8TtOZcC0/data';
 
PL/SQL procedure successfully completed.
 
rajesh@ATP23ai>
rajesh@ATP23ai> select count(*)
  2  from external(
  3             (data json)
  4              type oracle_bigdata
  5              access parameters(
  6              com.oracle.bigdata.fileformat = jsondoc
  7              )
  8              location (:n)
  9      reject limit unlimited);
 
  COUNT(*)
----------
         1
 
rajesh@ATP23ai>
 
Notice the type is ORACLE_BIGDATA. Using the ORACLE_BIGDATA access driver, we can access data stored in object stores as if it were stored in tables within the Oracle database. Currently it supports access to Oracle (OCI) Object Storage, Azure Blob Storage, and Amazon S3.
 
com.oracle.bigdata.fileformat: this is one of the common access parameters; it specifies the format of the file, and the possible values are parquet, orc, textfile, avro, csv, jsondoc, and jsontable.
 
The value jsondoc reads the input as a JSON file; the JSON values are mapped to a single JSON column that can be queried using SQL/JSON.
  
rajesh@ATP23ai> select count(*)
  2  from external(
  3             (data json)
  4              type oracle_bigdata
  5              access parameters(
  6              com.oracle.bigdata.json.path = '$.items[*]'
  7              com.oracle.bigdata.fileformat = jsondoc
  8              )
  9              location (:n)
 10      reject limit unlimited);
 
  COUNT(*)
----------
        14
 
com.oracle.bigdata.json.path: a JSON path expression that identifies a sequence of nested JSON values to be mapped to table rows. For the JSON document returned from the PAR URL above, we ask it to iterate over each element of the array, $.items[*], mapping each JSON value to a separate table row.
 
Then, using SQL/JSON path expressions and the dot-notation syntax, we get relational access to the JSON document like this:
 
rajesh@ATP23ai> select t.DATA.EMPNO.number() as empno,
  2        t.DATA.ENAME.string() as ename,
  3        t.DATA.HIREDATE.date() as hire_dt
  4  from external(
  5             (data json)
  6              type oracle_bigdata
  7              access parameters(
  8              com.oracle.bigdata.json.path = '$.items[*]'
  9              com.oracle.bigdata.fileformat = jsondoc
 10              )
 11              location (:n)
 12      reject limit unlimited) t
 13  where t.DATA.DEPTNO.number() = 10;
 
     EMPNO ENAME           HIRE_DT
---------- --------------- -----------
      7782 CLARK           09-JUN-1981
      7839 KING            17-NOV-1981
      7934 MILLER          23-JAN-1982
 
 
Or, even better, set the fileformat to jsontable (the input is read as a JSON file, the JSON values are assumed to be JSON objects, and the top-level attributes are mapped to specific columns based on the column name):
 
rajesh@ATP23ai> select empno,ename,job
  2  from external(
  3             (empno number
  4             , ename varchar2(10)
  5             , job varchar2(10)
  6             , mgr number
  7             , hiredate date
  8             , sal number
  9             , deptno number
 10             )
 11              type oracle_bigdata
 12              access parameters(
 13              com.oracle.bigdata.json.path = '$.items[*]'
 14              com.oracle.bigdata.fileformat = jsontable
 15              )
 16              location (:n)
 17      reject limit unlimited)
 18  where deptno = 10;
 
     EMPNO ENAME           JOB
---------- --------------- ----------
      7782 CLARK           MANAGER
      7839 KING            PRESIDENT
      7934 MILLER          CLERK
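
Once the demo is done, the PAR URL need not wait for its 120-minute expiry; it can be revoked with DBMS_DATA_ACCESS using the id returned by GET_PREAUTHENTICATED_URL. A hedged sketch, assuming an INVALIDATE_URL procedure with an id parameter (verify the exact signature in the package documentation):

begin
   dbms_data_access.invalidate_url(
      id => '-L-SeZAK9oMOUudM10xVhXRyuUjeZOuWhYHCM7KbHpv886zK8HeWhYz3uR6xJnXb');
end;
/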
 
SQL is an extremely powerful language: with mastery of just a few features, you can do things in a highly scalable fashion with very little code. Having SQL access to JSON documents opens up endless use cases and lets us draw many insights from the data.