Tuesday, October 21, 2025

Parquet Files - Part II


After discussing Parquet files at length, it is time to come back to the Oracle database and look at how easily Parquet files can be accessed and worked with as tables. Accessing external data from within the database provides real business benefits, and Parquet is an optimal storage format thanks to its reduced size compared to other formats. Now it is time to see how to access external Parquet files and work with their data using the database as the processing engine.
 
First we need to make the data accessible to the database, and external tables are the way to go. Since Parquet files are self-describing and contain schema information, defining an external table should be straightforward.

 
demo-user@ATP19C> variable uri varchar2(100)
demo-user@ATP19C> exec :uri := 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/idcglquusbz6/b/MY_DEMO_BUCKET/o/BUCKET02/';
 
PL/SQL procedure successfully completed.
 
demo-user@ATP19C>
demo-user@ATP19C> select object_name, bytes
  2  from dbms_cloud.list_objects( 'MY_DEMO_CRED', :uri );
 
OBJECT_NAME                              BYTES
----------------------------------- ----------
java.exe_1_20251015T143444892671Z.p   36017158
arquet
 
demo-user@ATP19C> begin
  2     dbms_cloud.create_external_table(
  3             table_name =>'MY_DEMO_EXT'
  4             , credential_name =>'MY_DEMO_CRED'
  5             , file_uri_list => :uri||'java.exe_1_20251015T143444892671Z.parquet'
  6             , format => json_object('type' value 'parquet', 'schema' value 'first') );
  7  end;
  8  /
 
PL/SQL procedure successfully completed.
 
demo-user@ATP19C> set linesize 71
demo-user@ATP19C> desc my_demo_ext
 Name                                Null?    Type
 ----------------------------------- -------- -------------------------
 OWNER                                        VARCHAR2(32767)
 OBJECT_NAME                                  VARCHAR2(32767)
 SUBOBJECT_NAME                               VARCHAR2(32767)
 OBJECT_ID                                    NUMBER(38,20)
 DATA_OBJECT_ID                               NUMBER(38,20)
 OBJECT_TYPE                                  VARCHAR2(32767)
 CREATED                                      DATE
 LAST_DDL_TIME                                DATE
 TIMESTAMP                                    VARCHAR2(32767)
 STATUS                                       VARCHAR2(32767)
 TEMPORARY                                    VARCHAR2(32767)
 GENERATED                                    VARCHAR2(32767)
 SECONDARY                                    VARCHAR2(32767)
 NAMESPACE                                    NUMBER(38,20)
 EDITION_NAME                                 VARCHAR2(32767)
 SHARING                                      VARCHAR2(32767)
 EDITIONABLE                                  VARCHAR2(32767)
 ORACLE_MAINTAINED                            VARCHAR2(32767)
 APPLICATION                                  VARCHAR2(32767)
 DEFAULT_COLLATION                            VARCHAR2(32767)
 DUPLICATED                                   VARCHAR2(32767)
 SHARDED                                      VARCHAR2(32767)
 CREATED_APPID                                NUMBER(38,20)
 CREATED_VSNID                                NUMBER(38,20)
 MODIFIED_APPID                               NUMBER(38,20)
 MODIFIED_VSNID                               NUMBER(38,20)
 ID                                           NUMBER(38,20)
 
demo-user@ATP19C> select count(*) from my_demo_ext;
 
    COUNT(*)
------------
   1,000,000
 
demo-user@ATP19C>
 
 
If you have access to Database Actions (the web UI for Autonomous Database), you can also use the Data Load options under Data Studio to load this data in a user-friendly manner.
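If you prefer to script the load instead, DBMS_CLOUD.COPY_DATA can pull the Parquet file straight into a regular table. A minimal sketch, assuming a heap table (here called MY_DEMO_TAB, a placeholder name) already exists with columns matching the Parquet schema:

begin
   -- load the Parquet file from object storage into an existing heap table;
   -- MY_DEMO_TAB is a placeholder and must be created up front with matching columns
   dbms_cloud.copy_data(
        table_name      => 'MY_DEMO_TAB'
      , credential_name => 'MY_DEMO_CRED'
      , file_uri_list   => :uri||'java.exe_1_20251015T143444892671Z.parquet'
      , format          => json_object('type' value 'parquet') );
end;
/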
 


 
 


 
 
After creating these external tables, you can use them like any other table in the Oracle database.
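For instance, a quick aggregate over the external table (column names taken from the DESCRIBE output above; just a sketch, output not shown):

select object_type, count(*)
from my_demo_ext
group by object_type
order by count(*) desc;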
 
Also, in cases where you don't have the privilege to create a table, we can leverage the inline external table feature to access Parquet files from object storage. To do that we need to obtain some information about the Parquet file, such as the number of columns and their data types.
 
The easiest way to get insight into Parquet files is to use the Python-based parquet-tools. If you have Python 3 in place it is trivial to install: just run the following command and it is ready to use.

 
C:\Users\Rajeshwaran Jeyabal>pip3 install parquet-tools
Defaulting to user installation because normal site-packages is not writeable
Collecting parquet-tools
  Downloading parquet_tools-0.2.16-py3-none-any.whl.metadata (3.8 kB)
Collecting boto3<2.0.0,>=1.34.11 (from parquet-tools)
  Downloading boto3-1.40.52-py3-none-any.whl.metadata (6.7 kB)
…………….
…………….
…………….
…………….
 
Successfully installed boto3-1.40.52 botocore-1.40.52 colorama-0.4.6 halo-0.0.31 jmespath-1.0.1 log_symbols-0.0.14 numpy-2.3.3 pandas-2.3.3 parquet-tools-0.2.16 pyarrow-21.0.0 python-dateutil-2.9.0.post0 pytz-2025.2 s3transfer-0.14.0 six-1.17.0 spinners-0.0.24 tabulate-0.9.0 termcolor-3.1.0 thrift-0.16.0 tzdata-2025.2 urllib3-2.5.0
 
Now you have a simple and powerful tool. Just run the "inspect" command to see how the file was created, what compression is used, how many columns it has, and how many row groups it contains.
 
C:\Users\Rajeshwaran Jeyabal>parquet-tools.exe inspect "C:\Users\Rajeshwaran Jeyabal\Downloads\BUCKET02_java.exe_1_20251015T143444892671Z.parquet"
 
############ file meta data ############
created_by: parquet-mr version 1.12.2 (build 77e30c8093386ec52c3cfa6c34b7ef3321322c94)
num_columns: 27
num_rows: 1000000
num_row_groups: 174
format_version: 1.0
serialized_size: 419719
 
 
############ Columns ############
OWNER
OBJECT_NAME
SUBOBJECT_NAME
OBJECT_ID
DATA_OBJECT_ID
OBJECT_TYPE
CREATED
LAST_DDL_TIME
TIMESTAMP
STATUS
TEMPORARY
GENERATED
SECONDARY
NAMESPACE
EDITION_NAME
SHARING
EDITIONABLE
ORACLE_MAINTAINED
APPLICATION
DEFAULT_COLLATION
DUPLICATED
SHARDED
CREATED_APPID
CREATED_VSNID
MODIFIED_APPID
MODIFIED_VSNID
ID
 
############ Column(OWNER) ############
name: OWNER
path: OWNER
max_definition_level: 1
max_repetition_level: 0
physical_type: BYTE_ARRAY
logical_type: String
converted_type (legacy): UTF8
compression: SNAPPY (space_saved: 95%)
 
############ Column(OBJECT_NAME) ############
name: OBJECT_NAME
path: OBJECT_NAME
max_definition_level: 1
max_repetition_level: 0
physical_type: BYTE_ARRAY
logical_type: String
converted_type (legacy): UTF8
compression: SNAPPY (space_saved: 65%)
 
…………………..
…………………..
…………………..
…………………..
 
 
############ Column(ID) ############
name: ID
path: ID
max_definition_level: 1
max_repetition_level: 0
physical_type: FIXED_LEN_BYTE_ARRAY
logical_type: Decimal(precision=38, scale=20)
converted_type (legacy): DECIMAL
compression: SNAPPY (space_saved: 41%)
 
 
Using the above list of columns in the Parquet file, we can create an inline external table like this:
 
demo-user@ATP19C> with rws as (
  2  select *
  3  from external(
  4      ( OWNER    VARCHAR2(4000)
  5     , OBJECT_NAME    VARCHAR2(4000)
  6     , SUBOBJECT_NAME    VARCHAR2(4000)
  7     , OBJECT_ID    NUMBER(38,20)
  8     , DATA_OBJECT_ID    NUMBER(38,20)
  9     , OBJECT_TYPE    VARCHAR2(4000)
 10     , CREATED    DATE
 11     , LAST_DDL_TIME    DATE
 12     , TIMESTAMP    VARCHAR2(4000)
 13     , STATUS    VARCHAR2(4000)
 14     , TEMPORARY    VARCHAR2(4000)
 15     , GENERATED    VARCHAR2(4000)
 16     , SECONDARY    VARCHAR2(4000)
 17     , NAMESPACE    NUMBER(38,20)
 18     , EDITION_NAME    VARCHAR2(4000)
 19     , SHARING    VARCHAR2(4000)
 20     , EDITIONABLE    VARCHAR2(4000)
 21     , ORACLE_MAINTAINED    VARCHAR2(4000)
 22     , APPLICATION    VARCHAR2(4000)
 23     , DEFAULT_COLLATION    VARCHAR2(4000)
 24     , DUPLICATED    VARCHAR2(4000)
 25     , SHARDED    VARCHAR2(4000)
 26     , CREATED_APPID    NUMBER(38,20)
 27     , CREATED_VSNID    NUMBER(38,20)
 28     , MODIFIED_APPID    NUMBER(38,20)
 29     , MODIFIED_VSNID    NUMBER(38,20)
 30     , ID    NUMBER(38,20) )
 31      type oracle_bigdata
 32      access parameters(
 33          com.oracle.bigdata.fileformat = parquet
 34          com.oracle.bigdata.credential.name = MY_DEMO_CRED
 35          com.oracle.bigdata.removequotes = true
 36          com.oracle.bigdata.dateformat = 'YYYY-MM-DD hh24:mi:ss'
 37          )
 38  location ('https://objectstorage.us-ashburn-1.oraclecloud.com/n/idcglquusbz6/b/MY_DEMO_BUCKET/o/BUCKET02/java.exe_1_20251015T143444892671Z.parquet')
 39  ) )
 40  select trunc(created,'Y'), count(*)
 41  from rws
 42  group by trunc(created,'Y');
 
TRUNC(CREAT     COUNT(*)
----------- ------------
01-JAN-2022      142,042
01-JAN-2023       57,636
01-JAN-2024      271,683
01-JAN-2021       92,060
01-JAN-2020      436,579
 
demo-user@ATP19C>
 
 

Friday, October 17, 2025

Parquet Files - Part I

 
Apache Parquet is a popular columnar file format that stores data efficiently by employing column-wise compression and encoding strategies chosen per data type, and it stores metadata alongside the data. Generally, the less data you have to read and transfer, the better your query performance and the lower your running costs. And obviously, the smaller the data footprint, the smaller the cost of storing it. Parquet helps us here quite a bit.
 
Let's see why Parquet improves the performance of processing external data.
Parquet files are organized by column, which means only the columns listed in the SQL statement need to be read by the compute layer, similar in spirit to how Oracle storage indexes avoid unnecessary I/O.

 



If you need to read a single column from a Parquet file, you only read the corresponding column chunks from all Row Groups.
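As a rough sketch (reusing the <your_table_name> placeholder from the example later in this post), a query that projects a single column only has to scan that column's chunks, while select * has to scan them all:

-- only the col1 chunks of each row group need to be read
select col1, count(*)
from <your_table_name>
group by col1;

-- every column chunk of every row group has to be read
select *
from <your_table_name>;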
 
All the technical details of how Parquet files are organized are covered in the open-source Parquet specification, but in short: each file is divided into chunks called row groups, and each row group carries metadata (min/max values) for every column. Row groups are a way to implement hybrid row/column storage: rows are grouped into row groups, and within a row group the data is stored by column. A single row group therefore contains the data of all columns for a small number of rows, which is pretty close to what Oracle HCC and storage indexes do. The engine can use this metadata to skip reading parts of the file based on the SQL predicates.

 

 
 

For example, the following query

select * from <your_table_name> where col1 = 1;

will only read the row groups whose min/max metadata for col1 can contain the value 1 (because of the predicate col1 = 1) and will skip the rest.
 
Parquet is also a self-describing file format: the metadata is stored together with the data, which means that from the file alone a user can define a table, with nothing extra required. Everything is compressed by default, and since it is a columnar format we get good compression ratios.
 
Parquet is an open, widely adopted format. It has been available for quite some time and has become the de facto standard for storing columnar data outside your database engine. So whenever your data does not require the security and robustness of being stored, managed, and protected by your database, and you have a choice, you should consider storing it in Parquet rather than other open formats like CSV. In this blogpost we will compare the size of the output file produced by each format against the Parquet file size.
 
We have a table with about 160 MB of data.
 
demo-user@ATP19C> exec show_space('BIG_TABLE');
Unformatted Blocks .....................               0
FS1 Blocks (0-25) ......................               0
FS2 Blocks (25-50) .....................               0
FS3 Blocks (50-75) .....................               0
FS4 Blocks (75-100).....................              18
Full Blocks ............................          20,526
Total Blocks............................          20,920
Total Bytes.............................     171,376,640
Total MBytes............................             163
Unused Blocks...........................               0
Unused Bytes............................               0
Last Used Ext FileId....................           2,731
Last Used Ext BlockId...................         575,872
Last Used Block.........................              24
 
PL/SQL procedure successfully completed.
 
 
When exported to Object storage as CSV
 
demo-user@ATP19C> variable uri varchar2(100)
demo-user@ATP19C> exec :uri := 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/idcglquusbz6/b/MY_DEMO_BUCKET/o/BUCKET02/';
 
PL/SQL procedure successfully completed.
demo-user@ATP19C> begin
  2      dbms_cloud.export_data(
  3          credential_name =>'MY_DEMO_CRED'
  4          , file_uri_list => :uri
  5          , query => ' select * from big_Table '
  6          , format => json_object('type' value 'csv')
  7      ) ;
  8  end;
  9* /
 
PL/SQL procedure successfully completed.
 
demo-user@ATP19C> select count(*), round(sum(bytes)/1024/1024,2) size_mb
  2  from dbms_cloud.list_objects( 'MY_DEMO_CRED'
  3      , :uri )
  4* where object_name like '%.csv'    ;
 
COUNT(*) SIZE_MB
________ _______
      17  165.37
 
The total CSV file size was close to 165 MB. When exported as JSON it was close to 577 MB:
 
demo-user@ATP19C> begin
  2      dbms_cloud.export_data(
  3          credential_name =>'MY_DEMO_CRED'
  4          , file_uri_list => :uri
  5          , query => ' select * from big_Table '
  6          , format => json_object('type' value 'json')
  7      ) ;
  8  end;
  9* /
 
PL/SQL procedure successfully completed.
 
demo-user@ATP19C> select count(*), round(sum(bytes)/1024/1024,2) size_mb
  2  from dbms_cloud.list_objects( 'MY_DEMO_CRED'
  3      , :uri )
  4* where object_name like '%.json'   ;
 
COUNT(*) SIZE_MB
________ _______
      58     577
 
The XML output was close to 426 MB:
 
demo-user@ATP19C> begin
  2      dbms_cloud.export_data(
  3          credential_name =>'MY_DEMO_CRED'
  4          , file_uri_list => :uri
  5          , query => ' select * from big_Table '
  6          , format => json_object('type' value 'xml')
  7      ) ;
  8  end;
  9* /
 
PL/SQL procedure successfully completed.
 
demo-user@ATP19C> select count(*), round(sum(bytes)/1024/1024,2) size_mb
  2  from dbms_cloud.list_objects( 'MY_DEMO_CRED', :uri )
  3* where object_name like '%.xml' ;
 
COUNT(*) SIZE_MB
________ _______
      43  426.02
 
Whereas as Parquet it was only about 34 MB:
 

demo-user@ATP19C> begin
  2      dbms_cloud.export_data(
  3          credential_name =>'MY_DEMO_CRED'
  4          , file_uri_list => :uri
  5          , query => ' select * from big_Table '
  6          , format => json_object('type' value 'parquet')
  7      ) ;
  8  end;
  9* /
 
PL/SQL procedure successfully completed.
 
demo-user@ATP19C> select count(*), round(sum(bytes)/1024/1024,2) size_mb
  2  from dbms_cloud.list_objects( 'MY_DEMO_CRED', :uri )
  3* where object_name like '%.parquet' ;
 
COUNT(*) SIZE_MB
________ _______
       1     34.35
 
In the next blogpost we will see how to read these Parquet files from object storage into the Oracle database using Autonomous Database features.
 
 

Wednesday, July 30, 2025

OCI Resource Principal

 
 
Autonomous Database (ADB-S) users often interact with other resources in Oracle Cloud Infrastructure (OCI) to perform common operations such as moving content to and from object storage locations. In the past we have discussed credential objects, which let an ADB-S instance access other resources (like OCI buckets, Vault, etc.). Today we will discuss the OCI resource principal: what it is and how to configure it.
 
An OCI resource principal is a principal type in Oracle Identity and Access Management (IAM) that eliminates the need to create and configure OCI user credential objects in the database. In other words, a resource principal uses a certificate that is frequently refreshed to sign the API calls to certain OCI services (e.g. Object Storage, Vault), and the authorization is established through dynamic groups and IAM policies.
 
 
In the remainder of this blogpost, I'm going to demonstrate how to create an OCI resource principal and use it to access OCI object storage contents.
 
Create Dynamic Group and policy.
 
First, we need to create a dynamic group and a policy to be able to use resource principal authentication; that is, we tell Identity and Access Management (IAM) that a given ADB-S instance should be able to access OCI buckets.
 
1.  In the OCI console, go to Identity & Security >> Domains >> Dynamic Groups >> Create Dynamic Group
2.  Since I want my ADB-S instance to be part of this dynamic group, I need to add the OCID of my database instance in the matching rule, along the lines of the sketch below.
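The screenshot is not reproduced here, but the matching rule is a one-liner along these lines (the OCID is a placeholder for your own instance; a compartment-wide rule using resource.type and resource.compartment.id is also possible):

resource.id = 'ocid1.autonomousdatabase.oc1..<your_adb_ocid>'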
 


 
Now that we have created a dynamic group that includes our ADB-S instance, we can go ahead and create a policy to allow this resource to access other resources that reside in a compartment or tenancy.
 
1)  In the OCI console, go to Identity & Security >> Policies >> Create Policy
2)  Add your policy statement in plain text or by using the policy builder.
 


 
The policy created here allows read and write access to OCI buckets.
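Since the screenshot is not reproduced here, the statements were along these lines (the dynamic group and compartment names are placeholders):

Allow dynamic-group MY_ADB_DG to read buckets in compartment MY_COMPARTMENT
Allow dynamic-group MY_ADB_DG to manage objects in compartment MY_COMPARTMENT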
 
 
Resource principal authentication is not enabled by default. In order to use it in our ADB-S instance, we need to enable it with the DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL procedure.
 
admin@ATP19C> EXEC DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL();
 
PL/SQL procedure successfully completed.
 
The above step enables the resource principal for the ADMIN user. If you would like other database users to call the DBMS_CLOUD APIs using the resource principal, the ADMIN user can enable resource principal authentication for them as well.
 
admin@ATP19C> EXEC DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL('DEMO_USER');
 
PL/SQL procedure successfully completed.
 
admin@ATP19C> EXEC DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL('DEMO');
 
PL/SQL procedure successfully completed.
 
To verify that the resource principal is enabled, we can use the query below.
 
admin@ATP19C> select grantee
  2  from all_tab_privs
  3  where table_name ='OCI$RESOURCE_PRINCIPAL'
  4  and grantor ='ADMIN';
 
GRANTEE
---------------------
C##CLOUD$SERVICE
GRAPH$METADATA
DEMO_USER
RAJESH
DEMO
 
 
As the final step of the demonstration, we can use the resource principal to access object storage contents.
 
demo-user@ATP19C> variable uri varchar2(100)
demo-user@ATP19C> exec :uri := 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/idcglquusbz6/b/MY_DEMO_BUCKET/o/EXPORT_DEMO/';
 
PL/SQL procedure successfully completed.
 
demo-user@ATP19C> select count(*)
  2  from dbms_cloud.list_objects(
  3          credential_name =>'OCI$RESOURCE_PRINCIPAL'
  4          , location_uri => :uri);
 
  COUNT(*)
----------
         1
 
As you might have noticed, OCI$RESOURCE_PRINCIPAL is the credential_name we need to specify in the DBMS_CLOUD APIs whenever we want to use resource principal authentication.
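For example, an external table over a file in that bucket can be defined with the resource principal instead of a credential object. A minimal sketch, where the table name, columns, object name, and format are just placeholders:

begin
   dbms_cloud.create_external_table(
        table_name      => 'MY_RP_DEMO_EXT'           -- placeholder table name
      , credential_name => 'OCI$RESOURCE_PRINCIPAL'   -- resource principal instead of a credential object
      , file_uri_list   => :uri||'my_export_file.csv' -- placeholder object name
      , column_list     => 'c1 number, c2 varchar2(100)' -- placeholder columns
      , format          => json_object('type' value 'csv') );
end;
/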
 
To summarize, the resource principal is a really neat Oracle IAM capability that enables your OCI resources to access various OCI services through dynamic groups and policies. Creating dynamic groups and policies can potentially be a one-time operation, since you can define your dynamic group so that it includes all existing and future ADB-S instances in a given compartment. Whenever you provision a new ADB-S instance, all you have to do is enable the resource principal for that instance via the DBMS_CLOUD_ADMIN API if the instance needs to access other OCI services or resources. Much simpler and easier than creating credential objects via auth tokens or OCI native authentication!

Thursday, January 2, 2025

Oracle ADB Pre-Authenticated URL - Part V

Pre-Authenticated URLs (PAR URLs) provide secure, time-limited access to specific datasets on the Autonomous platform and can be shared with any REST client. This feature allows a team to easily share sensitive data without complex permissions, ensuring that the access is temporary and controlled. PAR URLs can also be invalidated at any time, adding another layer of security to data sharing; for more background on this feature refer to the earlier blogposts on this topic.
 
Over the last few months a series of new features focusing on user experience and functionality has been added. These updates aim to enhance convenience both for data providers, who create and share the data, and for data consumers, who analyse and use it. Each of these features was created to provide more control, flexibility, and efficiency while maintaining a high standard of data security.
 
Data consumers now get a user-friendly web interface for viewing and interacting with shared data. This enhanced web UI is highly functional and designed to simplify the consumer experience.
 
Grouping and sorting: the ability to group and sort the data directly from the web interface.
 
Providers can pick the columns on which GROUP BY is allowed, and consumers can easily set it up. For example, the original dataset
 


 
Will be shown like this


 
In addition to grouping and sorting, the improved filtering makes navigating large datasets much simpler: consumers can filter by various fields to narrow down exactly what they are looking for. The goal is to make data exploration as intuitive as possible, allowing consumers to focus on analysis rather than struggling with the user interface.
 
The filter will be applied.
 

 
And the user will see the results like this.
 
 
 
 
With the updated PAR capabilities, creating a time- and access-limited URL that only exposes the necessary data, keeping everything else behind the door, looks like this:
 
demo@ATP19C> declare
  2     l_result long;
  3  begin
  4     dbms_data_access.get_preauthenticated_url(
  5             schema_name => user
  6             , schema_object_name => 'T'
  7             , expiration_count => 100
  8             , result => l_result
  9             , column_lists => '{ "group_by_columns":["PRODUCT_TYPE","STATUS"]
 10                     , "order_by_columns":["PRODUCT_ID"]
 11                     , "filter_columns":["CUSTOMER"] }');
 12             select json_query(l_result,'$' pretty) into l_result from dual;
 13             dbms_output.put_line( l_result );
 14  end;
 15  /
{
  "status" : "SUCCESS",
  "id" : "W7CFSsa69dZFkWIB11lg1wp5iehHXjESei4DfLSXnYwihT44VBYRLt9qMA0rEE1C",
  "preauth_url" : "https://dataaccess.adb.us-ashburn-1.oraclecloudapps.com/adb/p/4YXCQr5oxuEOqbpq
DeSc8VdhJf8UKVfhw6Sgi1y2xYLszanujxUkcXm1rb4GgMlVEu8h2SSgoiM/data",
  "expiration_ts" : "2025-04-03T01:59:11.904Z",
  "expiration_count" : 100
}
 
PL/SQL procedure successfully completed.
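On the consumer side no database account is needed: the returned preauth_url can be fetched with any REST client. A minimal sketch (substitute the actual preauth_url value from the output above):

curl -s "<preauth_url>"

Opened in a browser, the same URL renders the consumer web UI described above.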
 
demo@ATP19C> select jt.*
  2  from json_table( dbms_data_access.list_active_urls ,'$[*]'
  3      columns (
  4           schema_name varchar2(10) path '$.schema_name'
  5          , object_name varchar2(10) path '$.schema_object_name'
  6          , expiration_count number path '$.expiration_count'
  7          , access_count number path '$.access_count'
  8          , column_lists varchar2(140) format json path '$.column_lists'
  9      )
 10  ) jt ;
 
SCHEM OBJECT_ EXPIRATION_COUNT ACCESS_COUNT COLUMN_LISTS
----- ------- ---------------- ------------ --------------------------------------------------------------------------------------------------------------------------------------------
DEMO  T                    100            0 {"order_by_columns":["PRODUCT_ID"],"group_by_columns":["PRODUCT_TYPE","STATUS"],"filter_columns":["PRODUCT_TYPE","STATUS","CUSTOMER"]}
 
If the consumer needs more access, that is not a problem: a PAR URL can easily be extended. Previously providers were limited in how long they could make the data accessible, which often resulted in a rush to get the work done before the link expired. Now ADB allows us to extend the PAR validity with just a few API calls, which means longer, hassle-free data-sharing periods. This feature frees providers from regenerating URLs and offers much more flexibility.
 
demo@ATP19C> declare
  2     l_result long ;
  3  begin
  4     dbms_data_access.extend_url(
  5             id => 'W7CFSsa69dZFkWIB11lg1wp5iehHXjESei4DfLSXnYwihT44VBYRLt9qMA0rEE1C'
  6             , extend_expiration_count_by => 30
  7             , result => l_result );
  8     select json_query(l_result,'$' pretty) into l_result from dual;
  9     dbms_output.put_line( l_result );
 10  end;
 11  /
{
  "status" : "SUCCESS"
}
 
PL/SQL procedure successfully completed.
 
demo@ATP19C> select jt.*
  2  from json_table( dbms_data_access.list_active_urls ,'$[*]'
  3      columns (
  4           schema_name varchar2(10) path '$.schema_name'
  5          , object_name varchar2(10) path '$.schema_object_name'
  6          , expiration_count number path '$.expiration_count'
  7          , access_count number path '$.access_count'
  8          , column_lists varchar2(140) format json path '$.column_lists'
  9      )
 10  ) jt ;
 
SCHEM OBJECT_ EXPIRATION_COUNT ACCESS_COUNT COLUMN_LISTS
----- ------- ---------------- ------------ --------------------------------------------------------------------------------------------------------------------------------------------
DEMO  T                    130            0 {"order_by_columns":["PRODUCT_ID"],"group_by_columns":["PRODUCT_TYPE","STATUS"],"filter_columns":["PRODUCT_TYPE","STATUS","CUSTOMER"]}
 
demo@ATP19C>
 
However, it should be noted that we cannot reduce the access by supplying a negative value to the EXTEND_EXPIRATION_COUNT_BY parameter; doing so ends with an error like this.
 
demo@ATP19C> declare
  2     l_result long ;
  3  begin
  4     dbms_data_access.extend_url(
  5             id => 'W7CFSsa69dZFkWIB11lg1wp5iehHXjESei4DfLSXnYwihT44VBYRLt9qMA0rEE1C'
  6             , extend_expiration_count_by => -70
  7             , result => l_result );
  8     select json_query(l_result,'$' pretty) into l_result from dual;
  9     dbms_output.put_line( l_result );
 10  end;
 11  /
{
  "status" : "FAILURE",
  "return_message" : "ORA-20001: extend_url attribute extend_expiration_count_by cannot be negative"
}
 
PL/SQL procedure successfully completed.
 
demo@ATP19C>