09-15-2025 03:25 PM - edited 09-15-2025 03:27 PM
Incorta Data Delivery streamlines the integration between Oracle EBS and data destinations like Google BigQuery, Snowflake or Databricks. In addition, it accelerates data mart deployment and automates data model design and source schema mapping with Incorta data applications. The data applications provide a new, faster, more practical option for analyzing and understanding your data, with no data modeling or extract, transform, and load (ETL) required.
Incorta's High-Level Data Delivery Process:
Follow these steps to install Incorta and configure the pipeline that moves data from Oracle EBS into BigQuery. Incorta delivers reporting-ready business views as part of the Oracle EBS to BigQuery Data App offerings. The delivered schemas and their table prefixes are:
| Schema | Table Prefix |
| --- | --- |
| EBS_AP_FACTS_ALL | AP_ |
| EBS_AR_FACTS_ALL | AR_ |
| EBS_COMMON_DIM_ALL | CMN_ |
| EBS_FA_FACTS_ALL | FA_ |
| EBS_GL_FACTS_ALL | GL_ |
| EBS_HCM_FACTS_ALL | HCM_ |
| EBS_HCM_COMMON_DIM_ALL | HCM_CMN_ |
| EBS_OM_FACTS_ALL | OM_ |
| EBS_PO_FACTS_ALL | PO_ |
| EBS_PRJ_FACTS_ALL | PRJ_ |
This section of the document lists the objects included in the deployment.
| Module | Business Schema |
| --- | --- |
| Common | CommonDimensionDM |
| Order to Cash | OrderManagementDM, ARCashReceiptsDM |
| Procure to Pay | PurchaseOrderDM, PayablesDM, InventoryDM |
| General Ledger | GeneralLedgerDM |
| Fixed Assets | FixedAssetDM |
| Projects | ProjectFinancialsDM |
| HCM | HRCommonDimensionDM, WorkforceDM |
| Business Process | Module | Base Physical Schemas | Fact and Dimension Physical Schemas |
| --- | --- | --- | --- |
| Procure to Pay (P2P) | Advanced Payables (AP), Procurement (PO), Inventory (INV) | EBS_CAL_COMMON, EBS_Item_Common, EBS_HR_Common, EBS_FND_Common, EBS_FIN_Common, EBS_PARTY_COMMON, EBS_AP, EBS_PO, EBS_INV | EBS_COMMON_DIM_ALL, EBS_AP_FACTS_ALL, EBS_PO_FACTS_ALL |
| Order to Cash (O2C) | Accounts Receivable (AR), Order Management (OM) | EBS_CAL_COMMON, EBS_Item_Common, EBS_HR_Common, EBS_FND_Common, EBS_FIN_Common, EBS_PARTY_COMMON, EBS_AR, EBS_ONT, EBS_AR_SNP, EBS_AR_MV | EBS_COMMON_DIM_ALL, EBS_AR_FACTS_ALL, EBS_OM_FACTS_ALL |
| General Ledger | General Ledger (GL) | EBS_CAL_COMMON, EBS_Item_Common, EBS_HR_Common, EBS_FND_Common, EBS_FIN_Common, EBS_PARTY_COMMON, EBS_GL, EBS_XLA, EBS_XLA_MV | EBS_COMMON_DIM_ALL, EBS_GL_FACTS_ALL |
| Fixed Assets | Fixed Assets | EBS_CAL_COMMON, EBS_Item_Common, EBS_HR_Common, EBS_FND_Common, EBS_FIN_Common, EBS_PARTY_COMMON, EBS_FA | EBS_COMMON_DIM_ALL |
| Projects (PPM) | Projects (PPM) | EBS_CAL_COMMON, EBS_Item_Common, EBS_HR_Common, EBS_FND_Common, EBS_FIN_Common, EBS_PARTY_COMMON, EBS_PRJ | EBS_COMMON_DIM_ALL, EBS_PRJ_FACTS_ALL |
| HCM Core | HCM Core | EBS_CAL_COMMON, EBS_HR_Common, EBS_HCM | EBS_COMMON_DIM_ALL, EBS_HCM_COMMON_DIM_ALL, EBS_HCM_FACTS_ALL |
In the Incorta UI, navigate to Scheduler->Load Plans and schedule the load plan for your application. By default, load plans are set to incremental, allowing you to load only the latest data changes from your Oracle EBS applications. To schedule the timing of your incremental loads, select your Oracle EBS application load plan, then click the Change link to open the Schedule UI, where you can set the interval for your incremental loads.
The first time a load plan runs, it executes a full load of each included schema, even if it is configured to run incrementally. Because there is no data in Incorta to append to, it picks up everything that is available per the definition of each dataset, which takes care of loading historical data. You can also run load plans as full loads on demand; you can find more information about Full Loads in Incorta's documentation. Note that a full load removes all data from the tables being populated before reloading them fully.
To monitor a load plan, navigate to Schema->Load Plan. You can check the row counts from the various stages of the load: Extracted, Rejected, Loaded, and Sent to Destination.
After the initial load, Incorta continues to run the loads in incremental mode. Incremental loads handle upserts, i.e., updated and inserted records, which are pulled into Incorta and then immediately pushed to BigQuery whenever an incremental load executes.
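Conceptually, each incremental push lands in the destination as an upsert keyed on the table's primary key. The following BigQuery MERGE is only a sketch of that behavior; Incorta manages the destination write itself, and the table, staging source, and column names here are illustrative, not what Incorta actually generates:

```sql
-- Illustrative upsert: update rows whose key already exists,
-- insert rows that are new since the last incremental load.
MERGE `project.dataset.AP_FactAPInvoiceDistributions` AS tgt
USING `project.dataset.incoming_batch` AS src
  ON tgt.INVOICE_DISTRIBUTION_ID = src.INVOICE_DISTRIBUTION_ID
WHEN MATCHED THEN
  UPDATE SET AMOUNT = src.AMOUNT,
             LAST_UPDATE_DATE = src.LAST_UPDATE_DATE
WHEN NOT MATCHED THEN
  INSERT ROW;
```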
If a load from a source table fails in Incorta, the incremental data will not be written to BigQuery. Incorta can send an alert to a designated email address to notify an administrator of the failure. If you wish to receive alerts, configure the Incorta tenant to send email, then follow these instructions to set up notifications for the Incorta schemas you wish to track.
Make sure to select Notify Upon Failure and populate the Recipients field with an appropriate email address to monitor load jobs.
In case of a load failure, Incorta will not attempt to reload the data until the next scheduled load. You can review the logs in Incorta to determine the cause of the issue and take any necessary steps to address it; if further assistance is required, you can work with Incorta Support to resolve the issue. Once the issue is resolved, data will be caught up the next time a scheduled load runs. Alternatively, you can manually run a load at the schema or individual table level to catch up on the data.
Here is an example of how one can implement Operating Unit (ORG_ID-based) security for EBS data in BigQuery. First, build a mapping of users to operating units from the EBS security tables:
SELECT DISTINCT
       fpov.profile_option_value org_id,
       fu.user_name,
       fu.email_address
  FROM apps.hr_organization_units hou,
       apps.fnd_profile_options_vl fpo,
       apps.fnd_profile_option_values fpov,
       apps.fnd_responsibility_vl frv,
       apps.fnd_user_resp_groups furg,
       apps.fnd_user fu
 WHERE fpov.level_value = frv.responsibility_id
   AND fpo.profile_option_id = fpov.profile_option_id
   AND fpo.user_profile_option_name = 'MO: Operating Unit'
   AND hou.organization_id = TO_NUMBER (fpov.profile_option_value)
   AND frv.responsibility_id = furg.responsibility_id
   AND furg.user_id = fu.user_id
CREATE ROW ACCESS POLICY ap_invoice_org_filter
ON `project.dataset.FactAPInvoiceDistributions`
GRANT TO ('domain:example.com')
FILTER USING (
  org_id IN (
    SELECT org_id
    FROM `project.dataset.user_org_access`
    WHERE EMAIL_ADDRESS = SESSION_USER()
  )
);
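Once the policy is in place, filtering is transparent to the user: any query against the fact table returns only rows whose org_id is mapped to SESSION_USER() in the user_org_access table. For example, a quick sanity check like the following should show only the operating units the signed-in user is entitled to (table path as in the policy above):

```sql
-- Each user sees only their own operating units' rows.
SELECT org_id, COUNT(*) AS row_count
FROM `project.dataset.FactAPInvoiceDistributions`
GROUP BY org_id;
```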
| Schema Name | Table Name |
| --- | --- |
| EBS_AP_FACTS_ALL | AP_DimAPInvoiceHeaders |
| EBS_AP_FACTS_ALL | AP_FactAPInvoiceDistributions |
| EBS_AP_FACTS_ALL | AP_FactInvoiceHolds |
| EBS_AP_FACTS_ALL | AP_FactInvoicePayments |
| EBS_AR_FACTS_ALL | AR_DimARInvoiceHeader |
| EBS_AR_FACTS_ALL | AR_FactARAdjustments |
| EBS_AR_FACTS_ALL | AR_FactARPaymentSchedules |
| EBS_AR_FACTS_ALL | AR_FactCashReceipts |
| EBS_AR_FACTS_ALL | OM_FactARInvoiceDetails |
| EBS_AR_FACTS_ALL | OM_FactARInvoiceDistributions |
| EBS_COMMON_DIM_ALL | CMN_DimAccountHierarchy |
| EBS_COMMON_DIM_ALL | CMN_DimAccounts |
| EBS_COMMON_DIM_ALL | CMN_DimBalancingSegment |
| EBS_COMMON_DIM_ALL | CMN_DimBusinessUnit |
| EBS_COMMON_DIM_ALL | CMN_DimCodeCombinations |
| EBS_COMMON_DIM_ALL | CMN_DimCostCenter |
| EBS_COMMON_DIM_ALL | CMN_DimCustomer |
| EBS_COMMON_DIM_ALL | CMN_DimFiscalCalendar |
| EBS_COMMON_DIM_ALL | CMN_DimGLDailyRate |
| EBS_COMMON_DIM_ALL | CMN_DimGLPeriods |
| EBS_COMMON_DIM_ALL | CMN_DimGregorianCalendar |
| EBS_COMMON_DIM_ALL | CMN_DimInventoryOrganization |
| EBS_COMMON_DIM_ALL | CMN_DimItem |
| EBS_COMMON_DIM_ALL | CMN_DimLedgers |
| EBS_COMMON_DIM_ALL | CMN_DimOrganization |
| EBS_COMMON_DIM_ALL | CMN_DimPerson |
| EBS_COMMON_DIM_ALL | CMN_DimSuppliers |
| EBS_COMMON_DIM_ALL | CMN_DimSupplierSite |
| EBS_FA_FACTS_ALL | FA_DimAsset |
| EBS_FA_FACTS_ALL | FA_DimAssetCategory |
| EBS_FA_FACTS_ALL | FA_DimAssetLocation |
| EBS_FA_FACTS_ALL | FA_DimClearingAccount |
| EBS_FA_FACTS_ALL | FA_DimCostAccount |
| EBS_FA_FACTS_ALL | FA_DimExpenseAccount |
| EBS_FA_FACTS_ALL | FA_DimFABook |
| EBS_FA_FACTS_ALL | FA_DimFAPeriod |
| EBS_FA_FACTS_ALL | FA_FactAdjustment |
| EBS_FA_FACTS_ALL | FA_FactAssetBook |
| EBS_FA_FACTS_ALL | FA_FactAssetDistribution |
| EBS_FA_FACTS_ALL | FA_FactAssetRetirement |
| EBS_FA_FACTS_ALL | FA_FactAssetTransaction |
| EBS_GL_FACTS_ALL | GL_DimJournalCategory |
| EBS_GL_FACTS_ALL | GL_DimJournalSource |
| EBS_GL_FACTS_ALL | GL_FactGLBalances |
| EBS_GL_FACTS_ALL | GL_FactJournalDetails |
| EBS_GL_FACTS_ALL | GL_FactXLAAPBridge |
| EBS_GL_FACTS_ALL | GL_FactXLAARBridge |
| EBS_OM_FACTS_ALL | OM_FactSalesOrderFullfillment |
| EBS_OM_FACTS_ALL | OM_FactSalesOrderHeader |
| EBS_OM_FACTS_ALL | OM_FactSalesOrderLine |
| EBS_PO_FACTS_ALL | PO_DimBuyer |
| EBS_PO_FACTS_ALL | PO_DimPurchaseOrderHeader |
| EBS_PO_FACTS_ALL | PO_FactInventoryOnHand |
| EBS_PO_FACTS_ALL | PO_FactPurchaseOrderDistribution |
| EBS_PO_FACTS_ALL | PO_FactPurchaseOrderReceipts |
| EBS_PO_FACTS_ALL | PO_FactPurchaseOrderRequisition |
| EBS_PO_FACTS_ALL | PO_FactPurchaseOrderShipment |
You may need to modify a predefined model to match the specifications of your instance of Oracle EBS.
A data validation utility is available to validate row counts and other metrics between Incorta and BigQuery. Please refer to the Google BigQuery Validation Framework document for installation, configuration, and execution details.
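Independent of the utility, you can do a quick manual spot check by comparing a table's row count on the BigQuery side with the Loaded and Sent to Destination counts reported in Incorta's load summary. The table path below is illustrative; substitute your own project, dataset, and table:

```sql
-- Manual spot check: compare against the count Incorta reports
-- for the same table after the load completes.
SELECT COUNT(*) AS bq_row_count
FROM `project.dataset.AP_FactAPInvoiceDistributions`;
```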
When loading data incrementally, Incorta inserts new records and updates existing records that have already been loaded, but by default it does not delete records. Source systems like Oracle EBS, however, may allow users to delete data, which means records deleted in the source may still exist in Incorta. Fortunately, there are options for managing source-deleted records in Incorta and, subsequently, BigQuery.
Refer to the steps below for the additional setup needed to push the corresponding deleted rows to BigQuery.
Starting with version 2024.7.3, Incorta has a feature that allows you to purge records from Incorta that have been deleted in the source system. The purge feature is implemented by creating an exclusion set to compare against the table in Incorta that has deletes to track and then running a purge job. The purge job will physically delete records from parquet and memory and then push the corresponding delete statements to BigQuery.
For each table that needs delete handling, the first step is to create a corresponding skinny table that contains only the primary key column(s) for the table and that is always loaded in full. It is recommended that these tables be placed into a separate schema whose only purpose is to hold these types of tables.
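For example, the extract behind a skinny PK table for AP invoices could be as simple as the following; the source table name follows standard EBS, but the exact query and the schema you place it in will depend on your setup:

```sql
-- Hypothetical PK-only extract for the skinny table;
-- configure this dataset to always run as a full load.
SELECT INVOICE_ID
FROM AP_INVOICES_ALL
```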
The second step is to create a Materialized View (MV) that compares the PK table against the base table to produce the exclusion set, which represents the records that have been deleted in the source table. The MV can be placed in the schema with the PK table. Here is an example of the MV code you would write to create the table that holds your exclusion set.
SELECT I.INVOICE_ID
FROM EBS_AP.AP_INVOICES_ALL I
ANTI JOIN PK_SCHEMA.AP_INVOICES_ALL_PK P
  ON I.INVOICE_ID = P.INVOICE_ID
Next, configure your table to use the exclusion set table that you have created by opening the Advanced settings tab.
Toggle on the Synchronizing delete operations option and select the exclusion set identifier schema and table. Define the column mappings between the target table and the exclusion set and save.
Most Oracle EBS customers do not delete records from their tables, so delete handling is not enabled automatically. Incorta delivers pre-seeded schemas for delete handling with sample PK tables and Exclusion Set MVs that can be enabled and used as models for setting up delete handling with additional tables if needed.
To enable the delivered configuration for the tables used, follow the instructions in the Configure Delete Handling section above. To add delete handling for new tables, follow the instructions above starting with the Delete Handling Setup section, using the provided samples as models.