
Overview

Incorta Data Delivery streamlines the integration between Oracle EBS and data destinations like Google BigQuery, Snowflake or Databricks. In addition, it accelerates data mart deployment and automates data model design and source schema mapping with Incorta data applications. The data applications provide a new, faster, more practical option for analyzing and understanding your data, with no data modeling or extract, transform, and load (ETL) required.

Incorta's High-Level Data Delivery Process:

  1. Connect: Using a data application, establish a connection between Incorta and your data source.
  2. Schema Definition: Load the physical schema tables from your source (e.g., Oracle EBS).
  3. Data Delivery: Leverage Incorta's Data Destination feature to deliver data to the CDC or Reporting layer. Incorta manages all table creation and updates.
  4. Efficient Data Transfer: Incorta efficiently transfers data from its internal Parquet storage layer to the corresponding target tables.
  5. Data Synchronization: Incorta fully supports inserts, updates, and deletes from the source data into target, ensuring data synchronization.
  6. Visualization: Analyze and visualize the data using Looker or other preferred BI tools.

Installation and Configuration 

Follow these steps to install Incorta and configure the pipeline that moves data from Oracle EBS into BigQuery.

  1. Create a new Incorta cloud cluster 
  2. From the Incorta Applications marketplace install and configure the Oracle EBS to Google BigQuery data application. Refer to this guide.   
  3. Create and configure the BigQuery Data Destination in Incorta. Refer to this guide.
    1. Note that the BigQuery dataset (schema) in the BigQuery project must be multi-region, or the BigQuery project and the Incorta cluster must be in the same region (see the example DDL below).
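
For reference, the target dataset can be created ahead of time with BigQuery DDL in a suitable location. This is a minimal sketch only; the project and dataset names are placeholders and not part of the data app:

-- Placeholder project/dataset names; choose a multi-region location (e.g., 'US')
-- or the same region as the Incorta cluster.
CREATE SCHEMA IF NOT EXISTS `my-project.INCORTA_EBS_DATA_APP`
OPTIONS (location = 'US');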

Star Schema Offering 

Incorta delivers reporting-ready business views as part of the Oracle EBS to BigQuery Data App offering.

  • Built on top of the base schemas using Materialized Views (MVs) modeled on star-schema-like business schemas.
  • Star-schema data models are provided in Incorta business schemas for the Order to Cash, Procure to Pay, General Ledger, HCM, and Projects modules of Oracle EBS; their data is pushed to the target.
  • A set of business schemas (with the DM suffix) and physical schemas is delivered as part of this offering. These schemas contain the dimension and fact tables that are delivered to the target.

Data Delivery

Following is the high-level Incorta Data Delivery process:

  • The Incorta Oracle EBS to BigQuery data app should be installed and configured according to the steps above.
  • Note that the star schemas (fact and dimension) have an _ALL suffix. Configure a data destination for each of them. Refer to this document on how to configure a data destination.
  • Make sure the Target Schema value in the data destinations for all of the _ALL schemas is the same, for example INCORTA_EBS_DATA_APP, and set the Target Table Prefix as listed in the table below. This ensures that all of the tables land in one schema and that the Looker blocks can query them. The prefixes to use are listed below; also refer to the sample screenshot and the verification query that follow.

Schema | Table Prefix
EBS_AP_FACTS_ALL | AP_
EBS_AR_FACTS_ALL | AR_
EBS_COMMON_DIM_ALL | CMN_
EBS_FA_FACTS_ALL | FA_
EBS_GL_FACTS_ALL | GL_
EBS_HCM_FACTS_ALL | HCM_
EBS_HCM_COMMON_DIM_ALL | HCM_CMN_
EBS_OM_FACTS_ALL | OM_
EBS_PO_FACTS_ALL | PO_
EBS_PRJ_FACTS_ALL | PRJ_

(Screenshot: sample data destination configuration showing the target schema and table prefix)
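
After the destinations are configured and a load has run, one way to confirm that all delivered tables landed in the single target dataset with the expected prefixes is a quick metadata query. The project and dataset names below are placeholders; adjust them to your environment:

-- Placeholder project/dataset; lists the delivered tables and their prefixes
SELECT table_name
FROM `my-project.INCORTA_EBS_DATA_APP`.INFORMATION_SCHEMA.TABLES
ORDER BY table_name;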

 

  • Create a Load Plan that includes the base schemas and the fact and dimension schemas.
  • The Load Plan will ingest from the source and deliver to the target.
  • Visualize the data with the sample views in Incorta, or with Looker or another BI tool against the target (BigQuery); see the example query below.
  • Refer to the doc for an explanation of the facts, dimensions, and sample BigQuery views that can be deployed.
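
As an illustration of querying the delivered tables directly in BigQuery, the sketch below joins an AP fact to the supplier dimension in the target dataset. The project/dataset names, column names, and join key are assumptions; replace them with the actual columns in your deployment:

-- Illustrative only: placeholder project/dataset, assumed column names and join key
SELECT
  sup.VENDOR_NAME,                             -- assumed column
  SUM(fct.INVOICE_AMOUNT) AS total_invoiced    -- assumed column
FROM `my-project.INCORTA_EBS_DATA_APP.AP_FactAPInvoiceDistributions` fct
JOIN `my-project.INCORTA_EBS_DATA_APP.CMN_DimSuppliers` sup
  ON fct.VENDOR_ID = sup.VENDOR_ID             -- assumed join key
GROUP BY sup.VENDOR_NAME
ORDER BY total_invoiced DESC;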

Incorta Objects

This section of the document lists the objects included in the deployment.

Business Schema

Module | Business Schema
Common | CommonDimensionDM
Order To Cash | OrderManagementDM, ARCashReceiptsDM
Procure to Pay | PurchaseOrderDM, PayablesDM, InventoryDM
General Ledger | GeneralLedgerDM
Fixed Assets | FixedAssetDM
Projects | ProjectFinancialsDM
HCM | HRCommonDimensionDM, WorkforceDM

Schemas

Business Process | Module | Base Physical Schemas | Fact and Dimension Physical Schemas
Procure To Pay (P2P) | Advanced Payables (AP), Procurement (PO), Inventory (INV) | EBS_CAL_COMMON, EBS_Item_Common, EBS_HR_Common, EBS_FND_Common, EBS_FIN_Common, EBS_PARTY_COMMON, EBS_AP, EBS_PO, EBS_INV | EBS_COMMON_DIM_ALL, EBS_AP_FACTS_ALL, EBS_PO_FACTS_ALL
Order to Cash (O2C) | Account Receivables (AR), Order Management (OM) | EBS_CAL_COMMON, EBS_Item_Common, EBS_HR_Common, EBS_FND_Common, EBS_FIN_Common, EBS_PARTY_COMMON, EBS_AR, EBS_ONT, EBS_AR_SNP, EBS_AR_MV | EBS_COMMON_DIM_ALL, EBS_AR_FACTS_ALL, EBS_OM_FACTS_ALL
General Ledger | General Ledger (GL) | EBS_CAL_COMMON, EBS_Item_Common, EBS_HR_Common, EBS_FND_Common, EBS_FIN_Common, EBS_PARTY_COMMON, EBS_GL, EBS_XLA, EBS_XLA_MV | EBS_COMMON_DIM_ALL, EBS_GL_FACTS_ALL
Fixed Assets | Fixed Assets | EBS_CAL_COMMON, EBS_Item_Common, EBS_HR_Common, EBS_FND_Common, EBS_FIN_Common, EBS_PARTY_COMMON, EBS_FA | EBS_COMMON_DIM_ALL, EBS_FA_FACTS_ALL
Projects (PPM) | Projects (PPM) | EBS_CAL_COMMON, EBS_Item_Common, EBS_HR_Common, EBS_FND_Common, EBS_FIN_Common, EBS_PARTY_COMMON, EBS_PRJ | EBS_COMMON_DIM_ALL, EBS_PRJ_FACTS_ALL
HCM Core | HCM Core | EBS_CAL_COMMON, EBS_HR_Common, EBS_HCM | EBS_COMMON_DIM_ALL, EBS_HCM_COMMON_DIM_ALL, EBS_HCM_FACTS_ALL

Loading Data into Incorta 

In the Incorta UI, navigate to Scheduler->Load Plans and schedule the load plan for your application.

By default, the load plans are set to incremental, allowing you to load just the latest data changes in Oracle EBS  applications.  You can schedule the timing of your incremental loads by navigating to Scheduler->Load Plans and selecting your Oracle EBS  application load plan.  

Click on the Change link to open the Schedule UI, which will allow you to schedule the interval for your Incremental loads.

Load History

The first time a load plan runs, it executes a full load of each included schema, even if it is configured to run incrementally.  This happens because there is no data in Incorta to append to, so it picks up everything available per the definition of each dataset, which takes care of loading the historical data.  It is also possible to run load plans as full loads, and you can find more information about full loads in Incorta’s documentation.  Note that a full load removes all data from the tables being populated before reloading them fully.

To monitor a load plan, navigate to Schema->Load Plan. You can check the row counts from the various stages of the load: Extracted, Rejected, Loaded, and Sent to Destination.

Incremental Loads

After the initial load, Incorta will continue to run the loads in Incremental mode.  Incremental loads handle upserts, i.e. updated records and inserted records, which are pulled into Incorta and then immediately pushed to BigQuery whenever an incremental load executes.
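
Conceptually, each incremental batch is applied to the target table with upsert semantics. The BigQuery MERGE below only illustrates that behavior; Incorta generates and manages the actual statements, and the table, staging source, and column names here are hypothetical:

-- Illustration of upsert semantics only; all names are placeholders
MERGE `my-project.INCORTA_EBS_DATA_APP.AP_FactAPInvoiceDistributions` AS tgt
USING `my-project.INCORTA_EBS_DATA_APP.AP_FactAPInvoiceDistributions_batch` AS src
ON tgt.INVOICE_DISTRIBUTION_ID = src.INVOICE_DISTRIBUTION_ID
WHEN MATCHED THEN
  UPDATE SET INVOICE_AMOUNT = src.INVOICE_AMOUNT,
             LAST_UPDATE_DATE = src.LAST_UPDATE_DATE
WHEN NOT MATCHED THEN
  INSERT (INVOICE_DISTRIBUTION_ID, INVOICE_AMOUNT, LAST_UPDATE_DATE)
  VALUES (src.INVOICE_DISTRIBUTION_ID, src.INVOICE_AMOUNT, src.LAST_UPDATE_DATE);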

Load Performance Considerations & Error handling

  • If the data is simply being passed through to BigQuery, the schema tables can be set to non-optimized mode. This improves performance by skipping the post-load step.
  • In the CMC, allocate more on-heap and off-heap memory to the Loader Service than to the Analytics Service, enabling greater capacity for parallel data loading and transmission. For example, allocate 80% of total memory to loader and 20% to analytics.
  • MV failures: If any of the MVs fail, open the MV and add the MV properties for cores and memory; refer to this article for more information.

Load Failure Notification

If a load from a source table fails in Incorta, incremental data will not be written to BigQuery.  Incorta can send an alert to a designated email address to notify an administrator of a failure.  

Configure the Incorta tenant to be able to send an email if you wish to receive alerts.  Then follow these instructions to set up notifications for the Incorta schemas you wish to track.

Make sure to select Notify Upon Failure and populate the Recipients field with an appropriate email address to monitor load jobs.

In the case of a load failure, Incorta will not attempt to reload the data until the next scheduled load.  You can review the logs in Incorta to determine the cause of the issue and take any necessary steps to address it.  If further assistance is required, you can work with Incorta Support to resolve the issue.  Once the issue is resolved, the data will be caught up the next time a scheduled load occurs.  Alternatively, you can manually run a load at the schema or individual table level to catch up on the data.

Securing Data in BigQuery

Here is an example of how to implement Operating Unit (ORG_ID-based) security for EBS data in BigQuery:

  • In Incorta, create a table named ‘user_org_access’ on the EBS data source with the following SQL:

SELECT DISTINCT
       fpov.profile_option_value org_id,
       fu.user_name,
       fu.EMAIL_ADDRESS
FROM   apps.hr_organization_units hou,
       apps.fnd_profile_options_vl fpo,
       apps.fnd_profile_option_values fpov,
       apps.fnd_responsibility_vl frv,
       apps.fnd_user_resp_groups furg,
       apps.fnd_user fu
WHERE  fpov.level_value = frv.responsibility_id
  AND  fpo.profile_option_id = fpov.profile_option_id
  AND  fpo.user_profile_option_name = 'MO: Operating Unit'
  AND  fpov.profile_option_id = fpo.profile_option_id
  AND  hou.organization_id = TO_NUMBER (fpov.profile_option_value)
  AND  frv.responsibility_id = furg.responsibility_id
  AND  furg.user_id = fu.user_id
  • Load the table and deliver the data to BigQuery.
  • Create a row-level security (RLS) policy in BigQuery for the AP fact table based on the table above (for more information, refer to this doc). All BigQuery queries against the fact table will then have a runtime security filter attached:
CREATE ROW ACCESS POLICY ap_invoice_org_filter
ON `project.dataset.FactAPInvoiceDistributions`
GRANT TO ('domain:example.com')
FILTER USING (org_id IN (
    SELECT org_id
    FROM `project.dataset.user_org_access`
    WHERE EMAIL_ADDRESS = SESSION_USER()
));
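
To verify the policy, run a query against the fact table as a user covered by the GRANT TO list; only rows whose org_id values map to that user's EMAIL_ADDRESS in user_org_access are returned. A simple check (illustrative, using the same placeholder project/dataset as above):

-- Rows are filtered automatically by the row access policy
SELECT org_id, COUNT(*) AS row_count
FROM `project.dataset.FactAPInvoiceDistributions`
GROUP BY org_id;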

Explanation of Dimension/Facts

Schema Name | Table Name
EBS_AP_FACTS_ALL | AP_DimAPInvoiceHeaders
EBS_AP_FACTS_ALL | AP_FactAPInvoiceDistributions
EBS_AP_FACTS_ALL | AP_FactInvoiceHolds
EBS_AP_FACTS_ALL | AP_FactInvoicePayments
EBS_AR_FACTS_ALL | AR_DimARInvoiceHeader
EBS_AR_FACTS_ALL | AR_FactARAdjustments
EBS_AR_FACTS_ALL | AR_FactARPaymentSchedules
EBS_AR_FACTS_ALL | AR_FactCashReceipts
EBS_AR_FACTS_ALL | OM_FactARInvoiceDetails
EBS_AR_FACTS_ALL | OM_FactARInvoiceDistributions
EBS_COMMON_DIM_ALL | CMN_DimAccountHierarchy
EBS_COMMON_DIM_ALL | CMN_DimAccounts
EBS_COMMON_DIM_ALL | CMN_DimBalancingSegment
EBS_COMMON_DIM_ALL | CMN_DimBusinessUnit
EBS_COMMON_DIM_ALL | CMN_DimCodeCombinations
EBS_COMMON_DIM_ALL | CMN_DimCostCenter
EBS_COMMON_DIM_ALL | CMN_DimCustomer
EBS_COMMON_DIM_ALL | CMN_DimFiscalCalendar
EBS_COMMON_DIM_ALL | CMN_DimGLDailyRate
EBS_COMMON_DIM_ALL | CMN_DimGLPeriods
EBS_COMMON_DIM_ALL | CMN_DimGregorianCalendar
EBS_COMMON_DIM_ALL | CMN_DimInventoryOrganization
EBS_COMMON_DIM_ALL | CMN_DimItem
EBS_COMMON_DIM_ALL | CMN_DimLedgers
EBS_COMMON_DIM_ALL | CMN_DimOrganization
EBS_COMMON_DIM_ALL | CMN_DimPerson
EBS_COMMON_DIM_ALL | CMN_DimSuppliers
EBS_COMMON_DIM_ALL | CMN_DimSupplierSite
EBS_FA_Facts_ALL | FA_DimAsset
EBS_FA_Facts_ALL | FA_DimAssetCategory
EBS_FA_Facts_ALL | FA_DimAssetLocation
EBS_FA_Facts_ALL | FA_DimClearingAccount
EBS_FA_Facts_ALL | FA_DimCostAccount
EBS_FA_Facts_ALL | FA_DimExpenseAccount
EBS_FA_Facts_ALL | FA_DimFABook
EBS_FA_Facts_ALL | FA_DimFAPeriod
EBS_FA_Facts_ALL | FA_FactAdjustment
EBS_FA_Facts_ALL | FA_FactAssetBook
EBS_FA_Facts_ALL | FA_FactAssetDistribution
EBS_FA_Facts_ALL | FA_FactAssetRetirement
EBS_FA_Facts_ALL | FA_FactAssetTransaction
EBS_GL_FACTS_ALL | GL_DimJournalCategory
EBS_GL_FACTS_ALL | GL_DimJournalSource
EBS_GL_FACTS_ALL | GL_FactGLBalances
EBS_GL_FACTS_ALL | GL_FactJournalDetails
EBS_GL_FACTS_ALL | GL_FactXLAAPBridge
EBS_GL_FACTS_ALL | GL_FactXLAARBridge
EBS_OM_FACTS_ALL | OM_FactSalesOrderFullfillment
EBS_OM_FACTS_ALL | OM_FactSalesOrderHeader
EBS_OM_FACTS_ALL | OM_FactSalesOrderLine
EBS_PO_FACTS_ALL | PO_DimBuyer
EBS_PO_FACTS_ALL | PO_DimPurchaseOrderHeader
EBS_PO_FACTS_ALL | PO_FactInventoryOnHand
EBS_PO_FACTS_ALL | PO_FactPurchaseOrderDistribution
EBS_PO_FACTS_ALL | PO_FactPurchaseOrderReceipts
EBS_PO_FACTS_ALL | PO_FactPurchaseOrderRequisition
EBS_PO_FACTS_ALL | PO_FactPurchaseOrderShipment

Steps to Customize the Models

You may need to modify a predefined model to match the specifics of your Oracle EBS instance.

  • Add new columns to an existing model
    • Edit the appropriate business schema with the DM suffix, for example OrderManagementDM.
    • Add the new fields to the appropriate business view within it, check the query plan, confirm that the view is validated with a green check mark, and then save.
    • Edit the corresponding MV schema, for example EBS_OM_FACTS_ALL. Open the MV, add the new column, validate, and then save the MV.
    • Load the MV and validate the data.
  • Add a new dimension to an existing model
    • Edit the appropriate business schema with the DM suffix.
    • Add a new view and drag in fields from any existing physical schemas.
    • Make sure that the primary key (PK) column is correctly named, for example INVOICE_ID.
    • Check the query plan, confirm that the view is validated with a green check mark, and then save.
    • Edit the corresponding MV schema, for example EBS_OM_FACTS_ALL. Add a new MV with a SQL query on the new business view, validate, and then save the MV.
    • Load the MV and validate the data.
  • Add a new fact to an existing model
    • Edit the appropriate business schema with the DM suffix.
    • Add a new view and drag in fields from any existing physical schema.
    • Add the PK column, dimension keys, LAST_UPDATE_DATE, and any metric and attribute columns that are required.
    • Check the query plan, confirm that the view is validated with a green check mark, and then save.
    • Edit the corresponding MV schema, for example EBS_OM_FACTS_ALL. Add a new MV with a SQL query on the new business view and add incremental logic to the MV (see the sketch after this list). Validate and then save the MV.
    • Load the MV and validate the data.
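
As a minimal sketch of what a new fact MV query can look like, the example below selects the PK column, dimension keys, metrics, attributes, and LAST_UPDATE_DATE from a business view. The view name and columns are hypothetical, and the incremental filter on LAST_UPDATE_DATE should follow whatever incremental configuration the MV uses:

-- Illustrative only: MyNewFactView and its columns are assumptions
SELECT
    f.INVOICE_ID,          -- primary key
    f.VENDOR_ID,           -- dimension key
    f.ORG_ID,              -- dimension key
    f.INVOICE_AMOUNT,      -- metric
    f.INVOICE_STATUS,      -- attribute
    f.LAST_UPDATE_DATE     -- used by the MV's incremental logic
FROM OrderManagementDM.MyNewFactView f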

Data Validation

A data validation utility is available that helps validate row counts and other metrics between Incorta and BigQuery.

Please refer to the Google BigQuery Validation Framework document for installation, configuration, and execution details.
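
For a quick manual spot check on the target side, a row count like the one below (placeholder project/dataset names) can be compared against the Loaded and Sent to Destination counts shown for the table in the Incorta load job:

-- Placeholder project/dataset; compare against the Incorta load job counts
SELECT COUNT(*) AS bq_row_count
FROM `my-project.INCORTA_EBS_DATA_APP.AP_FactAPInvoiceDistributions`;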

Handling Source Deletes

When loading data incrementally, Incorta inserts new records and updates existing records that have already been loaded, but by default it does not delete records. Source systems like Oracle EBS, however, may allow users to delete data.  This means that if records are deleted in the source, those records may still exist in Incorta.  Fortunately, there are options for how to manage source deleted records in Incorta and subsequently BigQuery.

Steps

  1. Refer to this article.
  2. Create a new Incorta schema for these PK tables and add each PK table using the Schema wizard with this data connection. Keep in mind that the PK table will show all of the columns, but when you run the load Incorta will only load the PK columns.
  3. Always do a full load of the PK table. Make sure that the Incremental toggle is turned off for this table and that ‘Load Last Batch’ is selected.

Refer to the steps below for the additional setup needed to push the corresponding deleted rows to BigQuery.

Starting with version 2024.7.3, Incorta has a feature that  allows you to purge records from Incorta that have been deleted in the source system.  The purge feature is implemented by creating an exclusion set to compare against the table in Incorta that has deletes to track and then running a purge job.  The purge job will physically delete records from parquet and memory and then push the corresponding delete statements to BigQuery.
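
Conceptually, the statements pushed to BigQuery behave like a keyed delete driven by the exclusion set. The sketch below only illustrates that effect; Incorta generates the actual statements, and the project, dataset, and exclusion-set table names are placeholders:

-- Illustration only: rows whose keys appear in the exclusion set are removed from the target
DELETE FROM `my-project.INCORTA_EBS_DATA_APP.AP_INVOICES_ALL`
WHERE INVOICE_ID IN (
  SELECT INVOICE_ID
  FROM `my-project.INCORTA_EBS_DATA_APP.AP_INVOICES_ALL_EXCLUSION`  -- placeholder exclusion-set table
);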

Delete Handling Setup

For each table that needs delete handling, the first step is to create a corresponding skinny table that contains only the primary key column(s) for the table and that is always loaded in full.  It is recommended that these tables be placed into a separate schema whose only purpose is to hold these types of tables.  

 

The second step is to create a Materialized View (MV) that will compare the PK table against the base table to create the exclusion set which represents the records that have been deleted in the source table.  The MV can be placed in the schema with the PK table.  Here is an example of the MV code you would write to create the table that holds your exclusion set.

SELECT
    I.INVOICE_ID
FROM EBS_AP.AP_INVOICES_ALL I
ANTI JOIN PK_SCHEMA.AP_INVOICES_ALL_PK P
    ON I.INVOICE_ID = P.INVOICE_ID

Best Practice

  • Say we have a base schema S1 with 5 tables that need delete handling.
  • Create a new schema S2 with a PK table (always loaded in full) for each of the S1 tables that needs delete handling.
  • In S2, add one MV per S1 table; this MV holds the exclusion data set.
  • Note: S2 can be fully non-optimized because no data needs to be loaded into memory for those tables, which saves post-load time.
  • Create two Load Plans (LPs), where L1 runs before L2:
  • L1 -> has two groups:
    • Group 1 -> S2 with the full load type
    • Group 2 -> S1 with the purge load type
  • L2 -> S1 with the incremental load type

Configure Delete Handling

Next, configure your table to use the exclusion set table that you have created by opening the Advanced settings tab.

Toggle on the Synchronizing delete operations option and select the exclusion set identifier schema and table. Define the column mappings between the target table and the exclusion set and save.

Purge Deleted Records

  1. The next step to delete records from Incorta is to run a purge job. This can be run table by table as needed, or it can be scheduled from a separate Load Plan with the Data purge option. The purge job will remove the records from Incorta.
  2. Once the above Load Plan runs, run a regular incremental Load Plan to sync up the corresponding table in the BigQuery CDC layer.

Pre-Seeded Delete Handling Configuration

Most Oracle EBS customers do not delete records from their tables, so delete handling is not enabled automatically.  Incorta delivers pre-seeded schemas for delete handling with sample PK tables and Exclusion Set MVs that can be enabled and used as models for setting up delete handling for additional tables if needed.

To enable the delivered configuration for the tables used, follow the instructions in the Configure Delete Handling section above.  To add delete handling for new tables, follow the instructions above starting with the Delete Handling Setup section and using the provided samples as models.

Appendix

Best Practices Index