For other A-Team articles about BICS and Data Sync, click here
Introduction
Version 2.2 of the Data Sync tool was released in September 2016 and added the ability to connect to a number of different data sources via JDBC.
Setting up these Data Sources in Data Sync varies by the source. Rather than cover them all in a single article, 3 have been created. Select the appropriate article to learn more about the steps to set up that Data Source.
This article covers the steps to extract data from Oracle Service Cloud (RightNow) and load that into BI Cloud Service (BICS).
Data Source | Article With Details
Service Cloud / RightNow | this article
Salesforce | link to article
Greenplum, Hive, Impala, Informix, MongoDB, PostgreSQL, Redshift, Sybase, and instructions for other JDBC data sources where a JDBC driver is available | link to article
Main Article
Downloading Latest Version of Data Sync Tool
Be sure to download and install the latest version of the Data Sync Tool from OTN through this link.
For further instructions on configuring Data Sync, see this article. If a previous version of Data Sync is being upgraded, use the documentation on OTN.
Data Load Methods
The Data Sync tool offers 3 different methods of extracting data from Oracle Service Cloud:
1. Loading the data from a report created in the Oracle Service Cloud ‘Report Explorer’ desktop tool
2. Loading the data directly from individual objects available in Oracle Service Cloud
3. Loading the data directly from a ROQL query
Each method will be discussed in this article.
Setting up the Oracle Service Cloud Connection in Data Sync
1. In the Data Sync tool, create a new Connection:
2. In the ‘Connection Type’ selection, use ‘Oracle Service Cloud (RightNow)’, then enter a suitable user name and password, and the URL for the RightNow environment:
3. ‘Test’ the connection to confirm connectivity:
Data Loading
As mentioned, there are 3 different ways that data can be loaded into BICS from Oracle Service Cloud using the Data Sync Tool. Each will be discussed in detail in this section.
In each method discussed, filters can be used to restrict the data set size.
Method 1: Loading data from a report created in the Oracle Service Cloud ‘Report Explorer’ tool
This method provides the ability to access all available objects in Oracle Service Cloud. It goes against the database tables in Service Cloud, and not against the semantic layer that the other methods work against. This means that only the fields that are needed can be extracted, and only the tables needed are traversed by the query. This makes the query against Service Cloud more efficient and possibly better performing.
The process involves creating 2 reports in Oracle Service Cloud.
The first contains the data that will be extracted and loaded into BICS (the ‘Data Report’); the second provides metadata about that data to help Data Sync run incremental loads, and to ‘chunk’ the data into more manageable extracts (the ‘Helper Report’).
1. In the Oracle Service Cloud ‘Report Explorer’ create the ‘Data Report’ pulling in the fields that will be required in BICS. In this example some fields are pulled from the ‘incidents’ table.
2. To enable the Data Sync tool to chunk data into more manageable extracts, and to filter data on dates for incremental updates, two filters MUST be defined in each ‘Data Report’.
One filter, which should be based on an ID, is used by Data Sync to count how many rows will be returned. From this it can decide how to break the extract down into smaller chunks. For instance, a query returning 2 million rows may take some time. Going across network and cloud environments, it is possible that the connection may be interrupted, or that a time-out or memory usage threshold could be reached.
Data Sync will, by default, break those 2 million rows into 200 ‘chunks’ of 10,000 rows of data (this value can be changed). The first connection will request the first 10,000 rows based on this filter ID field. Once loaded, the next 10,000 rows will be requested, and so on, as illustrated below. This puts less strain on Service Cloud, and reduces the possibility of interruptions or time-outs.
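As a conceptual illustration only (this is not Data Sync's actual internal syntax), the chunked requests look something like this:

2,000,000 rows / 10,000 rows per request = 200 requests
Request 1:   ID filter bound to the first 10,000 ID values
Request 2:   ID filter bound to the next 10,000 ID values
...
Request 200: ID filter bound to the final 10,000 ID values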
The second filter should be based on a date field that can be used to identify new or changed records since the previous load.
The names of these 2 filters are important. They must be named:
INCREMENTAL_LOAD – for the date-based filter, and
RANGE_COUNT – for the ID-based filter.
In the Filter Expression editor in Service Cloud, the filters will look like this:
In this example, the INCREMENTAL_LOAD filter references the ‘incidents.updated’ field. Confirm that the ‘Make this filter selectable at run time’ option remains checked:
For the RANGE_COUNT filter, the ‘incidents.i_id’ field is used from the ‘incidents’ table.
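As a rough, illustrative sketch (the operators and run-time values below are placeholders, not the Filter Expression editor's actual syntax), the two filter definitions are along these lines:

INCREMENTAL_LOAD:  incidents.updated > [date value supplied by Data Sync at run time]
RANGE_COUNT:       incidents.i_id within [ID range supplied by Data Sync at run time]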
3. Save the main ‘Data’ Report, and then create the ‘Helper’ Report
This needs to have 3 fields.
One calculates the minimum value of the ID that will be extracted by the report, another calculates the maximum value, and a third calculates the total number of rows that will be returned – the count.
The names of these fields are important. Use ‘MIN_VALUE’, ‘MAX_VALUE’, and ‘COUNT’.
Below are the 3 field definitions used in this example report, using the ‘Min’, ‘Max’ and ‘Count’ functions available in the Service Cloud Report Explorer.
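As a sketch, assuming the same ‘incidents.i_id’ field used by the Data report, the three expressions are along these lines:

MIN_VALUE:  min(incidents.i_id)
MAX_VALUE:  max(incidents.i_id)
COUNT:      count(incidents.i_id)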
As before, a filter is required. For the ‘Helper’ report, only the INCREMENTAL_LOAD filter is required, so that the Data Sync tool can request the metadata only for the date range it needs to extract. This filter must use the SAME date field as the Data report – in this case, the ‘incidents.updated’ field.
Run the helper report to confirm it works. In this example it shows that the data to be returned has 1,494 rows (COUNT), with the minimum ID in the data being 12278 (MIN_VALUE) and the maximum ID being 15750 (MAX_VALUE).
Save the report.
4. Service Cloud creates a unique ID for each report. That ID will be needed in Data Sync to reference the 2 reports.
To find the ID, right click on the report and select ‘View Definition’:
In this example, the ‘Data’ report, ‘Incidents_Data’, has an internal ID of 100804.
Do the same for the ‘helper’ report. In this case, the internal ID is 100805.
These IDs will be needed in the next step.
5. In the Data Sync tool, under ‘Project’, ‘Pluggable Source Data’, select ‘Manual Entry’.
Choose a Logical Name for the data source, and a Target Name. Note that the Target Name will be the name of the new table created in BICS. If the table already exists, be sure to enter its name correctly. Make sure the DB_Connection is set to the Service Cloud / RightNow connection created earlier.
In the ‘Message’ box that appears next, make sure ‘Analytics Reports’ is selected in the ‘Data from:’ selector. The message will be updated to display additional information about this import method.
In the final screen, enter the ‘Data’ report ID from step (4) as the ‘Analytics Report ID’, and the ‘Helper’ report ID as the ‘Helper Analytics Report ID’. The ‘Numeric ID’ needs to be the logical name of the field in the ‘Data’ report that contains the main ID for the report – in this case, ‘Incident ID’. Be aware that this field is case sensitive and needs to exactly match the name of the report field. The final field, ‘Maximum number of rows to read at a time’, is the ‘chunking’ size. By default this is 10,000, but it can be changed if needed.
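Using the values from this example, the entries would look something like this:

Analytics Report ID:                       100804   (the ‘Data’ report)
Helper Analytics Report ID:                100805   (the ‘Helper’ report)
Numeric ID:                                Incident ID   (case sensitive)
Maximum number of rows to read at a time:  10000   (the ‘chunking’ size)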
6. To set up ‘Incremental’ loads, select the data source that was just created, and in the attributes section, select the value in the ‘Load Strategy’ box. This will bring up the various load strategies allowed.
Select ‘Update table’
For the User Key, select the ID that can be used to identify unique records for updating. In this example ‘incident ID’ is used.
For the Filter, use the date column that will identify changed data. In this example ‘Date Last Updated’ is used.
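Putting the example values together, the incremental settings for this data source are:

Load Strategy:  Update table
User Keys:      incident ID
Filter:         Date Last Updated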
7. Run the job and confirm it works.
-
Method 2 – Loading the data directly from individual objects available in Oracle Service Cloud
For cases where an Object exposed in the semantic layer of Service Cloud contains all the data that is needed, this may be the best approach.
1. In Data Sync, select ‘Project’ / ‘Pluggable Source Data’ and then ‘Data from Object(s)’
Leave ‘Discover objects by listing’ selected and click ‘OK’.
2. In the next screen, make sure ‘RightNow’ is selected as the Source, and then hit the ‘Search’ button to pull back a list of all the objects available.
Select the Object(s) that are to be included (in this case ‘Billing Payments’), and then click the ‘Import’ button.
A message will be displayed providing more details of this method. Click ‘OK’.
3. Select the data source that was created, and select the ‘Pluggable Attributes’ section.
4. Three options are shown. The ‘Numeric Column’ and ‘Maximum number of rows to read at a time’ are mandatory.
The ‘ROQL Query Condition’ field is optional. It can be used to filter the data returned. For instance, if the Billing.Payments object contains many years of history, but for BICS we are only interested in data changed from 2014 onwards, then a ROQL condition of updatedtime > '2014-01-01T00:00:00Z' may be used to restrict the data returned. This has nothing to do with incremental loading: the filter is applied every time a job is run, so no data from before this date will ever be extracted from Service Cloud.
The ‘Numeric Column’ needs to be an ID field from the Billing.Payments object. In this case there is a field called ‘id’. This is case sensitive.
The final field is the ‘chunking’ size to be used. This defaults to 10,000, but can be changed if required.
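With the example values above, the three attributes would look something like this:

ROQL Query Condition:                      updatedtime > '2014-01-01T00:00:00Z'   (optional filter)
Numeric Column:                            id   (case sensitive)
Maximum number of rows to read at a time:  10000   (the ‘chunking’ size)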
5. As in the previous load example, to set up incremental updates, go to the ‘Edit’ tab, and select ‘Update table’ as the Load Strategy:
and select the appropriate value for the unique ‘User Keys’ and date value for the ‘Filter’ to allow Data Sync to identify rows changed since the last extract.
6. Run the job to confirm it works.
-
Method 3 – Loading the data directly from a ROQL query
ROQL stands for ‘RightNow Object Query Language’. It has some similarities to SQL and is the query language used against the semantic reporting layer in Service Cloud.
In this example the following ROQL query will be used.
SELECT * FROM incidents WHERE updatedtime > '2014-01-01T00:00:00Z'
1. In the Data Sync tool, select ‘Project’, ‘Pluggable Source Data’ and then ‘Manual Entry’:
Enter a logical name for the data source, and a target name. This should be the existing BICS table that will be loaded, or the name of the new table that will be created:
Make sure the ‘Data From’ box is set to ‘ROQL’ then hit ‘OK’:
2. In addition to the ROQL query (the ‘ROQL Tabular Query’), a statement is required to calculate the MAX, MIN, and COUNT of the identity field (in this case ‘id’), as well as the name of the Query Object (in this case ‘incidents’) and the Numeric Column (in this case ‘id’). NOTE – these last two values are case sensitive.
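As an illustrative sketch only (assuming ROQL's aggregate functions, applied to the same ‘incidents’ object and date condition used above), the MAX / MIN / COUNT statement could look something like this:

SELECT max(id), min(id), count(id) FROM incidents WHERE updatedtime > '2014-01-01T00:00:00Z'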
The chunking size (‘Maximum number of rows to read at a time’) can be adjusted if necessary.
Click ‘OK’, and the Pluggable Source Data object is created.
3. As before, to set up incremental loads, select the data source, update the load strategy, and select an appropriate key (an ID) and a filter (an update date / time).
4. Run the job to confirm it works.
Summary
This article walked through the steps to configure the Data Sync tool to be able to connect and extract data from Oracle Service Cloud / RightNow. It covered 3 different approaches.
NOTE – Service Cloud has inbuilt restrictions for extracting data. These restrictions are intended to protect the underlying database by preventing a single query from using up too many database resources. The Data Sync tool has built-in automatic error handling to accommodate this. If the error is encountered while requesting data, the Data Sync tool will recursively retry the data request, adding further filters to reduce the data set being returned. At the time of writing, this recursive error handling is built into methods (2) and (3) outlined in the article. It will shortly (within a few weeks) be added for method (1) as well.
For further information on the Data Sync Tool, and also for steps on how to upgrade a previous version of the tool, see the documentation on OTN. That documentation can be found here.
For other A-Team articles about BICS and Data Sync, click here
All content listed on this page is the property of Oracle Corp. Redistribution not allowed without written permission