
Uploading a file to Oracle storage cloud service using REST API


Introduction

This is the second part of a two-part article that demonstrates how to upload data in near-real time from an on-premise Oracle database to Oracle Storage Cloud Service.

In the previous article of this series, we demonstrated Oracle GoldenGate functionality to write to a flat file using the Apache Flume File Roll Sink. If you would like to read the first part of this series, please visit Oracle GoldenGate : Apply to Apache Flume File Roll Sink.

In this article we demonstrate how to use the cURL command to upload the flat file to Oracle Storage Cloud Service.

We used the Oracle Big Data Lite Virtual Machine as the test bed for this article. The VM image is available for download on the Oracle Technology Network website.

Main Article

There are various tools available to access Oracle Storage Cloud Service. According to Best Practices – Data movement between Oracle Storage Cloud Service and HDFS, the cURL REST interface is appropriate for this requirement.

cURL REST Interface

REST API

The REST API is used to manage containers and objects in an Oracle Storage Cloud Service instance. It can be accessed from any application or programming platform that understands the Hypertext Transfer Protocol (HTTP) and has Internet connectivity.

cURL is one of the tools used to access the REST interface. cURL is an open-source tool for transferring data that supports various protocols, including HTTP and HTTPS. It is typically available by default on most UNIX-like hosts. For information about downloading and installing cURL, see Quick Start.

Oracle Storage Cloud Service (OSCS)

Oracle Storage Cloud Service enables applications to store and manage content in the cloud. Stored objects can be retrieved directly by external clients or by applications running within Oracle Cloud (for example, Big Data Preparation Cloud Service).

A container is a storage compartment that provides a way to organize the data stored in Oracle Storage Cloud Service. Containers are similar to directories, but with a key distinction: unlike directories, containers cannot be nested.

Prerequisites

First, we need access to Oracle Storage Cloud Service, along with the Oracle Cloud user name, password, and identity domain.

credentials

Requesting an Authentication Token

Oracle Storage Cloud Service requires authentication for any operation against the service instance. Authentication is performed by using an authentication token. Authentication tokens are requested from the service by authenticating user credentials with the service. All provisioned authentication tokens are temporary and will expire in 30 minutes. We will include a current authentication token with every request to Oracle Storage Cloud Service.

Request an authentication token by running the following cURL command:

curl -v -s -X GET -H 'X-Storage-User: <my identity domain>:<Oracle Cloud user name>' -H 'X-Storage-Pass: <Oracle Cloud user password>' https://<myIdentityDomain>.storage.oraclecloud.com/auth/v1.0

We ran the above cURL command. The following is the output of this command, with certain key lines highlighted. Note that if the request includes the correct credentials, it returns the HTTP/1.1 200 OK response.

 

OSCS_Auth_token

 

From the output of the command we just ran, note the following:

– The value of the X-Storage-Url header.

This value is the REST endpoint URL of the service. This URL value will be used in the next step to create the container.

– The value of the X-Auth-Token header.

This value is the authentication token, which will be used in the next step to create the container. Note that the authentication token expires after 30 minutes; after it expires, request a fresh token.

Creating A Container

Run the following cURL command to create a new container:

curl -v -s -X PUT -H "X-Auth-Token: <Authentication Token ID>" https://storage.oraclecloud.com/v1/Storage-myIdentityDomain/myFirstContainer

– Replace the value of the X-Auth-Token header with the authentication token that you obtained earlier.
– Change https://storage.oraclecloud.com/v1/Storage-myIdentityDomain to the X-Storage-Url header value that you noted while getting an authentication token.
– Change myFirstContainer to the name of the container that you want to create.

Verifying that a Container is Created

 Run the following cURL command:

curl -v -s -X GET -H "X-Auth-Token: <Authentication Token ID>" https://storage.oraclecloud.com/v1/Storage-myIdentityDomain/myFirstContainer

If the request completes successfully, it returns the HTTP/1.1 204 No Content response. This response indicates that there are no objects yet in the new container.

In this exercise, we are not creating a new container; we will upload the file to an existing container, so we do not need to verify container creation.

Uploading an Object

Once Oracle GoldenGate completes writing the records to a file in the /u01/ogg-bd/flumeOut directory, the cURL program reads the file from that directory and uploads it to create an object in the container myFirstContainer. Any user with the Service Administrator role, or a role that is specified in the X-Container-Write ACL of the container, can create an object.

We ran the following cURL command:

curl -v -X PUT -H "X-Auth-Token: <Authentication Token ID>" -T myfile https://<MyIdentityDomain>.storage.oraclecloud.com/v1/Storage-myIdentityDomain/myFirstContainer/myObject

When running this command, we:
– Replaced the value of the X-Auth-Token header with the authentication token that we obtained earlier.
– Changed https://<MyIdentityDomain>.storage.oraclecloud.com/v1/Storage-myIdentityDomain to the X-Storage-Url header value that we noted while getting an authentication token.
– Changed myFirstContainer to the name of the container in which we want to create the object.
– Changed myfile to the full path and name of the file that we want to upload.
– Changed myObject to the name of the object that we want to create in the container.

If the request completes successfully, it returns the HTTP/1.1 201 Created response, as shown in the following output. We verified the full transfer by comparing the Content-Length value.

 

Upload_to_OSCS
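
One way to script that check is to compare the size of the local file with the Content-Length returned by a HEAD request on the new object. The following is a minimal shell sketch, assuming the same placeholder token, URL, container, and object names used above:

# Compare the local file size with the Content-Length of the uploaded object.
LOCAL_SIZE=$(wc -c < myfile)
REMOTE_SIZE=$(curl -sI -H "X-Auth-Token: <Authentication Token ID>" \
  "https://storage.oraclecloud.com/v1/Storage-myIdentityDomain/myFirstContainer/myObject" \
  | tr -d '\r' | awk -F': ' 'tolower($1)=="content-length" {print $2}')
if [ "$LOCAL_SIZE" -eq "$REMOTE_SIZE" ]; then echo "Transfer verified"; else echo "Size mismatch"; fi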

 

We also verified the proper transfer of the file to Oracle Storage Cloud Service using Big Data Preparation Cloud Service.

BDPCS_Source
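
Putting the steps above together, the token request and the upload can be wrapped in a small shell script. The sketch below is illustrative only: the identity domain, credentials, container name, and source directory are placeholders, and the header parsing assumes responses shaped like the output shown earlier.

#!/bin/bash
set -euo pipefail

IDENTITY_DOMAIN="myIdentityDomain"        # placeholder
CLOUD_USER="<Oracle Cloud user name>"     # placeholder
CLOUD_PASS="<Oracle Cloud user password>" # placeholder
CONTAINER="myFirstContainer"
SRC_DIR="/u01/ogg-bd/flumeOut"

# 1. Request an authentication token and capture the X-Auth-Token / X-Storage-Url headers.
HEADERS=$(curl -s -D - -o /dev/null \
  -H "X-Storage-User: ${IDENTITY_DOMAIN}:${CLOUD_USER}" \
  -H "X-Storage-Pass: ${CLOUD_PASS}" \
  "https://${IDENTITY_DOMAIN}.storage.oraclecloud.com/auth/v1.0")
AUTH_TOKEN=$(printf '%s' "$HEADERS" | tr -d '\r' | awk -F': ' 'tolower($1)=="x-auth-token" {print $2}')
STORAGE_URL=$(printf '%s' "$HEADERS" | tr -d '\r' | awk -F': ' 'tolower($1)=="x-storage-url" {print $2}')

# 2. Upload every flat file written by Oracle GoldenGate as an object in the container.
for f in "$SRC_DIR"/*; do
  [ -f "$f" ] || continue
  curl -s -o /dev/null -w "%{http_code} $f\n" -X PUT \
    -H "X-Auth-Token: ${AUTH_TOKEN}" \
    -T "$f" "${STORAGE_URL}/${CONTAINER}/$(basename "$f")"
done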

Summary

In this article we demonstrated the REST API functionality that uploads data from the on-premise Big Data Lite VM to Oracle Storage Cloud Service. Combining both articles, we demonstrated how to move data in near real time from an on-premise Oracle database to Oracle Storage Cloud Service using Oracle GoldenGate and the REST API.


Bulk import of sales transactions into Oracle Sales Cloud Incentive Compensation using Integration Cloud Service


Introduction

The Sales Cloud Incentive Compensation application provides an API to import sales transactions in bulk; these could be sales transactions exported out of an ERP system. Integration Cloud Service (ICS) offers extensive data transformation and secure file transfer capabilities that can be used to orchestrate, administer, and monitor file transfer jobs. In this post, let’s look at an ICS implementation to transform and load sales transactions into Incentive Compensation. The instructions provided in this post apply to Sales Cloud Incentive Compensation R11 and ICS R16.4.1 or higher.

Main Article

Figure 1 provides an overview of the solution described in this post. A text file contains sales transactions, in CSV format, exported out of ERP Cloud. ICS imports the file from a file server using SFTP, transforms the data to a format suitable for Incentive Compensation and submits an import job to Sales Cloud. The data transfer is over encrypted connections end-to-end. ICS is Oracle’s enterprise-grade iPaaS offering, with adapters for Oracle SaaS and other SaaS applications and native adapters that allow connectivity to most cloud and on-premise applications. To learn more about ICS, refer to documentation at this link.

Figure1

Figure 1 – Overview of the solution

Implementing the solution requires the high-level tasks described in the following sections.

For the solution to work, ICS should be able to connect with Sales Cloud and the file server. ICS agents can easily enable connectivity if one of these systems is located on-premise, behind a firewall.

Configuring a file server to host ERP export file and enable SFTP

A file server is an optional component of the solution. If the source ERP system that produces the CSV file allows Secure FTP access, ICS could connect to it directly. Otherwise, a file server can host the files exported from the ERP system. One way to quickly achieve this is to provision a compute node on Oracle Public Cloud and enable SFTP access to a staging folder with read/write access for the ERP system and ICS.

Defining data mapping for file-based data import service

The file-based data import service requires that each import job specify a data mapping. This mapping tells the import service how to assign fields in the input file to fields in the Incentive Compensation Transaction object. There are two ways to define such a mapping:

  • Import mapping from a Spreadsheet definition
  • Define a new import by picking and matching fields on UI

Here are the steps to complete import mapping:

  • Navigate to “Setup and Maintenance”.

Figure2

  • Search for “Define File Import” task list.

Figure3

  • Click on “Manage File Import Mappings” task from list.

Figure4

  • On the next page, there are options to look up an existing mapping or create a new one for the specified object type. The two options, importing from a file and creating a new mapping, are highlighted.

Figure5

  • If you have an Excel mapping definition, click on “Import Mapping”, provide the information, and click “OK”.

Figure6

  • Otherwise, create a new mapping by clicking on “Actions” -> “Create”.

Figure7

  • The next page allows field-by-field mapping between the CSV file’s fields and the fields under “Incentive Compensation Transactions”.

Figure8

The new mapping is now ready for use.

Identifying Endpoints

Importing sales transactions requires a file import web service and, optionally, another web service to collect transactions.

  • Invoke file-based data import and export service with transformed and encoded file content.
  • Invoke ‘Manage Process Submission’ service with a date range for transactions.

The file-based data import and export service can be used to import data into and export data out of all applications on Sales Cloud. For this solution we’ll use the “submitImportActivity” operation. The WSDL is typically accessible at the following URL for Sales Cloud R11.

https://<Sales Cloud CRM host name>:<CRM port>/mktImport/ImportPublicService

The next task can be performed by logging into the Incentive Compensation application or by invoking a web service. The ‘Manage Process Submission’ service is specific to the Incentive Compensation application. The file-based import processes the input and loads the records into staging tables. The ‘submitCollectionJob’ operation of the ‘Manage Process Submission’ service initiates the processing of the staged records into Incentive Compensation. This service is typically accessible at the following URL. Note that this action can also be performed in the Incentive Compensation UI, as described in the final testing section of this post.

https://<IC host name>:<IC port number>/publicIncentiveCompensationManageProcessService/ManageProcessSubmissionService

Implementing an ICS Orchestration

An ICS orchestration glues the other components together in a flow. ICS orchestrations can be invoked in flexible ways, such as scheduled triggers or an API interface, and they can perform a variety of tasks and implement complex integration logic. For the solution described in this post, ICS needs to perform the following tasks:

  • Connect to the file server and import files that match a specified filename pattern.
  • Parse the file contents and transform each record to the format required by Incentive Compensation.
  • Convert the transformed file contents to Base64 format and store them in a string variable.
  • Invoke the file-based data import web service with the Base64-encoded data. Note that this service triggers the import process but does not wait for its completion.
  • Optionally, invoke the ‘Manage Process Submission’ service after a delay to ensure that the file-based import has completed in Sales Cloud.

For the sake of brevity, only the important parts of the orchestration are addressed in detail here. Refer to the ICS documentation for more information on building orchestrations.

 

FTP adapter configuration

FTP adapters could be used with ‘Basic Map my data’ or Orchestration patterns. To create a new FTP connection, navigate to “Connections” tab, click on “New Connection” and choose FTP as type of connection.

On the “Configure Connection” page, set the “SFTP” drop-down to “Yes”. The FTP adapter allows login through an SSL certificate or a username and password.

Figure9

On the “Configure Security” page, provide credentials, such as a username and password or the password for an SSL certificate. The FTP adapter also supports PGP encryption of content.

Figure10

Transforming the source records to destination format

Source data from ERP could be in a different format than the format required by the target system. ICS provides a sophisticated mapping editor to map fields of the source record to fields of the target record. A mapping can be as simple as dragging and dropping fields from source to target, or it can implement complex logic using the XML stylesheet language (XSLT). Here is a snapshot of the mapping used for transformation, primarily to convert a date string from one format to another.

Figure15

The mapping for SOURCE_EVENT_DATE requires a transformation, which is done using the transformation editor, as shown.

Figure16

Converting file content to a Base64-encoded string

The file-based data import service requires the content of the CSV file to be Base64-encoded. This encoding can be done using a simple XML schema in the FTP invoke task of the orchestration. Here is the content of the schema.

<schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/opaque/" xmlns="http://www.w3.org/2001/XMLSchema">
<element name="opaqueElement" type="base64Binary"/>
</schema>
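
For reference, the value that ends up in the FileContent element of the import request is plain Base64 text. Outside ICS, the same encoding can be reproduced from a shell to sanity-check a payload; this is a hedged example, and the file name is illustrative:

# Base64-encode the CSV export without line wrapping (GNU coreutils syntax).
base64 -w0 sales_transactions.csv > sales_transactions.b64

# Decode it again to confirm the round trip.
base64 -d sales_transactions.b64 | head -3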

To configure the FTP invoke, drag and drop the connection configured previously.
Figure11

Select operations settings as shown.
Figure12

Choose options to select an existing schema.

Figure13

Pick the schema file containing the schema.

Figure14

The FTP invoke is now ready to get a file via SFTP and return its contents to the orchestration as a Base64-encoded string. Map the content to a field in the SOAP message to be sent to Incentive Compensation.

Testing the solution

To test the solution, place a CSV-formatted file in the staging folder on the file server. Here is sample content from a source file.

SOURCE_TRX_NUMBER,SOURCE_EVENT_DATE,CREDIT_DATE,ROLLUP_DATE,TRANSACTION_AMT_SOURCE_CURR,SOURCE_CURRENCY_CODE,TRANSACTION_TYPE,PROCESS_CODE,BUSINESS_UNIT_NAME,SOURCE_BUSINESS_UNIT_NAME,POSTAL_CODE,ATTRIBUTE21_PRODUCT_SOLD,QUANTITY,DISCOUNT_PERCENTAGE,MARGIN_PERCENTAGE,SALES_CHANNEL,COUNTRY
TRX-SC1-000001,1/15/2016,1/15/2016,1/15/2016,1625.06,USD,INVOICE,CCREC,US1 Business Unit,US1 Business Unit,90071,SKU1,8,42,14,DIRECT,US
TRX-SC1-000002,1/15/2016,1/15/2016,1/15/2016,1451.35,USD,INVOICE,CCREC,US1 Business Unit,US1 Business Unit,90071,SKU2,15,24,13,DIRECT,US
TRX-SC1-000003,1/15/2016,1/15/2016,1/15/2016,3033.83,USD,INVOICE,CCREC,US1 Business Unit,US1 Business Unit,90071,SKU3,13,48,2,DIRECT,US

After ICS fetches this file and transforms its content, it invokes the file-based data import service with the payload shown below.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/oracle/apps/marketing/commonMarketing/mktImport/model/types/" xmlns:mod="http://xmlns.oracle.com/oracle/apps/marketing/commonMarketing/mktImport/model/">
 <soapenv:Header/>
 <soapenv:Body>
 <typ:submitImportActivity>
 <typ:importJobSubmitParam>
 <mod:JobDescription>Gartner demo import</mod:JobDescription>
 <mod:HeaderRowIncluded>Y</mod:HeaderRowIncluded>
 <mod:FileEcodingMode>UTF-8</mod:FileEcodingMode>
 <mod:MappingNumber>300000130635953</mod:MappingNumber>
 <mod:ImportMode>CREATE_RECORD</mod:ImportMode>
 <mod:FileContent>U09VUkNFX1.....JUkVDVCxVUw==</mod:FileContent>
 <mod:FileFormat>COMMA_DELIMITER</mod:FileFormat>
 </typ:importJobSubmitParam>
 </typ:submitImportActivity>
 </soapenv:Body>
</soapenv:Envelope>
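
When testing outside ICS, the same request can be posted with a generic SOAP client or with curl. The following is a minimal sketch, assuming the endpoint pattern shown earlier, basic authentication, and the payload saved to a local file; depending on the service configuration, a SOAPAction header may also be required:

# Post the submitImportActivity payload to the file-based data import service.
curl -v -X POST -u "integration.user:password" \
  -H "Content-Type: text/xml; charset=utf-8" \
  --data-binary @submitImportActivity.xml \
  "https://<Sales Cloud CRM host name>:<CRM port>/mktImport/ImportPublicService"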


At this point, the import job has been submitted to Sales Cloud. The status of the file import job can be tracked in Sales Cloud, under ‘Setup and Maintenance’, by opening “Manage File Import Activities”. As shown below, several Incentive Compensation file imports have been submitted and are in the status ‘Base table upload in progress’.

Figure17

Here is a more detailed view of one job, opened by clicking on the status column of the job. This job has imported records into a staging table.

Figure18

To complete the job and see transactions in Incentive Compensation, follow one of these two methods.

  • Navigate to “Incentive Compensation” -> “Credits and Earnings” and click on “Collect Transactions” to import data
  • OR, invoke the ‘Manage Process Submission’ service with a payload similar to the sample snippet below.
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/incentiveCompensation/cn/processes/manageProcess/manageProcessSubmissionService/types/">
   <soapenv:Header/>
   <soapenv:Body>
      <typ:submitCollectionJob>
         <typ:scenarioName>CN_IMPORT_TRANSACTIONS</typ:scenarioName>
         <typ:scenarioVersion>001</typ:scenarioVersion>
         <typ:sourceOrgName>US1 Business Unit</typ:sourceOrgName>
         <typ:startDate>2016-01-01</typ:startDate>
         <typ:endDate>2016-01-31</typ:endDate>
         <typ:transactionType>Invoice</typ:transactionType>
      </typ:submitCollectionJob>
   </soapenv:Body>
</soapenv:Envelope>
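
As with the import call, this payload can be posted directly for testing against the Manage Process Submission endpoint listed earlier; this curl sketch assumes basic authentication and a local payload file, both of which are placeholders:

# Post the submitCollectionJob payload to the Manage Process Submission service.
curl -v -X POST -u "integration.user:password" \
  -H "Content-Type: text/xml; charset=utf-8" \
  --data-binary @submitCollectionJob.xml \
  "https://<IC host name>:<IC port number>/publicIncentiveCompensationManageProcessService/ManageProcessSubmissionService"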

Finally, verify that the transactions are visible in Incentive Compensation by navigating to “Incentive Compensation” -> “Credits and Earnings” from the home page and clicking on “Manage Transactions”.

Figure19

Summary

This post explained a solution to import transactions into Incentive Compensation using web services provided by Sales Cloud and the Incentive Compensation application. It also explained several features of Integration Cloud Service used to orchestrate the import. The solution discussed in this post is suitable for Sales Cloud R11 and ICS R16.4.1. Subsequent releases of these products might offer equivalent or better capabilities out of the box. Refer to the product documentation for later versions before implementing a solution based on this post.

 

 

Integrating Big Data Preparation (BDP) Cloud Service with Business Intelligence Cloud Service (BICS)


Introduction

This article presents an overview of how to integrate Big Data Preparation Cloud Service (BDP) with Business Intelligence Cloud Service (BICS).  BDP is a big data cloud service designed for customers interested in cleansing, enriching, and transforming their structured and unstructured business data.  BICS is a business intelligence cloud service designed for customers interested in gaining insights into their data with interactive visualizations, data model designs, reports, and dashboards.  BDP and BICS are both cloud services under Oracle Platform as a Service (PaaS).

Users can upload data into BICS using various tools and technologies such as Data Sync, Oracle Application Express, PL/SQL, the BICS REST APIs, and Oracle Data Integrator, among other tools.  The BICS REST APIs allow users to programmatically load large volumes of data from both on-premise and cloud data sources into the cloud database service connected to BICS.

In BDP, users can define BICS as a target and publish the results of a BDP transform script into the cloud database service connected to BICS.  BDP uses the BICS REST APIs to accomplish the integration with BICS.  BDP users do not need to write REST API programs or learn how to use the BICS REST APIs – the only requirement is to create a BDP connection to access the cloud database service of BICS.  BDP transform scripts can be executed as BDP policies, and the results can be published directly into BICS.

The next sections of this article demonstrate how to create, execute, and publish the results of a BDP transform script into BICS.

 

Integrating Big Data Preparation (BDP) Cloud Service with Business Intelligence Cloud Service (BICS)

 

Figure 1, below, illustrates the BDP main dashboard, which includes a list of metrics such as the number of executed jobs, total configured sources, and the number of rows processed and transformed by the BDP cloud service.  A Quick Start option is also available for easy access when creating sources, creating transform scripts, and uploading data into BDP.

 


Figure 1 – BDP Dashboard & Overview Page

The next sections of this article discuss the following concepts:

 

  • How to create a connection in BDP to access BICS.  In BDP, a connection is known as a BDP Source, and the BDP Source can be used as either a source or target connection. In this article, a BDP Source will be created and used as a target connection to publish into BICS the data results of running a BDP policy.
  • How to create a BDP Transform Script that uses two source datasets.  In BDP, this is known as blending two source datasets or files.
  • How to create and execute a BDP Policy.  BDP policies are used in BDP to configure the executions of BDP transform scripts.  In this article, a BDP policy will be created to run a BDP transform script and publish the data results into BICS.
  • How to view and use the published BDP dataset in the BICS data model and the BICS Visual Analyzer.

The first step to integrate BDP with BICS is to create a BDP Source connection.  Use the Quick Start menu to create a new BDP Source, and select CREATE SOURCE.  Figure 2, below, illustrates how to create this BDP Source connection.  Enter the name of the BDP Source, and select Oracle BICS as the connection type.

 


Figure 2 – Creating a BDP Source – Oracle BICS

Enter the Service URL, Username, Password, and Domain of the BICS cloud service as shown on Figure 3, below.  The Service URL is the Service Instance URL of the BICS cloud service.

 


Figure 3 – Creating a BDP Source – Oracle BICS Definition

 

Test the new BDP Source using the Test option as shown on Figure 3, above.  Save the new BDP Source once the test connection is successful.

The next step is to create a BDP transform script.  Use the Quick Start menu to create a new BDP transform script, and select CREATE TRANSFORM.  Figure 4, below, illustrates an example of a new transform script called A_TEAM_CUSTOMERS.

 


Figure 4 – Creating a BDP Transform – Customer Accounts

On Figure 4, above, the BDP Source for this transform script is called A_TEAM_STORAGE.  This BDP Source has been previously defined by a BDP user, and its source type is Oracle Cloud Storage.  A structured XLS file called ATeamCustomerAccounts.xls has been previously imported into this BDP Source.  This XLS file is used as the source dataset for the new BDP transform script.

 

 

Once the new BDP transform script is defined, it is then submitted to the BDP engine for data ingestion, data profiling, data de-duplication, and detection of sensitive data.  The BDP engine displays a notification on screen when the BDP transform script is ready for transformations.

Figure 5, below, shows the transform script called A_TEAM_CUSTOMERS after the BDP ingestion process.  A series of transformations have been added by the BDP user.  These transformations are illustrated in the Transform Script section as follows:

 

  • Columns Col_0001, Col_0005, and date_02 have been renamed – respectively – to cust_num, middle_initial, and exp_dt.
  • Column City has been enriched with a new data element (column) called Population.
  • The email domain has been extracted from column email.
  • Columns Col_0013, Col_0014, and Col_0019 have been removed from the transform script.
  • Columns us_phone and exp_dt have been reformatted to (999) 999-9999 and MM-dd-yyyy, respectively.
  • Columns credit_card and us_ssn (social security number) have been obfuscated to the first 12 and 5 digits, respectively.  BDP has detected that these two columns contain sensitive data.

 


Figure 5 – Creating a BDP Transform – Transform Script

The BDP section called Recommendations, on Figure 5, above, can be used to enrich the transform script with additional data elements – a feature that is part of the BDP cloud service.  For instance, the transformation script can be enriched with new columns such as country_name, capital, continent, and square_km, among others.

The Column Profile section, on Figure 5, above, provides a set of metrics for each of the columns found on the source file.  In this example, the column called first_name has been profiled as follows:

 

  • A total of 15,000 rows were found on the source file.
  • A total of 2,101 distinct values, 14.01% of total rows, were found on this column.
  • A total of 12,899 duplicate names, 85.99% of total rows, were found on this column.
  • A total of 10 distinct patterns were found on this column.
  • The type for this column is TEXT.
  • The bubble graph illustrates the most common data values for this column: Mich, Robert, Mary, James, and John are the most common first names.

In BDP, users can join an existing dataset of a transform script with one additional file – this feature is known as Blending.    To blend the existing dataset of a transform script with another file, select the Blend option located on the Transform Script page.  Figure 6, below, shows how to add a file.

 


Figure 6 – Creating a BDP Transform – Adding File to Blend

In this example, on Figure 6, above, a JSON file called ATeamCustomerTransactionsLog.json will be blended with the existing dataset of a transform script.  This JSON file contains customer transaction logs, which will be used to enrich the BDP transform script with additional data columns.

Once the additional file is added to the BDP transform script, the BDP engine analyzes the new file, and recommends a set of blending conditions.  Figure 7, below, shows the recommended blending conditions or blending keys for these two datasets.

 


Figure 7 – Creating a BDP Transform – Blending Configuration

In this example, on Figure 7, above, the BDP engine has recommended to use the cust_num column, which has been found on both datasets, as the blending condition.  BDP has an underlying discovery engine that provides blending key recommendations based on data profiling.  When blending two datasets, BDP users can choose one of three types of output options:

 

  • Rows matching both datasets – All rows on both datasets must match the blending condition.  Those rows that do not match the blending condition – on either dataset – will be removed from the blended dataset.
  • Left Join – All rows from the first dataset will be included on the blended dataset even if the rows from the first dataset do not meet the blending condition.
  • Right Join – All rows from the second dataset will be included in the blended dataset even if the rows from the second dataset do not meet the blending condition.

Once the blending configuration is ready for submission, the BDP engine performs the blending operation.  A message will be displayed on screen when the blending operation is complete and the transform script is ready for additional modifications.  The BDP transform script will show a combined set of columns from both the first dataset and the second dataset.  BDP users will be able to perform additional transformations or accept additional recommendations on this blending BDP transform script.

In order to execute a BDP transform script, BDP users must create a BDP Policy.  Figure 8, below, shows the configuration of a BDP Policy.

 


Figure 8 – Creating a BDP Policy – Policy Details

When configuring a BDP Policy, as shown on Figure 8, above, BDP users must specify the following parameters:

 

  • Name of the BDP Policy.
  • Name of the BDP transform script.
  • If the transform script is a blending script, two source datasets are required: Source 1 and Source 2.
  • Name of the Target output.  In this example, the target output is BICS.  This is the BDP BICS Source created on a previous section of this article.
  • The scheduling information such as Time, Start Date, and End Date are required parameters as well.

The BDP Policy can be executed by selecting the Run option from the Policies screen, as shown on Figure 9, below:

 


Figure 9 – Creating a BDP Policy – Running Policy

Once the BDP Policy is submitted for execution, BDP users can monitor the progress of its execution using the BDP Job Details screen.  Figure 10, below, shows an example.

 


Figure 10 – Running a BDP Policy – Job Details

The Job Details screen, on Figure 10, above, shows the job Id and policy name:  4798122, and A_TEAM_CUSTOMERS, respectively.  The status of the execution of this policy is succeeded – the policy has been executed successfully.  The Metrics section shows a total of 15K rows – this is the number of rows that met the blending condition.  A total of 15K rows were transformed, and no errors were found during the execution of the policy.  Additional execution metrics can be found under sections:  Ingest Metrics, Prepare Metrics, and Publish Status.

When BDP executes a policy that uses BICS as a target, the BICS RESTful APIs are invoked, and the result-set of the BDP policy gets published into BICS.  BDP uses the name of the BDP policy to create a table in the database that is connected to BICS.

Figure 11, below, illustrates the name of the table, A_TEAM_CUSTOMERS, created by BDP during the execution of the BDP policy.

 


Figure 11 – Integrating BICS with BDP – Inspecting the BDP Data

The newly published table can be seen on the BICS data model module, as shown on Figure 11, above.  In this example, on Figure 11, above, some of the dataset columns are illustrated:

 

  • The credit card number (CREDIT_CARD) and the social security (US_SSN) have been obfuscated.
  • The US phone (US_PHONE) and the credit card expiration date (EXP_DT) have been reformatted.

In BICS, this new dataset can be used to create warehouse facts and dimensions.  Furthermore, BICS users can expand the use of this dataset to other BICS features such as BICS Visual Analyzer.  Figure 12, below, shows an example of how this dataset, A_TEAM_CUSTOMERS, is used in Visual Analyzer:

 


Figure 12 – Integrating BICS with BDP – Creating a Project in BICS Visual Analyzer

In this example, on Figure 12, above, a set of metrics, the A-Team Metrics, have been created on BICS Visual Analyzer.  A new tile chart, Revenue Amount By State, has been created as well.  This tile chart uses an aggregated value, REVENUE_AMT, to sum revenue by state.  The source of the REVENUE_AMT column comes from the blended dataset – a source column from the blend file, ATeamCustomerTransactionsLog.json, the customer transaction log file.  The source of the state column comes from the blended dataset as well – a source column from the first dataset, ATeamCustomerAccounts.xls – the customer accounts file.

 

Conclusion

 

BDP users can publish the data produced by BDP policies directly into BICS without writing additional programs or having to learn BICS RESTful APIs.  In BICS, the data results of an executed BDP policy can be modeled as facts and dimensions.  BICS users can then create dashboards and reports with data that has been transformed by BDP.

 

For more ODI best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for ODI.

 

ODI Related Articles

Oracle Big Data Preparation (BDP) Cloud Service Introduction – Video

Oracle Big Data Preparation (BDP) Cloud Service Quick Introduction – Video

Oracle BI Cloud Service REST APIs

Loading Data in Oracle Database Cloud Service

Extracting Data from BICS / Apex via RESTful Webservices

Integrating Oracle Data Integrator (ODI) On-Premise with Cloud Services

 

 

Automated Deployment to SOA Cloud Services using Developer Cloud Services


Introduction

The process of deploying SOA projects to Oracle SOA Cloud Service (SOACS) can be significantly simplified and streamlined by using Oracle Developer Cloud Service (DevCS) and its built-in Maven and git repositories. This article is a walk-through of how to create a SOA project in JDeveloper and get it deployed on SOACS using DevCS. It is based on Windows, but other operating systems work in a very similar way. The following graphic shows the simplified process.

 

flow

Prerequisites

JDeveloper: Download and install SOA Suite 12.1.3 QuickStart from OTN. This contains the JDeveloper version required for this example. Please note that this example will work for SOA 12.2.1 as well, but you will additionally need to follow the steps described in My Oracle Support (MOS) note 2186338.1.

Maven: Download and install Maven from here.

Access to an Oracle SOA Cloud Service 12.1.3 instance. Details can be found here.

Access to Oracle Developer Cloud Service – this comes as part of the SOA Cloud Service subscription. You will receive an email with details on how to log in when the subscription is activated.

Creating the GIT repository

Log in to Oracle Developer Cloud Service and select Create New Project. I am going with a Private Project, but using a Shared project is identical. Make sure to select an Empty Project and select your favourite Wiki Markup. More details can be found here.

image1  image2

A few seconds of provisioning later, the project will be accessible. Navigate to the Code section of the Developer Cloud Service project and hit the New Repository button to start the process of creating a new git repository. Make sure not to initialize it, but simply create an Empty Repository.

image4

Every git repository has unique URLs to access it. This article uses the HTTP authentication option; if you are using SSH, create an SSH key and add it to Oracle Developer Cloud Service. For more information, see Generating an SSH Key. Make a note of the URL of the newly created repository for use later in the article.
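
If you choose the SSH option, a key pair can be generated with standard OpenSSH tooling and the public key added to Oracle Developer Cloud Service as described in Generating an SSH Key. A minimal sketch (the comment string is illustrative):

# Generate an RSA key pair for use with the DevCS git repository.
ssh-keygen -t rsa -b 4096 -C "devcs-git-access"

# Print the public key so it can be copied into your DevCS profile.
cat ~/.ssh/id_rsa.pub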

Capture

Creating a simple SOA Composite

In order to demonstrate this process, I am going to create a dummy SOA application using JDeveloper. This is not really the focus of this article – a great place to start with SOA development is Oracle by Example (OBE). For the minimum required, open JDeveloper and start by creating a new SOA Application.

image5

Give it a name and make note of the directory the work will be stored in. A project within the application will be created automatically.

image6  image7

For this example I am using a simple BPEL 2.0 process. This is basically just a placeholder to prove out the process.

image8  image9

Adjusting Your Project for Maven

There are a few changes that have to be applied to the local project in order to get it working properly with Maven in DevCS. First open DevCS and navigate to the Maven page. This page shows the details for the distribution management. Note the information for the repositories – it needs to be added to three files on the local file system.

image24

First, the repository information needs to be added to the Maven configuration in settings.xml, as shown in the screenshot. This file usually exists at C:\Users\<user>\.m2\settings.xml on Windows, though the location can differ based on the configuration. The Maven configuration guide is the best source for resolving issues.

image26_2

The next files that need adjustment are the pom.xml files – there is one pom.xml at the application level and one at the project level. You can edit the files directly from JDeveloper, as shown in the screenshots.

image27  image23

Also make sure to adjust the <sarLocation> to your needs. If you keep the defaults, you will have to add the -SNAPSHOT string as shown below.

image25

Pushing Application to DevCS

Next, a local git repository needs to be created to maintain the contents of the project locally. This repository will then be pushed to DevCS. The repository is initialized via the Team menu under the git option – select Initialize and enter a new location for the repository.

  image12

Next all the files that are part of this application need to be added to git and afterwards committed into the local git repository, as shown in the screenshots below.

image13  image14

More details about this process including a great introduction video can be found here.

image15  image16

After all files have been committed, the repository needs to be pushed into the Developer Cloud Services git repository that was created earlier. This can be done via the Push Assistant, which is accessible via the context menu of the application. The repository URL can be copied from the Developer Cloud Services Code page as shown above. The credentials are the credentials for DevCS – these can be different from your oracle.com account – see here how to set these credentials.

image17_2   image18

Select the master branch and make sure that the Status is Create – it should not be Update at this stage.

image19  image20
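
For reference, the same initialize, commit, and push sequence can also be performed from a plain git command line instead of the JDeveloper assistants. A sketch, where the application path and repository URL are placeholders:

cd /path/to/MySOAApplication          # the application directory created by JDeveloper

git init                              # initialize the local repository
git add .                             # stage all application files
git commit -m "Initial commit of SOA composite"

# Point the local repository at the DevCS git repository and push the master branch.
git remote add origin "<DevCS repository URL>"
git push origin master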

You are now able to see the committed and pushed repository objects in the DevCS. Simply navigate to the Code Section of the DevCS.

image21

Populating the local Maven Repository

By default, the location of the local Maven repository is C:\Users\<username>\.m2\repository. ORACLE_HOME references the SOA Suite 12.1.3 QuickStart installation directory, and JAVA_HOME points to the latest JDK installed locally. There are two steps to this process: the first command deploys the Oracle Maven Synchronization Plug-In into the local Maven repository, and the second command pushes the local Maven repository to Oracle Developer Cloud Service. These commands are executed via cmd.exe – no administrative execution is necessary. For more details please see here. Please note this process takes a while if you have a slower internet connection.

set JAVA_HOME=C:\Program Files\Java\jdk1.7.0_121
set ORACLE_HOME=C:\oracle\Middleware\Oracle_Home

mvn deploy:deploy-file ^
    -DpomFile=%ORACLE_HOME%\oracle_common\plugins\maven\com\oracle\maven\oracle-maven-sync\12.1.3\oracle-maven-sync-12.1.3.pom ^
    -Dfile=%ORACLE_HOME%\oracle_common\plugins\maven\com\oracle\maven\oracle-maven-sync\12.1.3\oracle-maven-sync-12.1.3.jar ^
    -DrepositoryId=SOA_Maven_CI_Project_repo ^
    -Durl=https://developer.us2.oraclecloud.com/profile/developerXXXX-aXXXXXX/s/developerXXXXX-aXXXXXX_soa-maven-ci-project_12619/maven/

mvn com.oracle.maven:oracle-maven-sync:push ^
    -DoracleHome=%ORACLE_HOME% ^
    -DserverId=SOA_Maven_CI_Project_repo ^
    -P Profile_SOA_Maven_CI_Project_repo

Creating the Build Job

The next step is to create the actual build job that will compile, package, and deploy the SOA composite to SOA Cloud Service using Developer Cloud Service. DevCS uses Hudson for build tasks – simply navigate to the Build page of DevCS and click New Job. You can usually work with the default JDK.

image29  Capture3

The build requires a number of parameters to execute against the right SOACS instance. Make sure to double-check your spelling – everything is case sensitive. Please note that oracleServerUrl refers to the public IP of your SOACS deployment. If you are using a load-balanced deployment, the Oracle Traffic Director will listen on port 80 and forward requests to the Managed Server port 8001. You can also opt to open the port and deploy directly against the Managed Server that is hosting SOA. Details on how to find the correct IP can be obtained here.

It is strongly recommended to encrypt the passwords used here. This process is described in this guide.

 

Name              Value                         Parameter Type
middlewareHome    opt/Oracle/Middleware/SOA     String
oracleUsername    weblogic                      String
oraclePassword    <SOACS Password>              String
oracleServerUrl   http://10.10.10.10:80         Password

 

Capture4

Navigate to the Source Control tab and add the git repository that was created previously. Then move to the Triggers tab and make sure to select the appropriate trigger, as shown in the screenshots below.

Capture5  Capture6

Move on to the Build Steps tab and click the Add Build Step button – for now we only need to add the Invoke Maven 3 step. You can add additional steps to support your needs at a later stage. Please add the proxy properties with the following options – this will avoid calls being routed unnecessarily:

    http.proxyPort=$HTTP_PROXY_PORT
    http.nonProxyHosts=localhost|*.oraclecloud.com
    http.proxyHost=$HTTP_PROXY_HOST

I have chosen the pre-integration-test Maven goal for this example, as I haven’t included any tests in my SOA project. Details about this goal can be found here. The next step would be to implement tests that allow a complete “clean install” Maven goal; however, this is out of scope for this article. Finally, hit the Save button.

Capture7
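
For troubleshooting, the same goal can also be run locally before handing it to Hudson, assuming the job parameters from the table above map one-to-one to Maven properties. A sketch for a Unix-like shell (on Windows cmd.exe, use ^ instead of \ for line continuation):

mvn pre-integration-test \
    -DmiddlewareHome=/opt/Oracle/Middleware/SOA \
    -DoracleUsername=weblogic \
    -DoraclePassword='<SOACS Password>' \
    -DoracleServerUrl=http://10.10.10.10:80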

You can hit the CC’ed button on the job home page to receive the logs right in your inbox. Finally, hit the “Build Now” button to test this job. If everything goes well, you should see the outcome with the “Finished: Success” message as shown below.

Capture2

Next Steps

The job will execute every time you push changes from your JDeveloper application to the DevCS git repository. This example only shows the basics of what is possible with Oracle Developer Cloud Service and SOA Cloud Service. The next step would be to incorporate a test that validates the successful deployment, as described here.

 

Further Reading

Oracle SOA Cloud Service Documentation: http://docs.oracle.com/cloud/latest/soacs_gs

Oracle Developer Services Documentation: http://docs.oracle.com/en/cloud/paas/developer-cloud/index.html

 

Loading Data into Oracle BI Cloud Service using OTBI Analyses and SOAP


Introduction

This post details a method of loading data that has been extracted from Oracle Transactional Business Intelligence (OTBI) using SOAP into the Oracle Business Intelligence Cloud Service (BICS). The OTBI instance may be either cloud-based or on-premise. This method may also be used to load data from Oracle Business Intelligence Enterprise Edition (OBIEE).

It builds upon the A-Team post Using Oracle BI Answers to Extract Data from HCM via Web Services which details the extraction process.

This post uses the PL/SQL language to wrap the SOAP extract, XML parsing commands, and database table operations in a stored procedure in the BICS Schema Service database. It produces a BICS staging table which can then be transformed into star-schema object(s) for use in modeling.  The transformation processes and modeling are not discussed in this post.

The most complex portion of this post details how to convert the analysis XML report results, embedded in a CDATA (Character Data) text attribute, back into standard XML markup notation so the rows and columns of data can be parsed.

Additional detailed information, including the complete text of the procedure described, is included in the References section at the end of the post.

Rationale for using PL/SQL

PL/SQL is the only procedural tool that runs on the BICS / Database Schema Service platform. Other wrapping methods e.g. Java, ETL tools, etc. require a platform outside of BICS to run on.

PL/SQL can utilize native SQL commands to operate on the BICS tables. Other methods require the use of the BICS REST API.

Note: PL/SQL is very good at showcasing functionality. However, it tends to become prohibitively resource intensive when deployed in an enterprise production environment.

For the best enterprise deployment, an ETL tool such as Oracle Data Integrator (ODI) should be used to meet these requirements and more:

* Security

* Logging and Error Handling

* Parallel Processing – Performance

* Scheduling

* Code re-usability and Maintenance

The steps below depict how to load a BICS table.

About the OTBI Analysis

The analysis used in this post is named Suppliers and is stored in a folder named Shared Folders/custom as shown below:

A

The analysis has three columns and output as shown below:

B

Note: The method used here requires all column values in the analysis to be NOT NULL for two reasons. The XPATH parsing command signals either the end of a row or the end of the data when a null result is returned. All columns being NOT NULL ensures that the result set is dense and not sparse. A dense result set ensures that each column is represented in each row. Additional information regarding dense and sparse result sets may be found in the Oracle document Database PL/SQL Language Reference.

One way to ensure a column is not null is to use the IFNull function in the analysis column definition as shown below:

C

An optional parameter may be sent at run time to filter each column.

Ensuring the Web Services are Available

To ensure that the web services are available in the required environment, type a form of the following URL into a browser:

https://hostname/analytics-ws/saw.dll/wsdl/v9

Note: The version number e.g. v9 may vary from server to server.

If you are not able to reach the website, the services may not be offered.  Discuss this with the server administrator.
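
The same availability check can be scripted, for example by asking curl for just the HTTP status code. A sketch, where the host name and version segment are placeholders:

# Returns 200 if the web service WSDL is reachable.
# -k skips certificate validation and should only be used against test environments.
curl -k -s -o /dev/null -w "%{http_code}\n" "https://hostname/analytics-ws/saw.dll/wsdl/v9"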

Calling the Analysis

Calling the analysis is a two-step process. The first step initiates a session in OTBI and returns a session ID.  The second step uses that session ID to call the analysis and extract the data.

The SOAP API requests should be constructed and tested using a SOAP API testing tool e.g. SoapUI.

Note: API testing tools such as SoapUI, cURL, Postman, and so on are third-party tools for using SOAP and REST services. Oracle does not provide support for these tools or recommend a particular tool for its APIs. You can select the tool based on your requirements.

The procedure uses the APEX_WEB_SERVICE package to issue the SOAP API requests and store the XML result in a XMLTYPE variable. The key inputs to the package call are:

* The URL for the OTBI Session Web Service

* The URL for the OTBI XML View Web Service

* The Base64 encoded credentials to access the analysis

* The SOAP envelopes expected by the OTBI Web Service.

* Optional Parameters to filter the results

* An optional proxy override

Decoding the Credentials

To avoid hard-coding credentials in the procedure, the credentials are expected to be encoded in a base64 format prior to invoking the procedure. A useful base64 encoding tool may be found at Base64 Decoding and Encoding Testing Tool. The text to encode should be in the format username:password
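
On most UNIX-like systems, the same encoding can also be produced locally from a shell instead of a web tool. A hedged example; replace the sample credentials:

# Encode username:password as Base64 for the procedure's credentials parameter.
# printf is used instead of echo to avoid including a trailing newline in the encoded value.
printf 'username:password' | base64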

The APEX_WEB_SERVICE and the DBMS_LOB packages and the INSTR function are used to decode the credentials into username and password variables. The APEX_WEB_SERVICE package decodes the credentials into a BLOB variable. The DBMS_LOB package converts the BLOB to a CLOB variable. The INSTR function then separates the decoded result into the two variables.

Examples are below:

-- Decode the Base64 credentials
f_blob := apex_web_service.clobbase642blob(f_base64_creds);
-- Create a temporary CLOB instance
dbms_lob.createtemporary(f_clob, true);
-- Convert the decoded BLOB credentials to a CLOB
dbms_lob.converttoclob(
f_clob,
f_blob,
v_file_size,
v_dest_offset,
v_src_offset,
v_blob_csid,
v_lang_context,
v_warning);
-- Parse the credentials into username and password
f_au := substr ( f_clob, 1, instr(f_clob, ':') - 1 ); -- username
f_ap := substr ( f_clob, instr(f_clob, ':') + 1 );    -- password

Calling the Session Service

An example Session URL is below:

https://hostname/analytics-ws/saw.dll?SoapImpl=nQSessionService

An example Logon Request envelope is below. The result will be an envelope containing a session ID.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v9="urn://oracle.bi.webservices/v9">
<soapenv:Header/>
<soapenv:Body>
<v9:logon>
<v9:name>username</v9:name>
<v9:password>password</v9:password>
</v9:logon>
</soapenv:Body>
</soapenv:Envelope>

 An example APEX_WEB_SERVICE call for the login is below:

f_xml := apex_web_service.make_request(p_url => f_session_url
,p_envelope => f_envelope
-- ,p_proxy_override => -- An optional Proxy URL
-- ,p_wallet_path => -- An optional path to an Oracle database wallet file
-- ,p_wallet_pwd => -- The password for the optional Oracle database wallet file
);

The APEX_WEB_SERVICE package is used to parse the XML result from above to obtain the session ID. An example is below:

f_session_id := apex_web_service.parse_xml_clob(p_xml => f_xml
,p_xpath => '//*:sessionid/text()'
);

Troubleshooting the Session Service Call

Three common issues are the need for a proxy, the need for a trusted certificate (if using HTTPS), and the need to use the TLS security protocol.

The need for a proxy may be detected when the following error occurs: ORA-12535: TNS:operation timed out. Adding the optional p_proxy_override  parameter to the call may correct the issue. An example proxy override is:

www-proxy.us.oracle.com

The need for a trusted certificate is detected when the following error occurs: ORA-29024: Certificate validation failure.

A workaround may be to run this procedure from a full Oracle Database Cloud Service or an on-premise Oracle database. Adding the trusted certificate(s) to an Oracle database wallet file and adding the optional p_wallet_path and p_wallet_pwd parameters to the call should correct the issue. For more information on Oracle wallets, refer to Using Oracle Wallet Manager in the References section of this post.

The need to use the TLS protocol may be detected when the following error occurs: ORA-29259: end-of-input reached.

A workaround is to run this procedure from a different Oracle Database Cloud Service or an on-premise Oracle database. Ensure the database version is 11.2.0.4.10 or above.

Additionally: When using an on-premise Oracle database, the SQL Operations described later in this post (Create Table, Truncate Table, Insert) may be modified to use the BICS REST API. For more information refer to the REST APIs for Oracle BI Cloud Service in the References section of this post.

Calling the XML View Service

An example XML View service URL is:

https://hostname/analytics-ws/saw.dll?SoapImpl=xmlViewService

An example Analysis Request envelope is below. This envelope contains the session ID from the logon call, the location of the analysis, a placeholder variable for the VNUM analysis variable, and a filter value for the VTYPE variable.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v9="urn://oracle.bi.webservices/v9">
<soapenv:Header/>
<soapenv:Body>
<v9:executeXMLQuery>
<v9:report>
<v9:reportPath>/shared/custom/Suppliers</v9:reportPath>
<v9:reportXml></v9:reportXml>
</v9:report>
<v9:outputFormat>xml</v9:outputFormat>
<v9:executionOptions>
<v9:async></v9:async>
<v9:maxRowsPerPage></v9:maxRowsPerPage>
<v9:refresh></v9:refresh>
<v9:presentationInfo></v9:presentationInfo>
<v9:type></v9:type>
</v9:executionOptions>
<v9:reportParams>
<!–Zero or more repetitions:–>
<v9:variables>
<v9:name>VNUM</v9:name>
<v9:value></v9:value>
</v9:variables>
<v9:variables>
<v9:name>VTYPE</v9:name>
<v9:value>Supplier</v9:value>
</v9:variables>
</v9:reportParams>
<v9:sessionID>'||F_SESSION_ID||'</v9:sessionID>
</v9:executeXMLQuery>
</soapenv:Body>
</soapenv:Envelope>

An example APEX_WEB_SERVICE call for the analysis result is below:

f_xml := apex_web_service.make_request(p_url => f_report_url
,p_envelope => f_envelope
-- ,p_proxy_override => -- An optional Proxy URL
-- ,p_wallet_path => -- An optional path to an Oracle database wallet file
-- ,p_wallet_pwd => -- The password for the optional Oracle database wallet file
);

Preparing the XML Result

The XML result from the Analysis call contains the report results in a CDATA text section. In order to parse the results, the XML within the text section is converted into standard XML using the XMLTYPE package and the REPLACE function.

An example of the CDATA section result, as seen in SoapUI, is below:

<sawsoap:rowset xsi:type="xsd:string"><![CDATA[<rowset xmlns="urn:schemas-microsoft-com:xml-analysis:rowset">
<Row>
<Column0>UJ Catering Service AG</Column0>
<Column1>5991</Column1>
<Column2>Supplier</Column2>
</Row>
</rowset>]]>
</sawsoap:rowset>

The same result, as seen in APEX_WEB_SERVICE, is below:

<sawsoap:rowset xsi:type="xsd:string">&lt;rowset xmlns=&quot;urn:schemas-microsoft-com:xml-analysis:rowset&quot;&gt;
&lt;Row&gt;
&lt;Column0&gt;UJ Catering Service AG&lt;/Column0&gt;
&lt;Column1&gt;5991&lt;/Column1&gt;
&lt;Column2&gt;Supplier&lt;/Column2&gt;
&lt;/Row&gt;
&lt;/rowset&gt;
</sawsoap:rowset>

The converted result needed for parsing  is below:

<sawsoap:rowset xsi:type="xsd:string"><bi:rowset xmlns:bi="urn:schemas-microsoft-com:xml-analysis:rowset">
<Row>
<Column0>UJ Catering Service AG</Column0>
<Column1>5991</Column1>
<Column2>Supplier</Column2>
</Row>
</bi:rowset>
</sawsoap:rowset>

The XMLTYPE package and the REPLACE function usage is below.  Note: the CHR(38) function returns the ‘&’ character.

F_CLOB := F_XML.GETCLOBVAL(); -- Convert to CLOB
F_CLOB := REPLACE (F_CLOB, CHR(38)||'lt;', '<');
F_CLOB := REPLACE (F_CLOB, CHR(38)||'gt;', '>');
F_CLOB := REPLACE (F_CLOB, CHR(38)||'quot;', '"');
F_CLOB := REPLACE (F_CLOB, '/rowset', '/bi:rowset'); -- Insert bi namespace
F_CLOB := REPLACE (F_CLOB, '<rowset', '<bi:rowset'); -- Insert bi namespace
F_CLOB := REPLACE (F_CLOB, 'xmlns=', 'xmlns:bi='); -- Insert bi namespace
F_XML := XMLTYPE.createXML( F_CLOB ); -- Convert back to XMLTYPE

Creating a BICS Table

This step uses a SQL command to create a simple staging table that has 20 identical varchar2 columns. These columns may be transformed into number and date data types in a future transformation exercise that is not covered in this post.

A When Others exception block allows the procedure to proceed if an error occurs because the table already exists. An example is below:

EXCEPTION
WHEN OTHERS THEN NULL; -- Ignore error if table exists

Note: The table needs to be created once before compiling the procedure the first time. The complete DDL is below:

CREATE TABLE STAGING_TABLE
(
C01 VARCHAR2(2048 BYTE),C02 VARCHAR2(2048 BYTE), C03 VARCHAR2(2048 BYTE), C04 VARCHAR2(2048 BYTE), C05 VARCHAR2(2048 BYTE),
C06 VARCHAR2(2048 BYTE),C07 VARCHAR2(2048 BYTE), C08 VARCHAR2(2048 BYTE), C09 VARCHAR2(2048 BYTE), C10 VARCHAR2(2048 BYTE),
C11 VARCHAR2(2048 BYTE),C12 VARCHAR2(2048 BYTE), C13 VARCHAR2(2048 BYTE), C14 VARCHAR2(2048 BYTE), C15 VARCHAR2(2048 BYTE),
C16 VARCHAR2(2048 BYTE),C17 VARCHAR2(2048 BYTE), C18 VARCHAR2(2048 BYTE), C19 VARCHAR2(2048 BYTE), C20 VARCHAR2(2048 BYTE)
)

A shortened example of the create table statement is below:

execute immediate 'create table staging_table ( c01 varchar2(2048), … , c20 varchar2(2048) )';

Loading the BICS Table

This step uses SQL commands to truncate the staging table and insert rows from the BIP report XML content.

The XML content is parsed using an XPATH command inside two LOOP commands.

The first loop processes the rows by incrementing a subscript.  It exits when the first column of a new row returns a null value.  The second loop processes the columns within a row by incrementing a subscript. It exits when a column within the row returns a null value.

The following XPATH examples are for a data set that contains 5 rows and 3 columns per row:

//Row[2]/*[1]/text() -- Returns the value of the first column of the second row
//Row[2]/*[4]/text() -- Returns a null value for the 4th column, signaling the end of the row
//Row[6]/*[1]/text() -- Returns a null value for the first column of a new row, signaling the end of the data set

After each row is parsed, it is inserted into the BICS staging table.
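For illustration only, a minimal sketch of the two-loop parse is below. It assumes the converted response is held in the XMLTYPE variable F_XML from the earlier snippet and uses EXISTSNODE to test for the missing value that ends each loop; the other variable names are illustrative, not the exact code of the procedure.

V_ROW := 1;
WHILE F_XML.EXISTSNODE('//Row[' || V_ROW || ']/*[1]/text()') = 1 LOOP -- exit when no new row is found
  V_COL := 1;
  WHILE F_XML.EXISTSNODE('//Row[' || V_ROW || ']/*[' || V_COL || ']/text()') = 1 LOOP -- exit at the end of the row
    V_VALUE := F_XML.EXTRACT('//Row[' || V_ROW || ']/*[' || V_COL || ']/text()').GETSTRINGVAL();
    -- ... assign V_VALUE to the staging column variable for position V_COL ...
    V_COL := V_COL + 1;
  END LOOP;
  -- INSERT INTO STAGING_TABLE (C01, ... , C20) VALUES ( ... ); -- one row per outer loop iteration
  V_ROW := V_ROW + 1;
END LOOP;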

An image of the staging table result is shown below:

D

Summary

This post detailed a method of loading data that has been extracted from Oracle Transactional Business Intelligence (OTBI) using SOAP into the Oracle Business Intelligence Cloud Service.

A BICS staging table was created and populated. This table can then be transformed into star-schema objects for use in modeling.

For more BICS and BI best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for BICS.

References

Complete Text of Procedure Described

Using Oracle BI Answers to Extract Data from HCM via Web Services

Database PL/SQL Language Reference

Reference Guide for the APEX_WEB_SERVICE

Soap API Testing Tool

XPATH Testing Tool

Base64 Decoding and Encoding Testing Tool

Using Oracle Wallet Manager

REST APIs for Oracle BI Cloud Service

 

 

 

Loading Data from Oracle Field Service Cloud into Oracle BI Cloud Service using SOAP


Introduction

This post details a method of extracting and loading data from Oracle Field Service Cloud (OFSC) into the Oracle Business Intelligence Cloud Service (BICS).

A compelling reason to use such a method is when data is required that is not in the standard daily extract. Such data might be planning (future) data or data recently provided in new releases of the application.

This post uses SOAP web services to extract XML-formatted data responses. It also uses the PL/SQL language to wrap the SOAP extract, XML parsing commands, and database table operations in a Stored Procedure. It produces a BICS staging table and a staging view which can then be transformed into star-schema object(s) for use in modeling. The transformation processes and modeling are not discussed in this post.

Finally, an example of a database job is provided that executes the Stored Procedure on a scheduled basis.

The PL/SQL components are for demonstration purposes only and are not intended for enterprise production use. Additional detailed information, including the complete text of the PL/SQL procedure described, is included in the References section at the end of this post.

Rationale for Using PL/SQL

PL/SQL is the only procedural tool that runs on the BICS / Database Schema Service platform. Other wrapping methods, such as Java or ETL tools, require a platform outside of BICS to run on.

PL/SQL may also be used in a DBCS that is connected to BICS.

PL/SQL can utilize native SQL commands to operate on the BICS tables. Other methods require the use of the BICS REST API.

Note: PL/SQL is very good at showcasing functionality. However, it tends to become prohibitively resource intensive when deploying in an enterprise production environment. For the best enterprise deployment, an ETL tool such as Oracle Data Integrator (ODI) should be used to meet these requirements and more:

* Security

* Logging and Error Handling

* Parallel Processing and Performance

* Scheduling

* Code Re-usability and Maintenance

Using Oracle Database Cloud Service

Determining Security Protocol Requirements

If the web service requires a security protocol, key exchange or cypher not supported by the default BICS Schema Database Service, another Oracle Database Cloud Service (DBCS) may be used.

An example security protocol is TLS version 1.2 which is used by the OFSC web service accessed in this post.

Note: For TLSv1.2, specify a database version of 11.2.0.4.10 or greater, or any version of 12c. If the database is not at the required version, PL/SQL may throw the following error: ORA-29259: end-of-input reached

To detect what protocol a web service uses, open the SOAP WSDL page in a browser, click the lock icon, and navigate to the relevant security section. A Chrome example from an OFSC WSDL page is below:

1

Preparing the DBCS

If a DBCS other than the default Schema Service is used, the following steps need to be performed.

Create a BICS user in the database. The use of Jobs and the DBMS_CRYPTO package shown in the example below is discussed later in the post. Example SQL statements are below:

-- USER SQL
CREATE USER "BICS_USER" IDENTIFIED BY password
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP"
ACCOUNT UNLOCK;
-- QUOTAS
ALTER USER "BICS_USER" QUOTA UNLIMITED ON USERS;
-- ROLES
ALTER USER "BICS_USER" DEFAULT ROLE "CONNECT","RESOURCE";
-- SYSTEM PRIVILEGES
GRANT CREATE VIEW TO "BICS_USER";
GRANT CREATE ANY JOB TO "BICS_USER";
-- OBJECT PERMISSIONS
GRANT EXECUTE ON SYS.DBMS_CRYPTO TO BICS_USER;

Create an entry in a new or existing Oracle database wallet for the trusted public certificate used to secure connections to the web service via the Internet. A link to the Oracle Wallet Manager documentation is included in the References section. Note the location and password of the wallet, as they are used to issue the SOAP request.

The need for a trusted certificate is detected when the following error occurs: ORA-29024: Certificate validation failure.

An example certificate path found using Chrome browser is shown below. Both of these trusted certificates need to be in the Oracle wallet.

2

Preparing the Database Schema

Two objects need to be created prior to compiling the PL/SQL stored procedure.

The first is a staging table comprising a set of identical columns. This post uses a staging table named QUOTA_STAGING_TABLE. The columns are named consecutively as C01 through Cnn. This post uses 50 staging columns. The SQL used to create this table may be viewed here.

The second is a staging view named QUOTA_STAGING_VIEW built over the staging table. The view column names are the attribute names used in the API WSDL. The SQL used to create this view may be viewed here. The purpose of the view is to relate an attribute name found in the SOAP response to a staging table column based on the view column’s COLUMN_ID in the database. For example, if a response attribute name of bucket_id is detected and the COLUMN_ID of the corresponding view column is 3, then the staging table column populated with the attribute value would be C03.
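For illustration, a minimal sketch of such a lookup against the data dictionary is below; the variable names are assumptions, and the actual procedure may implement the mapping differently.

SELECT 'C' || LPAD(COLUMN_ID, 2, '0') -- e.g. COLUMN_ID 3 becomes C03
INTO   V_STAGING_COLUMN
FROM   USER_TAB_COLUMNS
WHERE  TABLE_NAME  = 'QUOTA_STAGING_VIEW'
AND    COLUMN_NAME = UPPER(V_ATTRIBUTE_NAME); -- e.g. 'BUCKET_ID' (after any level prefix is applied)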

Ensuring the Web Services are Available

To ensure that the web services are available in the required environment, type a form of the following URL into a browser:

https://hostname/soap/capacity/?wsdl

Note: If you are unable to reach the website, the services may not be offered or the URL may have changed. Discuss this with the service administrator.

Using API Testing Tools

The SOAP Request Envelope should be developed in an API testing tool such as SoapUI or Postman. The XPATH expressions for parsing should be developed and tested in an XPATH expression testing tool such as FreeFormatter. Links to these tools are provided in the References section.

Note: API testing tools such as SoapUI, FreeFormatter, Postman, and so on are third-party tools for using SOAP and REST services. Oracle does not provide support for these tools or recommend a particular tool for its APIs. You can select the tool based on your requirements.

Preparing the SOAP Request

This post uses the get_quota_data method of the Oracle Field Service Cloud Capacity Management API. Additional information about the API is included as a link in the References section.

Use a browser to open the WSDL page for this API. An example URL for the page is: https://hostname/soap/capacity/?wsdl. This page provides important information regarding the request and response envelopes used by the API.

The request envelope is comprised of the following sections. Note: To complete the envelope creation, the sections are concatenated together to provide a single request envelope. An example of a complete request envelope may be viewed here.

Opening

The Opening section is static text as shown below:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:urn="urn:toa:capacity">
<soapenv:Header/>
<soapenv:Body>
<urn:get_quota_data>

User

The User section is dynamic and comprises the following components:

Now

The now component is the current time in the UTC time zone. An example is: <now>2016-12-19T09:13:10+00:00</now>. It is populated by the following command:

SELECT TO_CHAR (SYSTIMESTAMP AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS"+00:00"' ) INTO V_NOW FROM DUAL;

Login

The login component is the user name.

Company

The company component is the company for which data is being retrieved.

Authorization String

The auth_string component is the MD5 hash of the concatenation of the now component with the MD5 hash of the user password. In pseudo-code it would be md5 (now + md5 (password)). It is populated by the following command:

SELECT
LOWER (
DBMS_CRYPTO.HASH (
V_NOW||
LOWER( DBMS_CRYPTO.HASH (V_PASSWORD,2) )
,2
)
)
INTO V_AUTH_STRING FROM DUAL;

Note: ‘2’ is the code for MD5.

An example is:

<auth_string>b477d40346ab40f1a1a038843d88e661fa293bec5cc63359895ab4923051002a</auth_string>

Required Parameters

There are two required parameters: date and resource_id. Each may have multiple entries. However the sample procedure in this post allows only one resource id. It also uses just one date to start with and then issues the request multiple times for the number of consecutive dates requested.

In this post, the starting date is the current date in Sydney, Australia. An example is below:

<date>2016-12-21</date> <resource_id>Test_Resource_ID</resource_id>

The starting date and subsequent dates are populated by this command:

CASE WHEN P_DATE IS NULL
THEN SELECT TO_CHAR (SYSTIMESTAMP AT TIME ZONE 'Australia/Sydney', 'YYYY-MM-DD') INTO P_DATE FROM DUAL;
ELSE P_DATE := TO_CHAR (TO_DATE (P_DATE, 'YYYY-MM-DD') + 1, 'YYYY-MM-DD'); -- Increments the day by 1
END CASE;

Aggregation

The aggregation component specifies whether to aggregate the results. Since BI will do this automatically, aggregation and totals are set to 0 (no). An example is:

<aggregate_results>0</aggregate_results> <calculate_totals>0</calculate_totals>

Field Requests

This section may be passed as a parameter and it lists the various data fields to be included in the extract. An example is below:

<day_quota_field>max_available</day_quota_field>
<time_slot_quota_field>max_available</time_slot_quota_field>
<time_slot_quota_field>quota</time_slot_quota_field>
<category_quota_field>used</category_quota_field>
<category_quota_field>used_quota_percent</category_quota_field>
<work_zone_quota_field>status</work_zone_quota_field>

Closing

The Closing section is static text as shown below:

</urn:get_quota_data>
</soapenv:Body>
</soapenv:Envelope>

Calling the SOAP Request

The APEX_WEB_SERVICE package is used to populate a request header and issue the SOAP request. The header requests that the web service return the contents in a non-compressed text format as shown below:

 

APEX_WEB_SERVICE.G_REQUEST_HEADERS(1).NAME := 'Accept-Encoding';
APEX_WEB_SERVICE.G_REQUEST_HEADERS(1).VALUE := 'identity';

For each date to be processed the SOAP request envelope is created and issued as shown below:

F_XML := APEX_WEB_SERVICE.MAKE_REQUEST(
P_URL         => F_SOAP_URL
,P_ENVELOPE    => F_REQUEST_ENVELOPE
,P_WALLET_PATH => 'file:wallet location'
,P_WALLET_PWD  => 'wallet password' );

Troubleshooting the SOAP Request Call

Common issues are the need for a proxy, the need for a trusted certificate (if using HTTPS), and the need to use the TLS security protocol. Note: This post uses DBCS so the second and third issues have been addressed.

The need for a proxy may be detected when the following error occurs: ORA-12535: TNS:operation timed out. Adding the optional p_proxy_override parameter to the call may correct the issue. An example proxy override is:

www-proxy.us.oracle.com
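For illustration, the earlier MAKE_REQUEST call could be issued with the optional proxy parameter added; the proxy value and wallet placeholders below are assumptions, not values from the procedure.

F_XML := APEX_WEB_SERVICE.MAKE_REQUEST(
P_URL             => F_SOAP_URL
,P_ENVELOPE        => F_REQUEST_ENVELOPE
,P_PROXY_OVERRIDE  => 'www-proxy.us.oracle.com'
,P_WALLET_PATH     => 'file:wallet location'
,P_WALLET_PWD      => 'wallet password' );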

 

Parsing the SOAP Response

For each date to be processed the SOAP response envelope is parsed to obtain the individual rows and columns.

The hierarchy levels of the capacity API are listed below:

Bucket > Day > Time Slot > Category > Work Zone

Each occurrence of every hierarchical level is parsed to determine attribute names and values. Both the name and the value are then used to populate a column in the staging table.

When a hierarchical level is completed and no occurrences of a lower level exist, a row is inserted into the BICS staging table.

Below is an example XML response element for one bucket.

<bucket>
<bucket_id>TEST Bucket ID</bucket_id>
<name>TEST Bucket Name</name>
<day>
<date>2016-12-21</date>
<time_slot>
<label>7-10</label>
<quota_percent>100</quota_percent>
<quota>2520</quota>
<max_available>2520</max_available>
<used_quota_percent>0</used_quota_percent>
<category>
<label>TEST Category</label>
<quota_percent>100</quota_percent>
<quota>2520</quota>
<max_available>2340</max_available>
<used_quota_percent>0</used_quota_percent>
</category>
</time_slot>
<time_slot>
<label>10-14</label>
<quota_percent>100</quota_percent>
<quota>3600</quota>
<max_available>3600</max_available>
<used_quota_percent>0</used_quota_percent>
<category>
<label>TEST Category</label>
<quota_percent>100</quota_percent>
<quota>3600</quota>
<max_available>3360</max_available>
<used_quota_percent>0</used_quota_percent>
</category>
</time_slot>
<time_slot>
<label>14-17</label>
<quota_percent>100</quota_percent>
<quota>2220</quota>
<max_available>2220</max_available>
<used_quota_percent>0</used_quota_percent>
<category>
<label>TEST Category</label>
<quota_percent>100</quota_percent>
<quota>2220</quota>
<max_available>2040</max_available>
<used_quota_percent>0</used_quota_percent>
</category>
</time_slot>
</day>
</bucket>

The processing of the bucket element is as follows:

Occurrences 1 and 2 of the bucket level are parsed to return attribute names of bucket_id and name. The bucket_id attribute is used as-is and the name attribute is prefixed with “bucket_” to find the corresponding column_ids in the staging view. The corresponding columns in the staging table, C03 and C04, are then populated.

Occurrence 3 of the bucket level returns the day level element tag. Processing then continues at the day level.

Occurrence 1 of the day level returns the attribute name of date. The attribute name is prefixed with “day_” to find the corresponding column_id in the staging view. The corresponding column in the staging table, C05, is then populated with the value ‘2016-12-21’.

Occurrence 2 of the day level returns the first of three time_slot level element tags. Processing for each continues at the time-slot level. Each time_slot element contains 5 attribute occurrences followed by a category level element tag.

Each category level contains 5 attribute occurrences. Note: there is no occurrence of a work_zone level element tag in the category level. Thus after each category level element is processed, a row is written to the staging table.

The end result is that 3 rows are written to the staging table for this bucket. The table below describes the XML to row mapping for the first row.

Attribute Name     | Attribute Value  | View Column Name             | Table Column Name
bucket_id          | TEST Bucket ID   | BUCKET_ID                    | C03
name               | TEST Bucket Name | BUCKET_NAME                  | C04
day                | 2016-12-21       | DAY_DATE                     | C05
label              | 7-10             | TIME_SLOT_LABEL              | C18
quota_percent      | 100              | TIME_SLOT_QUOTA_PERCENT      | C19
quota              | 2520             | TIME_SLOT_QUOTA              | C21
max_available      | 2520             | TIME_SLOT_MAX_AVAILABLE      | C26
used_quota_percent | 0                | TIME_SLOT_USED_QUOTA_PERCENT | C29
label              | TEST Category    | CAT_LABEL                    | C32
quota_percent      | 100              | CAT_QUOTA_PERCENT            | C33
quota              | 2520             | CAT_QUOTA                    | C35
max_available      | 2340             | CAT_MAX_AVAILABLE            | C42
used_quota_percent | 0                | CAT_USED_QUOTA_PERCENT       | C44

 

In PL/SQL, the processing is accomplished using the LOOP command. There is a loop for each hierarchical level. Loops end when no results are returned for a parse statement.

XPATH statements are used for parsing. Additional information regarding XPATH statements may be found in the References section. Examples are below:

Statement                     | Returns
/bucket[5]                    | The entire fifth bucket element in the response. If no results then all buckets have been processed.
/bucket/*[1]                  | The first bucket attribute or element name.
/bucket/*[2]/text()           | The second bucket attribute value.
/bucket/day/*[6]              | The sixth day attribute or element name.
/bucket/day[1]/*[6]/text()    | The sixth day attribute value.
/bucket/day/time_slot[2]/*[4] | The fourth attribute or element name of the second time_slot.
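For illustration only, a minimal sketch of the outer bucket loop is below. It assumes the SOAP response is held in the XMLTYPE variable F_XML and applies the XPATH statements with a descendant axis; the variable names are illustrative, not the exact code of the procedure.

V_BUCKET := 1;
WHILE F_XML.EXISTSNODE('//bucket[' || V_BUCKET || ']') = 1 LOOP -- exit when all buckets are processed
  V_NAME  := F_XML.EXTRACT('//bucket[' || V_BUCKET || ']/*[1]').GETROOTELEMENT();      -- e.g. bucket_id
  V_VALUE := F_XML.EXTRACT('//bucket[' || V_BUCKET || ']/*[1]/text()').GETSTRINGVAL(); -- e.g. TEST Bucket ID
  -- ... inner loops for the day, time_slot, and category levels; a staging row is inserted
  --     when the lowest populated level has been processed ...
  V_BUCKET := V_BUCKET + 1;
END LOOP;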

 

Scheduling the Procedure

The procedure may be scheduled to run periodically through the use of an Oracle Scheduler job. A link to the Scheduler documentation may be found in the References section.

A job is created using the CREATE_JOB procedure by specifying a job name, type, action and a schedule. Setting the enabled argument to TRUE enables the job to automatically run according to its schedule as soon as you create it.

An example of a SQL statement to create a job is below:

BEGIN
DBMS_SCHEDULER.CREATE_JOB (
JOB_NAME        => 'OFSC_SOAP_QUOTA_EXTRACT',
JOB_TYPE        => 'STORED_PROCEDURE',
ENABLED         => TRUE,
JOB_ACTION      => 'BICS_OFSC_SOAP_INTEGRATION',
START_DATE      => '21-DEC-16 10.00.00 PM Australia/Sydney',
REPEAT_INTERVAL => 'FREQ=HOURLY; INTERVAL=24' -- this will run the job every 24 hours
);
END;
/

Note: If using the BICS Schema Service database, the package name is CLOUD_SCHEDULER rather than DBMS_SCHEDULER.

The job log and status may be queried using the *_SCHEDULER_JOBS views. Examples are below:

SELECT JOB_NAME, STATE, NEXT_RUN_DATE from USER_SCHEDULER_JOBS;
SELECT LOG_DATE, JOB_NAME, STATUS from USER_SCHEDULER_JOB_LOG;

 

Summary

This post detailed a method of extracting and loading data from Oracle Field Service Cloud (OFSC) into the Oracle Business Intelligence Cloud Service (BICS).

The post used SOAP web services to extract the XML-formatted data responses. It used a PL/SQL Stored Procedure to wrap the SOAP extract, XML parsing commands, and database table operations. It loaded a BICS staging table and a staging view which can be transformed into star-schema object(s) for use in modeling.

Finally, an example of a database job was provided that executes the Stored Procedure on a scheduled basis.

For more BICS and BI best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for BICS.

References

Text of Complete Procedure

OFSC Capacity API Document

OFSC Capacity API WSDL

Scheduling Jobs with Oracle Scheduler

Database PL/SQL Language Reference

Reference Guide for the APEX_WEB_SERVICE

Soap API Testing Tool

XPATH Testing Tool

Base64 Decoding and Encoding Testing Tool

Using Oracle Wallet Manager

Oracle Business Intelligence Cloud Service Tasks

 

PCS Web Form Tips and Tricks – Part 1


In this blog series, I will discuss some useful tips and tricks that will be helpful when you design your forms using the new PCS web form controls and events.

In one of my colleague's blogs, he highlighted some of the features available in PCS web forms. The PCS official documentation describes how to configure the controls. In this blog, I will highlight some tips and tricks for designing the form. There are three parts in this blog series.

General Tips and Tricks

  • Before you start designing your form, I would strongly recommend that you work with business users to define the requirements, such as form styles for different devices, form rules/events, and data definitions/business objects, and to identify all integration requirements in the web form.
  • Document your web form design and events.
  • PCS Web Form allows you to drag an existing business object to create a web form. However, it will create a standard layout that might not meet your requirements, and you might end up spending more time rearranging the controls that PCS created for you. In this scenario, you have the option to create your own form data definition using the business object and bind the controls to the data definition manually.
  • In PCS, the main form can contain one or more sub forms. However, you will not be able to access the data definition of a sub form using events in the main form. If you need to access the sub form data definition, you will need to detach the sub form in the main form.

webform-img0.1

  • Currently, there is no function to copy a web form from one PCS process to another. You will need to manually export the process, copy the form definition files, and then import the process back into PCS. The steps are listed below:
  • Export the PCS Process with the web forms.

webform-img0.2

  • Locate the form definition files in the following folder and copy the files to a temporary location.
    • <name of the form>.xsd - SOA\forms\schemas
    • <name of the form>.bom - SOA\businessCatalog\OracleWebForms
    • <form id>.frm - SOA\forms\Definitions. You can use the name field in the definition file to make sure you locate the correct definition file for the form.

      Another way to locate the form definition files is to use the form designer.  When you publish the form changes, the composer will display the modified form files to be published.

webform-img0.3

  • Copy the form definition files into the new PCS process export file (*.exp), into the same folders as above. You can perform this operation using any compression tool.
  • Import the updated PCS process export file into PCS.
  • It will be difficult to view all the controls in the form designer panel if you have too many controls on the form. To overcome this limitation, you will need to use the browser zoom in/out function to maximize your viewable design area.

webform-img0.4

Layout Controls – Tabs, Panels and Sections

Before you dive into the input controls like text input, checkbox, select, etc., I would recommend that you take some time to learn and thoroughly understand the layout controls that you can use to arrange your input controls on your web form.

When designing a more responsive and usable task form, you can make use of tabs, panels, and sections to group multiple controls. It is also more intuitive for users when the controls and content are divided in an organized way, so they can easily access the content they are interested in rather than having everything on one web form page.

You will need to understand how controls are organized when you drag a control onto a form. By default, the column size in a row is calculated based on the number of visible controls in it. The row is based on a 12-column grid system; for example, if the row has 4 controls, each will use 3 columns. When more than 12 controls are added to a row, the remaining controls will be displayed below. On mobile, each control is displayed in its own row (when using the automatically calculated size).

You also have the option to uncheck the automatic column calculation so that you can specify a column size for each media size. If there are still controls that have an automatic column span, their size will be calculated from the remaining space (12 minus the total of the manual sizes). For example, if two controls in a row have manual spans of 4 and 5, a third automatic control gets 12 - 9 = 3 columns. If the remaining space is 0, each automatic control will have a column span of 1.

webform-img1

From PCS 16.4.5 onward, tabs and sections have a lazy loading capability. It allows the form to render while the controls in the tabs and sections are still loading. This feature speeds up the task form loading time, as the user does not have to wait for the entire form to load.

Both tabs and sections have a property called Lazy Loading. For sections, if you enable lazy loading, the Expanded checkbox will be unchecked automatically; this is because the section will be rendered when the user clicks the section label to expand it.

webform-img2
Tips:

  • It is recommended that you do not put too many controls in a section or tab. If you do have many controls in a section, I would recommend that you enable the lazy loading feature.
  • If the lazy loading feature is enabled, any control validation errors in a collapsed section will not be displayed until the user expands the section. When the user encounters a validation error, you will need to ask the user to expand the section to fix the error.

Tab control

You can organize your tabs by dragging them into the order that you want, but when the screen has loaded, the first tab displayed on the screen will depend on the setting of the “Selected Tab” property.

webform-img3

Tips:

  • If you add, delete, or reorder tabs, you will need to check your “Selected Tab” setting to ensure it is the tab you want to display when users open the task.
  • Tabs have a lazy loading capability, so if you want the first tab to be displayed as soon as the task form loads, do not enable lazy loading for the first tab, and do not put too many controls on it.


Panel Control

A panel control can be used to group different controls. By default, it displays the controls vertically, but you can change this by using the layout properties. If you change the layout to “Horizontal”, the controls you drop into the panel will be aligned horizontally.

webform-img4

Tips:

  • Remember to set the desired layout setting (Vertical or Horizontal) before you add any controls in a panel.
  • You can add a panel in a panel to organize your control.

webform-img5

  • You can add a panel to a column in a table control.

webform-img6


Section Control

Sections allow you to group your controls on a page. It is recommended that you set a meaningful label/title on the section, so that users can easily identify the sections they are interested in. You can set a static section label, or use an event to change the label. Events are discussed in more detail later in this blog series.

Tips: 

  • When page rendering becomes slow because you have too many controls in a section, I would recommend that you enable lazy loading on that section.
  • You can use an event to expand a section automatically, for example when the user clicks on a tab.

webform-img7

  • Use meaningful section labels. You can bind the label to a data definition element or use an event to change the label of the section control.

PCS Web Form Tips and Tricks – Part 2


In part 1 of this blog series, I discussed some tips and tricks for using layout controls in web forms. In part 2, I will share some tips and tricks for using the input controls.

Text Input and Text Area Fields Controls

Text Input is a very common control used in task forms. It is recommended that you set the min/max length for a text input based on your data object schema element, so that the appropriate validation message is displayed if the data does not meet the required length.

webform-img8

For a text area, you will need to set the number of rows if you want to allow users to enter multiple lines in the control. The total length allowed in the text area is controlled by the “Max Length” property, regardless of how many rows you set in the “Rows” property.

Tips:

  • Set the max length per your data object schema definition, by default it is 0 (unlimited).
  • Adjust the size of the control using the Width property in the styling section to ensure no wasted space.

Select Control

The Select control lets you add a drop-down list to your form. You can add a static or a dynamic drop-down list. There are two options for a dynamic drop-down list: a data definition or a REST connector.

You can also allow users to select multiple values in the select control if you enable the “Multiple” property; however, you will need to ensure the control is bound to a list/array data type.

webform-img9

Tips:

  • If you are using a data definition to populate your drop-down list, you will need to make sure the data definition is an array/list type element.
  • If you are using a REST connector, you will need to ensure the REST service response field names begin in lower case. This is due to a limitation in the current PCS version: the REST call response is mapped to data object element names that also begin in lower case.

Checklist, Checkbox and Radio Button Controls

  • Checkbox: Has only a true or false value; when it is checked the value is true, and false when it is unchecked. The data is bound to a Boolean type.
  • CheckList: Allows multiple values, and you can define your own key and value pairs. The source of the data can be a REST connector, a data definition, or static values defined by you. The data is bound to an array type.
  • Radio button: Similar to the CheckList control, but it only allows the user to select one option at a time. The data is bound to a String type.

Tips:

  • The checklist and radio control options can be displayed horizontally or vertically; by default they are displayed vertically. Check the “Inline” property to display them horizontally.
  • The option values for the checklist and radio controls must be unique; this also applies to the source data in the payload and the REST connector.

webform-img10

 

Number and Money Controls

Number and money controls work almost the same way; both controls bind to the number data type. The main differences are:

  • Money control: The value will be formatted based on the currency setting, and the value is rounded to 2 decimal places.
  • Number control: The value will be rounded to 3 decimal places.

webform-img11

Tips:

  • Currently the money control only supports 5 currency signs. Use USD if you require the “$” sign.
  • Both controls follow the basic rounding rules for the decimal places. Using the money control as an example, if the user enters a number with 3 decimal places and the 3rd decimal digit is less than five (0, 1, 2, 3, or 4), the 2nd digit after the decimal point is left alone and any digits after it are dropped. If the 3rd decimal digit is greater than or equal to five (5, 6, 7, 8, or 9), the 2nd digit is increased by one and any remaining digits after the 2nd decimal place are dropped. For example, 10.124 becomes 10.12 and 10.125 becomes 10.13.

Date, Time and Date Time Controls

These 3 controls bind to different data type.

  • Date Control: Bind to date data type.
  • Date Time Control: Bind to Date Time data type.
  • Time Control: Bind to Time data type.

webform-img12

Tips:

    • Remember to set the desired date and time format.
    • If you want to set the default value to the current date, time, or date and time, you will need to use an “On Load” event.

webform-img13

  • When you render the form with data from a data object, make sure that you are using the control that matches the data type.

URL and Link Controls

The Link control allows you to add a hyperlink to the form; when users click on the link, the web page is opened either in a new tab or the current tab. The URL control is just like a text input, but it validates whether the value entered by the user is in a correct URL format.

webform-img14

Tips:

  • Currently the default value for the URL control cannot accept a URL more than 2 levels deep. For example, it will show a validation error when you enter http://www.mycompany.com/level1/level2/level3, but it will accept http://www.mycompany.com/level1/level2. This is a limitation in the current version of PCS.

Email and Phone Controls

The Email control validates the email format. An error is shown on the form when the user enters an incorrect email format. The Phone control only supports 2 formats, US and international. Read the PCS documentation for more details on the properties.

Tips:

  • If you choose to use the international format for the phone control, you will need to prompt the user to enter the correct international format. The user needs to add the ‘+’ prefix and the country code in front of the number, for example +14155552671. You can use the “Placeholder” property to guide users on the acceptable format, or use the “Hint” property to display a usage guide when the user selects the control.

webform-img15
Message Control

The Message control is used to display an informative message to the user; it can be used to display an alert or instruction when the user enters data into the form. The message control allows you to set the style using standard HTML styles, e.g. heading, paragraph, etc. The message control cannot accept HTML or CSS tags.

Tips:

  • You can use an event to display summary information when the user changes the value of another control.

webform-img16

Image and Video Controls

Image and video controls allow you to reference an external URL for the resource. You will not be able to upload the image or video to PCS; the image and video source must be hosted in another environment.

Tips:

  • For Image control, you will need to provide an absolute or relative web address to the image.
  • Video control supports YouTube or Vimeo video embedded URL address.
  • The image URL cannot be a secured link that requires authentication.
  • Disable automatic column size in the styling tab and use the device layout column size to adjust the size of the image.

Table and Repeatable Section Controls

Table and repeatable section controls are bound to the array data type, and both have similar control properties. The table control only provides the table structure, just like <table></table> in HTML; you will need to add other controls, such as input text, select, or button, to the columns for it to function as you require.

Tips:

    • The table control also allows you to add layout-type controls like panels. With a combination of a panel and other basic controls like input text, you can have multiple controls in one column. For example, I can add a panel with a horizontal layout to a column, and add an input text and a button in the panel.

webform-img17

  • By default, the “Users can Add/Remove Rows” property is unchecked for table and repeatable section controls. If you want to allow users to add or remove rows, you will need to make sure it is checked.
  • Add layout controls inside repeatable section controls to lay out your controls properly.
  • If you have a large number of rows in a repeatable section, it is recommended that you add a section control with lazy loading enabled.

webform-img18


PCS Web Form Tips and Tricks – Part 3


In part 1 and part 2 of this blog series, I discussed some tips and tricks for using layout and input controls in web forms. In part 3, I will share some tips that will be useful when you add dynamic behaviors using events in a web form. Different controls have different behaviors/events; you can read the PCS official documentation for more details.

Using connector in an event

You can use a REST connector in an event. To use the REST connector, you will need to first define it in PCS Composer -> Application Home -> Integrations. After you have defined the connection, the connector will be available for selection when you define the event.

webform-img19

Tips:

  • The response variable name must be in lower case.
  • If the response is an array type element, then when you map the array data type to a control, the control must be an array type control such as a table or repeatable section. For example, if you have a table with 2 columns (supplier name and postcode) and the REST service connector returns an array of objects in the response, then when you map the response data to the table you can assign each REST object field to a table column name.

webform-img20

webform-img21

Using action in a table/repeatable section

Table and repeatable section controls are array data types. Event actions allow you to traverse an array data type:

  • Self – Reference to the entire table/repeatable control
  • First- First item in the table/repeatable section control
  • Last – Last item in the table/repeatable section control
  • Selected – The item currently selected by the user in the table/repeatable section control; the user must click on the item for this action to function.

There are a number of control actions you can define in the table and repeatable section controls. Some of the actions allow you to change a property of the control, such as its value or whether it is enabled or disabled. Other actions provide extra functionality, for example Add Row, which you can use to add a row dynamically, and Clear Value, which you can use to clear a value.

webform-img23

Tips:

  • It is not possible to reference the current item in an array using an “On Load” event. If you need to access the current item in a repeatable section or table, you can add a section in the repeatable section with “Expanded” unchecked. On the section property, create an “On Expand” event with Which? = “Selected”. With this event setting, the user must click to expand the section for the action to be activated on the selected item.

webform-img22

    • You can use an action to change a property of the control at runtime. For example, you can dynamically enable/disable the “Allow Add/Remove Rows” property using an event, to allow rows to be added and removed at runtime.

webform-img24

  • If the table or repeatable section data comes from a REST connector, you can also use an action to refresh the REST connector; the REST call will be fired again to refresh the data for the control.

webform-img25

Using condition and functions

Web Form allows you to configure If/Then/Else conditions for an action or connector. Please read the PCS documentation for more details about the condition function.

Functions are useful when you need to perform an operation with the data.  Please read the PCS documentation for more details about the common functions in PCS Web Form.

Tips:

  • Avoid complex conditions and business logic in PCS Form.
  • Split the logic into multiple small, manageable events. For example, you can define multiple On Load events. However, when you define operations that overwrite the value of a control using multiple events of the same type (like On Load), the output of the last defined event will be the final value. For example, if you define 2 “On Load” events and both events update the same input text control, the second “On Load” event will determine the final value of the input text control.
  • The replace function takes 3 parameters; the second parameter supports regular expressions.

webform-img26

Loading Data from Oracle Identity Cloud Service into Oracle BI Cloud Service using REST


Introduction

This post details a method of extracting and loading data from Oracle Identity Cloud Service (IDCS) into the Oracle Business Intelligence Cloud Service (BICS). It builds upon the A-team post IDCS Audit Event REST API which details the REST API calls used.

One use case for this method is for analyzing trends regarding audit events.

This post uses REST web services to extract JSON-formatted data responses. It also uses the PL/SQL language to wrap the REST extract, JSON parsing commands, and database table operations in a Stored Procedure. It produces a BICS staging table which can then be transformed into star-schema object(s) for use in modeling. The transformation processes and modeling are not discussed in this post.

Finally, an example of a database job is provided that executes the Stored Procedure on a scheduled basis.

The PL/SQL components are for demonstration purposes only and are not intended for enterprise production use. Additional detailed information, including the complete text of the PL/SQL procedure described, is included in the References section at the end of this post.

Rationale for Using PL/SQL

PL/SQL is the only procedural tool that runs on the BICS / Database Schema Service platform. Other wrapping methods, such as Java or ETL tools, require a platform outside of BICS to run on.

PL/SQL may also be used in a DBaaS (Database as a Service) that is connected to BICS.

PL/SQL can utilize native SQL commands to operate on the BICS tables. Other methods require the use of the BICS REST API.

Note: PL/SQL is very good at showcasing functionality. However, it tends to become prohibitively resource intensive when deploying in an enterprise production environment. For the best enterprise deployment, an ETL tool such as Oracle Data Integrator (ODI) should be used to meet these requirements and more:

* Security

* Logging and Error Handling

* Parallel Processing – Performance

* Scheduling

* Code Re-usability and Maintenance

Using Oracle Database as a Service

Determining Security Protocol Requirements

If the web service requires a security protocol, key exchange or cypher not supported by the default BICS Schema Database Service, another Oracle Database Cloud Service (DBaaS) may be used.

Note: For the most consistent response, specify a database version of 11.2.0.4.10 or greater, or any version of 12c. If the database is not at the required version, PL/SQL may throw the following error: ORA-29259: end-of-input reached

To detect what protocol a web service uses, open the IDCS Login page in a browser, click the lock icon, and navigate to the relevant security section. A Chrome example from an IDCS Login page is below:

1

Preparing the DBaaS

If DBaaS is used, the following steps need to be performed.

Creating the BICS User

Create a BICS user in the database. The use of the Job privilege is discussed later in the post. Example SQL statements are below:

-- USER SQL
CREATE USER "BICS_USER" IDENTIFIED BY password
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP"
ACCOUNT UNLOCK;
-- QUOTAS
ALTER USER "BICS_USER" QUOTA UNLIMITED ON USERS;
-- ROLES
ALTER USER "BICS_USER" DEFAULT ROLE "CONNECT","RESOURCE";
-- SYSTEM PRIVILEGES
GRANT CREATE VIEW TO "BICS_USER";
GRANT CREATE ANY JOB TO "BICS_USER";

Managing Trusted Certificates

Create an entry in a new or existing Oracle database wallet for the trusted public certificate used to secure connections to the web service via the Internet. A link to the Oracle Wallet Manager documentation is included in the References section. Note the location and password of the wallet as they are used to issue the REST request.

The need for a trusted certificate is detected when the following error occurs: ORA-29024: Certificate validation failure.

An example certificate path found using Chrome browser is shown below. Both of these trusted certificates need to be in the Oracle wallet.

2

Granting Network Access

This post uses the UTL_HTTP package which requires the user to have permission to access web services via an Access Control List (ACL).

The need for an ACL privilege is detected when the following error occurs: ORA-24247: network access denied by access control list (ACL).

Grant the BICS_USER authority to connect to the network access control list (ACL). To determine your unique network ACL name run the following:

SELECT * FROM DBA_NETWORK_ACLS;

Using the network name from above run the following:

BEGIN
DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE(acl => 'NETWORK_ACL_YourUniqueSuffix',
principal   => 'BICS_USER',
is_grant    => true,
privilege   => 'connect');
END;
/

 

Preparing the Database Schema

A staging table needs to be created prior to compiling the PL/SQL stored procedure.

This post uses a staging table named AUDIT_EVENT. The columns are those chosen from the REST API for Oracle Identity Cloud Service. A link to the document may be found in the References section. This post uses the following columns:

ACTOR_DISPLAY_NAME
ACTOR_ID
ACTOR_NAME
ACTOR_TYPE
ADMIN_REF_RESOURCE_NAME
ADMIN_RESOURCE_NAME
EC_ID
EVENT_ID
ID
MESSAGE
SSO_COMMENTS
SSO_PROTECTED_RESOURCE
SSO_USER_AGENT
TIMESTAMP

The SQL used to create this table may be viewed here.

Using API Testing Tools

The REST requests should be developed in API testing tools such as SoapUI and Postman. The JSON expressions for parsing should be developed and tested in a JSON expression testing tool such as CuriousConcept. Links to these tools are provided in the References section.

Note: API testing tools such as SoapUI, CuriousConcept, Postman, and so on are third-party tools for using SOAP and REST services. Oracle does not provide support for these tools or recommend a particular tool for its APIs. You can select the tool based on your requirements. As a starting point and for some examples refer to the A-Team post IDCS OAuth 2.0 and REST API.

Preparing and Calling the IDCS REST Service

This post uses the AuditEvents and Token methods of the IDCS REST API.

Preparing the Token Request

IDCS uses the OAuth 2.0 framework for authorization. This requires an access token to be requested and provided via the Token method of the API.

Before preparing the REST request, a Web Application needs to be created in IDCS. This administrative function is not covered in this post. You will need the Client ID and the Client Secret generated with the web application.

You must encode the Client ID and Client Secret when you include it in a request for an access token. A Base64 encoding tool such as Base64 may be used to perform this step. Place the Client ID and Client Secret on the same line and insert a colon between them: clientid:clientsecret and then encode the string. An example encoded result is

Y2xpZW50aWQ6Y2xpZW50c2VjcmV0
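As an alternative to an external tool, the same encoding can be produced in the database with the UTL_ENCODE and UTL_RAW packages; the clientid:clientsecret value below is the same placeholder used above, not a real credential.

SELECT UTL_RAW.CAST_TO_VARCHAR2(
         UTL_ENCODE.BASE64_ENCODE(
           UTL_RAW.CAST_TO_RAW('clientid:clientsecret'))) AS AUTH_STRING -- yields the encoded value shown above
FROM DUAL;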

You will need the wallet path and password discussed in the Preparing the DBaaS section above. An example path from a linux server is:

/u01/app/oracle

You will need the URL for the Token method of the API, such as:

https://idcs-hostname/oauth2/v1/token

The APEX_WEB_SERVICE package is used to set the headers and parameters described below.

Two HTTP request headers are needed. The first is a Content-Type header and the second is an Authorization header. The authorization header value is the concatenation of the string ‘Basic ‘ with the Base64 encoded result of the Client ID and the Client Secret as shown below:

v_authorization_token := 'Y2xpZW50aWQ6Y2xpZW50c2VjcmV0';
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/x-www-form-urlencoded; charset=UTF-8';
apex_web_service.g_request_headers(2).name := 'Authorization';
apex_web_service.g_request_headers(2).value := 'Basic '||v_authorization_token ;

The parameter method is set to POST and two HTTP request parameters are needed. The first is a grant_type and the second is a scope as shown below:

p_http_method => 'POST',
p_parm_name => apex_util.string_to_table('grant_type:scope'),
p_parm_value => apex_util.string_to_table('client_credentials~urn:opc:idm:__myscopes__','~')

Note: The urn:opc:idm:__myscopes__ in the scope parameter value is used as a tag by Oracle Identity Cloud Service clients requesting access tokens from the OAuth authorization server. Access tokens are returned that contain all applicable Oracle Identity Cloud Service scopes based on the privileges represented by the Oracle Identity Cloud Service administrator roles granted to the requesting client.

Calling the Token Request

The APEX_WEB_SERVICE package is used to call the request and store the result in a CLOB variable as shown below:

l_ws_response_clob := apex_web_service.make_rest_request (
p_url => l_ws_url,
p_http_method => 'POST',
p_parm_name => apex_util.string_to_table('grant_type:scope'),
p_parm_value => apex_util.string_to_table ('client_credentials~urn:opc:idm:__myscopes__','~')
,p_wallet_path => 'file:/u01/app/oracle'
,p_wallet_pwd => 'password'
);

The result of the call is shown below with a partial token. The token is actually over 2,000 characters long.

{"access_token":"eyJ4NXQjUzI1NiI6Ijg1a3E1M… ", "token_type":"Bearer","expires_in":3600}

Note: The response includes the expires_in:3600 parameter. This means that your token is no longer valid after one hour from the time that you generate it.

Parsing the Token Response

The APEX_JSON package is used to parse the token response and store the result in a VARCHAR variable as shown below. Additional information about this package is included as a link in the References section.

apex_json.parse(l_ws_response_clob);
f_idcs_token := apex_json.get_varchar2(p_path => 'access_token');

The result of the parse is just the token itself which is used to prepare the Audit Events request.

Preparing the Audit Events Request

The Audit Events request is prepared two or more times: once to get a first response containing one event with a field holding the total number of events, and then subsequent requests to retrieve all of the events.

IDCS has a limit on how many events are returned for each request. This post uses 500 as the chunk size value, which may be modified. Check with the web services administrator for the maximum number of events per request. Also ensure that the number of events inserted into the BICS table equals the total number found in the initial response.

The number of subsequent requests needed is calculated as the total number of events divided by the chunk size, rounded up to the nearest integer. For example 614 events divided by 500 would result in two subsequent requests needed.
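Assuming numeric variables for the total and the chunk size (the names below are illustrative), the calculation can be expressed as:

v_num_requests := CEIL(v_resultSet / v_chunkSize); -- e.g. CEIL(614 / 500) = 2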

The UTL_HTTP package is used instead of the APEX_WEB_SERVICE package to avoid a limitation of 1,024 characters on the length of a header value. The access token is used in a header value and is over 2,000 characters. The error received with the APEX_WEB_SERVICE call is: ORA-06502: PL/SQL: numeric or value error: character string buffer too small.

Preparing All Requests

All requests need to have the following:

The wallet path and password specified. These are specified globally as shown below:

utl_http.set_wallet('file:/u01/app/oracle', 'password'); -- For Trusted Certificates

Persistent connection support enabled as shown below:

utl_http.set_persistent_conn_support(FALSE, 1); -- Set default persistent connections (1)

Begin the request as shown below:

req := utl_http.begin_request(l_ws_url, 'get', 'http/1.1');

Note: The result is stored in a variable named req which is of the req type defined in the UTL_HTTP package as shown below:

-- A PL/SQL record type that represents a HTTP request
TYPE req IS RECORD (
url VARCHAR2(32767 byte), -- Requested URL
method VARCHAR2(64), -- Requested method
http_version VARCHAR2(64), -- Requested HTTP version
private_hndl PLS_INTEGER -- For internal use only
);

The following three HTTP headers set are shown below:

utl_http.set_header(REQ, 'Content-Type', 'application/scim+json');
utl_http.set_header(REQ, 'Cache-Control', 'no-cache');
utl_http.set_header(REQ, 'Authorization', 'Bearer ' || l_idcs_token); -- The received access token

All but the last need persistent connection support as shown below:

utl_http.set_persistent_conn_support(req, TRUE); -- Keep Connection Open

Note: The last request does not have the above setting so will default to FALSE and the connection to the service will be closed.

Preparing Individual Requests

Individual requests need to have the following:

The URL set as shown below:

l_ws_url := 'https://idcs-hostname/admin/v1/AuditEvents?count=1'; -- Get first event for total event count

Subsequent URLs are as shown below:

l_ws_url := 'https://idcs-hostname/admin/v1/AuditEvents?count=500&startIndex=1&sortBy=timestamp';

Note: subsequent requests need the startIndex parameter incremented by the chunk size (500).
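For illustration, a minimal sketch of building each subsequent URL is below; the loop counter and variable names are assumptions, not the exact code of the procedure.

for i in 1 .. v_num_requests loop
  l_ws_url := 'https://idcs-hostname/admin/v1/AuditEvents?count=' || v_chunkSize
           || '&startIndex=' || ((i - 1) * v_chunkSize + 1)
           || '&sortBy=timestamp';
  -- ... begin the request, set the headers, get and parse the response as described below ...
end loop;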

Calling the Audit Events Request

The Audit Events requests are called using the UTL_HTTP package as shown below:

resp := utl_http.get_response(req);

Note: The result is stored in a variable named resp which is of the resp type defined in the UTL_HTTP package as shown below:

-- A PL/SQL record type that represents a HTTP response
TYPE resp IS RECORD (
status_code PLS_INTEGER, -- Response status code
reason_phrase VARCHAR2(256), -- Response reason phrase
http_version VARCHAR2(64), -- Response HTTP version
private_hndl PLS_INTEGER -- For internal use only
);

Troubleshooting the REST Request Calls

Common issues are the need for a proxy, the need for an ACL, the need for a trusted certificate (if using HTTPS), and the need to use the correct TLS security protocol. Note: This post uses DBaaS so all but the first issue has been addressed.

The need for a proxy may be detected when the following error occurs: ORA-12535: TNS:operation timed out. Adding the optional p_proxy_override parameter to the call may correct the issue. An example proxy override is:

www-proxy.us.oracle.com

Parsing the Audit Event Responses

The APEX_JSON package is used to parse the responses.

Before parsing begins the staging table is truncated as shown below:

execute immediate 'truncate table audit_event';

An example of a response containing just one event is below:

{"schemas":["urn:scim:api:messages:2.0:ListResponse"]
,"totalResults":614
,"Resources":[
{"eventId":"sso.authentication.failure"
,"ssoProtectedResource":"https://idcs-hostname:443/ui/v1/myconsole"
,"actorName":"user.name@oracle.com"
,"ssoIdentityProvider":"localIDP"
,"ssoCSR":"false"
,"ssoUserPostalCode":"null"
,"ssoUserCity":"null"
,"reasonValue":"SSO-1018"
,"ssoUserCountry":"null"
,"rId":"0:1:3:2:4"
,"message":"Authentication failure User not found."
,"timestamp":"2016-10-04T09:38:46.336Z"
,"ssoComments":"Authentication failure User not found."
,"ssoApplicationHostId":"idcs-hostname"
,"ssoUserState":"null"
,"ecId":"q^Unq0s8000000000"
,"ssoRp":"IDCS"
,"ssoLocalIp":"10.196.29.102"
,"serviceName":"SSO"
,"ssoAuthnLevel":0
,"actorType":"User"
,"ssoSessionId":"null"
,"ssoUserAgent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36"
,"actorId":"IDCS"
,"id":"0a37c7374c494ed080d15c554ae75be8"
,"meta": {"created":"2016-10-04T09:38:46.353Z"
,"lastModified":"2016-10-04T09:38:46.353Z"
,"resourceType":"AuditEvent"
,"location":"https://idcs-hostname/admin/v1/AuditEvents/0a37c7374c494ed080d15c554ae75be8"}
,"schemas":["urn:ietf:params:scim:schemas:oracle:idcs:AuditEvent"]
,"idcsCreatedBy": {"value":"UnAuthenticated"
,"$ref":"https://idcs-hostname/admin/v1/AuditEvents/UnAuthenticated"}
,"idcsLastModifiedBy": {"value":"UnAuthenticated"
,"$ref":"https://idcs-hostname/admin/v1/AuditEvents/UnAuthenticated"}
}],"startIndex":1,"itemsPerPage":1}

 

Parsing the First Response

The first JSON response of one event is read into a varchar variable as shown below:

utl_http.read_text(resp, l_ws_response_varchar, 32766);

The variable is then parsed as shown below:

apex_json.parse(l_ws_response_varchar);

Note: the above result is implicitly stored in a global package array named g_values. This array contains the JSON members and values.

The value of the JSON member named totalResults is retrieved and stored in a variable as shown below:

v_resultSet := apex_json.get_varchar2(p_path => 'totalResults');

This is the total number of events to be retrieved and is all that is wanted from the first response.

Parsing the Subsequent Responses

Subsequent Responses may contain a number of events up to the setting of the chunk size (500 in this post). These responses will need to be stored in a temporary CLOB variable.

The DBMS_LOB package is used to manage the temporary CLOB variable. Additional information about the package may be found in the References section.

This variable is created at the beginning of the parsing and freed at the end of the procedure as shown below:

dbms_lob.createtemporary(l_ws_response_clob, true);
dbms_lob.freetemporary(l_ws_response_clob);

This variable is also trimmed to zero characters at the beginning of each chunk of events using the following:

DBMS_LOB.TRIM (l_ws_response_clob, 0);

The response is read by a LOOP command. Each iteration of the loop reads 32,766 characters of text and appends these to the temporary CLOB variable as shown below:

while not(EOB)
LOOP
BEGIN
utl_http.read_text(resp, l_ws_response_varchar, 32766);
if l_ws_response_varchar is not null and length(l_ws_response_varchar)>0 then
dbms_lob.writeappend(l_ws_response_clob, length(l_ws_response_varchar), l_ws_response_varchar);
end if;
EXCEPTION
WHEN utl_http.end_of_body THEN
EOB := TRUE;
utl_http.end_response(resp);
END;
END LOOP;

The CLOB result is then parsed into the implicit package array of JSON elements and values as shown below. This array contains a number of events equal to or less than the chunk size setting (500).

apex_json.parse(l_ws_response_clob);

Each event in the array is retrieved, has its columns parsed, and is inserted into the BICS staging table as shown below:

for i in 1..v_chunkSize LOOP
v_loadCount := v_loadCount + 1;
IF v_loadCount > v_resultSet THEN NULL;
ELSE
INSERT
INTO AUDIT_EVENT
(
EVENT_ID,
ID,
ACTOR_ID,
ADMIN_REF_RESOURCE_NAME,
ACTOR_NAME,
ACTOR_DISPLAY_NAME,
MESSAGE,
SSO_COMMENTS,
SSO_PROTECTED_RESOURCE,
SSO_USER_AGENT,
TIMESTAMP,
ACTOR_TYPE,
ADMIN_RESOURCE_NAME,
EC_ID
)
VALUES
(
apex_json.get_varchar2(p_path => 'Resources[' || i || '].eventId')
,apex_json.get_varchar2(p_path => 'Resources[' || i || '].id')
,apex_json.get_varchar2(p_path => 'Resources[' || i || '].actorId')
,apex_json.get_varchar2(p_path => 'Resources[' || i || '].adminRefResourceName')
,apex_json.get_varchar2(p_path => 'Resources[' || i || '].actorName')
,apex_json.get_varchar2(p_path => 'Resources[' || i || '].actorDisplayName')
,apex_json.get_varchar2(p_path => 'Resources[' || i || '].message')
,apex_json.get_varchar2(p_path => 'Resources[' || i || '].ssoComments')
,apex_json.get_varchar2(p_path => 'Resources[' || i || '].ssoProtectedResource')
,apex_json.get_varchar2(p_path => 'Resources[' || i || '].ssoUserAgent')
,apex_json.get_varchar2(p_path => 'Resources[' || i || '].timestamp')
,apex_json.get_varchar2(p_path => 'Resources[' || i || '].actorType')
,apex_json.get_varchar2(p_path => 'Resources[' || i || '].adminResourceName')
,apex_json.get_varchar2(p_path => 'Resources[' || i || '].ecId')
);
v_row_count := v_row_count + 1;
END IF;
END LOOP;

After the last chunk of events is processed the procedure terminates.

Scheduling the Procedure

The procedure may be scheduled to run periodically through the use of an Oracle Scheduler job. A link to the Scheduler documentation may be found in the References section.

A job is created using the DBMS_SCHEDULER.CREATE_JOB procedure by specifying a job name, type, action and a schedule. Setting the enabled argument to TRUE enables the job to automatically run according to its schedule as soon as you create it.

An example of a SQL statement to create a job is below:

BEGIN
  dbms_scheduler.create_job (
    job_name        => 'IDCS_REST_AUDIT_EXTRACT',
    job_type        => 'STORED_PROCEDURE',
    enabled         => TRUE,
    job_action      => 'BICS_IDCS_REST_INTEGRATION',
    start_date      => '21-DEC-16 10.00.00 PM Australia/Sydney',
    repeat_interval => 'freq=hourly;interval=24' -- this will run once every 24 hours
  );
END;
/

Note: If using the BICS Schema Service database, the package name is CLOUD_SCHEDULER rather than DBMS_SCHEDULER.

The job log and status may be queried using the *_SCHEDULER_JOBS views. Examples are below:

SELECT JOB_NAME, STATE, NEXT_RUN_DATE from USER_SCHEDULER_JOBS;
SELECT LOG_DATE, JOB_NAME, STATUS from USER_SCHEDULER_JOB_LOG;

Summary

This post detailed a method of extracting and loading data from Oracle Identity Cloud Service (IDCS) into the Oracle Business Intelligence Cloud Service (BICS).

The post used REST web services to extract the JSON-formatted data responses. It used a PL/SQL Stored Procedure to wrap the REST extract, JSON parsing commands, and database table operations. It loaded a BICS staging table which can be transformed into star-schema object(s) for use in modeling.

Finally, an example of a database job was provided that executes the Stored Procedure on a scheduled basis.

For more BICS and BI best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for BICS.

References

Complete Procedure

REST API for Oracle Identity Cloud Service

Scheduling Jobs with Oracle Scheduler

Database PL/SQL Language Reference

APEX_WEB_SERVICE Reference Guide

APEX_JSON Reference Guide

UTL_HTTP Package Reference Guide

Soap API Testing Tool

Curious Concept JSON Testing Tool

Base64 Decoding and Encoding Testing Tool

Using Oracle Wallet Manager

Oracle Business Intelligence Cloud Service Tasks

DBMS_LOB Reference Guide

 

Automated unit tests with Node.JS and Developer Cloud Services


Introduction

Oracle’s Developer Cloud Service (DevCS) is a great tool for teams of developers. It provides tooling for continuous delivery, continuous integration, team collaboration, scrum boards, code repositories and so on. When using these features, you can leverage best practices across the application lifecycle to deliver high-quality, manageable code.
One phase of the application lifecycle we are going to focus on today is the testing phase. Tests can run on the developer’s machine, and by also running them in an automated way on DevCS, we ensure the quality of the code throughout its lifecycle.

 

Main Article

In this article we will take a closer look at using Node.JS in combination with Jasmine to test our code and configure an automated test script that will run every time a developer pushes his code to a specific branch in the code repository.

Why testing?

To many developers it is clear that testing can be advantageous; however, many feel that testing adds overhead to their already busy schedule. This is mainly a misconception, as proper testing will increase the quality of the code. If you don’t test, you will spend more time debugging your code later on, so you could say that testing is a way of being lazy by spending a little more time in the beginning.

In addition to this, testing is not just a tool to make sure your code works; it can also be used as a design tool. This comes from the behavior-driven development paradigm. The idea is to define your unit tests before writing any code. By doing this, you will have a clear understanding of the requirements of the code, and as such, your code will be aligned with the requirements.
This also increases the re-usability of the code, because a nice side effect of designing your code this way is that your code will be very modular and loosely coupled.

When we talk about Node.JS and JavaScript in general, the side effect of a “test-first” approach is that it will be much easier to reuse your code no matter if it’s client side JavaScript or server side JavaScript. This will become clear in the example we will build in this article.

Different types of tests

When we talk about writing tests, it is important to understand that there are different types of tests, each testing a specific area of your application and serving its own purpose:

Unit Tests

Unit tests are your first level of defense. These are the tests run on your core business logic. A good unit test does not need to know the context it is running in and has no outside dependencies.
The purpose of a unit test is like the name says: to test a unit of work. A typical example of a unit test is to test a function that checks if a credit card number is valid. That method doesn’t need to understand where the credit card number is coming from, nor does it need to understand anything around security or encryption. All it does is take a credit card number as input and returns a true or false value depending on the validity of the number.
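
As an illustration, the spec for such a function could look like the short Jasmine example below. The function name isValidCreditCardNumber and the sample card numbers are hypothetical and only serve to show the shape of a unit test: a plain input and an expected true/false output, with no outside dependencies.

describe("isValidCreditCardNumber", function() {
    it("accepts a number that passes the checksum", function() {
        // hypothetical validator under test; input is a plain string
        expect(isValidCreditCardNumber("4111111111111111")).toBe(true);
    });

    it("rejects a number that fails the checksum", function() {
        expect(isValidCreditCardNumber("4111111111111112")).toBe(false);
    });
});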

Integration Tests

The next level of testing is integration tests. These test whether your different modules integrate well and whether the data coming from external sources is accurate.
They group the different modules and check that these work well together. They check for data integrity when you pass information from one module to another and make sure that the values passed through are accurate.

End 2 End Tests

An end 2 end test typically requires a tool that allows you to record a user session, after which that session is replayed. In a web application, Selenium is a popular tool to perform these E2E tests. In such a scenario, you define certain areas on the page that you know should have a specific value. When the HTML of these areas is different from what you defined, the test will fail. This is the highest level of testing you can have.

 

In this post we will focus on unit testing.

Creating a new project on Developer Cloud Service

Before we can start writing code, we need to define a project in Developer Cloud Service (DevCS). A project in DevCS is much more than a code repository. It allows us to manage the development lifecycle by creating tasks and assigning them to people, and it provides a scrum board so we can manage the project in an agile way.

In this post, we will create a microservice that does temperature conversion. It will be able to convert Celsius and Fahrenheit temperatures to each other, and from either of those to Kelvin.
In DevCS we define a new project called “Converter”:

project1

 

As template we select the “Initial Repository” as this will create the code repository we will be using to check in our code.

 

In the next step, we define the properties and we initialize a repository with readme file:

project3

Now we can continue and create our project.

Once the project is created, you will see your project dashboard:

project4

As you can see, the system created a repository called converter.git. On the right hand side you can find the HTTP and SSH links to the repo. We will need the HTTP link in order to clone the initial repository before we can start coding.

Once you copied the HTTP link to your GIT repo, you can open a command line so we can clone the repo.

At the location you want the repo to be created, we simply execute following command:

D:\projects\Oracle\testing>git clone https://<yourRepo>
Cloning into 'converter'...
Password for 'https://yannick.ongena@oracle.com@developer.us2.oraclecloud.com':
remote: Counting objects: 3, done
remote: Finding sources: 100% (3/3)
remote: Getting sizes: 100% (2/2)
remote: Compressing objects: 100% (37/37)
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
Checking connectivity... done.

This will clone the repository into a folder called “converter”. At the moment that folder will only contain a README.md file. The next step is to initialize that folder as a node.js project. This can easily be done by using the npm init command:

D:\projects\Oracle\testing>cd converter

D:\projects\Oracle\testing\converter>npm init
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg> --save` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
name: (converter)
version: (1.0.0)
description:
entry point: (index.js) app.js
test command:
git repository: (https://<yourURL>)
keywords:
author:
license: (ISC)
About to write to D:\projects\Oracle\testing\converter\package.json:

{
  "name": "converter",
  "version": "1.0.0",
  "description": "converter.git",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": "<yourURL>"
  },
  "author": "",
  "license": "ISC"
}


Is this ok? (yes)

This will have created the package.json file.

The next thing we need to do is install the required modules.
For this example we will use express and the body-parser module to give us the basic middleware to start building the application. For testing purposes we will use Jasmine, which is a popular framework for behavior-driven testing. Jasmine will be configured as a development dependency.
We will add the following content to the package.json:

"dependencies": {
    "body-parser": "^1.15.2",
    "express": "^4.14.0"
  },
  "devDependencies": {
    "jasmine": "^2.5.2"
  }

Now we can simply install these modules by executing the npm install command from within the application’s folder.

Writing tests

Now that the project has been set up and we have downloaded the required dependencies, we can start writing our code, or should I say, writing our tests?
Like I said in the introduction, we can use testing as a tool to design our service signature, and this is exactly what we are going to do.

Jasmine is a perfect framework for this as it is designed to define behaviors. These behaviors will be translated to units of code that we can easily test.

If we think about our temperature converter that we are going to write, what behaviors would we have?

  • Convert Celsius to Fahrenheit
  • Convert Celsius to Kelvin
  • Convert Fahrenheit to Celsius
  • Convert Fahrenheit to Kelvin

Each of these behaviors will have its own piece of implementation that can be mapped to some testing code.

Before we can write our tests, we need to initialize the project for Jasmine. This can be done by executing the jasmine init command from within your application root:

node node_modules/jasmine/bin/jasmine.js init

This command will create a spec folder in which we need to write the specifications of our tests.

In that folder we create a new file converterSpec.js

It is important to end the filename with Spec because Jasmine is configured to search for files that end with Spec. You can, of course, change this behavior by changing the spec_files regex in the jasmine.json file in the support folder, but by default Jasmine will look for every file ending in Spec in the spec folder.
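
For reference, the generated spec/support/jasmine.json typically looks similar to the trimmed example below (the exact defaults may vary slightly between Jasmine versions); the spec_files pattern is what makes files ending in Spec.js get picked up:

{
  "spec_dir": "spec",
  "spec_files": [
    "**/*[sS]pec.js"
  ],
  "helpers": [
    "helpers/**/*.js"
  ]
}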

The contents of the converterSpec.js will look like this:

describe("Converter ",function(){

    it("converts celsius to fahrenheit", function() {
        expect(converter.celsiusToFahrenheit(0)).toBeCloseTo(32);
        expect(converter.celsiusToFahrenheit(-10)).toBeCloseTo(14);
        expect(converter.celsiusToFahrenheit(23)).toBeCloseTo(73.4);
        expect(converter.celsiusToFahrenheit(100)).toBeCloseTo(212);
    });

    it("converts fahrenheit to celsius", function() {
        expect(converter.fahrenheitToCelsius(32)).toBeCloseTo(0);
        expect(converter.fahrenheitToCelsius(14)).toBeCloseTo(-10);
        expect(converter.fahrenheitToCelsius(73.4)).toBeCloseTo(23);
        expect(converter.fahrenheitToCelsius(212)).toBeCloseTo(100);
    });

    it("converts celsius to kelvin", function() {
        expect(converter.celsiusToKelvin(0)).toBeCloseTo(273.15);
        expect(converter.celsiusToKelvin(-20)).toBeCloseTo(253.15);
        expect(converter.celsiusToKelvin(23)).toBeCloseTo(296.15);
        expect(converter.celsiusToKelvin(100)).toBeCloseTo(373.15);
    });

    it("converts fahrenheit to kelvin", function() {
        expect(converter.fahrenheitToKelvin(32)).toBeCloseTo(273.15);
        expect(converter.fahrenheitToKelvin(14)).toBeCloseTo(263.15);
        expect(converter.fahrenheitToKelvin(73.4)).toBeCloseTo(296.15);
        expect(converter.fahrenheitToKelvin(212)).toBeCloseTo(373.15);
    });
});

These tests will fail because we haven’t written a converter yet.

We can execute this test suite by calling Jasmine from our root directory of the application:

node node_modules/jasmine/bin/jasmine.js

The output will contain some errors and a message saying that 4 out of 4 specs have failed:

D:\projects\Oracle\testing\converter>node node_modules/jasmine/bin/jasmine.js
Started
FFFF

Failures:
1) Converter  converts celsius to fahrenheit
  Message:
    ReferenceError: converter is not defined
  Stack:
    ReferenceError: converter is not defined
        at Object.<anonymous> (D:\projects\Oracle\testing\converter\spec\converterSpec.js:9:16)

2) Converter  converts fahrenheit to celsius
  Message:
    ReferenceError: converter is not defined
  Stack:
    ReferenceError: converter is not defined
        at Object.<anonymous> (D:\projects\Oracle\testing\converter\spec\converterSpec.js:16:16)

3) Converter  converts celsius to kelvin
  Message:
    ReferenceError: converter is not defined
  Stack:
    ReferenceError: converter is not defined
        at Object.<anonymous> (D:\projects\Oracle\testing\converter\spec\converterSpec.js:23:16)

4) Converter  converts fahrenheit to kelvin
  Message:
    ReferenceError: converter is not defined
  Stack:
    ReferenceError: converter is not defined
        at Object.<anonymous> (D:\projects\Oracle\testing\converter\spec\converterSpec.js:30:16)

4 specs, 4 failures
Finished in 0.01 seconds

By writing these tests, we established that our converter should have the following methods:

  • celsiusToFahrenheit
  • fahrenheitToCelsius
  • celsiusToKelvin
  • fahrenheitToKelvin

 

Implementing the converter

Once the signature of our code has been established, we can start implementing the code.
In our case, we need to create an object for the converter with the required functions. Therefore we create a new file converter.js with the following content:

var Converter = function(){
    var self = this;
}

Converter.prototype.celsiusToFahrenheit = function(temp){
    return temp*9/5+32;
};
Converter.prototype.fahrenheitToCelsius = function(temp){
    return (temp-32)/1.8;
};
Converter.prototype.celsiusToKelvin = function(temp){
    return temp +273.15;
}
Converter.prototype.fahrenheitToKelvin = function(temp){
    var cel = this.fahrenheitToCelsius(temp);
    return this.celsiusToKelvin(cel);
}


if (typeof exports == 'object' && exports)
    exports.Converter = Converter;

Now that the implementation is done, we can include this file in our converterSpec.js so the test will use this object:
At the top of converterSpec.js add the following lines:

var Converter = require("../converter").Converter;
var converter = new Converter();

If we rerun the jasmine tests we will notice that they succeed:

D:\projects\Oracle\testing\converter>node node_modules/jasmine/bin/jasmine.js
Started
....


4 specs, 0 failures
Finished in 0.005 seconds

So far, we wrote some tests and implemented a plain old JavaScript object. We haven’t written any server specific code, but our core business logic is already done and tested.

Notice how we wrote this code without worrying about things like request bodies, response objects, get, post and other server specific logic. This is a very powerful feature of writing tests in this way, because now the exact same code can be used in any project that uses JavaScript. No matter if it’s Node.JS, Oracle JET, Angular, Ionic,… it should work in any of these frameworks, and we didn’t even spend additional time optimizing the code for this. It’s just a by-product of a test-first approach!

Implementing the server

The last step is to write our server that consumes the converter. Our server will expose a single endpoint where we can specify an object with a temperature value and a units value. Based upon the units value, the converter will make all the required conversions and send the result back to the user.

Create a new file app.js with the following contents:

var express = require("express");
var parser = require("body-parser");

var app = express();
var http = require('http').Server(app);

app.use(parser.json());

var Converter = require("./convertor").Converter;
var converter = new Converter();

app.post("/convert",function(req,res){
    var temp = req.body.temp;
    var units = req.body.units;
    var result = {};
    if(units.toLowerCase() == "f"){
        result.fahrenheit = temp;
        result.celsius = converter.fahrenheitToCelsius(temp);
        result.kelvin = converter.fahrenheitToKelvin(temp);
    }
    else if(units.toLowerCase() == "c"){
        result.celsius = temp;
        result.fahrenheit = converter.celsiusToFahrenheit(temp);
        result.kelvin = converter.celsiusToKelvin(temp);
    }
    res.send(result);
    res.end();
});


http.listen(3000, function(){
    console.log('listening on *:3000');
});
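
Once the server is started with node app.js, the endpoint can be exercised with a quick cURL request from a Linux/macOS shell (on Windows, adjust the quoting); the values below are just an example:

curl -X POST http://localhost:3000/convert -H "Content-Type: application/json" -d '{"temp": 100, "units": "c"}'

This should return a JSON object along the lines of {"celsius":100,"fahrenheit":212,"kelvin":373.15}.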

 

Setting up automated testing on Developer Cloud Service

Once we have a first finished version of the code, it’s a good time to commit our code to the code repository. At the same time, we want to set up a build process on DevCS so that every time we commit code to the repository, it will fire off the tests we created so far.

In order to do this, we first need to modify the package.json so that we can make use of the npm test command to start the test.
This is fairly simple as npm test is just a shortcut to a script you define in the package.json. This should be the same command as we use when starting the tests from our command line.
Modify package.json so the scripts part looks like this:

 "scripts": {
    "test": "node node_modules/jasmine/bin/jasmine.js"
  },

When you save the file and execute npm test from a command line in the root folder of your application, it should start the tests.

Adding a .gitignore file

The next step we have to do before committing the code is to add a gitignore file. This file tells GIT what files and folders to ignore. The reason we want this is that it’s a bad practice to include the node_modules folder in your code repository. The code in that folder isn’t written by us, and we can simply initialize a new consumer of the repo by executing npm install. This way the modules don’t take up additional space in the repository and it will be much faster to upload the code.

The .gitignore file needs to be put in the root of your application. For this application we only need to ignore the node_modules folder so the file will look like this:

# Dependency directories
node_modules

Creating a build configuration in DevCS

Before we commit the code, we need to setup a build configuration in DevCS.
A build configuration is a sequence of actions that can be configured depending on the type of application. For example when you are developing a J2EE application, the build configuration can execute a maven build, build the JAR/EAR file and pass it on to a deployment script so it can be deployed automatically to Java Cloud Services.

In our case, we are working with Node.JS so technically we don’t have anything to build. However, a build config can still be useful because it allows us to execute certain commands to test the integrity of the code. If everything passes, we are able to hand it over to a deployment profile for Application Container Cloud Service to deploy it on the cloud.

In this step, we will focus on the build step.

In DevCS, select your project and go to the Build page. At the moment only a sample maven_build has been created which doesn’t do us any good so we will go ahead and create a new job.

build1

Once we saved the job we will be redirected to the configuration.

The Main and Build Parameters tab can remain unchanged. In the Source Control tab we specify that the build system integrates with a GIT repository.

From the Repository drop down, we select our converter repo.
In the Branch section we click the Add button and select master. This way we can specify on which branch of the code this build applies.

It is a common practice to use something like GitFlow to develop features. Each feature will be represented by a branch and once the feature is finished, that branch is merged into a development branch. In these cases, it makes a lot of sense to only initiate the build when a commit is done towards the development branch so that’s why we specify a certain branch in this step. If we don’t specify a branch, the build will start on every single commit.

build2

In the next tab, Triggers, we specify what triggers the build. Because we are relying on a commit to the source control system, we have to select “Based on SCM polling schedule”. This links the configuration from the Source Control tab to the Trigger.

build3

The next tab Environment isn’t required in this step so we can go ahead and open the Build Steps tab. This is where we configure the actions that are done when the build starts.

In our case we want to execute the npm test command, which is a shell command. From the Add button we select the Execute Shell step. This will add a text area in which we can specify shell commands to execute. In this box we can add multiple lines of code.

Add the following code in the command box:

git config --global url.https://github.com/.insteadOf git://github.com/
npm install
npm test

Because we have added the node_modules folder to our gitignore file, we need to install the modules from our package.json. On your machine a simple npm install would be sufficient; however, because DevCS is behind a firewall that only accepts traffic on ports 80 (HTTP) and 443 (HTTPS), we need to make sure that we force git to use HTTPS and not the git protocol. The git config command makes sure that we download all the modules over regular HTTPS traffic, even if the git repository of a module has been configured using the git protocol.

After that we can install the modules using the npm install command, and once this is done, npm test will start the Jasmine tests.

build4

Our build config is now complete.

Committing the code

Now that our build config has been setup, we can commit and push our code after which the build should start.

Commit the code using your favorite GIT client or from within your IDE. After the commit, push the changes to the master branch.
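
From the command line, that typically boils down to something like the following (the commit message is just an example):

git add .
git commit -m "Add converter, Jasmine specs and test script"
git push origin master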

Once you have pushed the code, go back to the Jobs Overview page on DevCS and you will notice that our new build config has been queued and after a few seconds or a minute it will start:

build5

After about half a minute, the build should complete and you should see the status:

build6

On the right hand side you have a button that takes you to the Console Output. This gives you a good overview of what the build actually did. In our case, everything went fine and it ended in success; however, when a test fails, the build will fail and the console output will be crucial to identify which test failed.

This is the output from my build (I omitted the npm install output).

Started by an SCM change
Building remotely on Builder 22
Checkout:<account>.Converter Unit test / /home/c2c/hudson/workspace/developer85310.Converter Unit test - hudson.remoting.Channel@2564c81d:Builder 22
Using strategy: Default
Checkout:<account>.Converter Unit test / /home/c2c/hudson/workspace/developer85310.Converter Unit test - hudson.remoting.LocalChannel@ebc21da
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from https://developer.us2.oraclecloud.com/<account>/converter.git
Commencing build of Revision af98f72759ebdfc7a88b0a1f49d70b278bdcbab4 (origin/master)
Checking out Revision af98f72759ebdfc7a88b0a1f49d70b278bdcbab4 (origin/master)
No change to record in branch origin/master
[developer85310-chatbotdev1_converter_12171.Converter Unit test] $ /bin/sh -xe /home/builder/tmp/hudson2474779517119792441.sh
+ git config --global url.https://github.com/.insteadOf git://github.com/
+ npm install
<npm install output>
+ npm test

> converter@1.0.0 test /home/builder/hudson/workspace/<account>.Converter Unit test
> node node_modules/jasmine/bin/jasmine.js

Started
....


4 specs, 0 failures
Finished in 0.01 seconds

Finished: SUCCESS

Conclusion

In this post we have shown how we can leverage the power of Developer Cloud Service to set up an automated test build for your Node.JS code. By doing this, you not only get instant feedback when your code is pushed to the repository, but you also get better quality and re-usability of your code.

 

Loading Data from Oracle Field Service Cloud into Oracle BI Cloud Service using REST


Introduction

This post details a method of extracting and loading data from Oracle Field Service Cloud (OFSC) into the Oracle Business Intelligence Cloud Service (BICS) using RESTful services. It is a companion to the A-Team post Loading Data from Oracle Field Service Cloud into Oracle BI Cloud Service using SOAP. Both this post and the SOAP post offer methods to complement the standard OFSC Daily Extract described in Oracle Field Service Cloud Daily Extract Description.

One case for using this method is analyzing trends regarding OFSC events.

This post uses RESTful web services to extract JSON-formatted data responses. It also uses the PL/SQL language to call the web services, parse the JSON responses, and perform database table operations in a Stored Procedure. It produces a BICS staging table which can then be transformed into star-schema object(s) for use in modeling. The transformation processes and modeling are not discussed in this post.

Finally, an example of a database job is provided that executes the Stored Procedure on a scheduled basis.

The PL/SQL components are for demonstration purposes only and are not intended for enterprise production use. Additional detailed information, including the complete text of the PL/SQL procedure described, is included in the References section at the end of this post.

Rationale for Using PL/SQL

PL/SQL is the only procedural tool that runs on the BICS / Database Schema Service platform. Other wrapping methods e.g. Java, ETL tools, etc. require a platform outside of BICS to run on.

PL/SQL may also be used in a DBaaS (Database as a Service) that is connected to BICS.

PL/SQL can utilize native SQL commands to operate on the BICS tables. Other methods require the use of the BICS REST API.

Note: PL/SQL is very good at showcasing functionality. However, it tends to become prohibitively resource intensive when deployed in an enterprise production environment. For the best enterprise deployment, an ETL tool such as Oracle Data Integrator (ODI) should be used to meet these requirements and more:

* Security

* Logging and Error Handling

* Parallel Processing – Performance

* Scheduling

* Code Re-usability and Maintenance

About the OFSC REST API

The document REST API for Oracle Field Service Cloud Service should be used extensively, especially the Authentication, Paginating, and Working with Events sections. Terms described there such as subscription, page, and authorization are used in the remainder of this post.

In order to receive events, a subscription is needed listing the specific events desired. The creation of a subscription returns both a subscription ID and a page number to be used in the REST calls to receive events.

At this time, a page contains 0 to 100 items (events) along with the next page number to use in a subsequent call.

The following is a list of supported events types available from the REST API:

Activity Events
Activity Link Events
Inventory Events
Required Inventory Events
User Events
Resource Events
Resource Preference Events

This post uses the following subset of events from the Activity event type:

activityCreated
activityUpdated
activityStarted
activitySuspended
activityCompleted
activityNotDone
activityCanceled
activityDeleted
activityDelayed
activityReopened
activityPreworkCreated
activityMoved

The process described in this post can be modified slightly for each different event type. Note: the columns returned for each event type differ slightly and require modifications to the staging table and parsing section of the procedure.

Using Oracle Database as a Service

This post uses the new native support for JSON offered by the Oracle 12c database. Additional information about these new features may be found in the document JSON in Oracle Database.

These features provide a solution that overcomes a current limitation in the APEX_JSON package. The maximum length of JSON values in that package is limited to 32K characters. Some of the field values in OFSC events exceed this length.

Preparing the DBaaS Wallet

Create an entry in a new or existing Oracle database wallet for the trusted public certificates used to secure connections to the web service via the Internet. A link to the Oracle Wallet Manager documentation is included in the References section. Note the location and password of the wallet as they are used to issue the REST request.

The need for a trusted certificate is detected when the following error occurs: ORA-29024: Certificate validation failure.

An example certificate path found using Chrome browser is shown below. Both of these trusted certificates need to be in the Oracle wallet.

  • 2

Creating a BICS User in the Database

The complete SQL used to prepare the DBaaS may be viewed here.

Example SQL statements are below:

CREATE USER "BICS_USER" IDENTIFIED BY password
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP"
ACCOUNT UNLOCK;
-- QUOTAS
ALTER USER "BICS_USER" QUOTA UNLIMITED ON USERS;
-- ROLES
ALTER USER "BICS_USER" DEFAULT ROLE "CONNECT","RESOURCE";
-- SYSTEM PRIVILEGES
GRANT CREATE VIEW TO "BICS_USER";
GRANT CREATE ANY JOB TO "BICS_USER";

Creating Database Schema Objects

Three tables need to be created prior to compiling the PL/SQL stored procedure. These tables are:

*     A staging table to hold OFSC Event data

*     A subscription table to hold subscription information.

*     A JSON table to hold the JSON responses from the REST calls

The staging table, named OFSC_EVENT_ACTIVITY, has columns described in the OFSC REST API for the Activity event type. These columns are:

PAGE_NUMBER — for the page number the event was extracted from
ITEM_NUMBER — for the item number within the page of the event
EVENT_TYPE
EVENT_TIME
EVENT_USER
ACTIVITY_ID
RESOURCE_ID
SCHEDULE_DATE
APPT_NUMBER
CUSTOMER_NUMBER
ACTIVITY_CHANGES — To store all of the individual changes made to the activity

The subscription table, named OFSC_SUBSCRIPTION_PAGE, has the following columns:

SUBSCRIPTION_ID     — for the supported event types
NEXT_PAGE                — for the next page to be extracted in an incremental load
LAST_UPDATE            — for the date of the last extract
SUPPORTED_EVENT — for the logical name for the subscription event types
FIRST_PAGE               — for the first page to be extracted in a full load

The JSON table, named OFSC_JSON_TMP, has the following columns:

PAGE_NUMBER — for the page number extracted
JSON_CLOB       — for the JSON response received for each page
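
As a reference, minimal DDL for the subscription and JSON tables could look like the sketch below. The datatypes and lengths are assumptions for illustration only; the authoritative definitions are in the Complete Procedure linked in the References section.

CREATE TABLE OFSC_SUBSCRIPTION_PAGE (
  SUBSCRIPTION_ID  VARCHAR2(100),  -- subscription returned by the OFSC subscribe call
  NEXT_PAGE        VARCHAR2(50),   -- next page to request in an incremental load
  LAST_UPDATE      DATE,           -- date of the last extract
  SUPPORTED_EVENT  VARCHAR2(100),  -- logical name for the subscription event types
  FIRST_PAGE       VARCHAR2(50)    -- first page to request in a full load
);

CREATE TABLE OFSC_JSON_TMP (
  PAGE_NUMBER  VARCHAR2(50),       -- page number requested
  JSON_CLOB    CLOB                -- JSON response received for that page
);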

Using API Testing Tools

The REST requests should be developed in API testing tools such as cURL and Postman. The JSON expressions for parsing should be developed and tested in a JSON expression testing tool such as CuriousConcept. Links to these tools are provided in the References section.

Note: API testing tools such as SoapUI, CuriousConcept, Postman, and so on are third-party tools for using SOAP and REST services. Oracle does not provide support for these tools or recommend a particular tool for its APIs. You can select the tool based on your requirements.

Subscribing to Receive Events

Create subscriptions prior to receiving events. A subscription specifies the types of events that you want to receive. Multiple subscriptions are recommended. For use with the method in this post, a subscription should only contain events that have the same response fields.

The OFSC REST API document describes how to subscribe using a cURL command. Postman can also easily be used. Either tool will provide a response as shown below:

{
"subscriptionId": "a0fd97e62abca26a79173c974d1e9c19f46a254a",
"nextPage": "160425-457,0",
"links": [ ... omitted for brevity ]
}

Note: The default next page is for events after the subscription is created. Ask the system administrator for a starting page number if a past date is required.

Use SQL*Plus or SQL Developer and insert a row for each subscription into the OFSC_SUBSCRIPTION_PAGE table.

Below is an example insert statement for the subscription above:

INSERT INTO OFSC_SUBSCRIPTION_PAGE
(
SUBSCRIPTION_ID,
NEXT_PAGE,
LAST_UPDATE,
SUPPORTED_EVENT,
FIRST_PAGE
)
VALUES
(
'a0fd97e62abca26a79173c974d1e9c19f46a254a',
'160425-457,0',
sysdate,
'Required Inventory',
'160425-457,0'
);

Preparing and Calling the OFSC RESTful Service

This post uses the events method of the OFSC REST API.

This method requires the Basic framework for authorization and mandates a base64 encoded value for the following information: user-login “@” instance-id “:” user-password

An example encoded result is:

dXNlci1sb2dpbkBpbnN0YW5jZS1pZDp1c2VyLXBhc3N3b3Jk
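
If you prefer to generate this value from the database rather than with an external tool, one option is the UTL_ENCODE package. This is only a sketch; the credential string below is a placeholder:

-- Base64-encode the "user-login@instance-id:user-password" string
select utl_raw.cast_to_varchar2(
         utl_encode.base64_encode(
           utl_raw.cast_to_raw('user-login@instance-id:user-password')))
  from dual;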

The authorization header value is the concatenation of the string ‘Basic’ with the base64 encoded result discussed above. The APEX_WEB_SERVICE package is used to set the header as shown below:

v_authorization_token := 'dXNlci1sb2dpbkBpbnN0YW5jZS1pZDp1c2VyLXBhc3N3b3Jk';
apex_web_service.g_request_headers(1).name  := 'Authorization';
apex_web_service.g_request_headers(1).value := 'Basic '||v_authorization_token;

The wallet path and password discussed in the Preparing the DBaaS Wallet section are also required. An example path from a Linux server is:

/u01/app/oracle

Calling the Events Request

The events request is called for each page available for each subscription stored in the OFSC_SUBSCRIPTION_PAGE table using a cursor loop as shown below:

For C1_Ofsc_Subscription_Page_Rec In C1_Ofsc_Subscription_Page
Loop
  V_Subscription_Id := C1_Ofsc_Subscription_Page_Rec.Subscription_Id;
  Case When P_Run_Type = 'Full' Then
    V_Next_Page := C1_Ofsc_Subscription_Page_Rec.First_Page;
  Else
    V_Next_Page := C1_Ofsc_Subscription_Page_Rec.Next_Page;
  End Case; ... End Loop;

The URL is modified for each call. The subscription_id and the starting page are from the table.

For the first call only, if the parameter / variable p_run_type is equal to ‘Full’, the staging table is truncated and the page value is populated from the FIRST_PAGE column in the OFSC_SUBSCRIPTION_PAGE table. Otherwise, the staging table is not truncated and the page value is populated from the NEXT_PAGE column.

Subsequent page values come from parsing the nextPage value in the responses.

An example command to create the URL from the example subscription above is:

f_ws_url := v_base_url||'/events?subscriptionId='||v_subscription_id||chr(38)||'page='||v_next_page;

The example URL result is:

https://ofsc-hostname/rest/ofscCore/v1/events?subscriptionId=a0fd97e62abca26a79173c974d1e9c19f46a254a&page=160425-457,0

An example call using the URL is below:

f_ws_response_clob := apex_web_service.make_rest_request (
  p_url         => f_ws_url
 ,p_http_method => 'GET'
 ,p_wallet_path => 'file:/u01/app/oracle'
 ,p_wallet_pwd  => 'wallet-password' );

Storing the Event Responses

Each response (page) is processed using a while loop as shown below:

While V_More_Pages
Loop
Extract_Page;
End Loop;

Each page is parsed to obtain the event type of the first item. A null (empty) event type signals an empty page and the end of the data available. An example parse to obtain the event type of the first item is below. Note: for usage of the JSON_Value function below see JSON in Oracle Database.

select json_value(f_ws_response_clob, '$.items[0].eventType') into f_event_type from dual;

If there is data in the page, the requested page number and the response clob are inserted into the OFSC_JSON_TMP table and the response is parsed to obtain the next page number for the next call as shown below:

f_json_tmp_rec.page_number := v_next_page; -- this is the requested page number
f_json_tmp_rec.json_clob := f_ws_response_clob;
insert into ofsc_json_tmp values f_json_tmp_rec;
select json_value(f_ws_response_clob, '$.nextPage') into v_next_page from dual;

Parsing and Loading the Events Responses

Each response row stored in the OFSC_JSON_TMP table is retrieved and processed via a cursor loop statement as shown below:

for c1_ofsc_json_tmp_rec in c1_ofsc_json_tmp
loop
process_ofsc_json_page (c1_ofsc_json_tmp_rec.page_number);
end loop;

An example response is below with only the first item shown:

{
  "found": true,
  "nextPage": "170110-13,0",
  "items": [
    {
      "eventType": "activityUpdated",
      "time": "2017-01-04 12:49:51",
      "user": "soap",
      "activityDetails": {
        "activityId": 1297,
        "resourceId": "test-resource-id",
        "resourceInternalId": 2505,
        "date": "2017-01-25",
        "apptNumber": "82994469003",
        "customerNumber": "12797495"
      },
      "activityChanges": {
        "A_LastMessageStatus": "SuccessFlag – Fail – General Exception: Failed to update FS WorkOrder details. Reason: no rows updated for: order_id = 82994469003 service_order_id = NULL"
      }
    }
  ],
  "links": []
}

Each item (event) is retrieved and processed via a while loop statement as shown below:

while f_more_items loop
process_item (i);
i := i + 1;
end loop;

For each item, a dynamic SQL statement is prepared and submitted to return the columns needed to insert a row into the OFSC_EVENT_ACTIVITY staging table as shown below (the details of creating the dynamic SQL statement have been omitted for brevity):

An example of a dynamically prepared SQL statement is below. Note: for usage of the JSON_Table function below see JSON in Oracle Database.

DYN_SQL

The execution of the SQL statement and the insert are shown below:

execute immediate f_sql_stmt into ofsc_event_activity_rec;
insert into ofsc_event_activity values ofsc_event_activity_rec;

Verifying the Loaded Data

Use SQL*Plus, SQL Developer, or a similar tool to display the rows loaded into the staging table.
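
For example, a quick sanity check of the load might be a count of rows per event type; the query below is only illustrative:

select event_type, count(*)
from   ofsc_event_activity
group  by event_type
order  by event_type;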

A sample set of rows is shown below:

tabResults

Troubleshooting the REST Calls

Common issues are the need for a proxy, the need for an ACL, the need for a trusted certificate (if using HTTPS), and the need to use the correct TLS security protocol. Note: This post uses DBaaS, so all but the first issue have been addressed.

The need for a proxy may be detected when the following error occurs: ORA-12535: TNS:operation timed out. Adding the optional p_proxy_override parameter to the call may correct the issue. An example proxy override is:

www-proxy.us.oracle.com

Scheduling the Procedure

The procedure may be scheduled to run periodically through the use of an Oracle Scheduler job as described in Scheduling Jobs with Oracle Scheduler.

A job is created using the DBMS_SCHEDULER.CREATE_JOB procedure by specifying a job name, type, action and a schedule. Setting the enabled argument to TRUE enables the job to automatically run according to its schedule as soon as you create it.

An example of a SQL statement to create a job is below:

BEGIN
  dbms_scheduler.create_job (
    job_name        => 'OFSC_REST_EVENT_EXTRACT',
    job_type        => 'STORED_PROCEDURE',
    enabled         => TRUE,
    job_action      => 'BICS_OFSC_REST_INTEGRATION',
    start_date      => '12-JAN-17 11.00.00 PM Australia/Sydney',
    repeat_interval => 'freq=hourly;interval=24' -- this will run once every 24 hours
  );
END;
/

Note: If using the BICS Schema Service database, the package name is CLOUD_SCHEDULER rather than DBMS_SCHEDULER.

The job log and status may be queried using the *_SCHEDULER_JOBS views. Examples are below:

SELECT JOB_NAME, STATE, NEXT_RUN_DATE from USER_SCHEDULER_JOBS;
SELECT LOG_DATE, JOB_NAME, STATUS from USER_SCHEDULER_JOB_LOG;

Summary

This post detailed a method of extracting and loading data from Oracle Field Service Cloud (OFSC) into the Oracle Business Intelligence Cloud Service (BICS) using RESTful services.

The method extracted JSON-formatted data responses and used the PL/SQL language to call the web services, parse the JSON responses, and perform database table operations in a Stored Procedure. It also produced a BICS staging table which can then be transformed into star-schema object(s) for use in modeling.

Finally, an example of a database job was provided that executes the Stored Procedure on a scheduled basis.

For more BICS and BI best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for BICS.

References

Complete Procedure

JSON in Oracle Database

REST API for Oracle Field Service Cloud Service

Scheduling Jobs with Oracle Scheduler

Database PL/SQL Language Reference

APEX_WEB_SERVICE Reference Guide

APEX_JSON Reference Guide

Curious Concept JSON Testing Tool

Postman Testing Tool

Base64 Decoding and Encoding Testing Tool

Using Oracle Wallet Manager

Oracle Business Intelligence Cloud Service Tasks

 

Using Oracle Data Integrator (ODI) to Load BI Cloud Service (BICS)


For other A-Team articles about BICS, click here

Introduction

Oracle Data Integrator (ODI) is a comprehensive data integration platform that covers most data integration scenarios. It has long been possible to use ODI to load data into BI Cloud Service (BICS) environments that use Database as a Service (DBaaS) as the underlying database.

The recent 12.2.1.2.6 release of ODI added the ability to load data into BICS environments based on a Schema Service Database.  ODI does this by using the BICS REST API.

This article will walk through the following steps to set up ODI to load data into the BICS schema service database through this method:

  • Downloading latest version of ODI
  • Configuring the physical and logical connection to BICS in ODI
  • Loading BICS knowledge modules
  • Reverse engineering BICS model
  • Create a simple mapping
  • Importing the BICS certificate into the trust store for the standalone agent

This article will not cover the installation and setup of ODI.  The assumption is that a 12.2.1.2.6 environment has been stood up and is working correctly.  For details on how to install and configure ODI, see this document.

 

Main Article

Download The Latest Version of Oracle Data Integrator

Download and install the latest version of ODI from OTN through this link.

 

Configure and Test Connection to BICS

This article will walk through one (of the several) methods to set up the BICS connection with a Physical and Logical connection.  For more details on topology and other approaches, see this document.

1. In ODI studio, select the ‘Topology‘ tab, and expand out ‘Technologies‘ under the Physical Architecture section

Cursor_and_Windows7_x86

2. Scroll down to the ‘Oracle BI Cloud Service‘ entry, right click and select ‘New Data Server

Cursor_and_Windows7_x86

3. Give the Data Server a name, and enter the BICS Service URL, as well as the user credentials and Identity Domain.

The syntax for the URL is:

https://service-identity_domain.analytics.data_center.oraclecloud.com

This URL can be obtained from the BICS instance, by taking the first part of the URL up to ‘oraclecloud.com’

Oracle_BI_Cloud_Service

Note – the Data Loader path will default to /dataload/v1, leave this.

4. Save the Data Server.  ODI will give you an informational warning about needing to register at least one physical schema.  Click ‘OK‘.

Cursor_and_Windows7_x86

5. Test the connection by selecting ‘Test Connection

For the time being, use the ‘Local (No Agent)‘ option.

NOTE – Once configuration has been completed, the ODI Agent where the execution will be run should also be tested.  It is likely that additional configuration will need to be carried out – this is covered in the last section of this article ‘Importing the BICS certificate into the trust store for the standalone agent’.

Windows7_x86

If the credentials and URL have been entered correctly, a notification similar to the following should be displayed.  If an error is displayed, trouble-shoot and resolve before continuing.

Cursor_and_Windows7_x86

TIP :  

ODI studio’s local agent uses the JDK’s certificate store, whereas the Standalone Agent does not. It is therefore possible – and quite likely – that while the local agent will provide a successful Test Connection, the Standalone agent will produce an error similar to the following:

oracle.odi.runtime.agent.invocation.InvocationException: oracle.odi.core.exception.OdiRuntimeException: javax.ws.rs.ProcessingException: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

To resolve this, the BICS Certificate needs to be added to the trust store used by the Standalone agent.  These steps are covered later in this article in the section ‘Importing Certificate into Trust Store

 

6. Right click on the Data Server created in the step 2, and select ‘New Physical Schema

Cursor_and_Windows7_x86

ODI has the ability to load to both the Database Objects (Tables) in the Schema Service Database, and also Data Sets.

This loading option is chosen in the ‘Target Type‘ dropdown.  The selection then associates the appropriate REST Operations for ODI to connect.  Note – once the target type has been chosen and saved, it cannot be changed.

7. In this example the Target Type of Table is selected.

Windows7_x86

8. Save the Physical Schema.

Because we haven’t associated this with a Logical Architecture yet, the following warning will be shown.  Click OK to complete the save.

Windows7_x86

9. Expand out the Logical Architecture section of Topology, and then right click on ‘Oracle BI Cloud Service‘ and create a ‘New Logical Schema

Windows7_x86

10. In the configuration window, give the Logical Schema a descriptive name, and associate your context(s) with the physical schema that was created in steps 6-8.  Save the changes.

Windows7_x86

11. Repeat the steps from 6 on if you need to create an additional connection to load Data Sets

 

Load BICS Knowledge Modules

ODI uses 2 different Knowledge Modules for BICS:

– a reverse knowledge module (RKM) called RKM Oracle BI Cloud Service, and

– an integration knowledge module (IKM) called IKM SQL to Oracle BI Cloud Service.

 

1. In ‘Designer‘ expand your project and the knowledge modules and see if the KMs are already available.

Cursor_and_Windows7_x86

If they are – continue to the ‘Reverse Engineering‘ section of this article.

2. If the KMs are not shown, right click on the Knowledge Modules section and select ‘Import Knowledge Modules

Windows7_x86

Browse to a path similar to this to find the import directory.

/u01/oracle/ODI12c/odi/sdk/xml-reference

3. In the import wizard, select the 2 BICS KMs, and then select ‘OK’ to load them.

Cursor_and_Windows7_x86

TIP :  

If you already have used ODI for other integration tasks, you may be tempted to use existing Knowledge Modules.  Please note that the IKM SQL to Oracle BI Cloud Service does not support loading the Oracle SDO_GEOMETRY data type column to the BICS target table.

Oracle BI Cloud Service cannot be used as the staging area, and does not support incremental update or flow/static check. Therefore, the following KMs will not work with the Oracle BI Cloud Service technology:

RKM SQL (JYTHON)

LKM File to SQL

CKM SQL

IKM SQL Incremental Update

IKM SQL Control Append

LKM SQL to SQL (JYTHON)

More details can be found in this document.

 

Reverse Engineer BICS

Reverse-engineering is the process that populates the model in ODI, by retrieving metadata from the data server containing the data structures.

 

1. Create a new model in Designer, by selecting the ‘New Model‘ option as shown below

Cursor_and_Windows7_x86

2. In the Definition tab, give the model a name, select ‘Oracle BI Cloud Service‘ as the technology, and select the Logical Schema created previously.

Cursor_and_Windows7_x86

3. In the Reverse Engineer tab, leave the logical agent set to ‘Local (No Agent)‘, and select the RKM Oracle BI Cloud Service knowledge module.  Then save the changes.

Cursor_and_Windows7_x86

TIP :  

At the time of writing this article, there is a bug in the reverse knowledge module that will present an error if tables in the BICS environment contain non-standard characters.

An error like the following may be generated:

ODI-1590: The execution of the script failed.
Caused By: org.apache.bsf.BSFException: exception from Groovy: oracle.odi.runtime.rest.SnpsRSInvocationException: ODI-30163: REST tool invocation failed with response code : 500. URL : https://businessintelltrialXXXX-usoracletrialXXXXX.analytics.us2.oraclecloud.com/dataload/v1/tables/APEX$TEAM_DEV_FILES

There is at least one Apex related table within BICS environments that has a non-standard character.  That table, as shown in the error above, is ‘APEX$TEAM_DEV_FILES’.

Until this issue is fixed, a workaround is required.

The simplest is to go into the Apex environment attached to the BICS environment, rename the APEX$TEAM_DEV_FILES table temporarily, run the Reverse Engineer process, and then rename the table back.

Another method is to use the ‘Mask’ import option box.  If you have a specific table(s) you need to reverse engineer, then enter the name followed by %

For instance, if there were 5 tables all starting ‘FACT….’, then a mask of ‘FACT%’ could be used to reverse engineer those 5 tables.

 

4. Select the ‘Reverse Engineer‘ action, and then ‘OK‘ to run the action.

Cursor_and_Windows7_x86

5. This will start a session that can be viewed in the Operator.

Cursor_and_Windows7_x86

6. Once the session has completed, expand the model to confirm that the database objects have been imported correctly.  As shown below, the tables in the BICS Schema Service database are now available as targets.

Cursor_and_Windows7_x86

7. Expand the BICS individual database objects that you will load, and confirm within the attributes that the Datatypes have been set correctly.  Adjust where necessary and save.

Cursor_and_Windows7_x86

 

Create Mapping

1. Within the ‘Mapping‘ sub-menu of the project, select ‘New Mapping

Windows7_x86

2. Drag in the source table from the source that will be loaded into BICS, and then the BICS target table, and link the two together.  For more information on how to create mappings, see this document.

TIP :  

The BICS API only allows data to be loaded, not ‘read’ or ‘selected’.  Because of this, BICS using the Schema Service Database CAN ONLY BE USED as a TARGET for ODI mappings.  It can not be used as a SOURCE.

 

3. Make sure the Target is using the IKM SQL to Oracle BI Cloud Service:

Windows7_x86

and that an appropriate loading KM is used:

Cursor_and_Windows7_x86

4. Run the mapping, selecting the Local Agent

Windows7_x86

5. Confirm in the Operator that the mapping was successful. Trouble-shoot any errors you find and re-run.

Cursor_and_Windows7_x86

 

Importing Certificate into Trust Store

To operate, it is likely that the Standalone Agent will require the BICS certificate be added to its trust store.

These instructions will use Microsoft Explorer, although other browsers offer similar functionality.

1. In a browser, open the BICS /analytics portal, then click on the padlock icon.  This will open an information box, within which select ‘View certificates

Cursor_and_Windows7_x86

2. In the ‘Details‘ tab, select the ‘Copy to File‘ option which will open an export wizard.

Windows7_x86

3. Select the ‘DER encoded binary‘ format and then ‘Next

Cursor_and_Windows7_x86

4. Choose a path and file name for the certificate, then ‘Next‘, and on the final screen ‘Finish‘ to export the certificate.

Cursor_and_Windows7_x86

 

TIP :  

This article will go through the steps needed to add this certificate to the DemoTrust.jks key store.  This should *ONLY* be followed for demonstration or test environments.  For production environments, follow best practice guidelines as outlined in this document.

 

5. Copy the certificate file created in the previous steps to a file system accessible by the host running the standalone ODI agent.

6. Set the JAVA_HOME to the path of the JDK used while installing the standalone agent, for example

export JAVA_HOME=/u01/oracle/jdk1.8.0_111

7. Browse to the bin directory of the ODI Domain Home, in this test environment that path is as follows:

/u01/oracle/ODI12c/user_projects/domains/base_domain/bin

8. Run the ‘setODIDomainEnv‘ script.  In a linux environment this would be:

./setODIDomainEnv.sh

The DemoTrust.jks keystore used by the agent should be located in the following path:

$ORACLE_HOME/wlserver/server/lib

 

TIP :  

It is possible that there are a number of DemoTrust.jks key stores on the file system, so make sure the correct one is updated.  If this process fails to resolve the error with the Standalone Agent, search the file system and see if it is using a different trust store.

 

9. Browse to that directory and confirm the DemoTrust.jks file exists.  In that same directory – run the keytool command to import the certificate created earlier.

The syntax for the command is as follows, $CERTIFICATE referencing the name/path for the certificate file downloaded from the BICS environment through the browser, $ALIAS being a name for that, and $KEYSTORE the name/path of the key store.

keytool -importcert -file $CERTIFICATE -alias $ALIAS -keystore $KEYSTORE

In this example, the command would be:

keytool -importcert -file /u01/oracle/Downloads/BICS.cer -alias BICS -keystore DemoTrust.jks

the default password is DemoTrustKeyStorePassPhrase

10. Details of the certificate are displayed and a prompt to ‘Trust this certificate?’ is displayed.  Type ‘yes‘ and then hit enter.

Cursor_and_Windows7_x86

If the import is successful, a confirmation that the certificate was added to the keystore is given.
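
To double-check that the certificate landed in the expected keystore, the entry can be listed with keytool; the alias and password below match the demonstration values used above:

keytool -list -keystore DemoTrust.jks -alias BICS -storepass DemoTrustKeyStorePassPhrase

The output should report a trustedCertEntry for the BICS alias.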

11. Return to ODI and run the mapping, this time selecting the Standalone agent, and confirm it runs successfully.

Summary

This article walked through the steps to configure ODI to load data into the BICS schema service database through the BICS REST API.

For other A-Team articles about BICS, click here.

Eloqua ICS Integration


Introduction

Oracle Eloqua, part of Oracle’s Marketing Cloud suite of products, is a cloud based B2B marketing platform that helps automate the lead generation and nurture process. It enables the marketer to plan and execute marketing campaigns while delivering a personalized customer experience to prospects.

In this blog I will describe how to integrate Eloqua with other SaaS applications using Oracle’s iPaaS platform, the Integration Cloud Service (ICS).
ICS provides an intuitive web based integration designer for point and click integration between applications and a rich monitoring dashboard that provides real-time insight into the transactions, all of it running on a standards-based, mature runtime platform on Oracle Cloud. ICS boasts a large library of SaaS, Application, and Technology Adapters that add to its versatility.

One such adapter is the Eloqua adapter, which allows synchronizing accounts, contacts and custom objects with other applications. The Eloqua Adapter can be used in two ways in ICS:

  • As the target of an integration, where external data is sent to Eloqua,
  • Or as the source of an integration, where contacts (or other objects) flowing through a campaign or program canvas in Eloqua are sent out to any external application.

This blog provides a detailed functional as well as technical introduction to the Eloqua Adapter’s capabilities.
The blog is organized as follows:

  a. Eloqua Adapter Concepts
  b. Installing the ICS App in Eloqua
  c. Creating Eloqua connection
  d. Designing the Inbound->Eloqua flows
  e. Designing the Eloqua->Outbound flows

This blog assumes that the reader has basic familiarity with ICS as well as Eloqua.

a. Eloqua Adapter concepts

In this section we’ll go over the technical underpinnings of the ICS Eloqua adapter.

The terms ICS Adapter and ICS Connector are used interchangeably; they mean the same thing.

The Eloqua adapter can be used in ICS integrations both to trigger the integration and as a target (Invoke) within an integration.

When used as a target:

  • The adapter can be used to create/update Account, Contact and custom objects defined within Eloqua.
  • Under the hood the adapter uses the Eloqua Bulk 2.0 APIs to import data into Eloqua. More on this later.

When used as a trigger:

  • The Eloqua Adapter allows instantiating an ICS integration when a campaign or program canvas runs within Eloqua.
  • The adapter must be used in conjunction with a corresponding ‘ICS App’ installed within Eloqua.

    Installing the ICS App is mandatory for triggering ICS integrations. The next section describes the installation.

    The marketer in Eloqua uses this app as a step in his campaign, and the app in turn invokes the ICS endpoint at runtime. The image below shows a sample ICS App in use in a campaign canvas within Eloqua:

  • Screen Shot 01-19-17 at 10.19 AM

  • The Eloqua ICS App resides within the Eloqua AppCloud, and complements the ICS Eloqua Adapter such that contacts and other objects flow out from the campaign, into the ICS App and eventually to the ICS integration. The image below describes this.
  • Screen Shot 01-19-17 at 12.35 PM

b. Installing the ICS App in Eloqua

As explained above, installing the ICS App in Eloqua is mandatory for the Eloqua->Outbound scenarios.

The app is available in Oracle Marketplace, and the installation is straightforward:

  • Open the ICS App on Oracle Marketplace at https://cloud.oracle.com/marketplace/app/AppICS
  • Click ‘Get App’. Accept the terms and conditions in the popup. Click ‘Next’. This will redirect you to your Eloqua login page. Sign in, and click ‘Accept and Install’
  • Screen Shot 11-30-16 at 06.22 PM

  • The next page takes you to the ICS configuration, where you need to provide the ICS URL, username and password. Click ‘Save’.
  • Screen-Shot-11-30-16-at-06.23-PM

  • Click ‘Sign In’ on the next page, thus providing the app access to Eloqua on your behalf (OAuth2).
  • Screen Shot 11-30-16 at 06.24 PM

  • Click ‘Accept’ on the next page.
  • The ICS App is now installed and ready to use as an ‘Action’ in Eloqua Canvas.

Now we will look at creating Eloqua connections and integrations in ICS.

c. Creating Eloqua connection in ICS

  1. Log on to the ICS home page. Click on ‘Create Connections’, then ‘New Connection’ , and choose ‘Eloqua’.
  2. Name the connection appropriately.
  3. Screen Shot 01-17-17 at 10.15 PM

  4. The Connection Role can be:
    • a. Trigger, used in integrations where the connection is only used to trigger the integration.
    • b. Invoke, used in integrations where the connection is only used as target.
    • c. Or Trigger and Invoke, which can be used either way.
  5. Click ‘Create’. Click on the ‘Configure Security’ button, and enter the Eloqua Company name, username and password. Then click on ‘Test’.
  6. At this point ICS authenticates with Eloqua using the credentials provided above. The authentication process depends on the connection role:

  • a. If the role is ‘Invoke’, ICS performs an HTTP Basic Authentication to https://login.eloqua.com using a base64-encoded “<company>\<username>:<password>” string. This process is described in more detail here. (A quick command-line sketch of this check follows the list below.)
  • b. If the role is ‘Trigger’ or ‘Trigger and Invoke’, then along with the above test ICS also reaches out to Eloqua AppCloud and checks whether the Eloqua ICS App has been installed. If it is not installed, the connection test will fail.
  • Once the connection test is successful, save the connection.
  • Now that the connection has been defined, we can use the Eloqua adapter in an ICS integration to sync data. Let’s take a look at designing the Inbound->Eloqua use cases, i.e. where Eloqua is the target application.
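For reference, a similar credential check can be reproduced from the command line. This is only an illustrative sketch, not how ICS itself is implemented: the company, user and password are placeholders, and it assumes Eloqua’s login host and its /id base-URL discovery endpoint.

     # Basic authentication against the Eloqua login service; a 200 response with a JSON
     # body containing the pod-specific base URLs indicates the credentials are valid.
     curl -u 'MyCompany\jane.doe:MyPassword' https://login.eloqua.com/id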

    d. Designing the Inbound->Eloqua flows

    For Inbound->Eloqua flows, the Eloqua adapter relies only on the Bulk 2.0 APIs and does not need the ICS App to be installed in Eloqua.
    Below are the steps to configure the adapter.

    Design time:

    • Create an ICS integration, and drag the Eloqua connection on the target or as an invoke activity in an orchestration.
    • Name your endpoint and click Next.
    • On the operations page, you can choose the Eloqua business object that needs to be created/updated, the fields within the object, and the field to uniquely match records on.
    • Screen Shot 01-19-17 at 03.06 PM

    • You can also set the Auto-Sync time interval so that the Eloqua data inserted into the staging area is periodically synced to the actual Eloqua tables.
    • Finish the wizard, complete the rest of the integration, and then activate it.

    At runtime, because the Bulk Import APIs are used under the hood, the following specific events happen:

    • Depending on the business object and the fields chosen, an import definition is created by POSTing to the “/bulk/2.0/<object>/imports/” Eloqua endpoint.
    • This returns a unique URI in the response, which is used to POST the actual data to Eloqua. Thus, as data gets processed through the ICS integration, it reaches the Eloqua Invoke activity, which internally uses the URI returned above to POST the data to Eloqua. The data is now in the Eloqua staging area, ready to be synced into Eloqua.
    • Now, depending on the ‘Auto-Sync’ interval defined in design-time, periodically the ‘/bulk/2.0/syncs’ endpoint is invoked which moves the data from the staging area to Eloqua database tables.

    The Bulk API steps above are described in more detail here.
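    The cURL sketch below is a rough illustration of those three Bulk API calls for the Contact object. It is not the adapter’s actual code: the base URL, credentials, import name, field mapping and the returned import id (1234) are placeholder assumptions, so check the Eloqua Bulk API documentation for the exact payloads.

     # 1. Create an import definition; the response contains a "uri" such as /contacts/imports/1234
     curl -X POST "https://<eloqua base url>/api/bulk/2.0/contacts/imports" \
          -u 'company\user:password' -H "Content-Type: application/json" \
          -d '{"name":"ICS import","fields":{"Email":"{{Contact.Field(C_EmailAddress)}}"},"identifierFieldName":"Email"}'

     # 2. POST the actual records to the staging area using the URI returned above
     curl -X POST "https://<eloqua base url>/api/bulk/2.0/contacts/imports/1234/data" \
          -u 'company\user:password' -H "Content-Type: application/json" \
          -d '[{"Email":"jane.doe@example.com"}]'

     # 3. Trigger a sync to move the staged data into the Eloqua tables
     curl -X POST "https://<eloqua base url>/api/bulk/2.0/syncs" \
          -u 'company\user:password' -H "Content-Type: application/json" \
          -d '{"syncedInstanceUri":"/contacts/imports/1234"}'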

    e. Designing the Eloqua->Outbound flows

    Design time :

    • Create an ICS integration, and drag the Eloqua connection as the source of the integration.
    • Select the business object, select the fields, followed by selecting the response fields.
    • Finish the wizard. Complete the integration and activate it.

    When the integration is activated, ICS makes a callout to the Eloqua ICS App, registering the integration name, its ICS endpoint, and request and response fields chosen above.

    At this point, back in the Eloqua UI, the marketer can configure the ICS App in her campaign by choosing among the activated ICS integrations and configuring them appropriately. For example, the screenshot below shows the ICS App’s ‘cloud action’ configuration screen from a sample Eloqua campaign, after an integration called ‘eloqua_blog’ with the Eloqua Adapter as source is activated:
    Screen Shot 01-19-17 at 03.41 PM

    The Marketer now runs her campaign. Contacts start flowing through various campaign steps, including the ICS App step, at which point the ICS App gets invoked, which in turn invokes the configured ICS integration.

    Integrating Oracle Project Cloud with Documents Cloud Service using REST APIs and business object-level security.


    Introduction

    Oracle Documents Cloud Service (DCS) enables collaboration through a rich set of social and mobile-optimized features. Customers often come across requirements to integrate DCS with Oracle ERP Cloud. Such integration improves productivity by taking advantage of the features of DCS. In this post, let’s take a look at integrating Project Management Cloud, a part of Oracle ERP Cloud, with DCS. The contents of this post are applicable to R11 of Project Management Cloud and R16.4.5 of DCS.

    Main Article

    Project Cloud and Documents Cloud both provide secure REST APIs for integration. In addition, Documents Cloud offers UI integration through applinks, short-lived links accessible through an HTML IFRAME, while Project Cloud offers UI customization through Page Composer, which is sufficient to implement this solution. Links to the documentation for these APIs are in the References section below. The solution described in this post uses the aforementioned APIs and tools plus a custom integration service deployed to JCS-SX. It leverages parts of the design described in another blog post on integrating DCS and Sales Cloud (link provided in the References section). Below is a high-level depiction of the solution.

    001

    Figure 1 – Overview of the solution

     

    JCS-SX is a PaaS-for-SaaS offering usually deployed alongside the Oracle SaaS application and pre-integrated with SaaS through single sign-on. Guidance to implement this solution is split into the subsections that follow.

    Documents Cloud REST API

    The following actions need to be performed through the API:

    • Query whether a sub-folder exists in DCS for the selected project.
    • Create a sub-folder for the project, based on project name.
    • Get an appslink to the sub-folder

    Get the contents of a folder, in order to verify the existence of a sub-folder with the same name as the project:
    Request:

    GET /documents/api/1.1/folders/F7A4AF94F58A48892821654E3B57253386C697CACDB0/items HTTP/1.1
    Host: <DocsCloudHostName:port>
    Authorization: Basic am9obi5kdW5iYXI6VmlzaW9uMTIzIQ==
    .....

    Response:

    ....
    {
    "type": "folder",
    "id": "FE4E22621CBDA1E250B26DD73B57253386C697CACDB0",
    "parentID": "F7A4AF94F58A48892821654E3B57253386C697CACDB0",
    "name": "Cloud based HCM",
    "ownedBy": {
    "displayName": "John Doe",
    "id": "UDFE5D9A1F50DAA96DA5F4723B57253386C6",
    "type": "user"
    }
    ...

    Create a new sub-folder:

    Request:

    POST /documents/api/1.1/folders/F7A4AF94F58A48892821654E3B57253386C697CACDB0 HTTP/1.1
    Host: <hostname:port>
    Authorization: Basic am9obi5kdW5iYXI6VmlzaW9uMTIzIQ==
    …..
    {
        "name": "TestFolder1",
        "description": "TestFolder"
    }

    Response:

    HTTP/1.1 201 Created
    Date: Tue, 24 Jan 2017 22:14:50 GMT
    Location: https://docs-gse00000310.documents.us2.oraclecloud.com/documents/api/1.1/folders/F073C821561724BDA2E6B6C73B57253386C697CACDB0
    ….

    Create appslink to a subfolder:
    Request:

    POST /documents/api/1.1/applinks/folder/F7A4AF94F58A48892821654E3B57253386C697CACDB0 HTTP/1.1
    Host: <DCS host:port>
    Authorization: Basic am9obi55iYXI6VmlzaW9uMTIzIQ==
    ....
    
    {
        "assignedUser": "casey.brown",
        "role":"contributor"
    }

    Response:

    HTTP/1.1 200 OK
     Date: Wed, 25 Jan 2017 00:52:40 GMT
     Server: Oracle-Application-Server-11g
     .....
    
    {
     "accessToken": "eDkMUdbNQ2ytyNTyghBbyj43yBKpY06UYhQer3EX_bAQKbAfv09d4T7zuS5AFHa2YgImBiecD2u-haE_1r3SYA==",
     "appLinkID": "LF0fW2LLCZRsnvk1TVNcz5UhiqDSflq_2Kht39UOZGKsglZo_4WT-OkR1kEA56K91S1YZxSa8pBpQZD6BSWYCnAXZZKAZaela3IySlgJaaAvJrijCvWTazDqCeY56DvyYgHNjAoZPSy2dL0DzaCWi0XA==",
     "appLinkUrl": "https://docs-gse00000310.documents.us2.oraclecloud.com/documents/embed/link/app/LF0fW2LLCZRsnvk1TVNcz5UhiqDSflq_2Kht39UOZGKsglZo_4WT-OkR1kEA56K91S1YZxSa8pBpQZD6BSWYCnAXZZKAZaela3IySlgJaaAvJrijCvWTazDqCeY56DvyYgHNjAoZPSy2dL0DzaCWi0XA==/folder/F7A4AF94F58A48892821654E3B57253386C697CACDB0/_GruppFinancial",
     "errorCode": "0",
     "id": "F7A4AF94F58A48892821654E3B57253386C697CACDB0",
     "refreshToken": "LugYsmKWK6t5aCfAb8-lgdmp7jgF8v3Q9aEtits4oy0Oz9JtaYnL9BOs8q4lwXK8",
     "role": "contributor",
     "type": "applink"
     }

    Project Cloud REST API

    The JCS-SX service in the solution ensures that only users with access to a project can access the corresponding folder in DCS. This is achieved by invoking the Project Cloud API with the JWT passed to the service by Project Cloud. Without a valid token, the JCS-SX service returns an error.

    Here is a sample request and response for this call.
    Request:

    GET /projectsFinancialsApi/resources/11.1.11/projects/300000058801556?fields=ProjectId,ProjectName&onlyData=true HTTP/1.1
    Host: <Project Cloud>
    Authorization:Bearer <JWT token>
    ...

    Response:

    HTTP/1.1 200 OK
    Server: Oracle-Application-Server-11g
    …
    {
      "ProjectId" : 300000058801556,
      "ProjectName" : "Dixon Financials Upgrade"
    }

    Security

    There are several aspects of security addressed by this solution.

    • Project Cloud and JCS-SX integration is secured by single-sign-on infrastructure of which both systems are participants. Single sign-on is enabled for JCS-SX instances and their co-located Fusion SaaS applications. This integration only ensures that the service is invoked on behalf of a valid user of ERP Cloud.
    • The API calls from JCS-SX to Project Cloud are secured by JWT tokens supplied by Project Cloud upon invoking the JCS-SX service. This JWT token is bound to the currently logged-in Project Cloud user. JWT tokens are issued with a predetermined expiry time.
    • JCS-SX to DCS integration in this solution is secured by basic authentication. Federation of identity domains could allow seamless authentication and authorization of users between these two systems, with additional effort.

    JCS-SX Service

    This is a Java EE servlet that takes the Project Name, Project ID and a JWT token as query string parameters. The functions of the service are as follows (a command-line sketch of this sequence follows the list):

    • Using the supplied JWT token and Project ID, try to get information about the project using the Project Cloud REST API. If the request fails, stop processing and return an “HTTP 401 Unauthorized” error.
    • If the previous step succeeds, query DCS for a sub-folder with the supplied project name. The DCS root folder ID and basic authentication credentials are available to the servlet.
    • If a sub-folder does not exist, create a new sub-folder.
    • Create an appslink to the sub-folder. Generate HTML content with an IFRAME element pointing to the appslink returned by the DCS API.
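    The shell sketch below mirrors that sequence using the same REST calls shown earlier. It is only an illustration of the logic, not the servlet code itself: the hosts, folder IDs, credentials and the use of jq for JSON parsing are placeholder assumptions, and in the real solution this runs inside the JCS-SX servlet.

     #!/bin/bash
     # Placeholder values -- substitute real hosts, IDs and credentials
     PRJ_HOST="https://<Project Cloud host>"
     DCS_HOST="https://<DCS host>"
     ROOT="F7A4AF94F58A48892821654E3B57253386C697CACDB0"
     JWT="<JWT token passed by Project Cloud>"
     DCS_AUTH="<dcs user>:<dcs password>"
     PROJECT_ID="300000058801556"
     PROJECT_NAME="Dixon Financials Upgrade"

     # 1. Validate that the caller can see the project; anything other than 200 means stop with 401
     status=$(curl -s -o /dev/null -w '%{http_code}' -H "Authorization: Bearer $JWT" \
       "$PRJ_HOST/projectsFinancialsApi/resources/11.1.11/projects/$PROJECT_ID?fields=ProjectId,ProjectName&onlyData=true")
     [ "$status" = "200" ] || { echo "HTTP 401 Unauthorized"; exit 1; }

     # 2. Look for a sub-folder named after the project under the root folder
     folder_id=$(curl -s -u "$DCS_AUTH" "$DCS_HOST/documents/api/1.1/folders/$ROOT/items" \
       | jq -r --arg n "$PROJECT_NAME" '.items[]? | select(.name == $n) | .id')

     # 3. Create the sub-folder if it does not exist yet
     if [ -z "$folder_id" ]; then
       folder_id=$(curl -s -u "$DCS_AUTH" -H "Content-Type: application/json" \
         -d "{\"name\": \"$PROJECT_NAME\"}" "$DCS_HOST/documents/api/1.1/folders/$ROOT" | jq -r '.id')
     fi

     # 4. Create an applink for the sub-folder; its appLinkUrl is what goes into the IFRAME
     curl -s -u "$DCS_AUTH" -H "Content-Type: application/json" \
       -d '{"assignedUser": "casey.brown", "role": "contributor"}' \
       "$DCS_HOST/documents/api/1.1/applinks/folder/$folder_id" | jq -r '.appLinkUrl'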

    Customizing Project Cloud

    For this integration, Project Cloud must be customized for the following:

    • Invoke JCS-SX service
    • Pass Project information such as Name and Id, along with a JWT token to JCS-SX service.
    • Display the appslink content from DCS.

    At the time of publishing this post, Project Cloud does not yet provide the App Composer tool available in Sales Cloud; however, Page Composer’s features are sufficient for this integration.  Here are the steps to implement it:

    • Create and activate a sandbox, if the current user does not have one already.
    • Navigate to an appropriate page of Project Management Cloud where Documents Cloud content could be displayed. For this solution, navigate to Home->Projects->Project Financial Management. Then search for projects, click on a project, and click on the Documents tab.

    002

    • Click on the top right menu and select “Customize Pages”. Page Composer is now activated for the current page.
    • Click on a section of page where DCS appslink should be displayed.
    • On top left menu of Page Composer, click on “View” and select “Source”. Click on “Add Content”, click on “Components” and select “Web Page” widget.
    • 003
    • Once the widget is displayed, drag the edges to the desired size. Then, while the Web Page widget is selected, click on “Edit” in the Page Composer menu, on the top left. The Web Page component’s property dialog is displayed. Click the drop-down next to the “Source” field and select “Expression Builder”.
      004
    • In the Expression Builder, enter the appropriate JCS-SX host and service URI for the JCS-SX service. Notice the binding variables for project information and the JWT token supplied through the query string; these variables are available to the page by default.
      https://<JCS-SX HOST>:<PORT>/doccloud?projectID=#{bindings.ProjectId.inputValue}&projectName=#{bindings.Name.inputValue}&buname=#{bindings.Name3.inputValue}&customername=#{bindings.Customer.inputValue}&jwt=#{applCoreSecuredToken.trustToken}

    005

    • Click OK to submit, and click “Apply” on the Component Properties page. If the integration works end-to-end, the DCS page should be displayed as shown below, with a sub-folder named after the project in focus. Users can drag and drop documents onto the widget to add documents.

      006

    Summary

    This article explains how to integrate Oracle Project Management Cloud and DCS using REST APIs and JCS-SX.  It provides API snippets, instructions for customizing Project Cloud, and the overall logic of the service deployed on JCS-SX. This approach is suitable for R11 of ERP Cloud and R16.4.5 of DCS. Subsequent releases of these products offer equivalent or better integration capabilities, so refer to the product documentation for later versions before implementing a solution based on this article.

    References

    DCS REST API:

    http://docs.oracle.com/cloud/latest/documentcs_welcome/WCDRA/index.html

    Project Portfolio Management Cloud REST API:

    http://docs.oracle.com/cloud/latest/projectcs_gs/FAPAP/

    Blog on Sales Cloud to DCS integration:

    http://www.ateam-oracle.com/integrating-oracle-document-cloud-and-oracle-sales-cloud-maintaining-data-level-business-object-security/

     

     


    IDCS Audit Event REST API


    Introduction

    This article helps expand on topics of integration with Oracle’s cloud identity management service, Identity Cloud Service (IDCS). IDCS delivers the core essentials of identity and access management through a multi-tenant cloud platform. As part of the IDCS framework, it collects audit events that capture all significant events, changes, and actions, which are sent to an audit table. As with any identity and access service that revolves around security, you will eventually need to access audit records for various reasons pertinent to standard security practices and corporate policy. In this article I cover what the IDCS audit events provide and how to leverage them using the IDCS REST API audit event endpoints.

     

    Auditing Overview

    The audit events can be accessed using the IDCS SCIM 2.0 compliant REST API. SCIM (System for Cross-domain Identity Management) is an open standard that simplifies user identity management in the cloud. The following is a quick summary of what you should know at a high level.

    * Audit events include login events, changes, and actions.
    * Audit events are kept up to a maximum of 90 days
    * Audit events are managed using REST APIs via OAuth 2.0
    * Audit event REST endpoints allow query parameters and filters
    * Audit event REST responses are in JSON format

    Reporting is a basic feature that comes as part of the IDCS user interface, but it only provides some simple reports. A more powerful way to retrieve audit records from IDCS is to use the REST API. The REST API endpoints accept optional query parameters and filters to fine-tune what information you want; more on this in the next couple of sections.

     

    Audit Event Endpoints

    The following list covers all possible IDCS audit event endpoints. In addition, some endpoints accept query parameters that reference schema attributes, which I cover in the next section.

     

    • POST /admin/v1/AuditEvents (Create): Create a new audit record. Any parameters need to be included in a JSON body.

    • DELETE /admin/v1/AuditEvents/{id} (Delete): Delete an audit record using an audit record ID. Any parameters need to be included in a JSON body.

    • GET /admin/v1/AuditEvents/{id} (Get Event by ID): Retrieve a single audit record using a unique ID. Any parameters need to be included in the query string.

    • GET /admin/v1/AuditEvents (Search by GET): Search for audit records. Any parameters need to be included in the query string.

    • POST /admin/v1/AuditEvents/.search (Search by POST): Search for audit records. Parameters are posted in the request body using JSON.

     

    Audit Event Query Parameters

    The following list covers the audit event parameters that can be used to query the records. In later sections I will give some examples of how to use these parameters.

    • filter (string): A filter using valid schema attributes to request specific resources.  The filter can include logical operators such as AND and OR. See the SCIM specification, https://tools.ietf.org/html/draft-ietf-scim-api-19#section-3.4.2.2, for more information.

    • attributes (string): A comma-delimited string of valid attributes that specify resources.  The values of the attributes should match the required SCIM schema definition.

    • sortBy (string): Used to sort the response by a valid audit event attribute.

    • sortOrder (string): The allowed values are “ascending” and “descending”; the default if sortOrder is not used is ascending.

    • count (number): Sets the maximum number of records returned per page.  Excluding “count” sets the default maximum to 50; the maximum value allowed is 1000.  If the number of records matched is larger than the count value, you must use startIndex to paginate through the records.

    • startIndex (number): Determines the first record in the page set.  The default is 1, so if startIndex is set to 100, the 100th record will be the first in the list returned.  See the Pagination section of the SCIM specification, https://tools.ietf.org/html/draft-ietf-scim-api-19#section-3.4.2.4, for more information.

    My First Audit Event Search

    Now that I have covered the core endpoints and query parameters, let’s get into our first search. Imagine you either work in Info Sec or work with someone who does; in either case there will be times an audit is required. Even if an audit is not something done on a regular basis, the fact that IDCS only keeps a maximum of 90 days of records means that if your corporate policy demands, say, that records be kept for 7 years, you must establish a process to query the IDCS audit records on a regular basis and store them externally. You can then use tools like BICS (Business Intelligence Cloud Service) to build reports when needed, even if you need to go back 7 years.

    There are two methods for sending searches to the IDCS REST API audit event endpoints, GET and POST. Each option provides the same results; they differ in how the query parameters are sent. The following basic searches should help illustrate the differences between using GET and POST.

     

    GET method

         https://tenant1.mycompany.com/admin/v1/AuditEvents?filter=actorName sw "bhaas"

    POST method

         https://tenant1.mycompany.com/admin/v1/AuditEvents/.search

    JSON body

         {
              "schemas": ["urn:ietf:params:scim:api:messages:2.0:SearchRequest"],
              "attributes": ["actorName"],
              "filter": "actorName sw \"bhaas\"",
              "startIndex": 1,
              "count": 5
         }

    Notice that the GET method above sends all the parameters in a URL query string, while the POST method requires appending “/.search” to the endpoint and sending the search parameters in a JSON body. Whether you use GET or POST is up to you; it will most likely depend on your application integration requirements.

    Before we jump into sending a search via the REST API, I am going to assume you already know how to get a proper OAuth 2.0 token. If not, for your convenience I have already published the article “IDCS OAuth 2.0 and REST API”, which gives easy steps and examples using cURL or Postman. Going forward I will only focus on the endpoint and query parameters. Let’s move on to our first audit event search example.

         /admin/v1/AuditEvents?filter=actorName sw "tim"

    Let’s break out the above search to understand what we are doing:

    • Endpoint (/admin/v1/AuditEvents): the endpoint used to query audit events.

    • Query parameter (?filter=): a parameter used to filter on a SCIM attribute.

    • Attribute (actorName): the attribute used in the filter parameter.

    • Logical operator (sw): “sw” means “starts with”; there are many others.

    • Search value (“tim”): the value to search for.

     

    Now, before we finally send that search, I want to point out that if you are going to send a GET request using the cURL command, the query string needs to be URL encoded.

    Take the following example…

    * Will NOT work with cURL:
         ?filter=actorName sw "tim"

     

    * Will work with cURL:
         ?filter=actorName%20sw%20%22tim%22

     

    So the final cURL command below can be used to send our first audit event search; be sure to replace <Your Bearer token> with a real token.

         curl \
         -X GET \
         -H "Content-Type:application/scim+json" \
         -H "Authorization: Bearer <Your Bearer token>" \
         "http://tenant1.idcs.my.company.com:8990/admin/v1/AuditEvents?filter=actorName%20sw%20%22tim%22"

     

    Once you send your search, a response will come back if everything was successful. The format of the response is JSON, an open standard that can then be parsed or manipulated as needed. This makes building custom interfaces relatively simple. The following is an example of what your response may look like.

    JSON RESPONSE

    Search by Date Range

    Another useful search is to filter by date range. This example is a little more complicated, but I will walk through all the parts of the search.

         /admin/v1/AuditEvents?filter=timestamp ge "2016-06-20T00:00:00Z" and timestamp le "2016-06-22T00:00:00Z"&sortBy=timestamp&sortOrder=descending

     

    Below is a breakdown of the parts of the search, to help explain what we are sending and why:

    • Endpoint (/admin/v1/AuditEvents): the endpoint used to query audit events.

    • Query parameter (?filter=): a parameter used to filter using a valid SCIM attribute.

    • Attribute (timestamp): the attribute used in the filter parameter.

    • Logical operators (ge and le): “ge” means greater than or equal to, and “le” means less than or equal to.

    • Search values (“2016-06-20T00:00:00Z” and “2016-06-22T00:00:00Z”): the date values used in the search, which must be in UTC format.

    • sortBy (timestamp): sorts by a valid SCIM schema attribute, here the “timestamp” of the audit records.

    • sortOrder (descending): the sort order of the results; options are ascending or descending.

     

    IMPORTANT:  When using a date range search with IDCS, you should include the “sortBy” parameter as a habit.  The reason is that if you are paging through multiple results by setting the startIndex parameter, you will get an error when the paging goes beyond the first page set.  For example, if there are a total of 152 records returned and you set the startIndex parameter to 51 to get the second page set of records, you will get the following error unless you use the sortBy parameter.

     

         {
              "schemas": [
                   "urn:ietf:params:scim:api:messages:2.0:Error",
                   "urn:ietf:params:scim:api:oracle:idcs:extension:messages:Error"
              ],
              "detail": "Missing \"sortby\". sortby is mandatory when startIndex is greater than 1.",
              "status": "400",
              "urn:ietf:params:scim:api:oracle:idcs:extension:messages:Error": {
                   "messageId": "error.common.common.missingSortBy"
              }
         }

     

    Again, to send the query string using the cURL command, we need to URL encode it as follows before sending it.

         curl \
         -X GET \
         -H "Content-Type:application/scim+json" \
         -H "Authorization: Bearer <Your Bearer token>" \
         "http://tenant1.idcs.my.company.com:8990/admin/v1/AuditEvents?filter=timestamp%20ge%20%222016-06-20T00%3A00%3A00Z%22%20and%20timestamp%20le%20%222016-06-22T00%3A00%3A00Z%22&sortBy=timestamp&sortOrder=descending"

    Understanding the REST API Result Limits

    The IDCS REST API has some defaults when returning a large number of records. When you send a search you will notice a couple of things in the JSON response. At the top of the result you will see a parameter, “totalResults”. This shows the total number of records matched by the query, but it does not mean that is how many results you got back in your response.

           {
              "schemas": [
              "urn:scim:api:messages:2.0:ListResponse"
              ],
              "totalResults": 52,

    At the bottom of the result there are a couple of other parameters, “startIndex” and “itemsPerPage”. The startIndex is the position in the result set where the current page starts, while the itemsPerPage parameter tells us there is a maximum of 50 records per page.

         ],
              "startIndex": 1,
              "itemsPerPage": 50
         }

     

    Every page set that is returned will contain totalResults, startIndex, and itemsPerPage. Putting this together, it tells us:

    1. The total number of records matched by our search is 52; e.g. "totalResults": 52.
    2. Our current result set is the first; e.g. "startIndex": 1.
    3. The maximum number of records per page set is 50; e.g. "itemsPerPage": 50.

    An important note is that “itemsPerPage” has a default value of 50, but you can override this by using the “count” parameter. If you include the “count” parameter in the query string or JSON body with a value of 200, a maximum of 200 records per page set is returned. For example…

     

         /admin/v1/AuditEvents?filter=timestamp ge "2016-06-20T00:00:00Z" and timestamp le "2016-06-22T00:00:00Z"&sortBy=timestamp&sortOrder=descending&count=200

     

    An important note about the “count” parameter is that there is a maximum limit of 1000. Even if you change the count value to 2000, only a maximum of 1000 records per page set is returned. This requires a little more work if, say, 5000 records are returned. So how do you deal with that?

     

    How to Deal with Large Record Results?

    To build on the previous section, let’s work through how to deal with large record results. Assume our total record result equals 152. One option is to set the count parameter we learned about earlier to a value of 1000. We would certainly get all our records returned, since the total number of records is less than 1000, but if the total record result were greater than 1000 that would introduce a problem. To solve this, we need to paginate through the records using the startIndex parameter.

    First of all, we won’t know in advance how many records the search is going to return. A trick is to use the “count” parameter and set the value to “0”.

     

        /admin/v1/AuditEvents?filter=timestamp ge "2016-06-20T00:00:00Z" and timestamp le "2016-06-22T00:00:00Z"&sortBy=timestamp&sortOrder=descending&count=0

     

    This will not return any records, but it will show us the total number of records matched by the search. In the example below, the search matches 152 records in total.

     

         {
            "schemas": [
                 "urn:scim:api:messages:2.0:ListResponse"
            ],
            "totalResults": 152,
            "Resources": [],
            "startIndex": 1,
            "itemsPerPage": 0
         }

     

    Once we know the total number of records we can paginate through them using the startIndex parameter.  Assume we don’t bother with the count parameter and go with the default of 50 records per page.  We can then do something like the pseudo code below using a while loop.

         startIndex = 1
         while startIndex <= totalResults
              Get 50 records starting at startIndex
              startIndex = startIndex + 50
         end

     

    The idea is that we can get all our records by paging through them, incrementing startIndex by the page size each time. Continuing with the previous query examples, we would do something like this.

     

    Records 1 – 50

         /admin/v1/AuditEvents?filter=timestamp ge "2016-06-20T00:00:00Z" and timestamp le "2016-06-22T00:00:00Z"&sortBy=timestamp&sortOrder=descending&startIndex=1

     

    Records 51 – 100

         /admin/v1/AuditEvents?filter=timestamp ge "2016-06-20T00:00:00Z" and timestamp le "2016-06-22T00:00:00Z"&sortBy=timestamp&sortOrder=descending&startIndex=51

     

    Records 101 – 150

         /admin/v1/AuditEvents?filter=timestamp ge "2016-06-20T00:00:00Z" and timestamp le "2016-06-22T00:00:00Z"&sortBy=timestamp&sortOrder=descending&startIndex=101

     

    Records 151 – 152

         /admin/v1/AuditEvents?filter=timestamp ge "2016-06-20T00:00:00Z" and timestamp le "2016-06-22T00:00:00Z"&sortBy=timestamp&sortOrder=descending&startIndex=151
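    For completeness, here is a minimal shell sketch of that pagination loop. It assumes curl and jq are installed and that the placeholder tenant URL, bearer token, and filter are replaced with real values; the filter is kept simple on purpose.

         #!/bin/bash
         # Hypothetical tenant, token and filter -- replace with real values
         BASE="https://tenant1.mycompany.com/admin/v1/AuditEvents"
         TOKEN="<Your Bearer token>"
         FILTER="actorName%20sw%20%22tim%22"

         # First call with count=0 only returns totalResults
         total=$(curl -s -H "Authorization: Bearer $TOKEN" \
           "$BASE?filter=$FILTER&count=0" | jq '.totalResults')

         start=1
         while [ "$start" -le "$total" ]; do
           # Fetch one page of (at most) 50 records; sortBy is mandatory when startIndex > 1
           curl -s -H "Authorization: Bearer $TOKEN" \
             "$BASE?filter=$FILTER&sortBy=timestamp&startIndex=$start&count=50" >> audit_events.json
           start=$((start + 50))
         done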

    Audit Events Schema

    I have covered a couple of common search examples for getting audit events, but eventually you may want to do other things. I have already pointed out the audit event endpoints and the query parameters, but as you try to build more queries you may realize that is not enough. For example, you already know about the “filter” query parameter, but which attributes can you use with it? To complete the full circle you will need to understand the audit event schema.

    You could reference Oracle’s IDCS documentation, but another tip is to simply send a REST request to return the entire audit event schema. Before you can do this, your client will require the scope “Identity Domain Administrator”. To set this up, complete the following steps.

     

    1. Log in as an administrator to the IDCS admin console

         2. Select the Applications tab

         3. Click your client

         4. Select the Configuration tab

         5. Expand the Client Configuration section

         6. Under “Grant the client access to Identity Cloud Service Admin APIs”, add the scope Identity Domain Administrator

         7. Click the Save button.

    Once the above steps are complete, you will need to request a new token, which will include the new authorization roles. Next, use the following query string to request the audit event schema.

         /admin/v1/Schemas/urn:ietf:params:scim:schemas:oracle:idcs:AuditEvent

    Below is an example cURL command you can run to return the schema.

         curl \
         -X GET \
         -H "Content-Type:application/scim+json" \
         -H "Authorization: Bearer <Your Bearer token>" \
         http://tenant1.idcs.my.company.com:8990/admin/v1/Schemas/urn:ietf:params:scim:schemas:oracle:idcs:AuditEvent

    Once the REST request is sent, a JSON-format response will be returned that includes the full audit event schema. You can now see which attributes and objects are part of the schema, in order to learn how to build new filters. The audit event schema output is fairly large, so another tip is to pipe the JSON response to a file, copy and paste the data into the JSON field at http://json2table.com/, and then click the run button, which will display it as a nice table. This should make it easier to find the attributes you are looking for when building filters or using other query parameters.

     

    Summary

    I hope this has given you the knowledge to understand how to query the IDCS audit event records. This article should be a good building block for going beyond these examples; the rest is up to you…no pun intended. Check out the article Loading Data from Oracle Identity Cloud Service into Oracle BI Cloud Service using REST, which shows how to develop a process to extract IDCS audit events into BICS (Business Intelligence Cloud Service) so you can leverage its powerful reporting and analytics tools. I hope this adds more excitement to IDCS’s open framework. Until then, enjoy!

    Using Process Cloud Service REST API Part 2


    Introduction

    In Part 1 we looked at using the Process Cloud Service REST API and making REST calls from an HTML page using jQuery/AJAX. In Part 2 we’ll take the next step by using and presenting data retrieved from PCS. We started out stuffing the string representation of the JSON data into an HTML <div> on the page. Now let’s present interesting bits of the data in a clear and appealing user interface. Building presentation structure separate from data and code is a fundamental graphical user interface application architecture. The declarative “view” in HTML, the “model” JSON data from the REST calls, and the “controller” JavaScript form a classic Model-View-Controller (MVC) architecture. In our case we’ll be using a “view model” instead of a controller, resulting in a Model-View-ViewModel (MVVM) architecture. Building on the tiny bit of CSS started in Part 1, we’ll use CSS to create the look and feel of the HTML view.

    Organizing Source Files

    Bundling CSS and JavaScript in with the HTML was convenient for our small start in Part 1, but now let’s organize the CSS in a separate file style.css and the JavaScript in a file pcs-api.js. Let’s also take advantage of a development environment. There are a number to choose from; I like NetBeans, so we’ll use it.

    Create a new HTML5/JS Application project and name it PCSrestDemo, accepting the new project wizard defaults.

    NetBeams-HTML5-project

    We’ll be leveraging parts of the APITest1.html file from Part 1, so copy and paste from it where convenient. Edit the HTML <head> and the first <div> for process definitions, replacing the contents of the project’s auto-generated index.html with the HTML shown below.

    <!DOCTYPE html>
    <html>
        <head>
            <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.1/jquery.min.js"></script>
            <title>PCS REST API Demo</title>
            <meta charset="UTF-8">
            <meta name="viewport" content="width=device-width, initial-scale=1.0">
        </head>
        <body>
            <h1>PCS REST API Demo</h1>
            <h2>Part 1</h2>
            <p>Use the process-definitions call to get a list of registered process names and definition IDs</p>
            <input type="button" value="Get Process List" onClick="getProcessList()">
            <br><br>
            <div id="proclist">
                <h3>Registered Processes</h3>
            </div>
            <br><br>
        </body>
    </html>

    We’ve changed the <div> id to proclist and we’ll place the process definition response data there on the page. Without any styling the bare HTML looks like the image below.

    bare-html

    In Part 1 we had a tiny bit of CSS to set the button width and h1 color.

    input {width:300px;}
    h1    {color: blue;}

    We’re not looking to win a design contest, but the CSS code below is a good start at getting control over the style elements of the application. Create a folder css and a file style.css with the contents as shown. A real designer would make something a bit more elegant, but we’re mainly interested in the mechanics of using CSS in an application.

    input {
        width: 200px;
        height: 50px;
        border-radius: 5px;
        color: DarkBlue;
        background-color: White;
        border: 3px solid DeepSkyBlue;
        cursor: pointer;
        font-size: 20px;
        font-family: Tahoma, Verdana, Arial;
        -webkit-transition: width 1s, height 1s;
        transition: width 1s, height 1s;
    }
    
    input:hover {
        width: 250px;
        height: 62px;
        background-color: DeepSkyBlue;
        box-shadow: 8px 8px 4px Grey;
    }
    
    input:active {
        box-shadow: 8px 4px 4px Grey;
        transform: translateY(4px);
    }
    
    h1, h2, h3 {
        color: DeepSkyBlue;
        text-align: Center; 
        font-family: Tahoma, Verdana, Arial;
    }
    
    h2, h3 {
        text-align: Left;
    }
    
    ul {
        list-style: none;
    }

    also add the CSS link to the <head> section of index.html

            <link rel="stylesheet" href="css/style.css" type="text/css"/>

    This will change the appearance from the bare HTML shown above to that below.

    Create another folder named js and a file pcs-api.js with contents shown below.

     function getProcessList()
     {
       $.ajax(
           {
             type: "GET",
             url: "http://pcshost:7003/bpm/api/4.0/process-definitions",
             headers: {'Authorization': "Basic d2VibG9naWM6d2VibG9naWMx"},
             contentType: "application/json",
             dataType: "json",
             success: function(json){
                         $("#response").html(JSON.stringify(json));
             },
             failure: function(errMsg) {
                  alert(errMsg);
             },
             error: function(xhr){
                  alert("An error occured: " + xhr.status + " " + xhr.statusTextt);
             }
           });
     }

    This is the same REST call we had in Part 1. The stringify(json) string blob was placed on the page in the #response <div>; now we want to extract the process names from the json object and place them in a list in the <div> we’re now calling #proclist. Add the script file reference to the <head> section of index.html.

            <script src="js/pcs-api.js"></script>

    vue.js

    Taking the minimalist, clean, simple approach started in Part 1, we’ll use vue.js for the JavaScript framework. There are other bigger, fancier frameworks like Angular.js, but vue.js is lightweight, easy to use and a good starting point. In Part 3 of this series we’ll look at heavier-duty JavaScript frameworks like Angular.js, node.js and Oracle JET.

    The declaration of what we want is an unordered list of process names

    <ul>
       <li>process name 1</li>
       <li>process name 2</li>
       <li>etc ...</li>
    </ul>

    so that’s what we’ll put in the #proclist <div>: a <ul> containing a declared list of process name, revision and defId items

    <ul>
        <li v-for="proc in procItems">
            {{ proc.processName }} version {{ proc.revision }} -- <b>defId:</b> {{ proc.processDefId }}
        </li>
    </ul>

    “v-for” is the vue.js for-loop binding and the double brace notation “{{ }}” is the data reference.

    In pcs-api.js, make the connection between the JSON response data from the REST call and the <ul> on the HTML page. Looking at the PCS REST API documentation (or the JSON response from Postman, as we did in Part 1) we see the process definition information is in a JSON array with the key name “items”. The vue.js vm (view-model) is created and used as shown

    var appProcList = new Vue({
        el: '#proclist',
        data: {
            procItems: json.items
        }
    })

    The view-model is a new Vue variable named appProcList, connected to the DOM element #proclist. The data is defined by an array named procItems. Replace the stringify bit in the AJAX success function with the vue.js code above. Also load vue.js from the unpkg CDN by adding the line

    <script src="https://unpkg.com/vue@2.1.10/dist/vue.js"></script>

    in the <head> section of index.html. The results from a PCS instance with five registered processes look like:

    Process Instance and Task List

    Duplicating the approach used for the process definitions, let’s do the process instance call from Part 1 next. Add the HTML code to index.html

            <h2>Part 2</h2>
            <p>Retrieve a Process Instance</p>
            <input type="button" value="Get Process Instance" onClick="getProcessInstance()">
            <br><br>
            <div id="procinstance">
                <h3>Process Instance</h3>
                <ul>
                    <li><b>Title:</b> {{ title }}</li>
                    <li><b>ID:</b> {{ processId }}</li>
                    <li><b>Name:</b> {{ name }}</li>
                    <li><b>Owned By:</b> {{ ownedBy }}</li>
                    <li><b>Priority:</b> {{ priority }}</li>
                    <li><b>State:</b> {{ state }}</li>
                </ul>
            </div>
            <br><br>

    We’ll need a corresponding view-model, in pcs-api.js update and add the getProcessInstance() function.

    function getProcessInstance()
    {
      $.ajax(
          {
            type: "GET",
            url: "http://pcshost:7003/bpm/api/4.0/processes/10003",
            headers: {'Authorization': "Basic d2VibG9naWM6d2VibG9naWMx"},
            contentType: "application/json",
            dataType: "json",
            success: function(json){
                var appProcInstance = new Vue({
                    el: '#procinstance',
                    data: {
                        title: json.title,
                        processId: json.processId,
                        name: json.processName,
                        ownedBy: json.ownedBy,
                        priority: json.priority,
                        state: json.state
                    }
                })
            },
            failure: function(errMsg) {
                 alert(errMsg);
            },
            error: function(xhr){
                 alert("An error occured: " + xhr.status + " " + xhr.statusTextt);
            }
          });
    }

    The view-model appProcInstance connects to the #procinstance <div> id and the data items are mapped individually. The result looks like

    process-instance-result

    Similarly for the Task List call, update index.html with the <div> below and add the getTaskList() function to pcs-api.js. The <div> looks like

            <h2>Part 3</h2>
            <p>Retrieve Task List</p>
            <input type="button" value="Get Task List" onClick="getTaskList()">
            <br><br>
            <div id="tasklist">
                <h3>Task List</h3>
                <ul>
                    <li v-for="task in taskItems">
                        {{ task.title }} <b>summary:</b> {{ task.shortSummary }} <b>created:</b> {{ task.createdDate }} - {{ task.state }}
                    </li>
                </ul>
            </div>
            <br><br>

    and the JavaScript looks like

    function getTaskList()
    {
      $.ajax(
          {
            type: "GET",
            url: "http://pcshost:7003/bpm/api/4.0/tasks?status=ASSIGNED&assignment=MY_AND_GROUP",
            headers: {'Authorization': "Basic d2VibG9naWM6d2VibG9naWMx"},
            contentType: "application/json",
            dataType: "json",
            success: function(json){
                var appTaskList = new Vue({
                    el: '#tasklist',
                    data: {
                        taskItems: json.items
                    }
                })
            },
            failure: function(errMsg) {
                 alert(errMsg);
            },
            error: function(xhr){
                 alert("An error occured: " + xhr.status + " " + xhr.statusTextt);
            }
          });
    }

    the view-model appTaskList connects to the #tasklist <div> and the data comes from the json.items JSON array in the response. The Task List results look like

    tasklist-results

    Audit Diagram

    Let’s do something a bit flashy with the audit diagram. Retrieving the binary image data with the REST call as we did in Part 1, let’s open a modal overlay, show the diagram there, and return to the main page after closing the modal. This approach and code are from a sample in the CSS section of w3schools (the modal sample is at the bottom of the page on images). The first thing to do is set up the CSS code. Create a file img-modal.css in the css folder of the application and insert the following

    #clickMe {
        border-radius: 5px;
        cursor: pointer;
        transition: 0.3s;
    }
    
    #clickMe:hover {opacity: 0.5;}
    
    /* The Modal (background) */
    .modal {
        display: none; /* Hidden by default */
        position: fixed; /* Stay in place */
        z-index: 1; /* Sit on top */
        padding-top: 100px; /* Location of the box */
        left: 0;
        top: 0;
        width: 100%; /* Full width */
        height: 100%; /* Full height */
        overflow: auto; /* Enable scroll if needed */
        background-color: rgb(0,0,0); /* Fallback color */
        background-color: rgba(0,0,0,0.9); /* Black w/ opacity */
    }
    
    /* Modal Content (image) */
    .modal-content {
        margin: auto;
        display: block;
        width: 80%;
        max-width: 700px;
    }
    
    /* Caption of Modal Image */
    #caption {
        margin: auto;
        display: block;
        width: 80%;
        max-width: 700px;
        text-align: center;
        color: #ccc;
        padding: 10px 0;
        height: 150px;
    }
    
    /* Add Animation */
    .modal-content, #caption {    
        -webkit-animation-name: zoom;
        -webkit-animation-duration: 0.6s;
        animation-name: zoom;
        animation-duration: 0.6s;
    }
    
    @-webkit-keyframes zoom {
        from {-webkit-transform: scale(0)} 
        to {-webkit-transform: scale(1)}
    }
    
    @keyframes zoom {
        from {transform: scale(0.1)} 
        to {transform: scale(1)}
    }
    
    /* The Close Button */
    .close {
        position: absolute;
        top: 15px;
        right: 35px;
        color: #f1f1f1;
        font-size: 40px;
        font-weight: bold;
        transition: 0.3s;
    }
    
    .close:hover,
    .close:focus {
        color: #bbb;
        text-decoration: none;
        cursor: pointer;
    }
    
    /* 100% Image Width on Smaller Screens */
    @media only screen and (max-width: 700px){
        .modal-content {
            width: 100%;
        }
    }

    also add the CSS link in the <head> section of index.html

            <link rel="stylesheet" href="css/img-modal.css" type="text/css"/>

    Add the audit diagram section to index.html.

            <h2>Part 4</h2>
            <p>Retrieve Audit Diagram</p>
            <h3>Audit Diagram</h3>
            <img id="clickMe" src="images/GoGetIt.png" alt="Audit Diagram for Process" onClick="getAndShowAudit()" width="300" height="200">
            <!-- Audit Diagram Modal -->
            <div id="auditModal" class="modal">
                <span class="close">×</span>
                <img class="modal-content" id="imgFromPCS">
                <div id="caption"></div>
            </div>

    The clickMe image file GoGetIt.png has been added to a folder images in the project. You can create your own .png or download the completed NetBeans project attached to this blog. The getAndShowAudit() function needs to be added to pcs-api.js

    function getAndShowAudit()
    {
        var modal = document.getElementById('auditModal');
        var clickMeImg = document.getElementById('clickMe');
        var modalImg = document.getElementById('imgFromPCS');
        var auditCaption = document.getElementById('caption');
    
        var oReq = new XMLHttpRequest();
        oReq.open("GET", "http://pcshost:7003/bpm/api/4.0/processes/10003/audit", true);
        oReq.responseType = "blob";
        oReq.setRequestHeader("Authorization", "Basic d2VibG9naWM6d2VibG9naWMx");
        oReq.onreadystatechange = function () {
                                    if (oReq.readyState == oReq.DONE) {
                                      modalImg.src = window.URL.createObjectURL(oReq.response);
                                      clickMeImg.src = window.URL.createObjectURL(oReq.response);
                                    }
                                  }
        oReq.send();
    
        modal.style.display = "block";
        auditCaption.innerHTML = clickMeImg.alt + " 10003";
    
        var span = document.getElementsByClassName("close")[0];
        span.onclick = function() { modal.style.display = "none"; }
    }

    The REST call is the same as we had in Part 1; code to open and close the modal has been added. The section of the page before retrieving the audit diagram looks like

    audit-diagram-go

    the diagram is displayed in an overlay

    audit-diagram-modal

    Use the x in the upper right to close the overlay. The retrieved diagram also replaces the clickMe image.

    Summary

    A modern HTML/CSS application using the Process Cloud Service REST API is a convenient and straightforward approach to developing a custom user interface for PCS-based workflow applications. In Part 3 we’ll take a look at components in vue.js and other JavaScript frameworks like Oracle JET and Angular.js.

    Understanding the Enterprise Scheduler Service in ICS


    Introduction

     

    In many enterprise integration scenarios there is a requirement to initiate tasks at scheduled times or at user-defined intervals. The Oracle Integration Cloud Service (ICS) provides scheduling functionality via the Oracle Enterprise Scheduler to satisfy these types of requirements.  The Oracle Enterprise Scheduler Service (ESS) is primarily a Java EE application that provides time-based and schedule-based callbacks to other applications to run their jobs. Oracle ESS applications define jobs and specify when those jobs need to be executed, and Oracle ESS gives these applications a callback at the scheduled time or when a particular event arrives. Oracle ESS does not execute the jobs itself; it generates a callback to the application, and the application actually executes the job request. This implies that the Oracle Enterprise Scheduler Service is not aware of the details of the job request; all the job request details are owned and managed by the application.

     

    What follows is a discussion of how ICS utilizes the ESS feature.  The document covers how the ESS threads are allocated and the internal preparation completed for file processing.

     

    Quick ICS Overview

     

    The Integration Cloud Service deployment topology consists of one cluster.  The cluster has two managed servers along with one administration server.  This bit of information is relevant to the discussion of how the Enterprise Scheduler Service works and how it is used by applications like an ICS flow that runs in a clustered HA environment.

    A common use case for leveraging ESS is to set up a schedule to poll for files on an FTP server at regular intervals.  When files are found and selected for processing, ESS does some internal scheduling of these files to ensure the managed servers are not overloaded.  Understanding how this file processing works, and how throttling might be applied automatically, is valuable as you take advantage of this ICS feature.

    An integration can be scheduled using the ICS scheduling user interface (UI). The UI provides a basic and an advanced option.  The basic option provides UI controls to schedule when to execute the integration.

    schedulingBasic

     

     

    The advanced option allows one to enter an iCal expression for the scheduling of the integration.

    schedulingView

     

     

    The ESS allows two jobs to be executed at a time per JVM.  This equates to a maximum of four files being processed concurrently in a two-instance ICS cluster.  So how does ICS process these files, especially if multiple integrations could pick up twenty-five files at a time?

    As previously stated, there are two asynchronous worker resources per managed server. These asynchronous worker resources are known as ICSFlowJob or AsyncBatchWorkerJob.   At the scheduled time, ESS reserves one of the asynchronous worker resources, if one is available.  The initial asynchronous worker is the ICSFlowJob; this is what we call the parent job.

    It is important to digress at this point to make mention of the database that backs the ICS product. The ICS product has a backing store of an Oracle database.  This database hosts the metadata for the ICS integrations, BPEL instances that are created during the execution of orchestration flows, and the AsyncBatchWorker metadata.  There is no need for the customer to maintain this database – no purging, tuning, or sizing.  ICS will only keep three days of BPEL instances in the database.  The purging is automatically handled by the ICS infrastructure.

    The ICSFlowJob invokes the static ScheduledProcessFlow BPEL. This process does the file listing, creates batches with one file per batch, and submits AsyncBatchWorker jobs to process the files. The AsyncBatchWorker jobs are stored within a database table.  These worker jobs will eventually be picked up by one of the two threads available to execute on the JVM. The graphic below demonstrates the parent and subprocess flows that have successfully completed.

    emcc

     

    Each scheduled integration will have at most ten batch workers (AsyncWorkerJob) created and stored within the database table.  The batch workers will have one or more batches assigned.  A batch is equivalent to one file. After the batch workers are submitted, the asynchronous worker resource, held by the ICSFlowJob, is released so it can be used by other requests.

    Scenario One

1. One integration that is scheduled to run every 10 minutes
2. Ten files are placed on the FTP server location, all with the same timestamp

    At the scheduled time, the ICSFlowJob request is assigned one of the four available threads (from two JVMs) to begin the process of file listing and assigning the batches.  In the database there will be ten rows stored, since there are ten files.  Each row will reference a file for processing.  These batches will be processed at a maximum of four at a time.  Recall that there are only two threads per JVM for processing batches.

    At the conclusion of processing all of the AsyncWorkerJob subrequests one of the batch processing threads notifies the parent request, ICSFlowJob, that all of the subrequests have completed.

     

    Scenario Two

1. Two integrations are scheduled to run every 10 minutes
2. There are 25 files, per integration, at each integration's specified FTP server location

    This scenario will behave just as in scenario one; however, since each integration has more than ten files to process, the subrequests, AsyncWorkerJob, must each process more than one file.  Each integration will assign and schedule the file processing as follows:

    5 AsyncWorkerJob subrequests will process 2 files each

    5 AsyncWorkerJob subrequests will process 3 files each

    At the conclusion of the assignment and scheduling of the AsyncWorkerJob subrequests  there will be 20 rows in the database; 10 rows per integration.

The execution of the AsyncWorkerJobs is on a first-come, first-served basis. Therefore, the 20 subrequests will more than likely be interleaved with one another, and the processing of all of the files will take longer than if the integrations had not been kicked off at the same time. The number of scheduler threads available to process the batch integrations does not change; there will only be a maximum of two scheduler threads per JVM.

     

    Summary

    The ESS scheduler provides useful features for having batch processes kicked off at scheduled intervals.  These intervals are user defined providing great flexibility as to when to execute these batch jobs.  However, care must be taken to prevent batch jobs from consuming all of the system resources, especially when there are real-time integrations being run through this same system.

     

    The Integration Cloud Service has a built in feature to prevent batch jobs from overwhelming the service.  This is done by only allowing two scheduler threads to process files at a time per JVM.   This may mean that some batch integrations take longer; however, it prevents the system from being overwhelmed and negatively impacting other ICS flows not related to batch file processing with the ESS.

     

    As we have discussed in this article, the use case here is all about polling for files that ICS needs to process at specified times or intervals; however, the ESS may also be used to trigger integrations such as REST and SOAP-based web services.  When using ESS to initiate file processing, the system is limited to two scheduler threads per JVM.

     

    The polling approach may not always be the best approach, since the delivery of the file may not be on a regularly scheduled cycle.  When the delivery of the file, from the source, is not on a regular schedule then it is probably better to implement a push model.   In a coming blog, I will demonstrate how to implement the push model.  With the push model the system is no longer under the constraints of the two scheduler threads per JVM.

     

To learn more about the Oracle Enterprise Scheduler Service, refer to the Oracle documentation.

     

    Loading Data from Oracle Field Service Cloud into Oracle BI Cloud Service using REST


    Introduction

This post details a method of extracting and loading data from Oracle Field Service Cloud (OFSC) into the Oracle Business Intelligence Cloud Service (BICS) using RESTful services. It is a companion to the A-Team post Loading Data from Oracle Field Service Cloud into Oracle BI Cloud Service using SOAP. Both this post and the SOAP post offer methods to complement the standard OFSC Daily Extract described in Oracle Field Service Cloud Daily Extract Description.

    One case for using this method is analyzing trends regarding OFSC events.

    This post uses RESTful web services to extract JSON-formatted data responses. It also uses the PL/SQL language to call the web services, parse the JSON responses, and perform database table operations in a Stored Procedure. It produces a BICS staging table which can then be transformed into star-schema object(s) for use in modeling. The transformation processes and modeling are not discussed in this post.

    Finally, an example of a database job is provided that executes the Stored Procedure on a scheduled basis.

    The PL/SQL components are for demonstration purposes only and are not intended for enterprise production use. Additional detailed information, including the complete text of the PL/SQL procedure described, is included in the References section at the end of this post.

    Update: As of December, 2016 the  APEX 5.1 APEX_JSON package has removed the limitation of 32K lengths for JSON values. A new section has been added to this post named Parsing Events Responses using APEX_JSON.

    Rationale for Using PL/SQL

PL/SQL is the only procedural tool that runs on the BICS / Database Schema Service platform. Other wrapping methods, e.g. Java or ETL tools, require a platform outside of BICS to run on.

    PL/SQL may also be used in a DBaaS (Database as a Service) that is connected to BICS.

    PL/SQL can utilize native SQL commands to operate on the BICS tables. Other methods require the use of the BICS REST API.

Note: PL/SQL is very good at showcasing functionality. However, it tends to become prohibitively resource intensive when deployed in an enterprise production environment. For the best enterprise deployment, an ETL tool such as Oracle Data Integrator (ODI) should be used to meet these requirements and more:

    * Security

    * Logging and Error Handling

    * Parallel Processing – Performance

    * Scheduling

    * Code Re-usability and Maintenance

    About the OFSC REST API

    The document REST API for Oracle Field Service Cloud Service should be used extensively, especially the Authentication, Paginating, and Working with Events sections. Terms described there such as subscription, page, and authorization are used in the remainder of this post.

    In order to receive events, a subscription is needed listing the specific events desired. The creation of a subscription returns both a subscription ID and a page number to be used in the REST calls to receive events.

    At this time, a page contains 0 to 100 items (events) along with the next page number to use in a subsequent call.

    The following is a list of supported events types available from the REST API:

    Activity Events
    Activity Link Events
    Inventory Events
    Required Inventory Events
    User Events
    Resource Events
    Resource Preference Events

    This post uses the following subset of events from the Activity event type:

    activityCreated
    activityUpdated
    activityStarted
    activitySuspended
    activityCompleted
    activityNotDone
    activityCanceled
    activityDeleted
    activityDelayed
    activityReopened
    activityPreworkCreated
    activityMoved

    The process described in this post can be modified slightly for each different event type. Note: the columns returned for each event type differ slightly and require modifications to the staging table and parsing section of the procedure.

    Using Oracle Database as a Service

    This post uses the new native support for JSON offered by the Oracle 12c database. Additional information about these new features may be found in the document JSON in Oracle Database.

    These features provide a solution that overcomes a current limitation in the APEX_JSON package. The maximum length of JSON values in that package is limited to 32K characters. Some of the field values in OFSC events exceed this length.

    Preparing the DBaaS Wallet

    Create an entry in a new or existing Oracle database wallet for the trusted public certificates used to secure connections to the web service via the Internet. A link to the Oracle Wallet Manager documentation is included in the References section. Note the location and password of the wallet as they are used to issue the REST request.

    The need for a trusted certificate is detected when the following error occurs: ORA-29024: Certificate validation failure.

    An example certificate path found using Chrome browser is shown below. Both of these trusted certificates need to be in the Oracle wallet.

(example certificate path screenshot)

    Creating a BICS User in the Database

    The complete SQL used to prepare the DBaaS may be viewed here.

    Example SQL statements are below:

CREATE USER "BICS_USER" IDENTIFIED BY password
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP"
ACCOUNT UNLOCK;
-- QUOTAS
ALTER USER "BICS_USER" QUOTA UNLIMITED ON USERS;
-- ROLES
ALTER USER "BICS_USER" DEFAULT ROLE "CONNECT","RESOURCE";
-- SYSTEM PRIVILEGES
GRANT CREATE VIEW TO "BICS_USER";
GRANT CREATE ANY JOB TO "BICS_USER";

    Creating Database Schema Objects

    Three tables need to be created prior to compiling the PL/SQL stored procedure. These tables are:

    *     A staging table to hold OFSC Event data

    *     A subscription table to hold subscription information.

    *     A JSON table to hold the JSON responses from the REST calls

    The staging table, named OFSC_EVENT_ACTIVITY, has columns described in the OFSC REST API for the Activity event type. These columns are:

    PAGE_NUMBER — for the page number the event was extracted from
    ITEM_NUMBER — for the item number within the page of the event
    EVENT_TYPE
    EVENT_TIME
    EVENT_USER
    ACTIVITY_ID
    RESOURCE_ID
    SCHEDULE_DATE
    APPT_NUMBER
    CUSTOMER_NUMBER
    ACTIVITY_CHANGES — To store all of the individual changes made to the activity

    The subscription table, named OFSC_SUBSCRIPTION_PAGE, has the following columns:

    SUBSCRIPTION_ID     — for the supported event types
    NEXT_PAGE                — for the next page to be extracted in an incremental load
    LAST_UPDATE            — for the date of the last extract
    SUPPORTED_EVENT — for the logical name for the subscription event types
    FIRST_PAGE               — for the first page to be extracted in a full load

    The JSON table, named OFSC_JSON_TMP, has the following columns:

    PAGE_NUMBER — for the page number extracted
    JSON_CLOB       — for the JSON response received for each page
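
As a minimal sketch only, DDL along the following lines could create the three tables. The datatypes and lengths shown here are assumptions; the complete SQL referenced above is the authoritative source.

-- Illustrative DDL only; datatypes and lengths are assumptions.
CREATE TABLE OFSC_EVENT_ACTIVITY (
  PAGE_NUMBER      VARCHAR2(50),
  ITEM_NUMBER      NUMBER,
  EVENT_TYPE       VARCHAR2(100),
  EVENT_TIME       VARCHAR2(50),
  EVENT_USER       VARCHAR2(100),
  ACTIVITY_ID      VARCHAR2(50),
  RESOURCE_ID      VARCHAR2(100),
  SCHEDULE_DATE    VARCHAR2(50),
  APPT_NUMBER      VARCHAR2(100),
  CUSTOMER_NUMBER  VARCHAR2(100),
  ACTIVITY_CHANGES CLOB
);

CREATE TABLE OFSC_SUBSCRIPTION_PAGE (
  SUBSCRIPTION_ID  VARCHAR2(100),
  NEXT_PAGE        VARCHAR2(50),
  LAST_UPDATE      DATE,
  SUPPORTED_EVENT  VARCHAR2(100),
  FIRST_PAGE       VARCHAR2(50)
);

CREATE TABLE OFSC_JSON_TMP (
  PAGE_NUMBER VARCHAR2(50),
  JSON_CLOB   CLOB
);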

    Using API Testing Tools

    The REST requests should be developed in API testing tools such as cURL and Postman. The JSON expressions for parsing should be developed and tested in a JSON expression testing tool such as CuriousConcept. Links to these tools are provided in the References section.

    Note: API testing tools such as SoapUI, CuriousConcept, Postman, and so on are third-party tools for using SOAP and REST services. Oracle does not provide support for these tools or recommend a particular tool for its APIs. You can select the tool based on your requirements.

    Subscribing to Receive Events

    Create subscriptions prior to receiving events. A subscription specifies the types of events that you want to receive. Multiple subscriptions are recommended. For use with the method in this post, a subscription should only contain events that have the same response fields.

    The OFSC REST API document describes how to subscribe using a cURL command. Postman can also easily be used. Either tool will provide a response as shown below:

{
"subscriptionId": "a0fd97e62abca26a79173c974d1e9c19f46a254a",
"nextPage": "160425-457,0",
"links": [ ... omitted for brevity ]
}

    Note: The default next page is for events after the subscription is created. Ask the system administrator for a starting page number if a past date is required.

    Use SQL*Plus or SQL Developer and insert a row for each subscription into the OFSC_SUBSCRIPTION_PAGE table.

    Below is an example insert statement for the subscription above:

    INSERT INTO OFSC_SUBSCRIPTION_PAGE
    (
    SUBSCRIPTION_ID,
    NEXT_PAGE,
    LAST_UPDATE,
    SUPPORTED_EVENT,
    FIRST_PAGE
    )
    VALUES
    (
'a0fd97e62abca26a79173c974d1e9c19f46a254a',
'160425-457,0',
sysdate,
'Required Inventory',
'160425-457,0'
    );
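
Alternatively, if the subscription response shown earlier is available as a CLOB in the database, a sketch like the following could extract the values with JSON_VALUE and insert the row. This is illustrative only and assumes an Oracle 12c database; the variable names are hypothetical.

declare
  v_resp_clob       clob;            -- assumed to hold the subscription response JSON shown above
  v_subscription_id varchar2(100);
  v_next_page       varchar2(50);
begin
  -- extract the subscriptionId and nextPage values from the response
  select json_value(v_resp_clob, '$.subscriptionId'),
         json_value(v_resp_clob, '$.nextPage')
    into v_subscription_id, v_next_page
    from dual;

  -- register the subscription for the extract procedure
  insert into ofsc_subscription_page
    (subscription_id, next_page, last_update, supported_event, first_page)
  values
    (v_subscription_id, v_next_page, sysdate, 'Required Inventory', v_next_page);
  commit;
end;
/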

    Preparing and Calling the OFSC RESTful Service

    This post uses the events method of the OFSC REST API.

This method uses Basic authorization and mandates a base64-encoded value of the following information: user-login "@" instance-id ":" user-password

    An example encoded result is:

    dXNlci1sb2dpbkBpbnN0YW5jZS1pZDp1c2VyLXBhc3N3b3Jk
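
One way to produce this encoded value directly in PL/SQL is sketched below. This is illustrative only; the variable names are hypothetical and the credential values are placeholders.

declare
  v_user_login    varchar2(100) := 'user-login';     -- placeholder
  v_instance_id   varchar2(100) := 'instance-id';    -- placeholder
  v_user_password varchar2(100) := 'user-password';  -- placeholder
  v_token         varchar2(400);
begin
  -- base64-encode user-login "@" instance-id ":" user-password
  v_token := utl_raw.cast_to_varchar2(
               utl_encode.base64_encode(
                 utl_raw.cast_to_raw(v_user_login || '@' || v_instance_id || ':' || v_user_password)));
  dbms_output.put_line(v_token);
end;
/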

The authorization header value is the concatenation of the string 'Basic ', including a trailing space, with the base64-encoded result discussed above. The APEX_WEB_SERVICE package is used to set the header as shown below:

v_authorization_token := 'dXNlci1sb2dpbkBpbnN0YW5jZS1pZDp1c2VyLXBhc3N3b3Jk';
apex_web_service.g_request_headers(1).name  := 'Authorization';
apex_web_service.g_request_headers(1).value := 'Basic '||v_authorization_token;

    The wallet path and password discussed in the Preparing the DBaaS Wallet section are also required. An example path from a Linux server is:

    /u01/app/oracle

    Calling the Events Request

    The events request is called for each page available for each subscription stored in the OFSC_SUBSCRIPTION_PAGE table using a cursor loop as shown below:

    For C1_Ofsc_Subscription_Page_Rec In C1_Ofsc_Subscription_Page
    Loop
    V_Subscription_Id := C1_Ofsc_Subscription_Page_Rec.Subscription_Id;
Case When P_Run_Type = 'Full' Then
    V_Next_Page := C1_Ofsc_Subscription_Page_Rec.First_Page;
    Else
    V_Next_Page := C1_Ofsc_Subscription_Page_Rec.Next_Page;
    End Case; … End Loop;

    The URL is modified for each call. The subscription_id and the starting page are from the table.

For the first call only, if the parameter / variable p_run_type is equal to 'Full', the staging table is truncated and the page value is populated from the FIRST_PAGE column in the OFSC_SUBSCRIPTION_PAGE table. Otherwise, the staging table is not truncated and the page value is populated from the NEXT_PAGE column.

    Subsequent page values come from parsing the nextPage value in the responses.

    An example command to create the URL from the example subscription above is:

f_ws_url := v_base_url||'/events?subscriptionId=' ||v_subscription_id|| chr(38)||'page=' ||v_next_page;

    The example URL result is:

    https://ofsc-hostname/rest/ofscCore/v1/events?subscriptionId=a0fd97e62abca26a79173c974d1e9c19f46a254a&page=160425-457,0

    An example call using the URL is below:

f_ws_response_clob := apex_web_service.make_rest_request (
p_url => f_ws_url
,p_http_method => 'GET'
,p_wallet_path => 'file:/u01/app/oracle'
,p_wallet_pwd => 'wallet-password' );

    Storing the Event Responses

    Each response (page) is processed using a while loop as shown below:

    While V_More_Pages
    Loop
    Extract_Page;
    End Loop;

    Each page is parsed to obtain the event type of the first item. A null (empty) event type signals an empty page and the end of the data available. An example parse to obtain the event type of the first item is below. Note: for usage of the JSON_Value function below see JSON in Oracle Database.

select json_value (f_ws_response_clob, '$.items[0].eventType' ) into f_event_type from dual;

    If there is data in the page, the requested page number and the response clob are inserted into the OFSC_JSON_TMP table and the response is parsed to obtain the next page number for the next call as shown below:

f_json_tmp_rec.page_number := v_next_page; -- this is the requested page number
f_json_tmp_rec.json_clob := f_ws_response_clob;
insert into ofsc_json_tmp values f_json_tmp_rec;
select json_value (f_ws_response_clob, '$.nextPage' ) into v_next_page from dual;

    Parsing and Loading the Events Responses

    Each response row stored in the OFSC_JSON_TMP table is retrieved and processed via a cursor loop statement as shown below:

    for c1_ofsc_json_tmp_rec in c1_ofsc_json_tmp
    loop
    process_ofsc_json_page (c1_ofsc_json_tmp_rec.page_number);
    end loop;

    An example response is below with only the first item shown:

{
"found": true,
"nextPage": "170110-13,0",
"items": [
{
"eventType": "activityUpdated",
"time": "2017-01-04 12:49:51",
"user": "soap",
"activityDetails": {
"activityId": 1297,
"resourceId": "test-resource-id",
"resourceInternalId": 2505,
"date": "2017-01-25",
"apptNumber": "82994469003",
"customerNumber": "12797495"
},
"activityChanges": {
"A_LastMessageStatus": "SuccessFlag – Fail – General Exception: Failed to update FS WorkOrder details. Reason: no rows updated for: order_id = 82994469003 service_order_id = NULL"
}
}
],
"links": [

]
}

    Each item (event) is retrieved and processed via a while loop statement as shown below:

    while f_more_items loop
    process_item (i);
    i := i + 1;
    end loop;

    For each item, a dynamic SQL statement is prepared and submitted to return the columns needed to insert a row into the OFSC_EVENT_ACTIVITY staging table as shown below (the details of creating the dynamic SQL statement have been omitted for brevity):

    An example of a dynamically prepared SQL statement is below. Note: for usage of the JSON_Table function below see JSON in Oracle Database.

    DYN_SQL
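
As an illustration only, and not the exact statement built by the procedure, a dynamically assembled statement using JSON_TABLE over a single item might resemble the following. The column paths are assumptions based on the example response shown earlier, and the item index would be concatenated in at runtime.

select jt.event_type, jt.event_time, jt.event_user,
       jt.activity_id, jt.resource_id, jt.schedule_date,
       jt.appt_number, jt.customer_number
  from ofsc_json_tmp t,
       json_table(t.json_clob, '$.items[0]'      -- the item index is built dynamically
         columns (
           event_type      varchar2(100) path '$.eventType',
           event_time      varchar2(50)  path '$.time',
           event_user      varchar2(100) path '$.user',
           activity_id     varchar2(50)  path '$.activityDetails.activityId',
           resource_id     varchar2(100) path '$.activityDetails.resourceId',
           schedule_date   varchar2(50)  path '$.activityDetails.date',
           appt_number     varchar2(100) path '$.activityDetails.apptNumber',
           customer_number varchar2(100) path '$.activityDetails.customerNumber'
         )) jt
 where t.page_number = :page_number;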

    The execution of the SQL statement and the insert are shown below:

    execute immediate f_sql_stmt into ofsc_event_activity_rec;
    insert into ofsc_event_activity values ofsc_event_activity_rec;

    Parsing Events Responses using APEX_JSON

    Update: As of December, 2016 the  APEX 5.1 APEX_JSON package has removed the limitation of 32K lengths for JSON values. This update allows the continued use of an Oracle 11g database if desired.  This new section demonstrates the usage.

    Each page response clob is parsed with the APEX_JSON.PARSE procedure as shown below. This procedure stores all the JSON elements and values in an internal array which is accessed via JSON Path statements.

    apex_json.parse(F_Ws_Response_Clob);

    Each page is tested to see if it is an empty last page. A page is deemed empty when the first event has a null event type as shown below.

    apex_json.parse(F_Ws_Response_Clob);
F_Event_Type := apex_json.get_varchar2(p_path => 'items[1].eventType');
    Case When F_Event_Type Is Null
    Then V_More_Pages := False; …

    An example response is shown in the section above.

    Each item (event) is retrieved and processed via a while loop statement as shown below:

    while f_more_items loop
    process_item_JParse (i);
    i := i + 1;
    end loop;

    For each item (event), the event is parsed into a variable row record as shown below:

OFSC_EVENT_ACTIVITY_rec.PAGE_NUMBER := F_Page_Number;
OFSC_EVENT_ACTIVITY_rec.ITEM_NUMBER := Fi;
OFSC_EVENT_ACTIVITY_rec.EVENT_TYPE := apex_json.get_varchar2(p_path => 'items[' || Fi || '].eventType');
OFSC_EVENT_ACTIVITY_rec.EVENT_TIME := apex_json.get_varchar2(p_path => 'items[' || Fi || '].time');
OFSC_EVENT_ACTIVITY_rec.EVENT_USER := apex_json.get_varchar2(p_path => 'items[' || Fi || '].user');
OFSC_EVENT_ACTIVITY_rec.ACTIVITY_ID := apex_json.get_varchar2(p_path => 'items[' || Fi || '].activityDetails.activityId');
OFSC_EVENT_ACTIVITY_rec.RESOURCE_ID := apex_json.get_varchar2(p_path => 'items[' || Fi || '].activityDetails.resourceId');
OFSC_EVENT_ACTIVITY_rec.SCHEDULE_DATE := apex_json.get_varchar2(p_path => 'items[' || Fi || '].activityDetails.date');
OFSC_EVENT_ACTIVITY_rec.APPT_NUMBER := apex_json.get_varchar2(p_path => 'items[' || Fi || '].activityDetails.apptNumber');
OFSC_EVENT_ACTIVITY_rec.CUSTOMER_NUMBER := apex_json.get_varchar2(p_path => 'items[' || Fi || '].activityDetails.customerNumber');
OFSC_EVENT_ACTIVITY_rec.ACTIVITY_CHANGES := Get_Item_ACTIVITY_CHANGES(Fi);

    The insert of the row is shown below:

    insert into ofsc_event_activity values ofsc_event_activity_rec;

    Verifying the Loaded Data

    Use SQL*Plus, SQL Developer, or a similar tool to display the rows loaded into the staging table.

    A sample set of rows is shown below:

    tabResults

    Troubleshooting the REST Calls

Common issues are the need for a proxy, the need for an ACL, the need for a trusted certificate (if using HTTPS), and the need to use the correct TLS security protocol. Note: this post uses DBaaS, so all but the first issue have already been addressed.

    The need for a proxy may be detected when the following error occurs: ORA-12535: TNS:operation timed out. Adding the optional p_proxy_override parameter to the call may correct the issue. An example proxy override is:

    www-proxy.us.oracle.com
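
A sketch of adding the override to the earlier call is shown below; the proxy value and wallet details are placeholders.

-- Illustrative only: the same REST call with the optional proxy override added
f_ws_response_clob := apex_web_service.make_rest_request (
  p_url            => f_ws_url
, p_http_method    => 'GET'
, p_wallet_path    => 'file:/u01/app/oracle'
, p_wallet_pwd     => 'wallet-password'          -- placeholder
, p_proxy_override => 'www-proxy.us.oracle.com' );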

    Scheduling the Procedure

    The procedure may be scheduled to run periodically through the use of an Oracle Scheduler job as described in Scheduling Jobs with Oracle Scheduler.

    A job is created using the DBMS_SCHEDULER.CREATE_JOB procedure by specifying a job name, type, action and a schedule. Setting the enabled argument to TRUE enables the job to automatically run according to its schedule as soon as you create it.

    An example of a SQL statement to create a job is below:

    BEGIN
    dbms_scheduler.create_job (
job_name => 'OFSC_REST_EVENT_EXTRACT',
job_type => 'STORED_PROCEDURE',
enabled => TRUE,
job_action => 'BICS_OFSC_REST_INTEGRATION',
start_date => '12-JAN-17 11.00.00 PM Australia/Sydney',
repeat_interval => 'freq=hourly;interval=24' -- this will run once every 24 hours
    );
    END;
    /

    Note: If using the BICS Schema Service database, the package name is CLOUD_SCHEDULER rather than DBMS_SCHEDULER.

    The job log and status may be queried using the *_SCHEDULER_JOBS views. Examples are below:

    SELECT JOB_NAME, STATE, NEXT_RUN_DATE from USER_SCHEDULER_JOBS;
    SELECT LOG_DATE, JOB_NAME, STATUS from USER_SCHEDULER_JOB_LOG;

    Summary

    This post detailed a method of extracting and loading data from Oracle Field Service Cloud (OFSC) into the Oracle Business Intelligence Cloud Service (BICS) using RESTful services.

    The method extracted JSON-formatted data responses and used the PL/SQL language to call the web services, parse the JSON responses, and perform database table operations in a Stored Procedure. It also produced a BICS staging table which can then be transformed into star-schema object(s) for use in modeling.

    Finally, an example of a database job was provided that executes the Stored Procedure on a scheduled basis.

    For more BICS and BI best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for BICS.

    References

    Complete Procedure

    Complete Procedure using APEX_JSON

    JSON in Oracle Database

    REST API for Oracle Field Service Cloud Service

    Scheduling Jobs with Oracle Scheduler

    Database PL/SQL Language Reference

    APEX_WEB_SERVICE Reference Guide

    APEX_JSON Reference Guide

    Curious Concept JSON Testing Tool

    Postman Testing Tool

    Base64 Decoding and Encoding Testing Tool

    Using Oracle Wallet Manager

    Oracle Business Intelligence Cloud Service Tasks

     

    Integrating Sales Cloud and Service Cloud using ICS – troubleshooting issues with security configuration


    Introduction

    This blog talks about a few “gotchas” when integrating Oracle Sales Cloud (OSC) and Oracle Service Cloud (OSvC) using Oracle’s iPaaS platform, the Integration Cloud Service(ICS).
The idea is to have a ready reckoner for some common issues faced, so that customers can hit the ground running when integrating between OSvC and OSC using ICS.

     

    ICS Integrations for OSC OSvC

Pre-built ICS integrations are available from Oracle for certain objects and can be downloaded from My Oracle Support. Contact Oracle Support to download the pre-built integrations and the documentation that comes along with them.

    The pre-built integration provides out of the box standard integration for the following –

    •     Integrate Account and Contacts Objects from Sales Cloud to Service Cloud

    OSC_SVC_integrations

    •    Integrate Organization and Contact objects from Service Cloud to Sales Cloud

    SVC_OSC_integrations
    The pre-built integration is built using ICS and provides a few standard field mappings. It can serve as a template and users can update any custom field mappings as needed.
The ICS pre-built integrations also serve as a reference for building other custom integrations between OSC and OSvC using ICS. ICS integrations can be built for integrating more objects, like the Partner and Opportunity objects from OSC. Similarly, flows can be created to integrate the Asset and Incident objects from OSvC. Refer to the Sales Cloud Adapter documentation and the OSvC Adapter documentation here for capabilities that can be used to build custom integrations.

     

    ICS Credential in Sales Cloud

One issue that users could face after following the steps in the pre-built integrations document and activating the ICS integrations is that the Account and Contact subscriptions do not flow from OSC to ICS.
This is usually due to issues with creating the ICS credentials in OSC.
Note that a csfKey entry in the Sales Cloud infrastructure stores the ICS credentials used by Sales Cloud. This key is used to connect to ICS and invoke the subscription-based integrations at runtime.

Refer to this excellent blog post from my colleague Naveen Nahata, which gives simple steps to create the csfKey. The SOA Composer page where the csfKey and its values are updated is shown below.

    001_CSF_Key

Note that OSC R12 and R11 customers can now self-create the csfKey on the SOA Composer app using the steps from Naveen's blog above.
R10 customers, however, should create a support SR for the csfKey creation. Refer to the steps mentioned in the implementation guide document within the R10 pre-built integration download package.

    Invalid Field Errors in OSvC

Further, when testing the integration of Contact or Account from OSC to OSvC, the ICS instances may go into a failed state. ICS shows the instance in a failed state as seen below.

    OSC_SVC_ACCT_Created_Error_2
Tracking the failed instance further may show an error message as seen below.

     

    ErrorMessage
    If the OSC_SVC_ACCOUNT_CREATED integration is ‘TRACE ENABLED’, then the Activity Stream/Diagnostic log file can be downloaded from ICS to further inspect the message payloads flowing in the integration instance.
Searching the logs for request/response message payloads using the ID of the failed ICS instance may reveal that the issue is not really at the createOriginalSystemReference stage of the flow but at the BatchResponse stage from Service Cloud.

     Error:  Invalid Field While processing Organization->ExternalReference(string)

    The response payload from OSvC will look as below

    <nstrgmpr:Create>
    <nstrgmpr:RequestErrorFault xmlns:nstrgmpr="urn:messages.ws.rightnow.com/v1_3">
    <n1:exceptionCode xmlns:nstrgmpr="http://xmlns.oracle.com/cloud/adapter/rightnow/OrganizationCreate_REQUEST/types">INVALID_FIELD</n1:exceptionCode>
    <n1:exceptionMessage xmlns:nstrgmpr="http://xmlns.oracle.com/cloud/adapter/rightnow/OrganizationCreate_REQUEST/types">Invalid Field While processing Organization-&gt;ExternalReference(string).</n1:exceptionMessage>
    </nstrgmpr:RequestErrorFault>
    </nstrgmpr:Create>

    Solution:

Ensure that the credentials specified in the EVENT_NOTIFICATION_MAPI_USERNAME and EVENT_NOTIFICATION_MAPI_PASWD fields in OSvC do not refer to a 'real' OSvC user. OSvC user credentials may not have the rights to update External Reference fields. It is important that a dummy username/password is created in the EVENT_NOTIFICATION_MAPI_* fields in OSvC. Remember to use this credential when configuring the OSvC connection in ICS.

    ICS Credential in Service Cloud

Another crucial part of the OSvC configuration is setting the credentials to use for outgoing requests from OSvC to ICS. This is done by setting the EVENT_NOTIFICATION_SUBSCRIBER_USERNAME and EVENT_NOTIFICATION_SUBSCRIBER_PASSWD parameters in OSvC. This credential is used by OSvC to connect to ICS and execute ICS integrations, and it must point to a 'real' user on ICS. This user should have the "Integration Cloud Service Runtime Role" granted to it.

    References:

    Using Event Handling Framework for Outbound Integration of Oracle Sales Cloud using Integration Cloud Service
    Service Cloud
    Sales Cloud

     
