
Extracting Data from Oracle Business Intelligence 12c Using the BI Publisher REST API


Introduction

This post details a method of extracting data from an Oracle Business Intelligence Enterprise Edition (OBIEE) environment that is integrated with Oracle Business Intelligence Publisher (BIP) 12c. The environment may either be Cloud-Based or On-Premise.

The method utilizes the BI Publisher REST API to extract data from a BIP report. It also uses BIP parameters to filter the result set.

It builds upon the A-Team post Using Oracle BI Publisher to Extract Data From Oracle Sales and ERP Clouds. That post uses SOAP web services to accomplish the same result.

Note: The BI Publisher REST API is a new feature in the 12c version and functions only when accessing a BIP 12c environment.

The steps below depict how to build the extract.

Utilize an existing BI Analysis and BI Publisher Report

This post uses the analysis, filter and BI Publisher report from the post Using Oracle BI Publisher to Extract Data From Oracle Sales and ERP Clouds. Note: This post uses a filter named Analysis rather than the one named level4.

Create a REST Request to Retrieve the BIP Report Definition

This step sends a REST request to retrieve the information necessary to actually run the report; specifically, the parameter name that must be supplied in the report request.

This post uses the Postman API testing utility as noted in the References section at the end of this post.

1. Create a new Collection

The collection is created by clicking the icon shown below:

BIP_POSTMAN_COLLECTION

Note: Enter a name and save the collection.

2. Add URL

Add the URL with this format: http(s)://hostname/xmlpserver/services/rest/v1/reports/path/reportname

For example: http(s)://hostname/xmlpserver/services/rest/v1/reports/custom%2FBIP_DEMO_REPORT

Notes:

The catalog location of the report is Shared Folders/Custom/BIP_DEMO_REPORT. The top-level shared folder in the catalog, Shared Folders, is assumed; the path in the URL starts at the folder below it, i.e. Custom.

The URL must be URL encoded to be sent over the internet. Any space character in the path is replaced with %20 and any slash character, i.e. /, is replaced with %2F.
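If you need to produce the encoded path by hand, a one-line shell command can do it. This is just a convenience sketch and assumes the jq utility is installed:

# Percent-encode the catalog path before appending it to the REST URL.
# The path below is the example from this post; substitute your own report path.
printf '%s' 'Custom/BIP_DEMO_REPORT' | jq -sRr @uri
# Output: Custom%2FBIP_DEMO_REPORT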

The URL for this report is shown below:

BIP_POSTMAN_URL

3. Add Authorization Header

Click on Headers as shown in the figure above.

Enter a key of Authorization.

For the value, use a Base64-encoded username and password prefixed with "Basic " (note the trailing space). To obtain the encoding, this post uses the website at https://www.base64encode.org/

The username and password are shown below separated by a colon character. The encoded result is shown at the bottom.

Base64 Encode Username Password
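The same encoding can also be produced locally from the command line. This is a small sketch; the username and password shown are placeholders for valid BIP credentials:

# Produce the Base64 value for the Authorization header without using a website.
# 'someuser:Welcome1' is a placeholder; substitute a valid username:password pair.
printf '%s' 'someuser:Welcome1' | base64
# Prefix the output with "Basic " when building the Authorization header value.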

The header looks like this. Note: the encoded username and password below is derived from valid credentials.

BIP_POSTMAN_AUTHORIZATION

4. Get Report Definition

Set the request method to GET, and click Send. The response is returned in JSON format as shown below:

Note that the parameter name for this report is the prompt label, Analysis, prefixed with the text saw.param.

BIP_POSTMAN_reptDef
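For reference, the same definition request can be issued outside Postman with a cURL command along these lines. This is a sketch; the hostname and the Base64 credential value are placeholders:

# Retrieve the BIP report definition (hostname and credentials are placeholders).
curl -k \
  -H "Authorization: Basic <base64-encoded username:password>" \
  "https://hostname/xmlpserver/services/rest/v1/reports/custom%2FBIP_DEMO_REPORT"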

Create a REST Request to Run the BIP Report

This creates the request to extract the data.

1. Add an Additional Header

For the additional header, enter a key of Content-Type.

Enter a value of multipart/form-data; boundary="Boundary_1_1153447573_1465550731355". Note: The boundary value entered here in the header is for use in the body below. The boundary text may be any random text not used elsewhere in the request.

Change the command in the upper left to Post.

The two headers are shown below:

BIP_POSTMAN_RUN1

2. Create the Body

The Body tab is opened as shown in the figure above.

The structure of the body is shown below. Note: The boundary text specified in the header forms the first and last line of the structure. All boundary lines must be prefixed with the "--" string, and the closing boundary line must also be suffixed with the "--" string.

BIP_POSTMAN_RUNBODY

The Content-Type: application/json line specifies the request format.

The Content-Disposition: form-data; name="ReportRequest" line specifies that the text following the blank line contains the non-default items and values to be used for the run.

The JSON request text specifies the cache is bypassed and the value “Audio” is passed to the prompt / parameter to filter the results.

3. Send the Request and Review Results

The results are shown below:

BIP_POSTMAN_RESULTS

The result section is separated by system-generated boundary lines.

The XML output is shown above the closing boundary line.

Usage of the REST Request

The REST API request to run a BIP report may now be used anywhere a REST API request can be issued.

An example of the REST API request used in a cURL statement is shown below. cURL is a command-line tool for sending HTTP requests, such as REST calls, from a terminal or script.

BIP_POSTMAN_CURL
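The screenshot above is not reproduced here, but the command looks roughly like the following sketch. The hostname, credentials, and body file are placeholders; report_request.txt is assumed to hold the multipart body constructed in the "Create the Body" step, and the boundary declared in the Content-Type header must match the one used in that body:

# Run the BIP report over REST (all values below are placeholders).
curl -k -X POST \
  -H "Authorization: Basic <base64-encoded username:password>" \
  -H 'Content-Type: multipart/form-data; boundary="Boundary_1_1153447573_1465550731355"' \
  --data-binary @report_request.txt \
  "https://hostname/xmlpserver/services/rest/v1/reports/custom%2FBIP_DEMO_REPORT" \
  -o report_output.txt
# The XML data appears in report_output.txt between the system-generated boundary lines.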

Summary

This post details a simple method of extracting data from an OBIEE environment using BI Publisher 12c and the BI Publisher REST API.

For more BICS and BI best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for BICS.

References

API Testing POSTMAN Download

API Testing using POSTMAN

REST API for Oracle BI Publisher

Get Started with Analyses and Dashboards

Report Designer’s Guide for Oracle Business Intelligence Publisher


ICS Connectivity Agent Advanced Configuration


Oracle’s Integration Cloud Service (ICS) provides a feature that helps with the integration challenge of cloud to ground integrations with resources behind a firewall. This feature is called the ICS Connectivity Agent (additional details about the Agent can be found under New Agent Simplifies Cloud to On-premises Integration). The design of the Connectivity Agent is to provide a safe, simple, and quick setup for ICS to on-premise resources. In many cases this installation and configuration is an almost no-brainer activity. However, there are edge cases and network configurations that make this experience a bit more challenging.

We have encountered the following post-installation challenges with the ICS 16.3.5 Connectivity Agent:

1. Networks containing proxy server with SSL and/or Man In The Middle (MITM) proxy
2. On-premise resources requiring SSL
3. nonProxyHosts required for on-premise resources
4. White list OMCS and upgrade URIs

It’s important to note that future releases of ICS may improve on these configuration challenges. However, some are not related to the product (e.g., network white list) and appropriate actions will need to be coordinated with the on-premise teams (e.g., network administrators).

Import Certificates

One of the more challenging post-configuration activities with the ICS Connectivity Agent is updating the keystore with certificates that the agent needs to trust. Since the agent is a lightweight, single-server WebLogic installation, there are no web consoles available to help with the certificate import. If you investigate this topic on the internet you will eventually end up with details on using the Java keytool and WebLogic WLST to accomplish this task. Instead of doing all this research, I am including a set of scripts (bash and WLST) that can be used to expedite the process. The scripts consist of 4 files, where each file contains a header that provides details on how the script works and its role in the process. Once downloaded, please review these headers to make yourself familiar with what is required and how they work together.
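Under the covers, the work boils down to a keytool import per certificate followed by a WLST importKeyStore into the agent domain. The following is only a rough sketch of the keytool half of that process, with placeholder file names and the default "changeit" password, to show the kind of command createKeystore.sh automates:

# Import a single trusted certificate into a JKS keystore (repeated per certificate).
# Alias, certificate file, keystore path, and password are placeholders.
keytool -importcert -trustcacerts \
  -alias main-us \
  -file certificates/main-us.oracle.com.crt \
  -keystore certificates/agentcerts.jks \
  -storepass changeit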

The following is a step-by-step example on using these scripts:

1. Download the scripts archive on the machine where the Connectivity Agent is running
Scripts: importToAgent.tar
2. Extract the scripts archive into a directory. For example:
[oracle@icsagent importToAgent]$ tar xvf importToAgent.tar.gz
createKeystore.sh
importToAgentEnv.sh
importToAgent.sh
importToAgent.py
3. Update the importToAgentEnv.sh to reflect your agent environment
4. Create a subdirectory that will be used to hold all the certificates that will need to be imported to the agent keystore:
[oracle@icsagent importToAgent]$ mkdir certificates
5. Download or copy all certificates in the chain to the directory created in the previous step:
[oracle@icsagent importToAgent]$ ls -l certificates/
total 12
-rwxr-x---. 1 oracle oinstall 1900 Nov 1 14:55 intermediate-SymantecClass3SecureServerCA-G4.crt
-rwxr-x---. 1 oracle oinstall 1810 Nov 1 14:55 main-us.oracle.com.crt
-rwxr-x---. 1 oracle oinstall 1760 Nov 1 14:55 root-VeriSignClass3PublicPrimaryCertificationAuthority-G5.crt
NOTE: You can use your browser to export the certificates if you do not have them available elsewhere. Simply put the secured URL in the browser and then access the certificates from the "lock" icon:

AdvancedAgentConfig-002

6. Execute the createKeystore.sh:
[oracle@icsagent importToAgent]$ bash createKeystore.sh -cd=./certificates -cp=*.crt
Certificates will be added to ./certificates/agentcerts.jks
Adding certificate intermediate-SymantecClass3SecureServerCA-G4.crt
Certificate was added to keystore

Adding certificate main-us.oracle.com.crt
Certificate was added to keystore

Adding certificate root-VeriSignClass3PublicPrimaryCertificationAuthority-G5.crt
Certificate already exists in system-wide CA keystore under alias
Do you still want to add it to your own keystore? [no]: yes
Certificate was added to keystore

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 3 entries

main-us, Nov 1, 2016, trustedCertEntry,
Certificate fingerprint (SHA1): 9D:61:69:38:4C:54:AC:44:5C:22:90:E1:8F:80:8F:85:43:9E:8D:7C
intermediate-symantecclass3secureserverca-g4, Nov 1, 2016, trustedCertEntry,
Certificate fingerprint (SHA1): FF:67:36:7C:5C:D4:DE:4A:E1:8B:CC:E1:D7:0F:DA:BD:7C:86:61:35
root-verisignclass3publicprimarycertificationauthority-g5, Nov 1, 2016, trustedCertEntry,
Certificate fingerprint (SHA1): 4E:B6:D5:78:49:9B:1C:CF:5F:58:1E:AD:56:BE:3D:9B:67:44:A5:E5

Keystore ready for connectivity agent import: ./certificates/agentcerts.jks

NOTE: This script has created a file called importToAgent.ini that contains details that will be used by the importToAgent.py WLST script. Here’s an example of what it looks like:

[oracle@icsagent importToAgent]$ cat importToAgent.ini
[ImportKeyStore]
appStripe: system
keystoreName: trust
keyAliases: intermediate-SymantecClass3SecureServerCA-G4,main-us,root-VeriSignClass3PublicPrimaryCertificationAuthority-G5
keyPasswords: changeit,changeit,changeit
keystorePassword: changeit
keystorePermission: true
keystoreType: JKS
keystoreFile: ./certificates/agentcerts.jks
7. Make sure your agent server is running and execute the importToAgent.sh:
[oracle@icsagent importToAgent]$ bash importToAgent.sh -au=weblogic -ap=welcome1 -ah=localhost -aport=7001

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

Using the following for the importKeyStore:
WebLogic URI = t3://localhost:7001
WebLogic User = weblogic
WebLogic Password = welcome1
appStripe = system
keystoreName = trust
keyAliases = intermediate-SymantecClass3SecureServerCA-G4,main-us,root-VeriSignClass3PublicPrimaryCertificationAuthority-G5
keyPasswords = changeit,changeit,changeit
keystorePassword = changeit
keystorePermission = true
keystoreType = JKS
keystoreFile = ./certificates/agentcerts.jks

Connecting to t3://localhost:7001 with userid weblogic ...
Successfully connected to Admin Server "AdminServer" that belongs to domain "agent-domain".

Warning: An insecure protocol was used to connect to the
server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.

Location changed to serverRuntime tree. This is a read-only tree with ServerRuntimeMBean as the root.
For more help, use help('serverRuntime')

Location changed to domainRuntime tree. This is a read-only tree with DomainMBean as the root.
For more help, use help('domainRuntime')

Keystore imported. Check the logs if any entry was skipped.

At this point you will have imported the certificates into the keystore of the running Connectivity Agent. I always bounce the agent server to make sure it starts cleanly and everything is picked up fresh.

Update http.nonProxyHosts

If your network contains a proxy server, you will want to make sure that any on-premise resource the agent will be connecting to is on the http.nonProxyHosts list. This way the agent knows not to use the proxy when trying to connect to an on-premise endpoint:

AdvancedAgentConfig-003

To update this Java option, open the $AGENT_DOMAIN/bin/setDomainEnv.sh and search for nonProxyHosts. Then add the appropriate host names to the list. For example:

Before

export JAVA_PROPERTIES="${JAVA_PROPERTIES} -Dhttp.nonProxyHosts=localhost|127.0.0.1 -Dweblogic.security.SSL.ignoreHostnameVerification=true -Djavax.net.ssl.trustStoreType=kss -Djavax.net.ssl.trustStore=kss://system/trust"

After

export JAVA_PROPERTIES="${JAVA_PROPERTIES} -Dhttp.nonProxyHosts=localhost|127.0.0.1|*.oracle.com -Dweblogic.security.SSL.ignoreHostnameVerification=true -Djavax.net.ssl.trustStoreType=kss -Djavax.net.ssl.trustStore=kss://system/trust"

Once this update has been done, you will need to restart your agent server for the update to be picked up.
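A restart can be as simple as the following sketch, assuming a standard single-server WebLogic agent domain; your paths and start/stop procedure may differ:

# Stop and start the agent server so the new Java option is picked up.
# $AGENT_DOMAIN is assumed to point at the Connectivity Agent domain directory.
$AGENT_DOMAIN/bin/stopWebLogic.sh
nohup $AGENT_DOMAIN/bin/startWebLogic.sh > $AGENT_DOMAIN/agent-server.out 2>&1 &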

Add Agent URIs to Network White List

The Connectivity Agent contains two URIs that it will reach out to. The primary one is Oracle Message Cloud Service (OMCS), which is how ICS communicates to the on-premise agent. The other one is for things like agent upgrades. These two URIs must be added to the network white list or the agent will not be able to receive requests from ICS. The URIs are located in the following Connectivity Agent file:

$AGENT_DOMAIN/agent/config/CpiAgent.properties

The contents of this file will look something like the following (with the URIs circled):

AdvancedAgentConfig-001
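If you prefer not to hunt through the file visually, a quick grep will surface the URIs that need to be white listed. This assumes $AGENT_DOMAIN points at the agent domain directory:

# List any URIs defined in the agent properties file so they can be added to the white list.
grep -Ei 'https?://' $AGENT_DOMAIN/agent/config/CpiAgent.properties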

Summary

Please follow the official online documentation for the ICS Connectivity Agent install. If you run into things like handshake errors when the agent starts or attempts to connect to an on-premise resource, the steps above are a good starting point for resolving the issue. This blog most likely does not cover all edge cases, so if you encounter something outside of what is covered here, I would like to hear about it.

Oracle GoldenGate: How to Configure On-Premise to GoldenGate Cloud Services (GGCS) Replication with Corente VPN


Introduction

This document will walk you through how to configure Oracle GoldenGate replication between On-Premise to GoldenGate Cloud Service (GGCS) on Oracle Public Cloud (OPC) via Virtual Private Network (VPN) using Corente Services Gateway (CSG).

The high level steps for this replication configuration are as follows:

  • Creation of SSH Public/Private Key Files
  • Provisioning of Database Cloud Service (DBCS) which is a pre-requisite of GGCS
  • Provisioning of GoldenGate Cloud Service (GGCS)
  • On-Premise Corente Services Gateway Configuration and Setup
  • Provisioning of Compute Instance for OPC Corente Services Gateway
  • On-Premise and OPC Corente VPN Tunnel configuration
  • GGCS VPN Tunnel Configuration via Generic Routing Encapsulation (GRE) protocol
  • On-Premise and GGCS GoldenGate Sample Replication Configuration

Note: Provisioning Resources in this article requires Oracle Cloud and Corente VPN credentials. If you don’t have one, please contact your Oracle Sales Representative.

The following assumptions have been made during the writing of this article:

  • The reader has a general understanding of Windows and Unix platforms.
  • The reader has basic knowledge of Oracle GoldenGate products and concepts.
  • The reader has a general understanding of Cloud Computing Principles
  • The reader has basic knowledge of Oracle Cloud Services
  • The reader has a general understanding of Network Computing Principles

Main Article

The GoldenGate Cloud Service (GGCS) is a cloud-based real-time data integration and replication service. It provides seamless and easy data movement from various on-premises relational databases to databases in the cloud, with sub-second latency, while maintaining data consistency and offering fault tolerance and resiliency.

GoldenGate Cloud Service (GGCS) Architecture Diagram:

GGCS_Architecture_v2

In a typical implementation of On-Premise to GGCS, the connectivity is accomplished through SSH, since this is the only port opened by default on the cloud. The On-Premise server communicates directly with the GGCS server through a SOCKS proxy.

However, in cases where security policy dictates otherwise, or the client does not want to use SSH, a VPN connection between On-Premise and OPC can be used as an alternative. Currently, GGCS is certified with the Corente Services Gateway for VPN connectivity.

Corente VPN Service Architecture Diagram:

corente_architecture_v4

GGCS Corente VPN Deployment Architecture diagram depicted in this article:

ggcs_corente_architecture_v2

GoldenGate Connectivity Flow:

  • On-Premise Network to OPC Network: GGCS Instance can be reached via GRE IP address 172.16.201.3
  • OPC Network to On-Premise Network: On-Premise OGG VM Server can be reached via IP address 192.168.201.51

The complete document can be found on the Oracle Support site under the document ID: 2198461.1

 

IDCS Audit Reports using Visual Analyzer


Introduction

This article expands on topics of integration with Oracle's cloud identity management service, Identity Cloud Service (IDCS). IDCS delivers core essentials around identity and access management through a multi-tenant cloud platform. As part of the IDCS framework, audit events are captured for all significant events, changes, and actions, and are sent to a database table. I previously wrote an article on how to get the IDCS audit events; in this article I am going to expand on how to leverage those audit events to create some nice reports using a tool called Visual Analyzer (VA) (http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/bics/va/va.html – section 1), which is part of Oracle's BICS (Business Intelligence Cloud Service). With VA anyone can quickly create useful audit reports in a visual way for a variety of reasons.

 

Overview of Oracle Visual Analyzer

Visual Analyzer is a web-based tool that comes with BICS and provides a way to explore and analyze data visually. It provides self-service analysis versus the more robust grand daddy, BICS data modeling, which provides a way to organize and secure data in a complex business form (more on that here: http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/bics/DataModeler/bimodeler.html). However, in this article I want to cover something much simpler, where a Business Analyst or Info Security person can create some quick reports using the IDCS audit event data and VA. The graphic below shows a couple of simple reports created in minutes using IDCS audit event data. The pie chart on the left shows a percentage breakout of the various web browsers and versions users are using. The right pie chart shows the percentage split between the two user accounts in my test system. Because IDCS provides a large schema of audit events, the possibilities for using the data with VA are many!

Visual Analyzer

First Extract IDCS Audit Data into BICS

Before we can create interesting reports using VA, we must first extract the audit data from IDCS. Since IDCS is a cloud service that exposes its data through powerful REST APIs, the choices come down to what BICS offers for integrating with IDCS. The two best options to get data from IDCS into BICS for the purposes of VA are the following.

 

  • OPTION 1 – BICS PL/SQL Procedure calling APEX Web Service that calls IDCS REST API
    This process uses a PL/SQL procedure that calls an APEX REST web service against the IDCS REST API, receives a JSON (JavaScript Object Notation) response, and parses the data to transform and insert it into a BICS database table.  Once the data is in a BICS table, a person who works with BICS can create some data models and finally expose the IDCS audit data as a data source in a BICS catalog for VA to consume.
  • OPTION 2 –  Use Postman with online JSON-to-XLSX Excel conversion and import into BICS
    Postman is a very useful REST client tool, used for testing and development, that allows you to send REST requests to the IDCS REST API.  The response in Postman is JSON output that can be saved as a file. Using Postman you can easily provide filters so that your search returns only specific audit data, which saves time and reduces the amount of data you need to work with.  The goal in this approach is to take the JSON output file, convert it to an Excel file using an online tool, and finally import the Excel file into BICS as a Data Set to be used by VA.

 

Using Option 2 to Import Data into BICS

Option 1 mentioned in the previous section is a great way to run extractions of audit data on a daily, weekly, or monthly basis, but for one-off, simple audit reports option 2 is a great way to go. So I am going to explain option 2, both to keep things simple for this article and as another way your Business Analyst, Info Security, or any other interested person can run one-off reports with the latest data from IDCS. I am going to start by introducing you to a great tool called Postman. Postman is a great REST client tool that can be used to send REST requests over HTTP to an API like the IDCS REST API.

To keep things simple I want to illustrate a simple use case where we will create some SSO Audit Reports from successful and failed login and logout audit records.  The audit record itself has several attributes per the IDCS schema, but there is one attribute of interest in our case named “eventId”.  When a login or logout event happens in IDCS, an audit record is created and has an attribute eventId that will hold a value that starts with “sso”; i.e. sso.session.create.success, sso.authentication.failure, etc. We can leverage this data to create our reports in VA.  The following are steps to start by getting audit data from IDCS to importing that data into BICS and using VA to create the report.

 

STEP 1 – Use Postman to send a REST search to extract SSO events

As an FYI, this article is not going to provide a step-by-step guide to using Postman, as there are other articles on that. What I am going to do instead is focus on providing some example IDCS parameters tailored for our use case in order to extract successful or failed login and logout events. The following endpoint and filters are what we will use to extract login and logout events from IDCS.

 

 

/admin/v1/AuditEvents?filter=eventId sw "sso"&sortBy=timestamp&sortOrder=descending&count=1000

 

 

The above value "/admin/v1/AuditEvents" is the IDCS REST endpoint used to query audit records. The "filter" and its optional parameters are then used to query specific records. The following table breaks out each parameter, its value, and an explanation of what it does.

 

Parameter   Value        Description
eventId     sw "sso"     eventId values that start with "sso"
sortBy      timestamp    Sort by the timestamp attribute
sortOrder   descending   Sort records by timestamp in descending order
count       1000         Limit the number of records returned to 1000

 

Using Postman, enter the endpoint and filter to request the JSON response that returns the SSO events. Below is a Postman screenshot to help illustrate what you should see when you configure the same filter with the above parameters. You can see the data returned is a JSON response; in the next step I will show how to save that JSON response to a file.

Postman
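For reference, the equivalent query can be issued with cURL instead of Postman. This is a sketch; the tenant host and the bearer token are placeholders, and the filter value is URL encoded for use on the command line:

# Query IDCS audit events whose eventId starts with "sso" (placeholders throughout).
curl -k \
  -H "Authorization: Bearer <access token>" \
  "https://<IDCS_HOST>/admin/v1/AuditEvents?filter=eventId%20sw%20%22sso%22&sortBy=timestamp&sortOrder=descending&count=1000"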

STEP 2 – Save the JSON response

In Postman, send the same REST request from the previous step, but this time click the drop-down arrow next to the Send button and an option "Send and Download" should appear. Select Send and Download and this time you will get an option to save the response to a file; use the file name "idcs_sso_auditevents.json".

Send and Save

STEP 3 – Convert the JSON file to an Excel XLSX file.

Using an Internet web browser go to http://www.json-xls.com/json2xls, and follow these sub-steps.

3.1 – Select the tab named File

3.2 – Click the Choose File button and select the JSON file created in the previous step.

3.3 – Select the Destination Format XlsX (Unless you have an older version of Excel).

3.4 – Select the Layout Auto

3.5 – Check “I’m not a robot” which will make you verify you are not a robot.

3.6 – Finally click the Both button. A file should save as data.xlsx

 

NOTE: It is important to select "Both", otherwise the output file will seem corrupt to Excel when you open it in the next step.

 

JSON2XLS

STEP 4 – Remove unnecessary worksheets from the Excel XLSX file.

Before we import the Excel file into BICS as a Data Set, all the worksheets except for "Resources" need to be removed.  In the previous step, when the option Both was selected under Select Rendering View & Submit, the tool read the schema of the JSON and determined how to break the JSON file out into worksheets.  If any other option had been chosen, like Plain or Hierarchy, importing the Excel file into BICS would have caused complaints that the Excel file is not valid.  By selecting Both, json-xls.com does a good job of separating the JSON file into its respective Excel worksheets based on how it reads the schema, and puts all the necessary data we need on the Resources worksheet.  This saves us a lot of time and provides better data to use when creating an audit report in VA.  To complete this step, delete each worksheet except for Resources by right-clicking on the worksheet and selecting the delete option.

Delete Worksheet

Remove Excel Worksheets

 

STEP 5 – Save As the Excel file to a new file

Once all the worksheets except for Resources are removed it should look similar to the following screenshot.  Now Save As the Excel file to a new Excel file called idcs_sso_auditevents.xlsx.

Excel Resource Worksheet

 

STEP 6 – Login to BICS and then click on the VA Projects icon.

Launch Visual Analyzer

 

STEP 7 –  In the VA Projects click the Create Project button.

STEP 8 –  Now click the Create New Data Source button.

STEP 9 –  From the Create New Data Source window click From a File.

STEP 10 –  From the File Selector pick the idcs_sso_auditevents.xlsx file created earlier.

STEP 11 –  Change the “id” column from attribute to Measure and Count Distinct.

This allows you to provide calculations to reports that I will cover later.

Change ID column to Measure

 

STEP 12 –  Click the Add to Project button to complete the upload.

STEP 13 –  Click the Save Project icon in the menu bar, which is the square with a down arrow. Give the project the name "My IDCS Audit Reports".

 

Creating Sample Visual Analysis Reports

The two example reports we are going to create will use the IDCS schema attributes listed in the following table.  Just keep in mind, as you expand beyond this article and think of other reports you want to create, that there are many more IDCS schema attributes, not only for audit events but for many other resources as well.

 

IDCS Schema Attribute Meaningful Description
actorName Username
id Unique ID; set in VA as a Count Distinct to sum totals
eventId Value that describes the event. For our purposes the following values will be used:
  • sso.authentication.failure = Login Failure
  • sso.session.create.success = Login Success
  • sso.session.delete.success = Logout Success

 

 

Report 1: Single Sign-On Count Pie Chart

This report is pretty simple and shows the count of SSO login successes, login failures, and logout successes.  The key to knowing what reports you can create is understanding the meaning of some of the IDCS SCIM schema attributes.  The previous section left off with importing the data into BICS; now we continue to use that data to create an SSO Count Pie Chart.

STEP 1 –  If not already logged in, log in to BICS and go to the VA Projects, then click on the My IDCS Audit Reports project.

STEP 2 –  Expand the idcs_sso_events data on the left and drag the eventId data element to the large pane on the right; it should automatically put the eventId in the Rows section.

STEP 3 – Now drag the id data element into the Values (Slice) pane. It should now look like the graphic below and immediately we see some useful counts of login and logout successes and failures.

Layout SSO Pie Chart

STEP 4 –  Now, to turn this data into a Pie Chart, change the Pivot option in the middle column to Pie.  The Pie Chart appears in monochromatic blue.

Define Pie Chart

STEP 5 –  Now let's color the Pie Chart by dragging the eventId from our data set list to the Color option in the center column.  There, easy as Pie!  Don't worry, the puns are free.

SSO Pie Chart

STEP 6 –  Now all we need to do is Save our project by clicking on the icon in the menu that looks like a square with a down arrow so we can continue to use this project to build another report.

Save Pie Chart

This completes our first report.  Hopefully this gives an idea of what is possible with IDCS audit data and Oracle Visual Analyzer.  I want to emphasize that it is important to understand the IDCS schema attributes, the values in the data, and what they mean, which allows us to quickly generate some pretty useful data.

Report 2: SSO Events by Username Pivot Table

The report we are going to create now is a Pivot Table that lists all users and the count of each of their SSO events.  For example, how many times did user tim.melander log in successfully, fail to log in, or log out successfully?

 

STEP 1 –  If not already logged in, log in to BICS and go to the VA Projects, then click on My IDCS Audit Reports to open the project.

STEP 2 –  Start by creating a new Visualization by clicking the top right menu icon Add Visualization.  Then make sure the Visualization is changed to Pivot.

Add new Graphic Chart

STEP 3 –  Let’s build the Pivot table by first dragging the actorName into the Visualization and dropping it into the Rows.

STEP 4 –  Now drag the eventId into the Columns section.

STEP 5 –  Now drag the id first into the Values section, then grab the id again and drag a second one onto the Columns section.  Your Visualization should now look like the following.

Define SSO Count Chart

STEP 6 –  Now Save the project by clicking on the Save Project icon.  Then to see more of the new Pivot Chart click the Maximize button with the double arrow in the top right corner of the Pivot Chart.

SSO Table Chart

From the graphic above you can now see that this Pivot Table provides some useful data on how many successful logins, failed logins, and successful logouts are counted for each User.

Summary

We learned to extract specific audit data from IDCS using Postman and the IDCS REST API, which makes it pretty simple.  Then we imported that data into BICS and used it in Visual Analyzer to quickly create a couple of reports.  I hope this provides a start to understanding how easy and powerful IDCS and BICS Visual Analyzer are when used together to create useful reports.  This should give you a starting point, and hopefully push you to go even further in expanding the data you pull from IDCS and using VA to create some amazing reports.

Testing Oracle ATG Commerce with ATG Dust


 

Introduction

ATG Dust is a Java unit testing framework based on JUnit meant for use with Oracle ATG Commerce.

 

How ATG Dust works

In a non-ATG application, when you create a unit test against a class, the test often executes by instantiating the class directly and calling methods inside it. Code written for Oracle ATG Commerce works a bit differently than a standalone class, or set of classes.

In Oracle ATG Commerce, your custom class is often called a component. ATG components are loaded by, and run inside of Nucleus. You can learn more about Nucleus in the ATG product manuals, but for the purposes of this article, think of Nucleus as a container that loads your class into the ATG application. When you want to access the class, you do so through Nucleus.

ATG Dust allows your JUnit test to access your class through Nucleus. If you attempt to write tests directly against your class, you will likely find yourself creating many mock objects, and having to play games to simulate everything Nucleus provides. By running the unit test through the Dust framework, you are actually starting an instance of Nucleus, and executing your test cases against a running instance of Oracle ATG Commerce.

Since you have a running instance of Nucleus, you can run tests against the real, live components your actual ATG application will use. Access to pipelines, repositories, real shopping carts – they are all live and running. There is no need to mock all the other components your code interacts with or depends on.

Using ATG Dust

The ATG Dust testing framework was updated and moved to the Oracle Technology Network in mid-2016.

Support for Oracle ATG Commerce 11.x was added, as well as new features to allow for faster test case development.

Example tests are included to help get you started on writing your own test cases.

The ATG Dust code, examples, and documentation can be found on the Oracle Commerce sample code section of the Oracle Technology Network:

http://www.oracle.com/technetwork/indexes/samplecode/commerce-samples-2766867.html

 

The samples provided demonstrate running tests with either Maven or Apache Ant.

Unit tests using the ATG Dust framework can be run locally, automatically with continuous integration tools like Hudson and Jenkins, and in Oracle Developer Cloud.

 

 

Using OpenID Connect to delegate authentication to Oracle Identity Cloud Service


In this post, I will describe the process of using the Oracle Identity Cloud Service to provide authentication for a custom web application, using the OpenID Connect protocol. I will focus on the sequence of calls between the application and IDCS in order to focus on building an understanding of how OpenID Connect actually works.

The problem we are trying to solve

Before diving into any specifics, let’s take a minute to talk about OpenID Connect and understand why we might want to use it at all. Have a read through the OpenID Connect 1.0 specification before continuing. In a nutshell, OpenID Connect (OIDC) is a “simple identity layer on top of the OAuth 2.0 protocol”. While OAuth itself is often (mis)used to allow for the externalisation or delegation of authentication, it is, by design, a standard that is wholly concerned with authorisation. While it’s generally true that you need to be authenticated before authorisation makes sense, there was never any formalised way to do this within OAuth itself. OIDC is the layer that adds standardised support for authentication and identity in a way that is fully compatible with and completely built on the OAuth 2.0 standard.

We’re going to look at a very simple example of using OIDC to provide authentication for a custom web app. We’ll be using the Authorisation Code flow here, generally a more secure flow because the user agent (i.e. the web browser) never has direct access to any of the tokens involved. The primary reasons for incorporating this functionality into an application are twofold; firstly, we may want to reduce complexity for the application developers, by removing the need to worry about authentication, password storage, user registration and the like. They can simply use an existing cloud service to handle that part for them. Secondly, and perhaps more immediately valuable, is that by doing this, we can participate in single sign-on with other applications that are also integrated with Oracle Identity Cloud Service.

Now, at this point, I do need to point out that OIDC is not Web Access Management; it does not play the same role as a WAM product like Oracle Access Manager or CA SiteMinder. There is no agent here, doing perimeter authentication and managing user sessions on behalf of your app. Your app needs to explicitly invoke an OIDC flow and explicitly handle session lifecycle on its own, dependent on the identity information that it receives back from the OpenID Connect Provider. In fact, you should probably make a point to read Chris Johnson's excellent post on why SAML is not the same as WAM, because in a lot of ways, OIDC is very similar to SAML in terms of the problems it attempts to solve. OIDC, though, is lightweight and REST/JSON-based, rather than the heavier XML-based SAML protocol.

Overview of the process

Here’s a simple list of steps explaining what our app needs to do (at run time) in order to establish a session and obtain user profile information, using the OIDC Authorization Code flow. I need to point out that you are very unlikely to ever have to implement these steps “from scratch”, since there are many proven, tested OpenID Connect client libraries available for virtually any development platform or language. Treat the below as instructional, but really, don’t try to roll your own in the real world.

1. When the app needs to authenticate the user, it generates a link to the OAuth2 Authorisation endpoint on the OIDC Provider (which is Oracle IDCS). This link includes the "openid" scope, the "code" response type and a local callback URL to which the Provider will redirect the browser once the authentication has been successful.

2. The user clicks the link, which results in an authentication challenge from Oracle IDCS. The user enters their credentials at the IDCS login screen and these are validated. If they are correct, an IDCS session is created for the user (represented by a browser cookie). Note that if the user already has an IDCS session due to a prior authentication, they will not be re-challenged, but will move on to the next step.

3. IDCS generates an authorisation code. This is a short, opaque string that can safely be passed as part of an HTTP payload, since it is not valuable without the corresponding Application credentials. IDCS redirects the user back to the callback URL specified in step 1, appending the code to the URL string.

4. The app extracts the authorisation code from the HTTP payload. It then makes a REST call to the OAuth2 Token endpoint on IDCS. This call is authenticated by passing the app’s client ID and secret in a Basic Auth header. The body of the call includes the authorisation code.

5. Oracle IDCS authenticates the app using the client ID and secret and validates that authorisation code is valid and was issued for that app. It then returns a JSON payload containing both an Identity Token and an Access Token. Both of these tokens conform to the JSON Web Token (JWT) standard.

6. The app uses the public IDCS signing certificate to validate the Identity Token (which contains a signature). This token, once decoded, contains a number of claims that tell the app about the authentication event that took place. These include the subject, the time of authentication, the session expiry time, level of authentication and so on.

7. The app makes a REST call to the UserInfo endpoint on Oracle IDCS. This call is authenticated by passing the Access Token obtained in step 5 as a Bearer Auth header.

8. IDCS validates the provided Access Token, which is specific to the user that authenticated in step 2. The Access Token will include a number of scopes, and based on these scopes, IDCS will return the appropriate user profile information back to the app in a JSON response.

9. Now it’s up to the app! It has all the identity and user profile information it needs in order to create a local user account (if this is a first-time login) and establish a local session for the user.

Required Setup

The first thing you need is to register your app as an OAuth Client with IDCS. This will allow you to obtain the Client ID and Secret you need. Make sure to select “Web Application” as the type.

Register OAuth

Ensure that you select the “Authorization Code” grant type and specify a valid callback URL pointing to an endpoint on your application that can receive and process the code.

Configure OAuth

Finally, take note of your Client ID and Secret. Your app will need to store these securely in an internal credential store and use them when making calls to IDCS.

Client Credentials

The other bit of setup that’s required is for you to obtain the signing certificate that IDCS uses to sign JSON Web Tokens. Your code or JWT library will need to use this certificate to validate the signature of the Identity and Access Tokens that IDCS generates. You can obtain this certificate by issuing a GET request against the following endpoint: <IDCS_HOST>/admin/v1/SigningCert/jwk. You will need to pass an appropriately-scoped Bearer JWT in the Authorization header in order to obtain the certificate.
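A minimal sketch of that call, assuming you have already obtained a suitably-scoped access token out of band:

# Fetch the IDCS token-signing certificate in JWK format; the token value is a placeholder.
curl -k \
  -H "Authorization: Bearer <admin-scoped access token>" \
  "https://<IDCS_HOST>/admin/v1/SigningCert/jwk"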

Authenticating the User

Now that we have the setup completed, we can look at the details. There are three main steps in the OIDC login flow and the first is to send the user to the OIDC Provider for authentication. This is done by redirecting the client browser to a particular URL on the IDCS server and passing parameters, as defined below. Note that this redirect can be accomplished in a number of different ways: the user can explicitly click a link on the app homepage to start the login flow, or the app could perhaps use an intercepting session filter to generate an automatic redirect when it detects that the browser does not have a valid application session.

In either case, the browser should be redirected to the following IDCS URL:
https://<IDCS_HOST>/oauth2/v1/authorize

The following table explain the URL parameters that must be sent when constructing the authentication link:

client_id The Client ID for this application. IDCS uses this ID to tie the eventual authorisation code to the application that initiated the OIDC flow. No other application will be able to use the code. Note that the Client Secret is never passed in a URL string.
response_type This must be specified as “code” since we’re using the authorisation code flow.
redirect_uri This is the URL-encoded address to which the client must be sent back once the authorisation code has been issued. Note that this must exactly match the “Redirect URL” you specified when you registered your application with IDCS. Once again, this is a safety mechanism to ensure that the code is only sent to your application endpoint, and not elsewhere.
scope This must be a space-delimited string of the required scopes. The only mandatory value here is the "openid" scope, which is required in order for IDCS to generate the ID token. You can add other scopes as well, such as "profile", "email", "phone" and "groups", depending on how much information about the user you need to retrieve later on. This point will become a bit clearer later, when we call the user info endpoint.
state This is an optional parameter that you can use to maintain state within your application once the authentication redirect has taken place and protect against certain attacks. The OIDC spec defines this as an "Opaque value used to maintain state between the request and the callback. Typically, Cross-Site Request Forgery (CSRF, XSRF) mitigation is done by cryptographically binding the value of this parameter with a browser cookie." Whatever value your application passes in here will be returned by IDCS along with the authorisation code.
nonce Another optional parameter that you can use to protect against replay attacks. You should generate a strong random string value and associate it with the user session inside your application before passing it to IDCS. IDCS will include the nonce value inside the Identity Token, allowing your application to perform the necessary validation.

Taking the above into account, we redirect the user to the following URL:

https://tenant1.idcs.internal.oracle.com:8943/oauth2/v1/authorize?client_id=0cbf3bc1a3524d47af286f166bb03ef6&response_type=code
&redirect_uri=https%3A%2F%2Fmyapp.oracle.com%2FauthCode
&scope=openid%20profile%20email&state=102345&nonce=AHFG45asd450

The user will need to authenticate:

IDCS Login

And is then redirected back to the following URL:

https://myapp.oracle.com/authCode?
code=AQIDBAUnE761LZohkZk7YCeeUHsC8m_cTK1Ck5VIkyshi7NXcAVX9thOULxfzcAzNMWLptvzo0hpVdqna4kDiZbNbdEXMTEgRU5DUllQVElPTl9LRVkxNCB7djF9NCA=
&state=102345

This completes the first step in the process and we can now move on.

Exchanging the Code for Tokens

Now that the user has been sent back to the application with an authorisation code, we need to execute the second leg of the flow and perform a call back to the OIDC Provider to validate that code and receive the tokens we need. First of all, though, we should examine the "state" parameter value that has been sent back in the redirect to ensure that it matches the one we generated when redirecting the user. Once we've done that, our app needs to extract the "code" parameter value from the URL string and pass this to IDCS via a back-channel REST call. I say "back channel" because this is a direct call from the application to IDCS that does not involve the user's browser in the flow. This point is key, because it ensures security of the application's client secret and also adds a layer of "hijack prevention", ensuring that any party that may intercept the browser redirect and obtain the authorisation code is not able to use it.

There are a few things we need to do here. First, we use the application’s Client ID and Secret to form a basic authorisation header. We do this by concatenating them together with a colon delimiter, then Base64 encoding the whole thing. Thus 0cbf3bc1a3524d47af286f166bb03ef6:0359486e-d5ac-4339-96ca-96686a9cc223 becomes MGNiZjNiYzFhMzUyNGQ0N2FmMjg2ZjE2NmJiMDNlZjY6MDM1OTQ4NmUtZDVhYy00MzM5LTk2Y2EtOTY2ODZhOWNjMjIz and it’s this value that we pass as our basic authorisation header. We POST to the IDCS token endpoint, which is at https://<IDCS_HOST>/oauth2/v1/token, passing the following values in the request body:

grant_type This is set to “authorization_code”
code The value of the authorisation code we received.

Here’s an example, using CURL:

curl -k --header "Authorization: Basic MGNiZjNiYzFhMzUyNGQ0N2FmMjg2ZjE2NmJiMDNlZjY6MDM1OTQ4NmUtZDVhYy00MzM5LTk2Y2EtOTY2ODZhOWNjMjIz" -d "grant_type=authorization_code&code=AQIDBAUnE761LZohkZk7YCeeUHsC8m_cTK1Ck5VIkyshi7NXcAVX9thOULxfzcAzNMWLptvzo0hpVdqna4kDiZbNbdEXMTEgRU5DUllQVElPTl9LRVkxNCB7djF9NCA=" https://tenant1.idcs.internal.oracle.com:8943/oauth2/v1/token

What we get back, assuming our code is valid and has not yet expired, is a JSON response similar to the following. Note that I’ve shortened the token values to make the output more readable:

{"access_token":"eyJ4NXQjUzI1NiI6Ijg1a3E1....3Sf4u9k",
"token_type":"Bearer",
"expires_in":3600,
"id_token":"eyJ4NXQjUzI1NiI6Ijg1a3E1MFVBV...a4B2iVpjbohdSY"}

Note that we receive two tokens back – both of them are standard JSON Web Tokens (JWT’s). The ID Token is the one that we need to use first, since it’s this token that tells us about the authentication event that has taken place. We should use a JWT library within our application to validate the signature of the token we receive, using the certificate obtained earlier and then we should inspect the body of the token. I’ve used an online tool to parse the ID Token and this is what it contains in the payload:

{
"user_tz": "America/Chicago",
"sub": "rob.otto",
"user_locale": "en",
"user_displayname": "Rob Otto",
"csr": "false",
"sub_mappingattr": "userName",
"iss": "https://identity.oraclecloud.com/",
"tok_type": "IT",
"user_tenantname": "tenant1",
"nonce": "AHFG45asd450",
"sid": "f61d5f79-1dad-40b1-ae7c-f04e9453ad87",
"aud": [
"https://identity.oraclecloud.com/",
"0cbf3bc1a3524d47af286f166bb03ef6"
],
"user_id": "9f12e029a3434918ad8d096ebc6f96ba",
"authn_strength": "2",
"auth_time": "1478172094",
"session_exp": 1478200894,
"user_lang": "en",
"exp": 1478200894,
"iat": 1478179463,
"tenant": "tenant1",
"jti": "0aeb7369-33d1-4050-bae2-1e57596c9b2c"
}
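As an alternative to pasting tokens into an online tool, the payload segment of a JWT can be decoded locally with a few shell commands. This is a sketch that assumes the raw ID token is in the ID_TOKEN environment variable; it only decodes the payload and does not verify the signature:

# Decode the payload (second dot-separated segment) of a JWT held in $ID_TOKEN.
PAYLOAD=$(printf '%s' "$ID_TOKEN" | cut -d. -f2 | tr '_-' '/+')
# Re-add the base64 padding that JWTs omit, then decode.
# Use "base64 --decode" or "base64 -D" if your platform's base64 lacks -d.
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
printf '%s' "$PAYLOAD" | base64 -d
# Pipe the output through "jq ." if you want it pretty-printed.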

Now, before we do anything at all with this token, we must check the “nonce” value and ensure that it matches the random value that we passed through to IDCS in the first step. That tells us that this is the correct ID token for the user that performed the authentication and can help mitigate any replay-type attacks.

Other than that, we have all the basic information we need in order to create a session for this user within our application. IDCS has returned the authenticated subject “rob.otto”, the user’s display name “Rob Otto”, the opaque/non-transient user ID and also time stamps indicating when the user was authenticated and when their login session will expire. Depending on our needs, we may stop here, or, if needed, we can carry on with the third part of the flow to obtain further user profile information.

Obtaining the User Profile

If our application requires more information and if we included additional scopes such as “profile” or “email” in our initial authentication request, we can make a further call back to IDCS to obtain this information. The key here is the other token we received from the token end point, the Access Token. Again, we can inspect this token to see that the body looks as follows:

{
"user_tz": "America/Chicago",
"sub": "rob.otto",
"user_locale": "en",
"user_displayname": "Rob Otto",
"user.tenant.name": "tenant1",
"csr": "false",
"sub_mappingattr": "userName",
"iss": "https://identity.oraclecloud.com/",
"tok_type": "AT",
"user_tenantname": "tenant1",
"client_id": "0cbf3bc1a3524d47af286f166bb03ef6",
"sid": "f61d5f79-1dad-40b1-ae7c-f04e9453ad87",
"aud": "https://tenant1.idcs.internal.oracle.com:8943",
"user_id": "9f12e029a3434918ad8d096ebc6f96ba",
"scope": "openid profile email",
"client_tenantname": "tenant1",
"user_lang": "en",
"exp": 1478183063,
"iat": 1478179463,
"client_name": "MyOIDCClient",
"tenant": "tenant1",
"jti": "cc49cfad-079c-408b-9aff-354e032b2a3e"
}

This is a standard scoped OAuth JWT that allows us to call back to IDCS on behalf of the logged-in user. As we can see, our token includes the “profile” and “email” scopes, which will allow us to obtain some further information about our user that can be useful in building a local profile.

Our application can obtain the information it needs by making a simple GET request to the UserInfo endpoint on IDCS, which is here: https://<IDCS_HOST>/oauth2/v1/userinfo. There is no need to do anything special, other than passing the access token we obtained as a Bearer Authorisation header. Here's an example of the CURL command – again, I've shortened the access token to keep things all on one line:

curl -k --header "Authorization: Bearer eyJ4NXQjUzI1NiI6Ijg1a3E1....3Sf4u9k" https://tenant1.idcs.internal.oracle.com:8943/oauth2/v1/userinfo

The JSON object we receive back is self-explanatory and contains the necessary information about our user:

{"birthdate":"",
"email":"robert.otto@oracle.com",
"email_verified":true,
"family_name":"Otto",
"gender":"",
"given_name":"Rob",
"name":"Rob Otto",
"preferred_username":"rob.otto",
"sub":"rob.otto",
"website":""}

We can use this information within our application to build a local user profile and can even accomplish "just in time" provisioning of the user from IDCS if this fits our needs.

In Conclusion

This post has demonstrated, in detail, one of the simpler OpenID Connect authentication flows and has built on it further to show how user registration can be accommodated as well. There is a huge amount more that can be done using Oracle Identity Cloud Service and its support for OAuth 2.0 and OpenID Connect. Let me know via the comments if you have any other use cases in mind that I can dive into further.

Thanks for reading and have fun diving into Oracle Identity Cloud Service.

BICS Data Sync – Running Post Load Procedures Against DBCS and Oracle RDBMS


For other A-Team articles about BICS and Data Sync, click here

Introduction

The Data Sync tool provides the ability to extract from both on-premise, and cloud data sources, and to load that data into BI Cloud Service (BICS), and other relational databases. In the recent 2.2 release of Data Sync, the functionality to run Post Load SQL and Stored Procedures was added.

Currently this functionality is only available for Oracle DBCS or Oracle DB target databases – it will NOT work for a Schema Service database target – although this article provides details of a workaround when the target is a schema service database.

This article will walk through an example to set up both a post load SQL command, and to execute a stored procedure on the target database.

 

Download The Latest Version of Data Sync Tool

Be sure to download and install the latest version of the Data Sync Tool from OTN through this link.

For further instructions on configuring Data Sync, see this article.  If a previous version of Data Sync is being upgraded, use the documentation on OTN.

 

Main Article

This article will present a simple use case that can be expanded for real world load scenarios.

A Post Load Processing session will be set up to run both a SQL statement, and a stored procedure on the target database once the load has completed.

 

Create Target Summary Table

In this example, a summary table will be loaded with a row of data once the underlying fact table has been refreshed in BICS.  Because the summary table only exists in the target database, we need to create it as a target in data sync.

1. Under ‘Project’ / ‘Target Tables/ Data Sets’, select ‘New’

In this example the summary table is called ‘AUDIT_EVENT_SUMMARY‘, and consists of just 2 fields.

 


An ‘AUDIT_RECORD_COUNT‘ numeric field, and a ‘CAPTURE_DATE‘ date field.


2. Create the fields as shown, then ‘Save’

 

Create Post Load Processing Process

Now that we have the target table defined, we can set up the post-load SQL, and the stored procedure.

1. From ‘Project’ / ‘Post Load Processing’ select ‘New’


2. Enter an appropriate name, hit ‘Save’, then select the ‘SQL Source Tables’ tab


Data Sync offers the ability to execute the SQL and Stored Procedure either at the end of the entire load process, or after the load completion of one or more individual tables.  This is controlled within the ‘SQL Source Tables’ section.

If the post load processing is to be run after all tables have been loaded, then no source tables need to be added.  If this section is left empty, then by default the data sync tool will run the post load processing only after all tables are loaded.

If the post load processing can be run after one or more tables have been loaded, then that dependency can be set up here.

3. Select ‘Add/Remove’ and then the ‘Go’ search button to generate a list of table sources being used.


In this example we will trigger the load after the fact table (‘AUDIT_EVENT_DBAAS’) has been loaded.

4. Select the table, then hit ‘Add’, and finally ‘Save’ to close out of the screen.


There is a ‘SQL Target Tables’ tab as well.  This is useful if the target table needs to be truncated as part of the update process.

Truncating and reloading tables with indexes and large record volumes can result in performance issues.  The data sync tool will handle this by having the target database perform the following steps:

 

  • Truncate the table
  • Drop all indexes
  • Insert the data
  • Re-create the indexes
  • Analyze the table

If the target table is always going to be loaded incrementally, then select the ‘Truncate for Full Load’ check box, else ‘Truncate Always’.

For demonstration purposes, we will select our target summary table.

5. Select ‘Add/Remove’


6. Select ‘Go’ to list the available target tables


7. Select the table(s) and ‘Add’.  Then chose the appropriate option as to whether to ‘Truncate Always’ or only ‘Truncate For Full Load’.


The next steps will be used to define the SQL and Stored Procedure.

8. Select ‘OK’ to return to the ‘Edit’ tab, hit ‘Save’, and then select the radio button within the ‘SQL(s)/Stored Procedure(s)’ box


9. In the next screen select ‘Add’, enter an appropriate name, and then select whether this step is to run a ‘SQL’ statement, or a ‘Stored Procedure’.  In this first example we will set up a post load SQL command.


10. There is also the option to run this post load process on just an ‘Initial Load’, an ‘Incremental Load’ or ‘Both’.  In this example we select ‘Both’.

Cursor

11. In the section below, as shown, enter the valid SQL statement to be run on the target database.  In this case a single row is added to the summary table that we had created previously.

Cursor
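As an illustration only (the exact statement depends on your schema; this sketch assumes the AUDIT_EVENT_SUMMARY table created earlier and the AUDIT_EVENT_DBAAS fact table), the post load SQL could be as simple as:

insert into AUDIT_EVENT_SUMMARY (AUDIT_RECORD_COUNT, CAPTURE_DATE)
select count(*), sysdate
from AUDIT_EVENT_DBAAS;   -- one summary row per load run

Any valid SQL accepted by the target database can be entered here.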

12. Click ‘OK’ to return to the previous screen.

To create a Stored Procedure follow similar steps.  In this example we will set up the post load processing entry to run both the SQL, and a Stored Procedure.

13. Select ‘Add’, enter a suitable name, and select the ‘Stored Procedure’ type.

14. Enter the name of the procedure in the entry box.  You do not need to type in ‘execute’ – the data sync tool will take care of that – just enter the name of the stored procedure, then click ‘OK’ and ‘OK’ again to exit out of the Post Load Processing set-up.

Cursor
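For reference, a minimal stored procedure of the kind named in the entry box above could simply refresh the summary row. This is a hypothetical sketch (the name UPDATE_AUDIT_SUMMARY and its statements are illustrative, not the exact procedure used in the screenshots):

create or replace procedure UPDATE_AUDIT_SUMMARY as
begin
  -- replace the previous summary row with a fresh count from the fact table
  delete from AUDIT_EVENT_SUMMARY;
  insert into AUDIT_EVENT_SUMMARY (AUDIT_RECORD_COUNT, CAPTURE_DATE)
  select count(*), sysdate from AUDIT_EVENT_DBAAS;
end;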

 

When the Job is next run, the SQL and Procedure will be run after the fact table has been loaded.

It is possible to set up multiple post load processes, with different dependencies.  Each will be run independently once the source tables defined have been loaded.

 

Summary

This article walked through the steps to create a Post Load SQL and Stored Procedure within the Data Sync tool.

For other A-Team articles about BICS and Data Sync, click here.

BICS Data Sync – Running Post Load Procedures against a Schema Service DB


For other A-Team articles about BICS and Data Sync, click here

Introduction

The Data Sync tool provides the ability to extract from both on-premise, and cloud data sources, and to load that data into BI Cloud Service (BICS), and other relational databases. In the recent 2.2 release of Data Sync, the functionality to run Post Session SQL and Stored Procedures was added. This allows, for instance, the Data Sync tool to call a stored procedure to update summary tables, and materialized views, in the target databases once the underlying data load has been completed.

As of the time of writing, this functionality is only available when the target database is an Oracle DBCS or standalone Oracle database.  It does NOT work with the standard BICS Schema Service target database.

This article provides steps for a viable workaround to run post session commands in a Schema Service target.

(for details on how to run this new functionality with a DBCS or standard Oracle DB target – see this article)

 

Main Article

Download The Latest Version of Data Sync Tool

Be sure to download and install the latest version of the Data Sync Tool from OTN through this link.

For further instructions on configuring Data Sync, see this article.  If a previous version of Data Sync is being upgraded, use the documentation on OTN.

Process Overview

Once the main data load has been completed, a single row will be inserted into a status table in the schema service database.  That will trigger the stored procedure to be run.

This solution will provide two triggering methods.  The choice of which to use will depend on the type of stored procedure that needs to be run once the data load has completed.

The current version of the Data Sync tool does not allow us to control the order in which the load steps occur. This means that we cannot guarantee that the status table (which will trigger the stored procedure) is only loaded after all other table loads are complete.

As a workaround we will use 2 jobs. The first will load the data. Once that finishes, the second job will be triggered. This will load the single row into the status table, and that will trigger the post-load stored procedure to be run.

 

Create the Target Summary Table used to Trigger Post Session Stored Procedure

For this demonstration, a simple target table ‘DS_LOAD_STATUS’ will be created in the Schema Service database with 2 fields – a ‘LOAD_STATUS’ and ‘STATUS_DATE’. The make-up of this table is not important. The main point is that a table needs to exist in the schema service database that can be loaded last.  The two different trigger methods will be discussed next, but both will use the existence of a new row in this DS_LOAD_STATUS table to trigger the post session stored procedure.

1. This example SQL can be run in the ‘SQL Workshop’ tool within Apex for the Schema Service database accompanying the BICS environment to create the DS_LOAD_STATUS table.

CREATE table "DS_LOAD_STATUS" (
"STATUS_DATE" DATE,
"LOAD_STATUS" VARCHAR2(50)
)

 

Create the Triggering Mechanism

Two different methods are shown below.  Method 2 will work for all cases.  Method 1, which is slightly simpler, will work for specific cases.

Method 1

If the post session stored procedure does not include any DDL statements (for example truncate, drop, or create commands for indexes, tables, etc.) and uses only ‘select’, ‘insert’, ‘update’ and ‘delete’ commands, then the simplest method is to create an On-Insert trigger on the status table.  When a row is added, the trigger fires, and the stored procedure is run.

In this case, it is assumed that a stored procedure, named ‘POST_SESSION_STEPS’, has already been created.
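The contents of POST_SESSION_STEPS will be specific to your environment. Purely as an illustration (the table names below are hypothetical), a procedure suitable for Method 1 would contain only DML, for example:

create or replace procedure POST_SESSION_STEPS as
begin
  -- refresh a summary table from the freshly loaded fact table (no DDL used)
  delete from SALES_SUMMARY;
  insert into SALES_SUMMARY (TOTAL_ROWS, REFRESH_DATE)
  select count(*), sysdate from SALES_FACT;
end;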

The following SQL will create the ‘on insert’ trigger against the DS_LOAD_STATUS table so that after a row is inserted, this stored procedure is called.

create or replace trigger "DS_LOAD_TRIGGER_SP"
AFTER
insert on "DS_LOAD_STATUS"
for each row
begin
POST_SESSION_STEPS;
end;

 

Method 2

If the stored procedure does use DDL statements, then the use of a table on-insert trigger may not run smoothly.  In that case a scheduled database job will be created, which will look for a new row in the status table.  Once the new row is recognized, this job will execute the post load stored procedure.

Once again it is assumed that a stored procedure named ‘POST_SESSION_STEPS’, has already been created.

This process contains two steps.  First, a short stored procedure is created which will evaluate a condition (in this case, whether a new row has recently been added to the status table) and, if the condition is true, execute the main stored procedure.

The SQL below creates this procedure, called ‘CHECK_POST_SESSION_CONDITION‘, which will check whether a new row has been added to the DS_LOAD_STATUS table within the last 5 minutes.

create or replace procedure CHECK_POST_SESSION_CONDITION as
V_ROW_COUNT INTEGER;
begin
select count(*) into V_ROW_COUNT from DS_LOAD_STATUS
where STATUS_DATE > sysdate - interval '5' minute;  -- checking for a row inserted in the last 5 minutes
IF V_ROW_COUNT >= 1 THEN
POST_SESSION_STEPS; -- post session procedure
END IF;
END;

The final step is to create a scheduled job that runs every 5 minutes checking the condition above.

begin
CLOUD_SCHEDULER.CREATE_JOB (
JOB_NAME => 'POST_SESSION_DS_LOAD_JOB',
JOB_TYPE => 'STORED_PROCEDURE',
JOB_ACTION => 'CHECK_POST_SESSION_CONDITION', -- run the CHECK_POST_SESSION_CONDITION procedure
REPEAT_INTERVAL => 'freq=minutely; interval=5' ); -- run the job every 5 minutes
END;

 

Set up Second Job in Data Sync

All remaining steps will be carried out in the environment where data sync is installed.

In this scenario, a Data Sync Job already exists which will load the desired data into the BICS schema service database and is named ‘Main_Load’.

If this job has never been run, run it now.  A successful load is important so that the ‘Signal’ file can be created.  This is the mechanism that will be used to trigger the second job, which will then load the status table and, in turn, trigger the post-load process.

We need to create a new Project for the second job.

3. Do this by selecting ‘Projects’ from the ‘File’ menu.

Cursor

4. Choose an appropriate name

Cursor

In this example, the target table with its trigger was created in steps 1 and 2.  We need to set up this table as a target for data sync to load to.

5. Under ‘Project’, select ‘Target Tables/Data Sets’ and then ‘New’.  In the table name enter the exact name of the existing target table – in this case ‘DS_LOAD_STATUS‘.

Cursor

6. Select the ‘Table Columns’ sub-tab, and enter the column names and correct data types to match what was created in step 1.

Cursor

We also need to define a source to create the data for this DS_LOAD_STATUS table.  If a suitable table already exists in the source database, that may be used.  In this example we will base the data on a SQL statement.

7. Under ‘Project’ / ‘Relational Data’ select ‘Data from SQL’.  Provide a name for the source, and select to load into an existing target.  Use the search drop down to select the ‘DS_LOAD_STATUS’ table created in the step 1.  Select the source connection and enter the SQL.

Cursor

In this case it is a simple select statement that will return one row, with a value of ‘LOAD_COMPLETE’ for the LOAD_STATUS field and the current date and time for the STATUS_DATE.

select
sysdate as STATUS_DATE,
'LOAD_COMPLETE' as LOAD_STATUS
from dual

 

8. Select the newly created source, and then edit the Load Strategy.  In this case, because it’s a status table, we have chosen to always append the new row, and never delete existing data.

Cursor

9. Give the Job a suitable name in the ‘Jobs’ / ‘Jobs’ menu area, and then ‘Run’ the job.

Cursor

Make sure the job runs successfully before continuing.

 

Create Data Sync Trigger Mechanism

The Data Sync tool creates ‘Signal’ files whenever a job starts and successfully finishes. These files are stored in the /log/jobSignal sub-directory. Take a look in this directory.

In our case we see 4 files, as this image shows. The important one for our purpose is the one that shows when the Main_Load job has completed. In this case that Signal File is named ‘Main_Load_CompletedSignal.txt’. This is the file we will have Data Sync check for, and when it finds it, it will trigger the second job.

 

Cursor

To set up Data Sync to automatically trigger a job, we need to edit the ‘on_demand_job.xml’ file in the /conf-shared directory.

10. Open this file with a text editor.

Cursor

11. An entry needs to be added to the <OnDemandMonitors> section.

The syntax is:

<TriggerFile job=$JOB_NAME file=$FILE_TO_TRIGGER_JOB></TriggerFile>

In this example the full syntax will be:

<TriggerFile job="POST_LOAD_JOB" file="C:\Users\oracle\Desktop\BICSDataSync_V2_2\log\jobSignal\Main_Load_CompletedSignal.txt"> </TriggerFile>

12. Change the pollingIntervalInMinutes to the desired check interval. In this case we set it to 1, so that Data Sync will check for the existence of the Signal file every minute.  The entry should look similar to this.

Screenshot_10_27_16__5_36_PM

13. Save the updated on_demand_job.xml

14. Test the process is working.

Re-Open the Original Project and run the Main_Load job.  Monitor the jobSignal directory.  Shortly after the Main_Load job finishes, the Signal file, in this case ‘Main_Load_CompletedSignal.txt’, is found.  The Data Sync tool deletes the file so that the process will not run again, and starts the POST_LOAD_JOB created in step 9.

Screenshot_10_27_16__5_41_PM

15. As an additional check, go to the schema service database in Apex, and confirm that the DS_LOAD_STATUS table has had a new entry added, and that the ‘post-load’ stored procedure has been successfully run.
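For example, a quick query in SQL Workshop such as the following (any equivalent query will do) shows the row inserted by the second job; a recent STATUS_DATE indicates the trigger or scheduled job should have fired:

select LOAD_STATUS, STATUS_DATE
from DS_LOAD_STATUS
order by STATUS_DATE desc;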

 

Object_Browser

Summary

This article walked through an approach to run a post-load stored procedure with the Data Sync tool and a schema service database target.

For other A-Team articles about BICS and Data Sync, click here.


Schedule Instances and Analytics Archive Using Cron Job in PCS


In PCS, you can archive instance data on demand or schedule the archive to run automatically, either by using the PCS administration UI to select the date and time or by using a CRON entry. You can also schedule the analytics data to be archived to cloud storage and BICS in the same way.  However, there is a difference in how you set the CRON expression for the instances archive and the analytics archive.

For the instances archive, the CRON expression frequency must be greater than or equal to the allowed 1-day threshold. In other words, you cannot set the archive to run every minute or every hour.  You can set the archive job to run at a specific time daily; for example, to run it at 12pm every day, the CRON expression will be “0 0 12 * * ?”

PCS Archive 1

In the case of the analytics archive, the CRON expression frequency must be greater than or equal to the allowed 6-hour threshold. That means you cannot set the archive to run every minute or every hour. However, if you want the analytics archive to run a few times a day, you can use the hours field in the CRON expression to set the times you want the archive job to run. For example, if you want it to run at 10am, 4pm, and 10pm daily (with a 6-hour gap in between), the CRON expression will be “0 0 10,16,22 * * ?”.

PCS Archive 2

If the job that synchronizes the analytics data to BICS fails, PCS will retry 3 times internally with an interval of 90 seconds. If the internal retries fail, you will receive an email notification with the reason for the failure. The email will contain a link to download the analytics archive data, so that you can import the archive data into BICS manually.

PCS Archive 3

You have 2 options to manually import the archive data:

Automated unit tests with Node.JS and Developer Cloud Services


Introduction

Oracle’s Developer Cloud Service (DevCS) is a great tool for teams of developers. It provides tools for continuous delivery, continuous integration, team collaboration, scrum boards, code repositories and so on. When using these features, you can leverage best practices in the application lifecycle to deliver high-quality, manageable code.
One of the phases of the application lifecycle we are going to focus on today is the testing phase. Tests can take place on the developer’s machine, and by leveraging these tests in an automated way on DevCS, we ensure the quality of the code throughout its lifecycle.

 

Main Article

In this article we will take a closer look at using Node.JS in combination with Jasmine to test our code and configure an automated test script that will run every time a developer pushes his code to a specific branch in the code repository.

Why testing?

To many developers it is clear that testing can be advantageous; however, many feel that testing adds overhead to their already busy schedule. This is mainly a misconception, as proper testing will increase the quality of the code. If you don’t test, you will spend more time debugging your code later on, so you could say that testing is a way of being lazy by investing some more time in the beginning.

In addition to this, testing is not just a tool to make sure your code works, it can also be used as a design tool. This comes from the Behavior-driven development paradigm.  The idea is to define your unit test before writing any code. By doing this, you will have a clear understanding of the requirements of the code and as such, your code will be aligned with the requirements.
This also increases the re-usability of the code because a nice side effect of designing your code this way is that your code will be very modular and loosely coupled.

When we talk about Node.JS and JavaScript in general, the side effect of a “test-first” approach is that it will be much easier to reuse your code no matter if it’s client side JavaScript or server side JavaScript. This will become clear in the example we will build in this article.

Different types of test

When we talk about writing tests, it is important to understand that there are different types of tests, each testing a specific area of your application and serving its own purpose:

Unit Tests

Unit tests are your first level of defense. These are the tests run on your core business logic. A good unit test does not need to know the context it is running in and has no outside dependencies.
The purpose of a unit test is, like the name says, to test a unit of work. A typical example of a unit test is to test a function that checks if a credit card number is valid. That method doesn’t need to understand where the credit card number is coming from, nor does it need to understand anything around security or encryption. All it does is take a credit card number as input and return a true or false value depending on the validity of the number.

Integration Tests

The next level of tests are the integration tests. These will test if all your different modules integrate well and test if the data coming from external sources is accurate.
It will group the different modules and check if these work well together. It will check for data integrity when you pass information from one module to another and make sure that the values passed through are accurate.

End 2 End Tests

An end 2 end test typically requires a tool that allows you to record a user session, after which that session is replayed. In a web application, Selenium is a popular tool to perform these E2E tests. In such a scenario, you will define certain areas on the page that you know should have a specific value. When the HTML of these areas is different from what you define, the test will fail. This is the highest level of testing you can have.

 

In this post we will focus on unit testing.

Creating a new project on Developer Cloud Service

Before we can start writing code, we need to define a project in Developer Cloud Service (DevCS). A project in DevCS is much more than a code repository. It allows us to manage the development lifecycle by creating tasks and assigning them to people. It also provides a scrum board so we can manage the project in an agile way.

In this post, we will create a microservice that does temperature conversion. It will be able to convert Celsius and Fahrenheit temperatures to each other and Kelvin.
In DevCS we define a new project called “Converter”:

project1

 

As the template, we select “Initial Repository”, as this will create the code repository we will be using to check in our code.

 

In the next step, we define the properties and we initialize a repository with readme file:

project3

Now we can continue and create our project.

Once the project is created, you will see your project dashboard:

project4

As you can see, the system created a repository called converter.git. On the right hand side you can find the HTTP and SSH links to the repo. We will need the HTTP link in order to clone the initial repository before we can start coding.

Once you copied the HTTP link to your GIT repo, you can open a command line so we can clone the repo.

At the location you want the repo to be created, we simply execute following command:

D:\projects\Oracle\testing>git clone https://<yourRepo>
Cloning into 'converter'...
Password for 'https://yannick.ongena@oracle.com@developer.us2.oraclecloud.com':
remote: Counting objects: 3, done
remote: Finding sources: 100% (3/3)
remote: Getting sizes: 100% (2/2)
remote: Compressing objects: 100% (37/37)
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
Checking connectivity... done.

This will clone the repository into a folder called “converter”. At the moment that folder will only contain a README.md file. The next step is to initialize that folder as a node.js project. This can easily be done by using the npm init command:

D:\projects\Oracle\testing>cd converter

D:\projects\Oracle\testing\converter>npm init
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg> --save` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
name: (converter)
version: (1.0.0)
description:
entry point: (index.js) app.js
test command:
git repository: (https://<yourURL>)
keywords:
author:
license: (ISC)
About to write to D:\projects\Oracle\testing\converter\package.json:

{
  "name": "converter",
  "version": "1.0.0",
  "description": "converter.git",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": "<yourURL>"
  },
  "author": "",
  "license": "ISC"
}


Is this ok? (yes)

This will have created the package.json file.

The next thing we need to do is install the required modules.
For this example we will use express and the body-parser module to give us the basic middleware to start building the application. For testing purposes we will use Jasmine, a popular framework for behavior-driven testing. Jasmine will be configured as a development dependency.
We will add the following content to the package.json:

"dependencies": {
    "body-parser": "^1.15.2",
    "express": "^4.14.0"
  },
  "devDependencies": {
    "jasmine": "^2.5.2"
  }

Now we can simply install these modules by executing the npm install command from within the application’s folder.

Writing tests

Now that the project has been setup and we downloaded the required dependencies, we can start writing our code, or should I say, writing our tests?
Like I said in the introduction, we can use testing as a tool to design our service signature and this is exactly what we are going to do.

Jasmine is a perfect framework for this as it is designed to define behaviors. These behaviors will be translated to units of code that we can easily test.

If we think about our temperature converter that we are going to write, what behaviors would we have?

  • Convert Celsius to Fahrenheit
  • Convert Celsius to Kelvin
  • Convert Fahrenheit to Celsius
  • Convert Fahrenheit to Kelvin

Each of these behaviors will have its own piece of implementation that can be mapped to some testing code.

Before we can write our tests, we need to initialize the project for Jasmine. This can be done by executing the jasmine init command from within your application root:

node node_modules/jasmine/bin/jasmine.js init

This command will create a spec folder in which we need to write the specifications of our tests.

In that folder we create a new file converterSpec.js

It is important to end the filename with Spec because Jasmine has been configured to search for files that end with Spec. You can, of course, change this behavior by changing the spec_files regex in the jasmine.json file in the support folder, but by default Jasmine will look for every file ending in Spec in the spec folder.

The contents of the converterSpec.js will look like this:

describe("Converter ",function(){

    it("converts celsius to fahrenheit", function() {
        expect(converter.celsiusToFahrenheit(0)).toBeCloseTo(32);
        expect(converter.celsiusToFahrenheit(-10)).toBeCloseTo(14);
        expect(converter.celsiusToFahrenheit(23)).toBeCloseTo(73.4);
        expect(converter.celsiusToFahrenheit(100)).toBeCloseTo(212);
    });

    it("converts fahrenheit to celsius", function() {
        expect(converter.fahrenheitToCelsius(32)).toBeCloseTo(0);
        expect(converter.fahrenheitToCelsius(14)).toBeCloseTo(-10);
        expect(converter.fahrenheitToCelsius(73.4)).toBeCloseTo(23);
        expect(converter.fahrenheitToCelsius(212)).toBeCloseTo(100);
    });

    it("converts celsius to kelvin", function() {
        expect(converter.celsiusToKelvin(0)).toBeCloseTo(273.15);
        expect(converter.celsiusToKelvin(-20)).toBeCloseTo(253.15);
        expect(converter.celsiusToKelvin(23)).toBeCloseTo(296.15);
        expect(converter.celsiusToKelvin(100)).toBeCloseTo(373.15);
    });

    it("converts fahrenheit to kelvin", function() {
        expect(converter.fahrenheitToKelvin(32)).toBeCloseTo(273.15);
        expect(converter.fahrenheitToKelvin(14)).toBeCloseTo(263.15);
        expect(converter.fahrenheitToKelvin(73.4)).toBeCloseTo(296.15);
        expect(converter.fahrenheitToKelvin(212)).toBeCloseTo(373.15);
    });
});

These tests will fail because we haven’t written a converter yet.

We can execute this test suite by calling Jasmine from our root directory of the application:

node node_modules/jasmine/bin/jasmine.js

The output will contain some errors and a message saying that 4 out of 4 specs have failed:

D:\projects\Oracle\testing\converter>node node_modules/jasmine/bin/jasmine.js
Started
FFFF

Failures:
1) Converter  converts celsius to fahrenheit
  Message:
    ReferenceError: converter is not defined
  Stack:
    ReferenceError: converter is not defined
        at Object.<anonymous> (D:\projects\Oracle\testing\converter\spec\converterSpec.js:9:16)

2) Converter  converts fahrenheit to celsius
  Message:
    ReferenceError: converter is not defined
  Stack:
    ReferenceError: converter is not defined
        at Object.<anonymous> (D:\projects\Oracle\testing\converter\spec\converterSpec.js:16:16)

3) Converter  converts celsius to kelvin
  Message:
    ReferenceError: converter is not defined
  Stack:
    ReferenceError: converter is not defined
        at Object.<anonymous> (D:\projects\Oracle\testing\converter\spec\converterSpec.js:23:16)

4) Converter  converts fahrenheit to kelvin
  Message:
    ReferenceError: converter is not defined
  Stack:
    ReferenceError: converter is not defined
        at Object.<anonymous> (D:\projects\Oracle\testing\converter\spec\converterSpec.js:30:16)

4 specs, 4 failures
Finished in 0.01 seconds

By writing these tests, we established that our converter should have the following methods:

  • celsiusToFahrenheit
  • fahrenheitToCelsius
  • celsiusToKelvin
  • fahrenheitToKelvin

 

Implementing the converter

Once the signature of our code has been established, we can start implementing the code.
In our case, we need to create an object for the converter with the required functions. Therefore we create a new file converter.js with the following content:

var Converter = function(){
    var self = this;
}

Converter.prototype.celsiusToFahrenheit = function(temp){
    return temp*9/5+32;
};
Converter.prototype.fahrenheitToCelsius = function(temp){
    return (temp-32)/1.8;
};
Converter.prototype.celsiusToKelvin = function(temp){
    return temp +273.15;
}
Converter.prototype.fahrenheitToKelvin = function(temp){
    var cel = this.fahrenheitToCelsius(temp);
    return this.celsiusToKelvin(cel);
}


if (typeof exports == 'object' && exports)
    exports.Converter = Converter;

Now that the implementation is done, we can include this file in our converterSpec.js so the test will use this object:
At the top of converterSpec.js add the following lines:

var Converter = require("../converter").Converter;
var converter = new Converter();

If we rerun the jasmine tests we will notice that they succeed:

D:\projects\Oracle\testing\converter>node node_modules/jasmine/bin/jasmine.js
Started
....


4 specs, 0 failures
Finished in 0.005 seconds

So far, we wrote some tests and implemented a plain old JavaScript object. We haven’t written any server-specific code, yet our core business logic is already done and tested.

Notice how we wrote this code without worrying about things like request body, response objects, get, post and other server-specific logic. This is a very powerful feature of writing tests in this way, because now the exact same code can be used in any project that uses JavaScript. No matter if it’s Node.JS, Oracle JET, Angular, Ionic,… it should work in any of these frameworks, and we didn’t even spend additional time optimizing the code for this. It’s just a by-product of a test-first approach!

Implementing the server

The last step is to write our server that consumes the converter. Our server will expose a single endpoint where we can specify an object with a temperature value and a units value. Based upon the units value, the converter will make all the required conversions and send the result back to the user.

Create a new file app.js with the following contents:

var express = require("express");
var parser = require("body-parser");

var app = express();
var http = require('http').Server(app);

app.use(parser.json());

var Converter = require("./convertor").Converter;
var converter = new Converter();

app.post("/convert",function(req,res){
    var temp = req.body.temp;
    var units = req.body.units;
    var result = {};
    if(units.toLowerCase() == "f"){
        result.fahrenheit = temp;
        result.celsius = converter.fahrenheitToCelsius(temp);
        result.kelvin = converter.fahrenheitToKelvin(temp);
    }
    else if(units.toLowerCase() == "c"){
        result.celsius = temp;
        result.fahrenheit = converter.celsiusToFahrenheit(temp);
        result.kelvin = converter.celsiusToKelvin(temp);
    }
    res.send(result);
    res.end();
});


http.listen(3000, function(){
    console.log('listening on *:3000');
});

 

Setting up automated testing on Developer Cloud Service

Once we have a first finished version of the code, it’s a good time to commit our code to the code repository. At the same time, we want to set up a build process on DevCS so that every time we commit code to the repository, it will fire off the tests we created so far.

In order to do this, we first need to modify the package.json so that we can make use of the npm test command to start the test.
This is fairly simple as npm test is just a shortcut to a script you define in the package.json. This should be the same command as we use when starting the tests from our command line.
Modify package.json so the scripts part looks like this:

 "scripts": {
    "test": "node node_modules/jasmine/bin/jasmine.js"
  },

When you save the file and execute npm test from a command line in the root folder of your application, it should start the tests.

Adding a .gitignore file

The next step we have to do before committing the code is to add a gitignore file. This file will tell GIT what files and folders to ignore. The reason we want this is that it’s bad practice to include the node_modules folder in your code repository. The code in that folder isn’t written by us, and we can simply initialize a new consumer of the repo by executing npm install. This way the modules don’t take additional space in the repository and it will be much faster to upload the code.

The .gitignore file needs to be put in the root of your application. For this application we only need to ignore the node_modules folder so the file will look like this:

# Dependency directories
node_modules

Creating a build configuration in DevCS

Before we commit the code, we need to setup a build configuration in DevCS.
A build configuration is a sequence of actions that can be configured depending on the type of application. For example when you are developing a J2EE application, the build configuration can execute a maven build, build the JAR/EAR file and pass it on to a deployment script so it can be deployed automatically to Java Cloud Services.

In our case, we are working with Node.JS so technically we don’t have anything to build. However, a build config can still be useful because it allows us to execute certain commands to test the integrity of the code. If everything passes, we are able to hand it over to a deployment profile for Application Container Cloud Service to deploy it on the cloud.

In this step, we will focus on the build step.

In DevCS, select your project and go to the Build page. At the moment only a sample maven_build has been created which doesn’t do us any good so we will go ahead and create a new job.

build1

Once we saved the job we will be redirected to the configuration.

The Main and Build Parameters tab can remain unchanged. In the Source Control tab we specify that the build system integrates with a GIT repository.

From the Repository drop down, we select our converter repo.
In the Branch section we click the Add button and select master. This way we can specify on which branch of the code this build applies.

It is a common practice to use something like GitFlow to develop features. Each feature will be represented by a branch and once the feature is finished, that branch is merged into a development branch. In these cases, it makes a lot of sense to only initiate the build when a commit is done towards the development branch so that’s why we specify a certain branch in this step. If we don’t specify a branch, the build will start on every single commit.

build2

In the next tab, Triggers, we specify what triggers the build. Because we are relying on a commit to the source control system, we have to select “Based on SCM polling schedule”. This links the configuration from the Source Control tab to the Trigger.

build3

The next tab Environment isn’t required in this step so we can go ahead and open the Build Steps tab. This is where we configure the actions that are done when the build starts.

In our case we want to execute the npm test command which is a shell script. From the add button we select the Execute Shell step. This will add a text area in which we can specify shell commands to execute. In this box we can add multiple lines of code.

Add the following code in the command box:

git config --global url.https://github.com/.insteadOf git://github.com/
npm install
npm test

Because we have added the node_modules to our gitignore file, we need to install the modules from our package.json. On your machine a simple npm install would be sufficient, however because DevCS is behind a firewall that only accepts traffic on port 80 (HTTP) and 443 (HTTPS), we need to make sure that we force git to use HTTP and not the git protocol. The git config command does make sure that we download all the modules over regular HTTPS traffic, even if the git repository of a module has been configured using the git protocol.

After that we can install the modules using the npm install command and once this is done, a npm test will start the Jasmine tests.

build4

Our build config is now complete.

Committing the code

Now that our build config has been setup, we can commit and push our code after which the build should start.

Commit the code using your favorite GIT client or from within your IDE. After the commit, push the changes to the master branch.

Once you have pushed the code, go back to the Jobs Overview page on DevCS and you will notice that our new build config has been queued and after a few seconds or a minute it will start:

build5

After about half a minute, the build should complete and you should see the status:

build6

On the right hand side you have a button that can take you to the Console Output. This gives you a good overview of what the build actually did. In our case, everything went fine and it ended in success; however, when a test fails, the build will fail and the console output will be crucial to identifying which test failed.

This is the output from my build (I omitted the npm install output).

Started by an SCM change
Building remotely on Builder 22
Checkout:<account>.Converter Unit test / /home/c2c/hudson/workspace/developer85310.Converter Unit test - hudson.remoting.Channel@2564c81d:Builder 22
Using strategy: Default
Checkout:<account>.Converter Unit test / /home/c2c/hudson/workspace/developer85310.Converter Unit test - hudson.remoting.LocalChannel@ebc21da
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from https://developer.us2.oraclecloud.com/<account>/converter.git
Commencing build of Revision af98f72759ebdfc7a88b0a1f49d70b278bdcbab4 (origin/master)
Checking out Revision af98f72759ebdfc7a88b0a1f49d70b278bdcbab4 (origin/master)
No change to record in branch origin/master
[developer85310-chatbotdev1_converter_12171.Converter Unit test] $ /bin/sh -xe /home/builder/tmp/hudson2474779517119792441.sh
+ git config --global url.https://github.com/.insteadOf git://github.com/
+ npm install
<npm install output>
+ npm test

> converter@1.0.0 test /home/builder/hudson/workspace/<account>.Converter Unit test
> node node_modules/jasmine/bin/jasmine.js

Started
....


4 specs, 0 failures
Finished in 0.01 seconds

Finished: SUCCESS

Conclusion

In this post we have shown how we can leverage the power of Developer Cloud Service to setup an automated test build for your Node.JS code. By doing this, you not only get the benefits of getting instant feedback when your code is pushed to the repository but you also get better quality and re-usability of your code.

 

Loading Data into Oracle BI Cloud Service using BI Publisher Reports and SOAP Web Services


Introduction

This post details a method of loading data that has been extracted from Oracle Business Intelligence Publisher (BIP) into the Oracle Business Intelligence Cloud Service (BICS). The BIP instance may either be Cloud-Based or On-Premise.

It builds upon the A-Team post Using Oracle BI Publisher to Extract Data from Oracle Sales and ERP Clouds. This post uses SOAP web services to extract data from an XML-formatted BIP report.

The method uses the PL/SQL language to wrap the SOAP extract, XML parsing commands, and database table operations. It produces a BICS staging table which can then be transformed into star-schema object(s) for use in modeling.  The transformation processes and modeling are not discussed in this post.

Additional detailed information, including the complete text of the procedure described, is included in the References section at the end of the post.

Rationale for using PL/SQL

PL/SQL is the only procedural tool that runs on the BICS / Database Schema Service platform. Other wrapping methods e.g. Java, ETL tools, etc. require a platform outside of BICS to run on.

PL/SQL can utilize native SQL commands to operate on the BICS tables. Other methods require the use of the BICS REST API.

Note: PL/SQL is very good at showcasing functionality. However, it tends to become prohibitively resource-intensive when deployed in an enterprise production environment.

For the best enterprise deployment, an ETL tool such as Oracle Data Integrator (ODI) should be used to meet these requirements and more:

* Security

* Logging and Error Handling

* Parallel Processing – Performance

* Scheduling

* Code re-usability and Maintenance

The steps below depict how to load a BICS table.

About the BIP Report

The report used in this post is named BIP_DEMO_REPORT and is stored in a folder named Shared Folders/custom as shown below:

BIP Report Location

The report is based on a simple analysis with three columns and output as shown below:

BIP Demo Analysis

Note: The method used here requires all column values in the BIP report to be NOT NULL for two reasons:

1. The XPATH parsing command signals either the end of a row or the end of the data when a null result is returned.

2. All columns being NOT NULL ensures that the result set is dense and not sparse. A dense result set ensures that each column is represented in each row. Additional information regarding dense and sparse result sets may be found in the Oracle document Database PL/SQL Language Reference.

One way to ensure a column is not null is to use the IFNull function in the analysis column definition as shown below:

BIP IFNULL Column Def
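For example (the column reference is illustrative), a measure column formula can be wrapped so that a null value is returned as zero:

IFNULL("Base Facts"."Revenue", 0)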

Call the BIP Report

The SOAP API request used here is similar to the one detailed in Using Oracle BI Publisher to Extract Data from Oracle Sales and ERP Clouds.

The SOAP API request should be constructed and tested using a SOAP API testing tool e.g. SoapUI.

This step uses the APEX_WEB_SERVICE package to issue the SOAP API request and store the XML result in a XMLTYPE variable. The key inputs to the package call are:

* The URL for the Report Request Service

* The SOAP envelope the Report Request Service expects.

* Optional Headers to be sent with the request

* An optional proxy override

Note: Two other BI Publisher reports services exist in addition to the one shown below. The PublicReportService_v11 should be used for BI Publisher 10g environments and the ExternalReportWSSService should be used when stringent security is required. An example URL is below:

https://hostname/xmlpserver/services/v2/ReportService

An example Report Request envelope is below:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v2="http://xmlns.oracle.com/oxp/service/v2">
<soapenv:Header/>
<soapenv:Body>
<v2:runReport>
<v2:reportRequest>
<v2:byPassCache>true</v2:byPassCache>
<v2:flattenXML>false</v2:flattenXML>
<v2:reportAbsolutePath>/custom/BIP_DEMO_REPORT.xdo</v2:reportAbsolutePath>
<v2:sizeOfDataChunkDownload>-1</v2:sizeOfDataChunkDownload>
</v2:reportRequest>
<v2:userID>'|| P_AU ||'</v2:userID>
<v2:password>'|| P_AP ||'</v2:password>
</v2:runReport>
</soapenv:Body>
</soapenv:Envelope>

An example of setting a SOAP request header is below:

apex_web_service.g_request_headers(1).name := 'SOAPAction';
apex_web_service.g_request_headers(1).value := '';

An example proxy override is below:

www-proxy.us.oracle.com

 Putting this together, example APEX statements are below:

apex_web_service.g_request_headers(1).name := 'SOAPAction';
apex_web_service.g_request_headers(1).value := '';
f_xml := apex_web_service.make_request( p_url => p_report_url, p_envelope => l_envelope, p_proxy_override => l_proxy_override );

Note: The SOAP header used in the example above was necessary for the call to the BI Publisher 11g implementation used in a demo Sales Cloud instance. If it were not present, the error LPX-00216: invalid character 31 (0x1F) would appear. This message indicates that the response received from the server was encoded in a gzip format which is not a valid xmltype data type.

Parse the BIP Report Result Envelope

This step parses the XML returned by the SOAP call for the data stored in the tag named reportBytes that is encoded in Base64 format.

The XPATH expression used below should be constructed and tested using an XPATH testing tool e.g. freeformatter.com

This step uses the APEX_WEB_SERVICE package to issue parsing command and store the result in a CLOB variable. The key inputs to the package call are:

* The XML returned from BIP SOAP call above

* The XML Path Language (XPATH) expression to find the reportBytes data

An example of the Report Response envelope returned is below:

<soapenv:Envelope xmlns:soapenv=”http://schemas.xmlsoap.org/soap/envelope/” xmlns:xsd=”http://www.w3.org/2001/XMLSchema” xmlns:xsi=”http://www.w3.org/2001/XMLSchema-instance”><soapenv:Body><runReportResponse xmlns=”http://xmlns.oracle.com/oxp/service/v11/PublicReportService”><runReportReturn>        <reportBytes>PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPCEtLUdlbmVyYXRlZCBieSBPcmFjbGUgQkkgUHVibGlzaGVyIDEyLjIuMS4xLjAgLURhdGFlbmdpbmUsIGRhdGFtb2RlbDpfY3VzdG9tX0JJUF9ERU1PX01PREVMX3hkbSAtLT4KPERBVEFfRFM+PFNBVy5QQVJBTS5BTkFMWVNJUz48L1NBVy5QQVJBTS5BTkFMWVNJUz4KPEdfMT4KPENPTFVNTjA+QWNjZXNzb3JpZXM8L0NPTFVNTjA+PENPTFVNTjE+NTE2MTY5Ny44NzwvQ09MVU1OMT48Q09MVU1OMj40ODM3MTU8L0NPTFVNTjI+CjwvR18xPgo8R18xPgo8Q09MVU1OMD5BdWRpbzwvQ09MVU1OMD48Q09MVU1OMT43MjM3MzYyLjM8L0NPTFVNTjE+PENPTFVNTjI+NjI3OTEwPC9DT0xVTU4yPgo8L0dfMT4KPEdfMT4KPENPTFVNTjA+Q2FtZXJhPC9DT0xVTU4wPjxDT0xVTU4xPjY2MTQxMDQuNTU8L0NPTFVNTjE+PENPTFVNTjI+NDAzNzQ0PC9DT0xVTU4yPgo8L0dfMT4KPEdfMT4KPENPTFVNTjA+Q2VsbCBQaG9uZXM8L0NPTFVNTjA+PENPTFVNTjE+NjMyNzgxOS40NzwvQ09MVU1OMT48Q09MVU1OMj40Nzg5NzU8L0NPTFVNTjI+CjwvR18xPgo8R18xPgo8Q09MVU1OMD5GaXhlZDwvQ09MVU1OMD48Q09MVU1OMT44ODA3NzUzLjI8L0NPTFVNTjE+PENPTFVNTjI+NjU1MDY1PC9DT0xVTU4yPgo8L0dfMT4KPEdfMT4KPENPTFVNTjA+SW5zdGFsbDwvQ09MVU1OMD48Q09MVU1OMT40MjA4ODQxLjM5PC9DT0xVTU4xPjxDT0xVTU4yPjY2MTQ2OTwvQ09MVU1OMj4KPC9HXzE+CjxHXzE+CjxDT0xVTU4wPkxDRDwvQ09MVU1OMD48Q09MVU1OMT43MDAxMjUzLjI1PC9DT0xVTU4xPjxDT0xVTU4yPjI2OTMwNTwvQ09MVU1OMj4KPC9HXzE+CjxHXzE+CjxDT0xVTU4wPk1haW50ZW5hbmNlPC9DT0xVTU4wPjxDT0xVTU4xPjQxMjAwOTYuNDk8L0NPTFVNTjE+PENPTFVNTjI+NTI3Nzk1PC9DT0xVTU4yPgo8L0dfMT4KPEdfMT4KPENPTFVNTjA+UGxhc21hPC9DT0xVTU4wPjxDT0xVTU4xPjY2Njk4MDguODc8L0NPTFVNTjE+PENPTFVNTjI+Mjc4ODU4PC9DT0xVTU4yPgo8L0dfMT4KPEdfMT4KPENPTFVNTjA+UG9ydGFibGU8L0NPTFVNTjA+PENPTFVNTjE+NzA3ODE0Mi4yNTwvQ09MVU1OMT48Q09MVU1OMj42MzcxNzQ8L0NPTFVNTjI+CjwvR18xPgo8R18xPgo8Q09MVU1OMD5TbWFydCBQaG9uZXM8L0NPTFVNTjA+PENPTFVNTjE+Njc3MzEyMC4zNjwvQ09MVU1OMT48Q09MVU1OMj42MzMyMTE8L0NPTFVNTjI+CjwvR18xPgo8L0RBVEFfRFM+</reportBytes><reportContentType>text/xml</reportContentType><reportFileID xsi:nil=”true”/><reportLocale xsi:nil=”true”/></runReportReturn></runReportResponse></soapenv:Body></soapenv:Envelope>

An example of the XPATH expression to retrieve just the value of reportBytes is below:

//*:reportBytes/text()

Putting these together, an example APEX statement is below:

f_report_bytes := apex_web_service.parse_xml_clob( p_xml => f_xml, p_xpath => '//*:reportBytes/text()' );

Decode the Report Bytes Returned

This step uses the APEX_WEB_SERVICE package to decode the Base64 result from above into a BLOB variable and then uses the XMLTYPE function to convert the BLOB into a XMLTYPE variable.

Decoding of the Base64 result should first be tested with a Base64 decoding tool e.g. base64decode.org

An example of the APEX decode command is below:

f_blob := apex_web_service.clobbase642blob(f_base64_clob);

 An example of the XMLTYPE function is below:

f_xml := xmltype (f_blob, 1);

The decoded XML output looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<!--Generated by Oracle BI Publisher 12.2.1.1.0 -Dataengine, datamodel:_custom_BIP_DEMO_MODEL_xdm -->
<DATA_DS><SAW.PARAM.ANALYSIS></SAW.PARAM.ANALYSIS>
<G_1>
<COLUMN0>Accessories</COLUMN0><COLUMN1>5161697.87</COLUMN1><COLUMN2>483715</COLUMN2>
</G_1>
<G_1>
<COLUMN0>Audio</COLUMN0><COLUMN1>7237362.3</COLUMN1><COLUMN2>627910</COLUMN2>
</G_1>
<G_1>
<COLUMN0>Camera</COLUMN0><COLUMN1>6614104.55</COLUMN1><COLUMN2>403744</COLUMN2>
</G_1>
<G_1>
<COLUMN0>Cell Phones</COLUMN0><COLUMN1>6327819.47</COLUMN1><COLUMN2>478975</COLUMN2>
</G_1>
<G_1>
<COLUMN0>Fixed</COLUMN0><COLUMN1>8807753.2</COLUMN1><COLUMN2>655065</COLUMN2>
</G_1>
<G_1>
<COLUMN0>Install</COLUMN0><COLUMN1>4208841.39</COLUMN1><COLUMN2>661469</COLUMN2>
</G_1>
<G_1>
<COLUMN0>LCD</COLUMN0><COLUMN1>7001253.25</COLUMN1><COLUMN2>269305</COLUMN2>
</G_1>
<G_1>
<COLUMN0>Maintenance</COLUMN0><COLUMN1>4120096.49</COLUMN1><COLUMN2>527795</COLUMN2>
</G_1>
<G_1>
<COLUMN0>Plasma</COLUMN0><COLUMN1>6669808.87</COLUMN1><COLUMN2>278858</COLUMN2>
</G_1>
<G_1>
<COLUMN0>Portable</COLUMN0><COLUMN1>7078142.25</COLUMN1><COLUMN2>637174</COLUMN2>
</G_1>
<G_1>
<COLUMN0>Smart Phones</COLUMN0><COLUMN1>6773120.36</COLUMN1><COLUMN2>633211</COLUMN2>
</G_1>
</DATA_DS>

Create a BICS Table

This step uses a SQL command to create a simple staging table that has 20 identical varchar2 columns. These columns may be transformed into number and date data types in a future transformation exercise that is not covered in this post.

A When Others exception block allows the procedure to proceed if an error occurs because the table already exists.

A shortened example of the create table statement is below:

execute immediate 'create table staging_table ( c01 varchar2(2048), … , c20 varchar2(2048) )';
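A sketch of the pattern, combining the create statement with the When Others block described above, is shown below (the column list is abbreviated; the full statement is in the complete procedure linked in the References section):

begin
  execute immediate 'create table staging_table ( c01 varchar2(2048), c02 varchar2(2048), c20 varchar2(2048) )';
exception
  when others then
    null;  -- most likely the table already exists, so continue
end;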

Load the BICS Table

This step uses SQL commands to truncate the staging table and insert rows from the BIP report XML content.

The XML content is parsed using an XPATH command inside two LOOP commands.

The first loop processes the rows by incrementing a subscript.  It exits when the first column of a new row returns a null value.  The second loop processes the columns within a row by incrementing a subscript. It exits when a column within the row returns a null value.

The following XPATH examples are for a data set that contains 11 rows and 3 columns per row:

//G_1[2]/*[1]/text()         -- Returns the value of the first column of the second row

//G_1[2]/*[4]/text()         -- Returns a null value for the 4th column, signaling the end of the row

//G_1[12]/*[1]/text()        -- Returns a null value for the first column of a new row, signaling the end of the data set

After each row is parsed, it is inserted into the BICS staging table.
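A simplified sketch of the loop structure is below (variable names are illustrative, declarations and the insert statement construction are omitted, and the complete procedure is linked in the References section):

f_row := 1;
loop
  f_c01 := apex_web_service.parse_xml_clob( p_xml => f_xml,
             p_xpath => '//G_1[' || f_row || ']/*[1]/text()' );
  exit when f_c01 is null;        -- no first column value means no more rows
  f_col := 2;
  loop
    f_val := apex_web_service.parse_xml_clob( p_xml => f_xml,
               p_xpath => '//G_1[' || f_row || ']/*[' || f_col || ']/text()' );
    exit when f_val is null;      -- no more columns in this row
    f_col := f_col + 1;
  end loop;
  -- insert the collected column values into the staging table here
  f_row := f_row + 1;
end loop;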

An image of the staging table result is shown below:

BIP Table Output

 

Summary

This post detailed a method of loading data that has been extracted from Oracle Business Intelligence Publisher (BIP) into the Oracle Business Intelligence Cloud Service (BICS).

Data was extracted and parsed from an XML-formatted BIP report using SOAP web services wrapped in the Oracle PL/SQL APEX_WEB_SERVICE package.

A BICS staging table was created and populated. This table can then be transformed into star-schema objects for use in modeling.

For more BICS and BI best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for BICS.

References

Complete Text of Procedure Described

Using Oracle BI Publisher to Extract Data from Oracle Sales and ERP Clouds

Database PL/SQL Language Reference

Reference Guide for the APEX_WEB_SERVICE

Soap API Testing Tool

XPATH Testing Tool

Base64 Decoding and Encoding Testing Tool

IDCS OAuth 2.0 and REST API


Introduction

This article is to help expand on topics of integration with Oracle’s Cloud Identity Management service called Identity Cloud Service (IDCS).  IDCS delivers core essentials around identity and access management through a multi-tenant Cloud platform.  One of the more exciting features of IDCS is that you can interact with it using a REST API.  REST, or REpresentational State Transfer, is a stateless, client-server, architectural style that runs over HTTP.   The IDCS REST APIs support SCIM 2.0 compliant endpoints with standard SCIM 2.0 core schemas and Oracle schema extensions to programmatically manage users, groups, applications and identity functions like password management and administrative tasks to name a few.  In fact, if you can do it in the IDCS user interface you can probably do it using the IDCS REST API.  This article is to get you started on the basics of using OAuth 2.0 authorization to access the REST API.  Consider this a building block to start your journey with the IDCS REST API.

 

Working with OAuth 2.0 to access the IDCS REST API

OAuth 2.0 is a standard for implementing delegated authorization. Authorization is based on the access token required to access a resource. The access token can be issued for a given scope, which defines what the access token can do and what resources it can access. As a quick prelude to working with OAuth 2.0 to access the IDCS REST API, there are four steps to complete.

 

1. Login to the IDCS admin console
2. Create an OAuth client application
3. Use the client ID and client secret to create the access token
4. Include the access token in the HTTP header when sending requests to the REST API

Once a client application is registered in IDCS, the following sequence diagram will help to illustrate the OAuth 2.0 authorization flow according to how we will access the IDCS REST API.

IDCS OAuth 2.0 Flow

I want to point out that the diagram could get more elaborate to include things like token expiration, revocation, refresh, other grant types, etc., but I want to keep the focus on a basic example to making requests to the IDCS REST API otherwise things can get convoluted.

The table below includes some important OAuth 2.0 parameters we will be working with IDCS to get started. It is not an exhaustive list, but a starting point to work with OAuth 2.0 and IDCS.

 

Parameter | Value | Comments
Authorization Header | Basic <base64_clientid_secret> | Used by the client as a Basic authentication scheme to transmit the access token in a header.  The access token value needs to be a base64 UTF-8 encoded value of the Client ID and Client Secret concatenated using a colon as a separator; e.g. clientID:clientSecret.
Client ID | <client_id> | Required. A unique “API Key” generated when registering your application in the IDCS admin console.
Client Secret | <client_secret> | Required. A private key similar to a password that is generated when registering your application in the IDCS admin console; do not share this value.
Access Token URL | /oauth2/v1/token | An endpoint used to obtain an access token from IDCS.
Auth URL | /oauth2/v1/authorize | An endpoint to obtain an authorization code from IDCS, to be further used during a 3-legged OAuth flow.
Grant Type | client_credentials | Required. It means the REST API to be invoked is owned by the client application.
Scope (required) | urn:opc:idm:__myscopes__ | This scope returns all the grants given to your application; other scopes could be used to get specific grants if necessary.

 

STEP 1: Registering a Web Application in IDCS

Registering a web application in the IDCS admin console will provide some key items to working with OAuth 2.0, which are Client ID, Client Secret, and Scopes.   An important step I want to mention when registering a web application in IDCS is the step below that adds scopes in a field called “Grant the client access to Identity Cloud Service Admin APIs”.  In my example, and for the purposes of this article, I am giving certain scopes required to request User searches, edits, creates, and deletes, but if you were to do other things, for example manage Audit Events, that will require other scopes.  So let’s get started by registering a web application in IDCS.

 

1. Login as an Administrator to the IDCS admin console
2. Select the Applications tab
3. Click the Add button
4. Use the default Web Application and click Next
5. Enter a Name (5 characters or more) and also give some Description
6. Select Allowed Grant Types: Client Credentials
7. At the bottom check the Grant the client access to Identity Cloud Service Admin APIs and add the following scopes:  Me, Identity Domain Administrator
8. Click the Finish button
9. You should get a prompt with the Client ID and Client Secret, copy these to use later.

 

Application Added

10. Click the Close button
11. Click the Activate button and click Activate Application to confirm
12. Click the Save button; you should get a confirmation.

 

STEP 2:  Use cURL to get our OAuth 2.0 Token

Now that our application is registered with IDCS, we will use the Client ID and Client Secret with the cURL command to request an OAuth 2.0 token.

 

IMPORTANT:  The command cURL is something UNIX versions of operating systems like Linux, Solaris, macOS, etc. have built-in, but if you are using Windows, you will need to download it https://curl.haxx.se/download.html in order to execute it from the command line.

 

1. From the IDCS Web Application that was registered, get your Client ID and Client Secret; below is an example.

     Client ID:       3ebb5563c01246e28450b371fc16cebe
     Client Secret:   e856256a-99d9-405a-a914-6c433330ec62

2. Now concatenate the Client ID and Client Secret with a colon separator, and base64 encode it in UTF-8 format using the following command.  Note that the command below will only work on UNIX operating systems, so for Windows you may need to use the site https://www.base64encode.org to do the same thing.  This should generate a single value to use in the next step.

     echo -n "12b844740cec48068a8892dccc3f96e9:6ce580be-3dfe-4894-bc40-178856b6baae" | base64

3. Use the base64 encoded value from the previous step and the following cURL command to get your OAuth 2.0 token.  Please remove all the line breaks " \ " below if you are using Windows.  The response should be your Bearer Token.

     curl \
     -H "Authorization: Basic <your base64 encoded client id/secret>" \
     -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" \
     --request POST tenant1.idcs.internal.oracle.com:8990/oauth2/v1/token \
     -d "grant_type=client_credentials&scope=urn:opc:idm:__myscopes__"

4. The OAuth 2.0 Token from the previous step needs to be copied.  Make sure to only copy the actual token, which is the access_token value in the example below, and not the rest of the JSON response.  ALERT: Notice in the output it shows an expiration; i.e. "expires_in":3600.  This means your token will no longer be valid after 1 hour from the time you generated it. After 1 hour you will either have to refresh the token or get a new one.

     {"access_token":"eyJ4NXQ[A large section was removed to reduce clutter] n8idMxlHS8","token_type":"Bearer","expires_in":3600}

 

BONUS: If you are using a UNIX OS, you can append | awk -F"\"" '{print $4}' to the end of the cURL command to parse out just the Bearer token.  Just remember the default expiration of the token will be 3600 seconds from the time of the request.
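If you would rather request the token from PL/SQL instead of cURL (for example from a database schema, using the APEX_WEB_SERVICE package that appears elsewhere in these articles), a rough sketch could look like the following. The host name is the example tenant used above, and the Basic value is the base64 encoded clientID:clientSecret from step 2; adjust both for your environment.

declare
  l_response clob;
begin
  apex_web_service.g_request_headers(1).name  := 'Authorization';
  apex_web_service.g_request_headers(1).value := 'Basic <your base64 encoded client id/secret>';
  apex_web_service.g_request_headers(2).name  := 'Content-Type';
  apex_web_service.g_request_headers(2).value := 'application/x-www-form-urlencoded;charset=UTF-8';
  l_response := apex_web_service.make_rest_request(
                  p_url         => 'http://tenant1.idcs.internal.oracle.com:8990/oauth2/v1/token',
                  p_http_method => 'POST',
                  p_body        => 'grant_type=client_credentials&scope=urn:opc:idm:__myscopes__' );
  -- l_response now holds the JSON document that contains the access_token value
  dbms_output.put_line(substr(l_response, 1, 200));
end;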

 

STEP 3: Use the OAuth 2.0 Token and cURL to Send a REST Request

Now that we have our OAuth 2.0 token from the previous step, we can use the token with the cURL command again to send a REST request to the IDCS REST API in order to do something.  Before I show you the cURL command let’s break it into parts to help explain the request using the following table.

 

# Option Example
1 Method -X GET
2 Content Type Header -H “Content-Type:application/scim+json”
3 Authorization Header -H “Authorization: Bearer <your token>”
4 HTTP Protocol http or https (recommend https)
5 IDCS hostname:port tenant1.mycompany.com:8990
6 IDCS REST Endpoint /admin/v1/Users
7 IDCS Querystring ?filter=userName+co+%22tim%22 (username contains tim)

 

Now let’s use our token with the cURL command to send a request to the IDCS REST API.  The following cURL command will work with UNIX, but for Windows please remove all the line breaks “ \ ” shown below.  If everything is successful you will get a response with a list of users in a JSON format.

 

     curl \
     -X GET \
     -H "Content-Type:application/scim+json" \
     -H "Authorization: Bearer <your oauth2.0 token>" \
     http://tenant1.idcs.internal.oracle.com:8990/admin/v1/Users

 

BONUS: If you are using a UNIX OS, you can append  | python -m json.tool  to the end of the cURL command to format the JSON response in a pretty format, as shown below.  Windows users will have to install Python or use something else.
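
For example, the complete command with the formatting pipe appended would look something like this (same placeholder token and host as above):

     curl \
     -X GET \
     -H "Content-Type:application/scim+json" \
     -H "Authorization: Bearer <your oauth2.0 token>" \
     http://tenant1.idcs.internal.oracle.com:8990/admin/v1/Users \
     | python -m json.tool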

 

STEP 4: The JSON Output from the IDCS REST API

In the previous step the REST request sent using cURL returned a response in a JSON format.  JSON is an open standard that can be formatted or parsed per your needs, for example to extract specific attributes required by your application.  The site http://www.json.org has more information about JSON, and there are many more sites that provide helpful information.  Below is an example JSON response from IDCS.

 

JSON Response

 

Using Postman to Send REST Requests

There are various tools that can be used to send REST requests that make it easier than cURL, and one very useful tool is Postman.  Postman (https://www.getpostman.com/docs/) will run natively on a Mac or as an add-on in the Chrome Internet browser, so if you can run Chrome you can run Postman.  I don’t want to get deep into all the things Postman can do, but if you want to try it I will show you the configuration needed to set up Postman in order to automate getting the required OAuth 2.0 token to send REST requests.  This little tip will save you a lot of time.

 

1. Install Postman whether it be on a Mac or an add-on in the Chrome browser

2. Click on the Headers tab

3. Add the following two headers and values:

Key Value
Authorization Bearer
Content-Type application/scim+json

4. Click on the Authorization tab

5. From the Type drop menu, select OAuth 2.0

6. Click on the button “Get New Access Token”

7. Now enter the following GET NEW ACCESS TOKEN values:

GET NEW ACCESS TOKEN

Parameter Value
Callback URL <use the default>
Token Name Bearer
Auth URL <http/https>://<tenant hostname:port>/oauth2/v1/authorize
Access Token URL <http/https>://<tenant hostname:port>/oauth2/v1/token
Client ID <Your IDCS Client ID>
Client Secret <Your IDCS Client Secret>
Scope (IDCS Required) urn:opc:idm:__myscopes__
Grant Type Client Credentials

 

8. Click the Request Token button

9. Select the Bearer row

10. Click the Use Token; this will update your Authorization header Bearer value.

11. Select the Headers tab and you will see that your Authorization Bearer value has been updated

12. Above you should see GET as the method; if not, open the drop-down menu and select it

13. Enter an IDCS URL with some REST API endpoint; e.g.
https://tenant.mycompany.com:8990/admin/v1/Users?filter=userName sw “tim”

14. Click the Send button.

 

If everything goes correctly, you should get the data you requested in a JSON response.  Note that besides the GET method shown above, the SCIM REST API allows other methods like POST, PUT, DELETE, and PATCH, just to name a few.  To learn more about all the possibilities of the IDCS REST API, see the official Oracle IDCS documentation.

 

IMPORTANT: Your OAuth 2.0 token you get in step 7 and 8 above will expire after 1 hour.  After 1 hour you will need to click on the Get New Access Token button again, click the Request Token, select the new Bearer token row, and finally click the Use Token button again.  You can then resend a request by clicking on the Send button.

 

Summary

Hopefully, you now have some basics on using OAuth 2.0 to access the IDCS REST API.  I used cURL because it is simple to use and it helps you understand all the necessary steps needed to work with OAuth 2.0, though I also introduced Postman, which is a great tool.  Something else I hope I illustrated in this article is the basics of sending REST requests to the IDCS REST API endpoints.  As I mentioned at the beginning of this article, this is really a foundational building block for using OAuth 2.0 to work with the IDCS REST API.  My next article is going to expand more on the endpoints related to querying Audit Events. Until then, enjoy, and feel free to play around with Postman to try out some other endpoints and queries.

IDCS Audit Event REST API


Introduction

This article helps expand on topics of integration with Oracle’s cloud identity management service, Identity Cloud Service (IDCS). IDCS delivers core essentials around identity and access management through a multi-tenant cloud platform. As part of the IDCS framework, it collects audit events that capture all significant events, changes, and actions, which are sent to an audit table. As with any identity and access service that revolves around security, you will eventually need to access audit records for various reasons pertinent to standard security practices and corporate policy. In this article I want to cover what the IDCS audit events provide and how to leverage them using the IDCS REST API Audit Events endpoints.

 

Auditing Overview

The audit events can be accessed using the IDCS SCIM 2.0 compliant REST API. SCIM (System for Cross-domain Identity Management) is an open standard that simplifies user identity management in the cloud. The following is a quick summary of what you should know from a high level.

* Audit events include login events, changes, and actions.
* Audit events are kept up to a maximum of 90 days
* Audit events are managed using REST APIs via OAuth 2.0
* Audit event REST endpoints allow query parameters and filters
* Audit event REST responses are in JSON format

Reporting is a basic feature that comes as part of the IDCS user interface, but only provides some simple reporting. A more powerful way to retrieve Audit records from IDCS is to use the REST API. The REST API endpoint can use optional query parameters and filters to fine tune what information you want, more on this in the next couple of sections.

 

Audit Event Endpoints

The following table covers all possible IDCS audit event endpoints. In addition, you should know that some endpoints can include query parameters that can include schema attributes, which I will cover in the next section.

 

Method

Action

Endpoint

Comment

POST

Create

/admin/v1/AuditEvents

Create a new audit record. Any parameters need to be included in a JSON body.

DELETE

Delete

/admin/v1/AuditEvents/{id}

Delete an audit record using an audit record ID. Any parameters need to be included in a JSON body.

GET

Get Event by ID

/admin/v1/AuditEvents/{id}

Retrieve a single audit record using a unique ID. Any parameters need to be included in query string.

GET

Search by GET

/admin/v1/AuditEvents

Parameters can be included using a query string. Any parameters need to be included in query string.

POST

Search by POST

/admin/v1/AuditEvents/.search

Parameters are posted in the request body using JSON. Any parameters need to be included in a JSON body.

 

Audit Event Query Parameters

The following table provides all the audit event parameters that can be used to query the records. In later sections I will get into some examples on how to use some of these parameters.

Parameter

Type

Description

filter

string

A filter using valid schema attributes to request specific resources.  The filter can include logical operators such as AND and OR. For more details see the SCIM specifications https://tools.ietf.org/html/draft-ietf-scim-api-19#section-3.4.2.2 for more information.

attributes

string

A comma delimited string of valid attributes that specify resources.  The values of the attributes should match the required SCIM schema definition.

sortBy

string

Used to sort the response by some valid audit event attribute.

sortOrder

string

Using the allowed values “ascending” or “descending” to order the sort of the results; the default if sortOrder is not used is ascending.

count

number

Can set the maximum number of records returned per page.  Excluding “count” sets the default maximum number to 50, where the maximum value allowed is 1000.  If the number of records returned is larger than the count value, you must use the startIndex to paginate through the records.

startIndex

number

This determines the first record in the page set.  The default is 1, so if the startIndex is set to 100, the 100th record will be the first in the list returned.  See the Pagination section of the SCIM specifications https://tools.ietf.org/html/draft-ietf-scim-api-19#section-3.4.2.4 for more information.

My First Audit Event Search

Now that I have covered the core endpoints and query parameters, let’s get into our first search. Imagine you either work in Info Sec or work with someone who does; in either case, there will be times an audit is required. Even if an audit is not something done on a regular basis, the fact that IDCS will only keep a maximum of 90 days of records means that if your corporate policy demands records be kept for, say, 7 years, you must establish a process to query the IDCS audit records on a regular basis and store them externally. That way you can use tools like BICS (Business Intelligence Cloud Service) to build reports when needed, even if you need to go back 7 years.

There are two methods used to send searches to the IDCS REST API Audit Events endpoints, GET or POST, and though each option provides the same results, how the query parameters are sent differs. The following basic searches should help illustrate the differences between using GET and POST.

 

GET method

     https://tenant1.mycompany.com/admin/v1/AuditEvents?filter=actorName sw "bhaas"

POST method

     https://tenant1.mycompany.com/admin/v1/AuditEvents/.search

JSON body

     {
          "schemas": ["urn:ietf:params:scim:api:messages:2.0:SearchRequest"],
          "attributes": ["actorName"],
          "filter": "actorName sw \"bhaas\"",
          "startIndex": 1,
          "count": 5
     }

Notice the GET method above sends all the parameters in a URL query string, while the POST method requires the addition of “/.search” to the endpoint plus the search parameters sent in a JSON body. Whether you use GET or POST is up to you; it will most likely depend on your application integration requirements.
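
For reference, a rough sketch of sending that same POST search with cURL could look like the following (the hostname and Bearer token are placeholders):

     curl \
     -X POST \
     -H "Content-Type:application/scim+json" \
     -H "Authorization: Bearer <Your Bearer token>" \
     -d '{"schemas": ["urn:ietf:params:scim:api:messages:2.0:SearchRequest"], "attributes": ["actorName"], "filter": "actorName sw \"bhaas\"", "startIndex": 1, "count": 5}' \
     https://tenant1.mycompany.com/admin/v1/AuditEvents/.search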

Before we jump into sending a search via the REST API, I am going to assume you already know how to get a proper OAuth 2.0 token. If not, for your convenience I have already published an article on how to do this in the article “IDCS OAuth 2.0 and REST API”, which gives easy steps and examples using cURL or Postman. So going forward I am only going to focus on the endpoint and query parameters. Let’s move on to our first audit event search example.

     /admin/v1/AuditEvents?filter=actorName sw "tim"

Let’s break out the above search to understand what we are doing using the following table.

Part

Value

Description

Endpoint

/admin/v1/AuditEvents

The endpoint used to query audit events

Query parameter

?filter=

A parameter used to filter on some SCIM attribute

Attribute

actorName

Attribute used in the filter parameter

Logical Operator

sw

The logical operator “sw” is starts with, but there are many others.

Search value

“tim”

This is the value to search for

 

Now before we finally send that search, I want to point out if you are going to send a GET request using the cURL command, the query string needs to be URL encoded.

Take the following example…

* Will NOT work with cURL:
     ?filter=actorName sw "tim"

 

* Will work with cURL:
     ?filter=actorName%20sw%20%22tim%22

 

So the below final cURL command can be used to send our first audit event search; be sure to replace the <Your Bearer token> with a real token.

     curl \
     -X GET \
     -H "Content-Type:application/scim+json" \
     -H "Authorization: Bearer <Your Bearer token>" \
     "http://tenant1.idcs.my.company.com:8990/admin/v1/AuditEvents?filter=actorName%20sw%20%22tim%22"
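
As an aside (not part of the original steps above), cURL can also do the URL encoding for you: the -G option turns --data-urlencode values into a GET query string, so an equivalent sketch of the same search is:

     curl -G \
     -H "Content-Type:application/scim+json" \
     -H "Authorization: Bearer <Your Bearer token>" \
     --data-urlencode 'filter=actorName sw "tim"' \
     http://tenant1.idcs.my.company.com:8990/admin/v1/AuditEvents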

 

Once you send your search, a response will come back if everything went successfully. The format of the response is JSON, an open standard, which can then be parsed or manipulated as needed. This makes building custom interfaces relatively simple. The following is an example of what your response may look like.

JSON RESPONSE

Search by Date Range

Another useful search is to filter by date range. This example is a little more complicated, but I will walk through all the parts of the search.

     /admin/v1/AuditEvents?filter=timestamp ge "2016-06-20T00:00:00Z" and timestamp le "2016-06-22T00:00:00Z"&sortBy=timestamp&sortOrder=descending

 

Below is a table to help understand more about the parts of the search to help learn what we are sending and why.

Part

Value

Description

Endpoint

/admin/v1/AuditEvents

The endpoint used to query audit events

Query parameter

?filter=

A parameter used to filter using a valid SCIM attribute.

Attribute

timestamp

Attribute used in the filter parameter

Logical operator

“ge” and “le”

The logical operator “ge” for greater than or equal to, and “le” for less than or equal to. 

Search value

“2016-06-20T00:00:00Z” and “2016-06-22T00:00:00Z”

These are the date values used in the search, which must be in UTC format.

sortBy

timestamp

Sorts by a valid SCIM schema attribute, “timestamp” of the audit records.

sortOrder

descending

This the sort order of the results; options are ascending or descending.

 

IMPORTANT:  When using a date range search with IDCS, you should include the “sortBy” parameter as a habit.  The reason is that if you are paging through multiple results by setting the startIndex parameter, you will get an error as soon as you request anything beyond the first page set.  For example, if there are a total of 152 records returned and you set the startIndex parameter to 51 to get the second page set of records, you will get the following error unless you use the sortBy parameter.

 

     {
          "schemas": [
               "urn:ietf:params:scim:api:messages:2.0:Error",
               "urn:ietf:params:scim:api:oracle:idcs:extension:messages:Error"
          ],
          "detail": "Missing \"sortby\". sortby is mandatory when startIndex is greater than 1.",
          "status": "400",
          "urn:ietf:params:scim:api:oracle:idcs:extension:messages:Error": {
               "messageId": "error.common.common.missingSortBy"
          }
     }

 

Again, to send the query string using the cURL command, we need to URL encode our query string as follows before sending it.

     curl \
     -X GET \
     -H "Content-Type:application/scim+json" \
     -H "Authorization: Bearer <Your Bearer token>" \
     "http://tenant1.idcs.my.company.com:8990/admin/v1/AuditEvents?filter=timestamp%20ge%20%222016-06-20T00%3A00%3A00Z%22%20and%20timestamp%20le%20%222016-06-22T00%3A00%3A00Z%22&sortBy=timestamp&sortOrder=descending"

Understanding the REST API Result Limits

The IDCS REST API has some defaults when returning a large number of records. When you send a search you will notice a couple things in the JSON response. At the top of the result you will see a parameter “totalResults”. This shows the total number of records from the query, but it does not mean that is how many results you got in your response.

       {
          "schemas": [
               "urn:scim:api:messages:2.0:ListResponse"
          ],
          "totalResults": 52,

At the bottom of the result there are a couple of other parameters, "startIndex" and "itemsPerPage". The startIndex is the position of the first record in the page set you are viewing, while the itemsPerPage parameter tells us there is a maximum of 50 records per page.

     ],
          "startIndex": 1,
          "itemsPerPage": 50
     }

 

Every page set that is returned will contain totalResults, startIndex, and itemsPerPage. If we put this together it tells us:

1. The total number of records in our search is 52; e.g. "totalResults": 52.
2. Our current result set is the first; e.g. "startIndex": 1.
3. Finally, our maximum records per page set is 50; e.g. "itemsPerPage": 50.

An important note is the “itemsPerPage” has a default value of 50, but you can override this by using the “count” parameter. If you were to include the “count” parameter in the query string or JSON body with the value of 200, the total records per page returned would be a maximum of 200 per page set. For example…

 

     /admin/v1/AuditEvents?filter=timestamp ge "2016-06-20T00:00:00Z" and timestamp le "2016-06-22T00:00:00Z"&sortBy=timestamp&sortOrder=descending&count=200

 

An important note about the "count" parameter is that there is a maximum limit of 1000. Even if you change the count value to, say, 2000, only a maximum of 1000 records per page set is returned. This presents a little extra work if, say, 5000 records are returned. So how do you deal with that?

 

How to Deal with Large Record Results?

To build on the previous section let’s try to understand how to deal with large record results. Let’s assume our total record result equals 152. One option is to set the count parameter we learned about earlier to a value of 1000. We certainly would get all our records returned since the total number of records is less than 1000, but if our total record result is greater than 1000 that introduces a problem. To solve this, we need to paginate through the records using the startIndex parameter.

First of all, we won’t really know how many records are going to be returned, will we? So a trick is to use the "count" parameter and set its value to 0.

 

     /admin/v1/AuditEvents?filter=timestamp ge "2016-06-20T00:00:00Z" and timestamp le "2016-06-22T00:00:00Z"&sortBy=timestamp&sortOrder=descending&count=0

 

This will not return any records, but it will show us the total number of records returned in your search. In the example below we have 152 records in total returned by the search.

 

     {
        "schemas": [
             "urn:scim:api:messages:2.0:ListResponse"
        ],
        "totalResults": 152,
        "Resources": [],
        "startIndex": 1,
        "itemsPerPage": 0
     }

 

Once we know the total records we can paginate through them using the startIndex parameter.  Assume we don’t bother with using the count parameter and go with the default of 50 records per page.  We can then do something such as the pseudo code below using a while loop (a runnable sketch follows the pseudo code).

     startIndex = 1
     while startIndex <= 152
          Get 50 records starting at startIndex
          startIndex = startIndex + 50
     end
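
Translated into a rough shell sketch (the host, token, and filter are placeholders; Python is used only to read totalResults from the JSON, and each page of raw JSON is simply appended to a file):

     #!/bin/sh
     # Placeholder values - substitute your own tenant host, Bearer token, and filter.
     HOST="http://tenant1.idcs.my.company.com:8990"
     TOKEN="<Your Bearer token>"
     FILTER="timestamp%20ge%20%222016-06-20T00%3A00%3A00Z%22%20and%20timestamp%20le%20%222016-06-22T00%3A00%3A00Z%22"

     # First ask for count=0 just to learn the total number of records.
     TOTAL=$(curl -s -H "Authorization: Bearer ${TOKEN}" \
       "${HOST}/admin/v1/AuditEvents?filter=${FILTER}&sortBy=timestamp&count=0" \
       | python -c 'import json,sys; print(json.load(sys.stdin)["totalResults"])')

     # Then walk through the result set 50 records at a time.
     START=1
     while [ "${START}" -le "${TOTAL}" ]; do
       curl -s -H "Authorization: Bearer ${TOKEN}" \
         "${HOST}/admin/v1/AuditEvents?filter=${FILTER}&sortBy=timestamp&startIndex=${START}&count=50" \
         >> audit_events.json
       START=$((START + 50))
     done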

 

The idea is we can get all our records by paging through them incrementing the startIndex. To continue with the previous query examples we would do something like this.

 

Records 1 – 50

     /admin/v1/AuditEvents?filter=timestamp ge "2016-06-20T00:00:00Z" and timestamp le "2016-06-22T00:00:00Z"&sortBy=timestamp&sortOrder=descending&startIndex=1

 

Records 51 – 100

     /admin/v1/AuditEvents?filter=timestamp ge "2016-06-20T00:00:00Z" and timestamp le "2016-06-22T00:00:00Z"&sortBy=timestamp&sortOrder=descending&startIndex=51

 

Records 101 – 150

     /admin/v1/AuditEvents?filter=timestamp ge "2016-06-20T00:00:00Z" and timestamp le "2016-06-22T00:00:00Z"&sortBy=timestamp&sortOrder=descending&startIndex=101

 

Records 151 – 152

     /admin/v1/AuditEvents?filter=timestamp ge "2016-06-20T00:00:00Z" and timestamp le "2016-06-22T00:00:00Z"&sortBy=timestamp&sortOrder=descending&startIndex=151

Audit Events Schema

I have covered a couple common search examples to get audit events, but eventually you may want to do other things. So I already pointed out the Audit Events endpoints and the query parameters, but as you try to build more queries you may realize that is not enough. For example, you already know about a query parameter “filter”, but what attributes can I use? So to complete the full circle you will need to understand the Audit Events schema.

You could reference Oracle’s IDCS documentation, but another tip is to simply send a REST request to return the entire Audit Events schema. Before you can do this your client will require the scope “Identity Domain Administrator”. To do this complete the following steps.

 

1. Log in as an Administrator to the IDCS admin console

2. Select the Applications tab

3. Click your Client

4. Select the Configuration tab

5. Expand the Client Configuration

6. Under Grant the client access to Identity Cloud Service Admin APIs, add the scope: Identity Domain Administrator

7. Click the Save button.

Once the above steps are complete you will need to request a new token which will include the new authorization roles. Next you will use the following query string to request the Audit Events schema.

     /admin/v1/Schemas/urn:ietf:params:scim:schemas:oracle:idcs:AuditEvent

Below is an example cURL command you can run to return the schema.

     curl \
     -X GET \
     -H "Content-Type:application/scim+json" \
     -H "Authorization: Bearer <Your Bearer token>" \
     http://tenant1.idcs.my.company.com:8990/admin/v1/Schemas/urn:ietf:params:scim:schemas:oracle:idcs:AuditEvent

Once the REST request is sent, a JSON format response will be returned that includes the full Audit Events schema. You can now see what attributes and objects are part of the schema in order to learn how to build new filters. The audit event schema JSON output will be fairly large, so another tip is to pipe the JSON response to a file, then copy and paste the data into the JSON field at http://json2table.com/ , and finally click the run button, which will display it as a nice table. This should make it easier to find the attributes you are looking for when building filters or other things with the query parameters.

 

Summary

I hope this has given you the knowledge needed to understand how to query the IDCS audit event records. This article should be a good building block for going beyond some of my examples. The rest is up to you…no pun intended. In a future article I will show how to develop an integration with BICS (Business Intelligence Cloud Service), where BICS leverages the IDCS REST API to retrieve audit records and uses those records to run some nice audit reports. I hope this will add more excitement to IDCS’s open framework. Until then enjoy!

Identity Cloud Service: Configuring SAML


Introduction

As we begin to deliver our Identity Cloud Service (IDCS) to the world (https://www.oracle.com/middleware/identity-management/index.html), we on the A-Team have been working to provide patterns and how-to posts to implement some of the common use cases we see in the field.  One of the more common use cases is integrating third-party Service Providers (SP) with Identity Cloud Service (IDCS).  IDCS is then configured to direct users to an Identity Provider (IdP) to collect credentials. By configuring multiple SPs to IDCS you essentially have a ‘hub and spoke’ paradigm with Security Assertion Markup Language, or SAML.

 

Main Article

The use case is simple.  Imagine an enterprise having many different vendors with whom they do business; these vendors have applications in the cloud. Many enterprises choose to keep their users’ identities and passwords in an internal store such as Active Directory.  IDCS can be configured as an intermediary that supports multiple cloud services, which then chains the request to the identity provider.

Let’s look at a picture:

 

PIC1

In this example, service providers are third party vendors with applications exposed in the cloud.  The identity provider collects user credentials and is located on-premise.

Configuration Steps

The steps assume that you have an IdP and SP already configured.  In my test environment I used two Oracle Access Manager (OAM) systems as the IdP and SP.

Configure IDCS Identity provider (OAM)
Extract IdP metadata, again in my case, I’m using OAM as my IdP.  So to obtain the SAML metadata you will need to access a URL like below:

http://<idp_hostname>:<port>/oamfed/idp/metadata

Import the metadata when creating an Identity Provider in IDCS.  Go to ‘Settings’ then select Identity Providers:

Selection_056

 

After clicking ‘Add’ you will have the option to load/import the meta-data you downloaded from your IdP:

Selection_058

Extract meta-data from IDCS

Now we need to extract the SAML meta-data from IDCS.  You can download this via an HTTP call:

http://myTenantID.internal.oracle.com:8943/fed/v1/metadata

The SP meta-data must be imported into your IdP (not shown).  Now the trust has been established between IDCS (SP) and your IdP.

 

Configure an IdP Partner in IDCS

Selection_060

Notice the federated SSO switch must be on.  You can test and validate your new IdP by clicking on the ‘Test Login’ link for the IdP.

 

When I click on the ‘Test Login’ page I should be directed to the IdP configured.  In my case, it is OAM that is using the default identity store, Weblogic embedded LDAP.

Login - Oracle Access Management 11g - Mozilla Firefox_062

 

Configure an SP Partner in IDCS
Extract metadata from IDCS

http://myTenantId.internal.oracle.com:8943/fed/v1/metadata

Import to your SP; I will not get into details on importing the meta-data to your SP.

Once your SP is set up you must export the SP meta-data and import it into IDCS.  Currently there is no UI for importing SP meta-data.  Instead you will need to make two REST calls.  The first call is to obtain the access token to be used in the second call, which will actually create the service provider in IDCS.

curl 'https://myTenantId.internal.oracle.com:8943/oauth2/v1/token' \
-X POST \
-H "Content-type: application/x-www-form-urlencoded" \
-H "Accept: application/json" \
-H "Authorization: Basic YzhlNWQ5NjkzNDBkNGEyNDljNmI2YWU0NjMzMjNjNTI6ZDNkYWRjZmEtYTU2Zi00YTZlLWE0Y2ItYTY3OTViNTllNTg1" \
-d 'username=admin%40oracle.com&scope=urn%3Aopc%3Aidm%3A__myscopes__&password=ABcd1234&grant_type=password'

Notice the -d and -H flags.  The -d flag contains the administrator user name and password for the tenant (myTenantId).  The -H flag is a base64 encoded value of the client application ID and the client secret; the format is ‘clientID:ClientSecret’.  The client ID should have already been created with the appropriate grant types.  This post will not get into details on how to create an application in IDCS; this will be discussed as a separate topic.  All you have to know is that in order to obtain the access bearer token, you must authenticate as the administrator with the client ID and secret as described.

 

Once you have the access token, you can now add your SP to IDCS:

curl 'https://myTenantId.idcs.internal.oracle.com:8943/admin/v1/ServiceProviders' \
-X POST \
-H "Content-type: application/scim+json" \
-H "Accept: application/scim+json,application/json" \
-H “Authorization: Bearer eyJ4NXQjUzI1NiI6Ijg1a3E1MFVBVmNSRDJOUTR6WVZMVDZXbndUZmVidjBhNGV2YUJGMjFqbU0iLCJ4NXQiOiJNMm1hRm0zVllsTUJPbjNHZXRWV0dYa3JLcmsiLCJraWQiOiJTSUdOSU5HX0tFWSIsImFsZyI6IlJTMjU2In0.eyJzdWIiOiI2Mjk5ZWViNWU2MjU0ZTI1YTI4NGE4ZWEzNzM3MzQ1YSIsInVzZXIudGVuYW50Lm5hbWUiOiJ0ZW5hbnR2ayIsInN1Yl9tYXBwaW5nYXR0ciI6InVzZXJOYW1lIiwiaXNzIjoiaHR0cHM6XC9cL2lkZW50aXR5Lm9yYWNsZWNsb3VkLmNvbVwvIiwidG9rX3R5cGUiOiJBVCIsImNsaWVudF9pZCI6IjYyOTllZWI1ZTYyNTRlMjVhMjg0YThlYTM3MzczNDVhIiwiYXVkIjpbImh0dHBzOlwvXC90ZW5hbnR2ay5pZGNzLmludGVybmFsLm9yYWNsZS5jb206ODk0MyIsInVybjpvcGM6bGJhYXM6bG9naWNhbGd1aWQ9dGVuYW50dmsiXSwiY2xpZW50QXBwUm9sZXMiOlsiR2xvYmFsIFZpZXdlciIsIkF1dGhlbnRpY2F0ZWQgQ2xpZW50IiwiSWRlbnRpdHkgRG9tYWluIEFkbWluaXN0cmF0b3IiLCJDbG91ZCBHYXRlIl0sInNjb3BlIjoidXJuOm9wYzppZG06dC5vYXV0aCB1cm46b3BjOmlkbTp0Lmdyb3Vwcy5tZW1iZXJzIHVybjpvcGM6aWRtOnQuYXBwIHVybjpvcGM6aWRtOnQuZ3JvdXBzIHVybjpvcGM6aWRtOnQubmFtZWRhcHBhZG1pbiB1cm46b3BjOmlkbTp0LnNlY3VyaXR5LmNsaWVudCB1cm46b3BjOmlkbTp0LnVzZXIuYXV0aGVudGljYXRlIHVybjpvcGM6aWRtOnQuZ3JhbnRzIHVybjpvcGM6aWRtOnQuaW1hZ2VzIHVybjpvcGM6aWRtOnQuYnVsayB1cm46b3BjOmlkbTp0LmJ1bGsudXNlciB1cm46b3BjOmlkbTp0LmpvYi5zZWFyY2ggdXJuOm9wYzppZG06dC5kaWFnbm9zdGljc19yIHVybjpvcGM6aWRtOnQuaWRicmlkZ2UgdXJuOm9wYzppZG06dC5pZGJyaWRnZS51c2VyIHVybjpvcGM6aWRtOnQudXNlci5tZSB1cm46b3BjOmlkbTpnLmFsbF9yIHVybjpvcGM6aWRtOnQudXNlci5zZWN1cml0eSB1cm46b3BjOmlkbTp0LnNldHRpbmdzIHVybjpvcGM6aWRtOnQuYXVkaXRfciB1cm46b3BjOmlkbTp0LmpvYi5hcHAgdXJuOm9wYzppZG06Zy5zaGFyZWRmaWxlcyB1cm46b3BjOmlkbTp0LnVzZXJzIHVybjpvcGM6aWRtOnQucmVwb3J0cyB1cm46b3BjOmlkbTp0LmpvYi5pZGVudGl0eSB1cm46b3BjOmlkbTp0LnNhbWwgdXJuOm9wYzppZG06dC5lbmNyeXB0aW9ua2V5IHVybjpvcGM6aWRtOnQuYXBwb25seV9yIiwiY2xpZW50X3RlbmFudG5hbWUiOiJ0ZW5hbnR2ayIsImV4cCI6MTQ3OTQyNjk2MiwiaWF0IjoxNDc5NDIzMzYyLCJjbGllbnRfbmFtZSI6IklEQ1NDTElfQ01KIiwidGVuYW50IjoidGVuYW50dmsiLCJqdGkiOiI5Y2MyMTQwMC03YjY5LTQzNWMtYWQ2MC1mYTg4MWQ1NzllMDcifQ.UJb5IuumPLG87xlQRYaf-SdWQI4AJ-Be1jvA2gn1zepbqaUy0Hxngc3Av1RX6GcRGSXle0h5GWsF76hec1lVKWpdrMNux9DG0d4w6Js3Wuyd_e2oyHhJZ8BX0_BaDQ7fBVQktjooVGgDJajTEbGX-4tiiA4vMyNWLYZOxJeqUus” \
-H "User-agent: Oracle-IDCS-CLI/0.0" \
-d ‘{“partnerName”: “OAM-SP”, “includeSigningCertInSignature”: true, “nameIdUserstoreAttribute”: “emails.primary.value”, “enabled”: true, “nameIdFormat”: “saml-emailaddress”, “logoutBinding”: “Redirect”, “schemas”: [“urn:ietf:params:scim:schemas:oracle:idcs:ServiceProvider”], “metadata”: “##\n# Host Database\n#\n# localhost is used to configure the loopback interface\n# when the system is booting.  Do not change this entry.\n##\n127.0.0.1\tlocalhost\n255.255.255.255\tbroadcasthost\n::1             localhost \n\n// Rob’s IDCS instance`\n<IP Address>  <Hostname> tenantvk.idcs.internal.oracle.com \n”}’

Keep in mind that you will need to do the above for every SP.  If the access token above has expired, you will again need to get a new access token from the first REST call.

 

Getting Started with Chatbots


Introduction

At Oracle Open World 2016, Larry Ellison demoed the upcoming Oracle Intelligent Bots Cloud Service (IBCS), if you haven’t seen the demo, you can watch the recording on youtube.

Chatbots employ a conversational interface that is both lean and smart, and if designed properly is even charming. Chat helps people find the things they want and need, as well as delivering great services and information directly into an interface they already know and love. Think about how much work it takes to compare and decide on which app to download. Then actually downloading it is never as easy as it sounds, then there is the anxiety of where on your home screen to put it, and then learning yet another new interface. Chatbots are the singularity that smart devices have been waiting for, the streamlined experience that will finally unshackle us from the burden that our apps put on our devices. For most of what we do on our mobile devices, the chatbot and chat interface are ideal.

Main Article

In this article, I’ll go through a step-by-step guide on how to get started with chatbots and build your first Facebook chatbot. We will implement the bot using NodeJS and will deploy it to Oracle Application Container Cloud Service (ACCS); for more information on Oracle ACCS please click here. In a nutshell, below are the discussed topics:

 

  • Create Facebook Page.
  • Create Facebook App.
  • Create Webhook and register with Facebook.
  • Receive Facebook Messages
  • Test using Facebook Messenger
  • Deploy to ACCS

In order to proceed with this tutorial, you need to have a Facebook account and you should install Facebook Messenger on your mobile device.

Create Facebook Page

    • Login to https://www.facebook.com with your Facebook credentials. From upper left corner, click the drop down menu and create a page.

Picture1

    • Choose any category, for example select ‘Company, Organization or Institution’, then specify a category and name and click ‘Get Started’. Make sure name is unique.
02 03

 

    • Fill in the page description and URL. Use your imagination for values then press ‘Save Info’.

04

    • Upload a page profile picture and then select ‘Next’.

05

    • For step (3) and Step (4), you can easily press “Skip” leaving all default values. Now you can see your Facebook page, upload a cover page picture by clicking on the ‘Add Cover’ button.

06

    • Click the ‘Add Button’ to add a ‘Send Message’ button.
07 08
  • We need to get the ‘Page ID’ in order to configure our bot with it. From ‘More’ menu, select ‘Edit Page Info’. From the page details popup, under general tab, scroll to the bottom and click ‘See All Information’. The page details are shown, scroll to the bottom and note the ‘Facebook Page ID’ copy its value and keep it handy because we will use it later on.

09

10 11

Create Facebook App

Now you need to create a Facebook App and link it with your Facebook Page you created earlier, hence when users start chatting with your Facebook page, messages will be redirected to your Facebook app and consequently be delivered to your bot implementation.

12

  • Set the App display name, be creative, and from Category select ‘Messenger Bot’.

  • Scroll down to the ‘Token Generation’ section and from the Page drop down menu, select the page created earlier, notice that ‘Page access Token’ will get auto populated. Click on the ‘Page Access Token’ to copy it to the clipboard; keep it handy as we will use it later on.

If you navigate away from this page and then go back and select the same page again, a new token will be generated, but the old token will still work; this is how Facebook designed their system.

 

14

  • Scroll a bit down and you will see a section for ‘webhooks’; this is where you will register your bot implementation webhook later on, in order for Facebook to start sending you client & system messages. But we first need to build the webhook before we can register it!

15

Create Webhook

A Facebook webhook is an SSL enabled REST endpoint exposed publicly to the internet so Facebook can access it. For the webhook implementation, we will use NodeJS; and in order to expose our webhook securely to Facebook, we will use the free tool ‘ngrok’ to create a secure tunnel (in the real world, your app will be deployed on a publicly accessible endpoint and a tunnel such as the one provided by ngrok is not necessary). Let us now build our simple webhook to complete the full Facebook bot lifecycle.

  • Create and initialize a new NodeJS app:
tqumhieh~ $mkdir chatbot
tqumhieh~ $cd chatbot
tqumhieh~/chatbot $npm init
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg> --save` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
name: (chatbot) 
version: (1.0.0) 
description: Facebook chatbot
entry point: (index.js) app.js
test command: 
git repository: 
keywords: 
author: Tamer Qumhieh
license: (ISC) 
About to write to /Users/tqumhieh/chatbot/package.json:

{
  "name": "chatbot",
  "version": "1.0.0",
  "description": "Facebook chatbot",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Tamer Qumhieh",
  "license": "ISC"
}


Is this ok? (yes) 
tqumhieh~/chatbot $touch app.js
tqumhieh~/chatbot $
  • We will be using nodejs ‘express‘ and ‘body-parser‘, so we need to add them to project dependencies:
npm install --save express body-parser

The first step for the Facebook webhook is to register/subscribe your webhook with your Facebook App. To complete this process, your Facebook App and your webhook need to exchange a couple of tokens. verify_token: this is an arbitrary text you define in the Facebook App and within your webhook implementation; when you register the Facebook App with your webhook, the token is sent to your webhook, and you need to compare the token received with the token defined in your code and, if they match, reply back with the ‘hub.challenge’ value. hub.challenge: this is a Facebook auto-generated token that is sent to your webhook as a query param.

 

  • Within the ‘app.js‘ file, copy the below code:
var express = require('express');
var bodyParser = require('body-parser');

var app = express();
app.set('port', 8081);
app.use(bodyParser.json());

var VERIFY_TOKEN = 'ateam';


app.get('/webhook', function (req, res) {
    if (req.query['hub.mode'] === 'subscribe' &&
        req.query['hub.verify_token'] == VERIFY_TOKEN) {
        console.log("Validating webhook");
        res.status(200).send(req.query['hub.challenge']);
    } else {
        console.error("Failed validation. Make sure the validation tokens match.");
        res.sendStatus(403);
    }
});

app.listen(app.get('port'), function () {
    console.log('Node app is running on port', app.get('port'));
});
  • Run your nodejs app.
node app.js
  • Now your webhook is almost ready for registration; you need to expose it through an SSL tunnel to Facebook. The easiest way is to use the free tool ‘ngrok’. Download ngrok from https://ngrok.com/download , unzip it, and from a command prompt navigate to the directory and start ngrok.
    ngrok http 8081

    Note that the port 8081 is the same port defined in your webhook.

  • Make sure the ‘session status’ is ‘online’. Now you have 2 URLs that are publicly accessible from the internet; one is SSL enabled and the other is not. Select and copy the ‘https’ URL.

Note that you will have different URL combinations.

16

17

  • Set the callback URL to the ‘https’ URL you copied from the ‘ngrok’ terminal and don’t forget to append (/webhook) to it, which is the REST endpoint defined in your code. Set the verify token to (ateam), which is also defined in the webhook implementation code.

if you wish to set a different value here, make sure to set the same value for (VERIFY_TOKEN) variable in the webhook implementation code.

  • Select (messages, messages_postback); this will instruct Facebook to forward all text messages and button postbacks to your webhook. If you wish to receive more details like delivery/read reports, check those as well.
  • Click ‘Verify and Save’; make sure no errors are thrown. You can check your IDE console, if nothing goes wrong, you should see ‘Validating webhook’ printed in the console window.

18

 

  • Within the same webhooks section, make sure to select your Facebook page and click subscribe, that is how the Facebook Page is linked to the App through the Webhook.

19

  • At this stage, you managed to successfully register your webhook with Facebook.

Note that if you close your ‘ngrok’ terminal and start it again, you will get different URLs. Hence you will need to edit the webhook URL within your Facebook App. Surprisingly, you need to go to a different window in order to do so. To edit the webhook URL value, select ‘Webhook’ from the left navigation menu. If you can’t find a ‘webhook’ entry in the menu, then from the left navigation menu click ‘Add Product’, look for ‘webhook’, and click the ‘Get Started’ button. Click the ‘Edit’ button in order to change the webhook URL and verification token.

Receive Facebook Messages

At this stage, your environment is successfully configured and set up to receive Facebook messages; now you need to add logic to your webhook implementation to intercept client messages. In order to intercept Facebook messages, you need to add a ‘POST’ endpoint to the same webhook URL defined before. However, this is very low level; we can always use a ready-made framework that facilitates this process, a framework like ‘botly’ (check the ‘botly’ GitHub repo to learn more about how to use it). To install the NodeJS ‘botly’ framework:

npm install --save botly
  • Change your webhook implementation code to use botly as below, make sure to update the value of the ACCESS_TOKEN and set it to the access token value copied before.
var VERIFY_TOKEN = 'ateam';
var ACCESS_TOKEN = 'YOUR ACCESS TOKEN';


var express = require('express');
var bodyParser = require('body-parser');
var Botly = require('botly');
var botly = new Botly({
    accessToken: ACCESS_TOKEN,
    verifyToken: VERIFY_TOKEN,
    webHookPath: '/',
    notificationType:  Botly.CONST.REGULAR
});

botly.on('message', function (userId, message, data) {
    console.log(data);
});

botly.on("postback", function (userId, message, postback) {
    console.log(postback);
});


var app = express();
app.use(bodyParser.json());
app.use("/webhook", botly.router());
app.listen(8081);
  • Run your updated code.

Test Using Facebook Messenger

You need to install Facebook Messenger on your mobile device. Start Facebook Messenger and, within the Home screen, search for the Facebook page you created before; if you followed this guide’s naming, the name is (A-Team). Notice that the page description, picture, and cover photo that show up are the same ones you specified in previous steps.

24 25 26
  • Send any text message, for example (hello from Facebook messenger) and notice the same message printed in your IDE console window.
27 28
  • To reply back with a text message, you can modify your botly.on(‘message’,…) block code as below:
botly.on('message', function (userId, message, data) {
    console.log(data);
    botly.sendText({id: userId , text : 'Hello back from BOT!!!'} , function(error , data)
    {
        if(error)
        {
            console.log(error);
        }
        else
        {
            console.log('message sent...');
        }
    });
});
  • Now back to Facebook Messenger, if you send the message again, you should see a reply back.

29

  • The power of Facebook Messenger is that it doesn’t only support text messages; it also supports different kinds of templates and message formats. To learn more about these templates visit the Facebook Messenger Platform Guide / Templates. The easiest way to generate these templates from your BOT is to use the ‘botly’ framework helper methods (sendButton, sendText, sendAttachment…); to learn the signatures of these methods check the botly framework documentation on GitHub. One last hint: if you generated and used a buttons template, when a user clicks on a button, the payload defined for that button is sent to your BOT, specifically to the below method.
botly.on("postback", function (userId, message, postback) {
    console.log(postback);
});

Deploy to ACCS

  • In order to deploy to ACCS, create a new (manifest.json) file at root level

  • Use the code below as the content of the newly created (manifest.json) file. Note the value of the “command” attribute; you need to replace ‘app.js’ with the name of your nodejs main file.
{
  "runtime":{"majorVersion" : "6.3"},
  "command":"node app.js",
  "release" : {},
  "notes":""
}
  • Within your main nodejs file, modify the ‘express’ app configuration as below (line 4). This will instruct your code to use the port assigned by ACCS (via the PORT environment variable) when one is provided, falling back to port 8081 otherwise.
var app = express();
app.use(bodyParser.json());
app.use("/webhook", botly.router());
app.listen( process.env.PORT || 8081);
  • Package the content of your application as a zip file; you should zip the contents of the root folder and not the folder itself. Navigate to the ACCS console and make sure it is configured correctly with StorageCS. Within the ‘ACCS’ console, select ‘Applications’ from the upper navigation menu. Then click ‘Create Application’. From the ‘Create Application’ dialog, select ‘Node’.

  • In the ‘Create Application’ dialog, set a ‘Name’, and upload the zip archive you created before. Set the ‘Memory’ to 5 GB. Click ‘Create’.

41

  • After your application is deployed to ACCS, copy the application URL.

  • Now you need to edit Facebook webhook and use the ACCS BOT URL instead of the URL generated by ngrok.

Summary

In this tutorial you learned the steps to create a basic Facebook chatbot and how to deploy it to Oracle ACCS. You can enrich the bot by introducing natural language processing ‘NLP’ frameworks to analyze and understand users intent when sending free text messages.


Integrating Commerce Cloud using ICS and WebHooks


Introduction:

Oracle Commerce Cloud is a SaaS application and is a part of the comprehensive CX suite of applications. It is the most extensible, cloud-based ecommerce platform offering retailers the flexibility and agility needed to get to market faster and deliver desired user experiences across any device.

Oracle’s iPaaS solution is the most comprehensive cloud based integration platform in the market today.  Integration Cloud Service (ICS) gives customers an elevated user experience that makes complex integration simple to implement.

Commerce Cloud provides various webhooks for integration with other products. A webhook sends a JSON notification to URLs you specify each time an event occurs. External systems can implement the Oracle Commerce Cloud Service API to process the results of a webhook callback request. For example, you can configure the Order Submit webhook to send a notification to your order management system every time a shopper successfully submits an order.

In this article, we will explore how ICS can be used for such integrations. We will use the Abandoned Cart Web Hook which is triggered when a customer leaves the shopping cart idle for a specified period of time. We will use ICS to subscribe to this Web Hook.

ICS provides pre-defined adapters, an easy-to-use visual mechanism for transforming and mapping data, and a fan-out mechanism to send data to multiple endpoints. It also provides the ability to orchestrate and enrich the payload.

Main Article:

For the purpose of this example, we will create a task in Oracle Sales Cloud (OSC), when the Idle Cart Web Hook is triggered.

The high level steps for creating this integration are:

  1. Register an application in Commerce Cloud
  2. Create a connection to Commerce Cloud in ICS
  3. Create a connection to Sales Cloud in ICS
  4. Create an integration using the 2 newly created connections
  5. Activate the integration and register its endpoint with Abandoned Cart Web Hook

Now let us go over each of these steps in detail

 

Register an application in Commerce Cloud

Login to Admin UI of commerce cloud. Click on Settings

01_CCDashBoard

 

 

 

Click on Web APIs

02_CCSettings

Click on Registered Applications

03_CCWebAPIs

Click on Register Application

04_CCWebAPIsRegisteredApps

 

 

 

 

 

Provide a name for the application and click Save

05_CCNewApp

 

 

 

 

 

A new application is registered and a unique application id and key is created. Click on Click to reveal to view the application key

06_CCNewAppKey1

 

 

 

 

 

Copy the application key that is revealed. This will later be provided while configuring connection to Commerce Cloud in ICS

07_CCNewAppKey2

 

 

 

 

 

You can see the new application is displayed in the list of Registered Applications

08_CCWebAPIsRegisteredApps2

 

 

 

 

 

Create a connection to Commerce Cloud in ICS

From the ICS Dashboard, click Connections to get to the connections section

01_ICSDashboard

 

 

 

 

 

Click Create New Connection

02_Connections

 

 

 

 

 

 

Create Connection – Select Adapter page is displayed. This page lists all the available adapters

03_ICSCreateNewConn

Search for Oracle Commerce Cloud and click Select

04_ICSNewConnCC

Provide a connection name and click Create

05_ICSNewConnCCName

 

 

 

 

 

 

ICS displays the message that connection was created successfully. Click Configure Connectivity

06_ICSNewConnCCCreated

 

 

 

 

 

Provide the Connection base URL. It is of the format https://<site_hostname>:443/ccadmin/v1. Click OK

07_ICSNewConnCCURL

 

 

 

Click Configure Security

08_ICSNewConnCCConfigureSecurity

 

 

 

 

Provide the Security Token. This is the value we copied after registering the application in Commerce Cloud. Click OK

09_ICSNewConnCCOAuthCreds

 

 

 

 

The final step is to test the connection. Click Test

10_ICSNewConnCCTest

 

 

ICS displays the message, if connection test is successful. Click Save

11_ICSNewConnCCTestResult

 

 

 

Create a connection to Sales Cloud in ICS

For details about this step and optionally how to use Sales Cloud Events with ICS, review this article

Create an integration using the 2 newly created connections

From the ICS Dashboard, click Integrations to get to the integrations area

01_Home

Click Create New Integration

02_CreateIntegration

 

 

Under Basic Map My Data, click Select

03_Pattern

 

 

 

 

 

Provide a name for the integration and click Create

04_Name

 

 

 

 

 

 

Drag the newly create Commerce Cloud connection from the right, to the trigger area on the left

05_SourceConn

 

 

 

 

 

Provide a name for the endpoint and click Next

06_EP1

 

 

 

 

 

 

Here you can choose various business objects that are exposed by the Commerce Cloud adapter. For the purpose of this integration, choose idleCart and click Next

07_IdleCartEvent

 

 

 

 

 

 

Review the endpoint summary page and click Done

08_EP1ConfigSummary

Similarly, drag and drop a Sales Cloud connection to the Invoke

09_TargetConn

 

 

 

 

 

Provide a name for the endpoint and click Next

10_EP2Name

 

 

 

 

 

Choose the ActivityService and the createActivity operation and click Next

11_CreateActivity

Review the summary and click Done

12_EP2Summary

 

 

 

 

 

Click the icon to create a map and click the “+” icon

This opens the mapping editor. You can create the mapping as desired. For the purpose of this article, a very simple mapping was created:

ActivityFunctionCode was assigned a fixed value of TASK. Subject was mapped to orderId from idleCart event.

22_ICSCreateIntegration

 

 

 

 

 

 

Add tracking fields to the integration and save the integration

25_ICSCreateIntegration

 

 

 

 

 

 

Activate the integration and register its endpoint with Abandoned Cart Web Hook

In the main integrations page, against the newly created integration, click Activate

26_ICSCreateIntegration

 

 

 

 

Optionally, check the box to enable tracing and click Yes

27_ICSCreateIntegration

 

 

 

 

ICS displays the message that the activation was successful. You can see the status as Active.

28_ICSCreateIntegration

 

 

 

Click the information icon for the newly activated integration. This displays the endpoint URL for this integration. Copy the URL. Remove the “/metadata” at the end of the URL. This URL will be provided in the Web Hook configuration of Commerce Cloud.

29_ICSCreateIntegration

 

 

 

 

In the Commerce Cloud admin UI, navigate to Settings -> Web APIs -> Webhook tab -> Event APIs -> Cart Idle – Production. Paste the URL and provide the ICS credentials for Basic Authorization

Webhook

 

 

 

 

 

 

By default, Abandoned cart event fires after 20 minutes. This and other settings can be modified. Navigate to Settings -> Extension Settings -> Abandoned Cart Settings. You can now configure the minutes until the webhook is fired. For testing, you can set it to a low value.

 

CCAbandonedCartSettings

This completes all the steps required for this integration. Now every time a customer adds items to a cart and leaves it idle for the specified time, this integration will create a task in OSC.

 

References / Further Reading:

Using Commerce Cloud Web Hooks

Using Event Handling Framework for Outbound Integration of Oracle Sales Cloud using Integration Cloud Service

Loading Data into Oracle BI Cloud Service using BI Publisher Reports and REST Web Services


Introduction

This post details a method of loading data that has been extracted from Oracle Business Intelligence Publisher (BIP) into the Oracle Business Intelligence Cloud Service (BICS). The BIP instance may either be Cloud-Based or On-Premise.

It builds upon the A-Team post Extracting Data from Oracle Business Intelligence 12c Using the BI Publisher REST API. This post uses REST web services to extract data from an XML-formatted BIP report.

The method uses the PL/SQL language to wrap the REST extract, XML parsing commands, and database table operations. It produces a BICS staging table which can then be transformed into star-schema object(s) for use in modeling.  The transformation processes and modeling are not discussed in this post.

Additional detailed information, including the complete text of the procedure described, is included in the References section at the end of the post.

Rationale for using PL/SQL

PL/SQL is the only procedural tool that runs on the BICS / Database Schema Service platform. Other wrapping methods e.g. Java, ETL tools, etc. require a platform outside of BICS to run on.

PL/SQL can utilize native SQL commands to operate on the BICS tables. Other methods require the use of the BICS REST API.

Note: PL/SQL is very good at showcasing functionality. However, it tends to become prohibitively resource intensive when deployed in an enterprise production environment.

For the best enterprise deployment, an ETL tool such as Oracle Data Integrator (ODI) should be used to meet these requirements and more:

* Security

* Logging and Error Handling

* Parallel Processing – Performance

* Scheduling

* Code Re-usability and Maintenance

The steps below depict how to load a BICS table.

About the BIP Report

The report used in this post is named BIP_DEMO_REPORT and is stored in a folder named Shared Folders/custom as shown below: BIP Report Location

The report is based on a simple analysis with three columns and output as shown below:

BIP Demo Analysis

Note: The method used here requires all column values in the BIP report to be NOT NULL for two reasons:

* The XPATH parsing command signals either the end of a row or the end of the data when a null result is returned.

* All columns being NOT NULL ensures that the result set is dense and not sparse. A dense result set ensures that each column is represented in each row.

Additional information regarding dense and sparse result sets may be found in the Oracle document Database PL/SQL Language Reference.

One way to ensure a column is not null is to use the IFNull function in the analysis column definition as shown below:

BIP IFNULL Column Def

Call the BIP Report

The REST API request used here is similar to the one detailed in Extracting Data from Oracle Business Intelligence 12c Using the BI Publisher REST API. The REST API request should be constructed and tested using a REST API testing tool e.g. Postman

This step uses the APEX_WEB_SERVICE package to issue the REST API request and return the result in a CLOB variable. The key inputs to the package call are:

* The URL for the report request service

* Two request headers to be sent for authorization and content.

* The REST body the report request service expects.

* An optional proxy override

An example URL is below:

http://hostname/xmlpserver/services/rest/v1/reports/custom%2FBIP_DEMO_REPORT/run

Note: Any ASCII special characters used in a value within a URL, as opposed to syntax, needs to be referenced using its ASCII code prefixed by a % sign. In the example above, the slash (/) character is legal in the syntax but not for the value of the report location. Thus the report location, “custom/BIP_DEMO_REPORT” must be shown as custom%2FBIP_DEMO_REPORT where 2F is the ASCII code for a slash character.

An example request Authorization header is below.

apex_web_service.g_request_headers(1).name := 'Authorization';
apex_web_service.g_request_headers(1).value := 'Basic cHJvZG5leTpBZG1pbjEyMw==';

Note: The authorization header value is the string ‘Basic ‘ concatenated with a Base64 encoded representation of a username and password separated by a colon e.g.  username:password

Encoding of the Base64 result should first be tested with a Base64 encoding tool e.g. base64encode.org
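
On a UNIX-like system, the same Base64 value can also be produced from the command line, for example (substitute real credentials for username:password):

     echo -n "username:password" | base64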

An example of the Content-Type header is below:

apex_web_service.g_request_headers(2).name := 'Content-Type';
apex_web_service.g_request_headers(2).value := 'multipart/form-data; boundary="Boundary_1_1153447573_1465550731355"';

Note: The boundary value entered here in the header is for usage in the body below. The boundary text may be any random text not used elsewhere in the request.

An example of a report request body is below:

--Boundary_1_1153447573_1465550731355
Content-Type: application/json
Content-Disposition: form-data; name="ReportRequest"

{"byPassCache":true,"flattenXML":false}

--Boundary_1_1153447573_1465550731355--

An example proxy override is below:

www-proxy.us.oracle.com

 An example REST API call:

f_report_clob := apex_web_service.make_rest_request( p_url => p_report_url, p_body => l_body, p_http_method => 'POST', p_proxy_override => l_proxy_override );
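Putting the pieces together, a minimal PL/SQL sketch of the complete request is below. It reuses the URL, credentials, boundary text and proxy shown above; the CRLF line terminators between the multipart body lines are an assumption based on the multipart/form-data format.

-- Minimal sketch: assemble the headers, multipart body and REST call shown above.
declare
  l_crlf           varchar2(2)   := chr(13) || chr(10);
  l_boundary       varchar2(100) := 'Boundary_1_1153447573_1465550731355';
  l_body           clob;
  l_proxy_override varchar2(100) := 'www-proxy.us.oracle.com';
  p_report_url     varchar2(500) := 'http://hostname/xmlpserver/services/rest/v1/reports/custom%2FBIP_DEMO_REPORT/run';
  f_report_clob    clob;
begin
  -- Authorization and Content-Type headers as described above
  apex_web_service.g_request_headers(1).name  := 'Authorization';
  apex_web_service.g_request_headers(1).value := 'Basic cHJvZG5leTpBZG1pbjEyMw==';
  apex_web_service.g_request_headers(2).name  := 'Content-Type';
  apex_web_service.g_request_headers(2).value := 'multipart/form-data; boundary="' || l_boundary || '"';

  -- Multipart body containing the ReportRequest JSON
  l_body := '--' || l_boundary || l_crlf ||
            'Content-Type: application/json' || l_crlf ||
            'Content-Disposition: form-data; name="ReportRequest"' || l_crlf || l_crlf ||
            '{"byPassCache":true,"flattenXML":false}' || l_crlf ||
            '--' || l_boundary || '--';

  -- Issue the POST and capture the multipart response
  f_report_clob := apex_web_service.make_rest_request(
                     p_url            => p_report_url,
                     p_body           => l_body,
                     p_http_method    => 'POST',
                     p_proxy_override => l_proxy_override );
end;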

Parse the BIP REST Result

The BIP REST result is the report XML data embedded in text with form-data boundaries.

This step uses the:

* INSTR function to determine the beginning and end of the embedded XML

* SUBSTR function to extract just the embedded XML and store it in a CLOB variable

* XMLTYPE.createXML function to convert and return the XML.

The key inputs to this step are:

* The CLOB returned from BIP REST call above

* The XML root name returned from the BIP report, e.g. DATA_DS

An example of the REST result returned is below:

--Boundary_2_1430729833_1479236681852

Content-Type: application/json

Content-Disposition: form-data; name="ReportResponse"

{"reportContentType":"text/xml"}

--Boundary_2_1430729833_1479236681852

Content-Type: application/octet-stream

Content-Disposition: form-data; filename="xmlp2414756005405263619tmp"; modification-date="Tue, 15 Nov 2016 19:04:41 GMT"; size=1242; name="ReportOutput"

<?xml version="1.0" encoding="UTF-8"?>

<!--Generated by Oracle BI Publisher 12.2.1.1.0 -Dataengine, datamodel:_custom_BIP_DEMO_MODEL_xdm -->

<DATA_DS><SAW.PARAM.ANALYSIS></SAW.PARAM.ANALYSIS>

<G_1>

<COLUMN0>Accessories</COLUMN0><COLUMN1>5161697.87</COLUMN1><COLUMN2>483715</COLUMN2>

</G_1>

<G_1>

<COLUMN0>Smart Phones</COLUMN0><COLUMN1>6773120.36</COLUMN1><COLUMN2>633211</COLUMN2>

</G_1>

</DATA_DS>

--Boundary_2_1430729833_1479236681852--

Examples of the string functions to retrieve and convert just the XML are below. The f_report_clob variable contains the result of the REST call. The p_root_name variable contains the BIP report specific XML rootName.

To find the starting position of the XML, the INSTR function searches for the opening tag consisting of the root name prefixed with a '<' character, e.g. <DATA_DS:

f_start_position := instr( f_report_clob, '<' || p_root_name );

To find the length of the XML, the INSTR function searches for the position of the closing tag consisting of the root name prefixed with the '</' characters, e.g. </DATA_DS, adds the length of the closing tag using the LENGTH function, and subtracts the starting position:

f_xml_length := instr( f_report_clob, '</' || p_root_name ) + length( '</' || p_root_name || '>' ) - f_start_position ;

To extract the XML and store it in a CLOB variable, the SUBSTR function uses the starting position and the length of the XML:

f_xml_clob := substr(f_report_clob, f_start_position, f_xml_length );

To convert the CLOB into an XMLTYPE variable:

f_xml := XMLTYPE.createXML( f_xml_clob );

Create a BICS Table

This step uses a SQL command to create a simple staging table that has 20 identical varchar2 columns. These columns may be transformed into number and date data types in a future transformation exercise that is not covered in this post.

A When Others exception block allows the procedure to proceed if an error occurs because the table already exists.

A shortened example of the create table statement is below:

execute immediate 'create table staging_table ( c01 varchar2(2048), … , c20 varchar2(2048) )';

Load the BICS Table

This step uses SQL commands to truncate the staging table and insert rows from the BIP report XML content.

The XML content is parsed using an XPATH command inside two LOOP commands.

The first loop processes the rows by incrementing a subscript.  It exits when the first column of a new row returns a null value.  The second loop processes the columns within a row by incrementing a subscript. It exits when a column within the row returns a null value.

The following XPATH examples are for a data set that contains 11 rows and 3 columns per row:

//G_1[2]/*[1]/text()          -- Returns the value of the first column of the second row

//G_1[2]/*[4]/text()          -- Returns a null value for the 4th column, signaling the end of the row

//G_1[12]/*[1]/text()        -- Returns a null value for the first column of a new row, signaling the end of the data set

After each row is parsed, it is inserted into the BICS staging table.
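A minimal sketch of the two loops is below. It assumes the f_xml XMLTYPE variable built earlier and the staging_table with columns c01 through c20 from the shortened create statement above; only the three columns returned by the demo report are inserted, so the insert list would change for other reports.

-- Minimal sketch: parse the report XML row by row and column by column.
declare
  type t_cols is table of varchar2(2048) index by pls_integer;
  l_cols t_cols;
  l_row  pls_integer := 0;
  l_col  pls_integer;
  l_node xmltype;
begin
  execute immediate 'truncate table staging_table';
  loop
    l_row := l_row + 1;
    l_cols.delete;
    -- a null first column signals the end of the data set
    exit when f_xml.extract('//G_1[' || l_row || ']/*[1]/text()') is null;
    l_col := 0;
    loop
      l_col := l_col + 1;
      l_node := f_xml.extract('//G_1[' || l_row || ']/*[' || l_col || ']/text()');
      -- a null column signals the end of the row; the staging table holds at most 20 columns
      exit when l_node is null or l_col > 20;
      l_cols(l_col) := l_node.getStringVal();
    end loop;
    -- the demo report returns three columns per row
    insert into staging_table (c01, c02, c03)
    values (l_cols(1), l_cols(2), l_cols(3));
  end loop;
  commit;
end;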

An image of the staging table result is shown below:

BIP Table Output

Summary

This post detailed a method of loading data that has been extracted from Oracle Business Intelligence Publisher (BIP) into the Oracle Business Intelligence Cloud Service (BICS).

Data was extracted and parsed from an XML-formatted BIP report using REST web services wrapped in the Oracle PL/SQL APEX_WEB_SERVICE package.

A BICS staging table was created and populated. This table can then be transformed into star-schema objects for use in modeling.

For more BICS and BI best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for BICS.

References

Complete Text of Procedure Described

Extracting Data from Oracle Business Intelligence 12c Using the BI Publisher REST API

Database PL/SQL Language Reference

Reference Guide for the APEX_WEB_SERVICE

REST API Testing Tool

XPATH Testing Tool

Base64 decoding and encoding Testing Tool

 

 

Best Practices – Data movement between Oracle Storage Cloud Service and HDFS


Introduction

Oracle Storage Cloud Service should be the central place for persisting raw data produced by other PaaS services and also the entry point for data that is uploaded from the customer's data center. Big Data Cloud Service (BDCS) supports data transfers between Oracle Storage Cloud Service and HDFS. Both Hadoop and Oracle provide various tools and Oracle engineered solutions for the data movement. This document outlines these tools and describes best practices to improve data transfer between Oracle Storage Cloud Service and HDFS.

Main Article

Architectural Overview

 

new_oss_architecture

Interfaces to Oracle Storage Cloud Service

 

  • odcp: Accessing Oracle Storage Cloud Service Using Oracle Distributed Copy

  • Distcp: Accessing Oracle Storage Cloud Service Using Hadoop Distcp

  • Upload CLI: Accessing Oracle Storage Cloud Service Using the Upload CLI Tool

  • Hadoop fs -cp: Accessing Oracle Storage Cloud Service Using Hadoop File System Shell Copy

  • Oracle Storage Cloud Software Appliance: Accessing Oracle Storage Cloud Service Using Oracle Storage Cloud Software Appliance

  • Application Programming Platform: Java Library (Accessing Oracle Storage Cloud Service Using Java Library), File Transfer Manager API (Accessing Oracle Storage Cloud Service Using File Transfer Manager API), REST API (Accessing Oracle Storage Cloud Service Using REST API)

 

Oracle Distributed Copy (odcp)

Oracle Distributed Copy (odcp) is a tool for copying very large data files in a distributed environment between HDFS and an Oracle Storage Cloud Service.

  • How does it work?

The odcp tool has two main components:

(a) the odcp launcher script

(b) the Conductor application

The odcp launcher script is a bash script that launches the Spark application and provides fully parallel transfer of files.

The Conductor application is an Apache Spark application that copies large files between HDFS and Oracle Storage Cloud Service.

End users are recommended to use the odcp launcher script. It simplifies the use of the Conductor application by encapsulating the Hadoop/Java environment variable setup, the spark-submit parameter setup, the invocation of the Spark application, and so on. Submitting the Conductor application directly is the ideal approach when performing the data movement from an existing Spark application.

blog3

odcp takes the given input file (source) and splits it into smaller file chunks. Each input chunk is then transferred by one executor over the network to the destination store.

basic-flow

When all chunks are successfully transferred, executors take output chunks and merge them into final output files.

flow

  • Examples

Oracle Storage Cloud Service is based on Swift, the open-source OpenStack Object Store. The data stored in Swift can be used as the direct input to a MapReduce job by simply using a "swift://<URL>" reference to declare the source of the data. In a Swift file system URL, the hostname part of the URL identifies the container and the service to work with; the path identifies the name of the object.

Swift syntax:

swift://<MyContainer>.<MyProvider>/<filename>

odcp launcher script

Copy file from HDFS to Oracle Storage Cloud Service

odcp hdfs:///user/oracle/data.raw swift://myContainer.myProvider/data.raw

Copy file from Oracle Storage Cloud Service to HDFS:

odcp swift://myContainer.myProvider/data.raw hdfs:///user/oracle/odcp-data.raw

Copy directory from HDFS to Oracle Storage Cloud Service:

odcp hdfs:///user/data/ swift://myContainer.myProvider/backup

In case the system has more than 3 nodes, transfer speed can be increased by specifying a higher number of executors. For 6 nodes, use the following command:

odcp --num-executors=6 hdfs:///user/oracle/data.raw swift://myContainer.myProvider/data.raw

 

Highlights of the odcp launcher script options:
--executor-cores: the number of executor cores, i.e. the number of threads per executor, which depends on the available vCPUs; this allows chunks to be transferred in parallel. The default value is 30.
--num-executors: the number of executors, typically the same as the number of physical nodes/VMs. The default value is 3.

 

Conductor application

Usage: Conductor [options] <source URI...> <destination URI>
<source URI...> <destination URI>
source/destination file(s) URI, examples:
hdfs://[HOST[:PORT]]/<path>
swift://<container>.<provider>/<path>
file:///<path>
-i <value> | --fsSwiftImpl <value>
swift file system configuration. Default taken from etc/hadoop/core-site.xml (fs.swift.impl)
-u <value> | --swiftUsername <value>
swift username. Default taken from etc/hadoop/core-site.xml fs.swift.service.<PROVIDER>.username)
-p <value> | --swiftPassword <value>
swift password. Default taken from etc/hadoop/core-site.xml (fs.swift.service.<PROVIDER>.password)
-i <value> | --swiftIdentityDomain <value>
swift identity domain. Default taken from etc/hadoop/core-site.xml (fs.swift.service.<PROVIDER>.tenant)
-a <value> | --swiftAuthUrl <value>
swift auth URL. Default taken from etc/hadoop/core-site.xml (fs.swift.service.<PROVIDER>.auth.url)
-P <value> | --swiftPublic <value>
indicates if all URLs are public - yes/no (default yes). Default taken from etc/hadoop/core-site.xml (fs.swift.service.<PROVIDER>.public)
-r <value> | --swiftRegion <value>
swift Keystone region
-b <value> | --blockSize <value>
destination file block size (default 268435456 B), NOTE: remainder after division of partSize by blockSize must be equal to zero
-s <value> | --partSize <value>
destination file part size (default 1073741824 B), NOTE: remainder after division of partSize by blockSize must be equal to zero
-e <value> | --srcPattern <value>
copies file when their names match given regular expression pattern, NOTE: ignored when used with --groupBy
-g <value> | --groupBy <value>
concatenate files when their names match given regular expression pattern
-n <value> | --groupName <value>
group name (use only with --groupBy), NOTE: slashes are not allowed
--help
display this help and exit
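
For reference, a hedged sketch of the core-site.xml entries that the Conductor defaults refer to is shown below. The property names come from the option descriptions above; the provider name myProvider and all values are placeholders.

<!-- Illustrative core-site.xml entries for a Swift provider named "myProvider".
     Property names follow the defaults listed in the Conductor options above;
     all values shown are placeholders. -->
<property>
  <name>fs.swift.impl</name>
  <value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value>
</property>
<property>
  <name>fs.swift.service.myProvider.username</name>
  <value>user@example.com</value>
</property>
<property>
  <name>fs.swift.service.myProvider.password</name>
  <value>myPassword</value>
</property>
<property>
  <name>fs.swift.service.myProvider.tenant</name>
  <value>myIdentityDomain</value>
</property>
<property>
  <name>fs.swift.service.myProvider.auth.url</name>
  <value>https://storage.us2.oraclecloud.com:443/auth/v2.0/tokens</value>
</property>
<property>
  <name>fs.swift.service.myProvider.public</name>
  <value>true</value>
</property>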

 

The Conductor application can be submitted directly to a Spark deployment environment. Below is an example of how to submit it with spark-submit.

spark-submit \
--conf spark.yarn.executor.memoryOverhead=600 \
--jars hadoop-openstack-spoc-2.7.2.jar,scopt_2.10-3.4.0.jar \
--class oracle.paas.bdcs.conductor.Conductor \
--master yarn \
--deploy-mode client \
--executor-cores <number of executor cores e.g. 5> \
--executor-memory <memory size e.g. 40G> \
--driver-memory <driver memory size e.g. 10G> \
original-conductor-1.0-SNAPSHOT.jar \
--swiftUsername <oracle username@oracle.com> \
--swiftPassword <password> \
--swiftIdentityDomain <storage ID assigned to this user> \
--swiftAuthUrl https://<Storage cloud domain name e.g. storage.us2.oraclecloud.com:443>/auth/v2.0/tokens \
--swiftPublic true \
--fsSwiftImpl org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem \
--blockSize <block size e.g. 536870912> \
swift://<container.provider e.g. rstrejc.a424392>/someDirectory \
swift://<container.provider e.g. rstrejc.a424392>/someFile \
hdfs:///user/oracle/

  • Limitations

odcp consumes a lot of cluster resources. When running other Spark/MapReduce jobs in parallel with odcp, adjust the number of executors, the amount of memory available to the executors, or the number of executor cores using the --num-executors, --executor-memory and --executor-cores parameters for better performance.

 

Distcp

Distcp (distributed copy) is a Hadoop utility used for inter- and intra-cluster copying of large amounts of data in parallel. The distcp command submits a regular MapReduce job that performs a file-by-file copy.

  • How does it work?

Distcp involves two steps:

(a) Building the list of files to copy (known as the copy list)

(b) Running a MapReduce job to copy files, with the copy list as input

distcp

The MapReduce job that does the copying has only mappers—each mapper copies a subset of files in the copy list. By default, the copy list is a complete list of all files in the source directory parameters of Distcp.

 

  • Examples

 

Copying data from HDFS to Oracle Storage Cloud Service syntax:

hadoop distcp hdfs://<hadoop namenode>/<source filename> swift://<MyContainer.MyProvider>/<destination filename>

Allocation of JVM heap-size:   

export HADOOP_CLIENT_OPTS="-Xms<start heap memory size> -Xmx<max heap memory size>"

Setting timeout syntax:

hadoop distcp -Dmapred.task.timeout=<time in milliseconds> hdfs://<hadoop namenode>/<source filename> swift://<MyContainer.MyProvider>/<destination filename>

Hadoop getmerge syntax:

bin/hadoop fs -getmerge [-nl] <source directory> <destination directory>/<output filename>

The Hadoop getmerge command takes a source directory and a destination file as input and concatenates the source files into the destination local file. The -nl parameter can be set to add a newline character at the end of each file.

 

  • Limitations

For a large file copy, one has to make sure that the task has a termination strategy in case the task doesn't read an input, write an output, or update its status string. The option -Dmapred.task.timeout=<time in milliseconds> can be used to set the maximum timeout value. For a 1 TB file, use -Dmapred.task.timeout=60000000 (approximately 16 hours) with the distcp command.

Distcp might run out of memory while copying very large files. To get around this, consider changing the -Xmx JVM heap-size parameter before executing the hadoop distcp command. This value must be a multiple of 1024.

To improve the transfer speed of a very large file, split the file at the source and copy the split files to the destination. Once the files are successfully transferred, Hadoop performs a merge operation at the destination end.

Upload CLI

 

  • How does it work?

The Upload CLI tool is a cross-platform, Java-based command line tool that you can use to efficiently upload files to Oracle Storage Cloud Service. The tool optimizes uploads through segmentation and parallelization to maximize network efficiency and reduce overall upload time. If a large file transfer is interrupted, the Upload CLI tool maintains state and resumes from the point where the transfer was interrupted. The tool also has an automatic retry option on failures.

  • Example:

Syntax of upload CLI:

java -jar uploadcli.jar -url REST_Endpoint_URL -user userName -container containerName file-or-files-or-directory

To upload a file named file.txt to a standard container myContainer in the domain myIdentityDomain as the user abc.xyz@oracle.com, run the following command:

java -jar uploadcli.jar -url https://foo.storage.oraclecloud.com/myIdentityDomain-myServiceName -user abc.xyz@oracle.com -container myContainer file.txt

When running the Upload CLI tool on a host that’s behind a proxy server, specify the host name and port of the proxy server by using the https.proxyHost and https.proxyPort Java parameters.

 

Syntax of upload CLI behind proxy server:

java -Dhttps.proxyHost=host -Dhttps.proxyPort=port -jar uploadcli.jar -url REST_Endpoint_URL -user userName -container containerName file-or-files-or-directory

  • Limitations

The Upload CLI is a Java tool and will only run on hosts that satisfy the prerequisites of the uploadcli tool.

 

Hadoop fs -cp

 

  • How does it work?

Hadoop fs -cp is one of the family of Hadoop file system shell commands that can be run from the operating system's command line interface. Hadoop fs -cp is not distributed across the cluster; the command transfers data byte by byte through the machine where it is issued.

  • Example

hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2

 

  • Limitations

The byte-by-byte transfer takes a very long time to copy large files from HDFS to Oracle Storage Cloud Service.

 

Oracle Storage Cloud Software Appliance

 

  • How does it work?

Oracle Storage Cloud Software Appliance is a product that facilitates easy, secure, reliable data storage and retrieval from Oracle Storage Cloud Service. Businesses can use Oracle Cloud Storage without changing their data center applications and workflows. Applications that use a standard file-based network protocol such as NFS to store and retrieve data can use Oracle Storage Cloud Software Appliance as a bridge between Oracle Storage Cloud Service, which uses object storage, and standard file storage. Oracle Storage Cloud Software Appliance caches frequently retrieved data on the local host, minimizing the number of REST API calls to Oracle Storage Cloud Service and enabling low-latency, high-throughput file I/O.

The application host instance can mount a directory from the Oracle Storage Cloud Software Appliance, which acts as a cloud storage gateway. This enables the application host instance to access an Oracle Cloud Storage container as a standard NFS file system.

 

Architecture

blog2

 

  • Limitations

The appliance is ideal for backup and archive use cases that require the replication of infrequently accessed data to cloud containers. Read-only and read-dominated content repositories are ideal targets. Once an Oracle Storage Cloud Service container is mapped to a filesystem in Oracle Storage Cloud Software Appliance, other data movement tools such as the REST API, odcp, distcp, or the Java library cannot be used for that container. Doing so would cause the data in the appliance to become inconsistent with the data in Oracle Storage Cloud Service.

 

Application Programming Platform

Oracle provides various Java library APIs to access Oracle Storage Cloud Service. The following interfaces summarize the APIs one can use programmatically to access Oracle Storage Cloud Service.

  • Java Library: Accessing Oracle Storage Cloud Service Using Java Library

  • File Transfer Manager API: Accessing Oracle Storage Cloud Service Using File Transfer Manager API

  • REST API: Accessing Oracle Storage Cloud Service Using REST API


Java Library  

 

  • How does it work?

The Java library is useful for Java applications that prefer to use the Oracle Cloud Java API for Oracle Storage Cloud Service instead of the tools provided by Oracle and Hadoop. The Java library wraps the RESTful web service API; most of the major RESTful API features of Oracle Storage Cloud Service are available through the Java library. The Java library is available via the separate Oracle Cloud Service Java SDK.

 

java library

  • Example

Sample Code snippet

package storageupload;

import oracle.cloud.storage.*;
import oracle.cloud.storage.model.*;
import oracle.cloud.storage.exception.*;
import java.io.*;
import java.util.*;
import java.net.*;

public class UploadingSegmentedObjects {
    public static void main(String[] args) {
        try {
            CloudStorageConfig myConfig = new CloudStorageConfig();
            myConfig.setServiceName("Storage-usoracleXXXXX")
                    .setUsername("xxxxxxxxx@yyyyyyyyy.com")
                    .setPassword("xxxxxxxxxxxxxxxxx".toCharArray())
                    .setServiceUrl("https://xxxxxx.yyyy.oraclecloud.com");
            CloudStorage myConnection = CloudStorageFactory.getStorage(myConfig);
            System.out.println("\nConnected!!\n");
            if (myConnection.listContainers().isEmpty()) {
                myConnection.createContainer("myContainer");
            }
            FileInputStream fis = new FileInputStream("C:\\temp\\hello.txt");
            myConnection.storeObject("myContainer", "C:\\temp\\hello.txt", "text/plain", fis);
            fis = new FileInputStream("C:\\temp\\hello.txt");
            myConnection.storeObject("myContainer", "C:\\temp\\hello1.txt", "text/plain", fis);
            fis = new FileInputStream("C:\\temp\\hello.txt");
            myConnection.storeObject("myContainer", "C:\\temp\\hello2.txt", "text/plain", fis);
            List<Key> myList = myConnection.listObjects("myContainer", null);
            Iterator<Key> it = myList.iterator();
            while (it.hasNext()) {
                System.out.println(it.next().getKey().toString());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

 

  • Limitations

The Java API cannot create Oracle Storage Cloud Service archive containers. An appropriate JRE version is required for the Java library.

 

File Transfer Manager API

 

  • How does it Work?

The File Transfer Manager (FTM) API is a Java library that simplifies uploading to and downloading from Oracle Storage Cloud Service. It provides both synchronous and asynchronous APIs to transfer files, and a way to track operations when the asynchronous version is used. The library is available via the separate Oracle Cloud Service Java SDK.

 

  • Example

Uploading a Single File Sample Code snippet

FileTransferAuth auth = new FileTransferAuth
(
"email@oracle.com", // user name
"xxxxxx", // password
"yyyyyy", //  service name
"https://xxxxx.yyyyy.oraclecloud.com", // service URL
"xxxxxx" // identity domain
);
FileTransferManager manager = null;
try {
manager = FileTransferManager.getDefaultFileTransferManager(auth);
String containerName = "mycontainer";
String objectName = "foo.txt";
File file = new File("/tmp/foo.txt");
UploadConfig uploadConfig = new UploadConfig();
uploadConfig.setOverwrite(true);
uploadConfig.setStorageClass(CloudStorageClass.Standard);
System.out.println("Uploading file " + file.getName() + " to container " + containerName);
TransferResult uploadResult = manager.upload(uploadConfig, containerName, objectName, file);
System.out.println("Upload completed successfully.");
System.out.println("Upload result:" + uploadResult.toString());
} catch (ClientException ce) {
System.out.println("Upload failed. " + ce.getMessage());
} finally {
if (manager != null) {
manager.shutdown();
}
}

 

REST API

 

  • How does it work?

The REST API can be accessed from any application or programming platform that correctly and completely understands the Hypertext Transfer Protocol (HTTP). The REST API uses advanced facets of HTTP, such as secure communication over HTTPS, HTTP headers, and specialized HTTP verbs (PUT, DELETE). cURL is one of the many applications that meet these requirements.

 

  • Example

cURL syntax:

curl -v -s -X PUT -H "X-Auth-Token: <Authorization Token ID>" "https://<Oracle Cloud Storage domain name>/v1/<storage ID associated to user account>/<container name>"
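
The authorization token referenced above is obtained by first authenticating against the service. A hedged sketch is below; the auth endpoint path, header names, and hostname follow the Swift-based Oracle Storage Cloud Service authentication pattern and should be verified against the service documentation, and the identity domain, username, and password values are placeholders.

# Request an authentication token; the X-Auth-Token (and X-Storage-Url) values
# returned in the response headers are then used on subsequent requests,
# such as the PUT shown above.
curl -v -s -X GET \
     -H "X-Storage-User: Storage-<identity domain>:<username>" \
     -H "X-Storage-Pass: <password>" \
     https://<identity domain>.storage.oraclecloud.com/auth/v1.0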

 

Some Data Transfer Test results

The configuration used to measure performance and data transfer rates is as follows:

Test environment configuration:

- BDCS 16.2.5
- Hadoop Swift driver 2.7.2
- US2 production data center
- 3-node cluster running on BDA
- Every node has 256GB memory/30 vCPU
- File size: 1TB (Terabyte)
- File contains all zeros

Interface, source to destination, elapsed time, and comments:

  1. odcp, HDFS to Oracle Storage Cloud Service: 54 minutes. Transfer rate: 2.47 GB/sec (1.11 TB/hour).

  2. hadoop distcp, Oracle Storage Cloud Service to HDFS: failed. Not enough memory (after 1 hour).

  3. hadoop distcp, HDFS to Oracle Storage Cloud Service: failed.

  4. hadoop distcp, HDFS to Oracle Storage Cloud Service: 3 hours. Based on splitting the 1 TB file into 50 files of 10 GB each; each 10 GB file took 18 minutes (with a partition size of 256 MB).

  5. Upload CLI, HDFS to Oracle Storage Cloud Service: 5 hours 55 minutes. Data was read from Big Data Cloud Service HDFS mounted using fuse_dfs.

  6. hadoop fs -cp, HDFS to Oracle Storage Cloud Service: 11 hours 50 minutes 50 seconds. Parallelism 1; transfer rate: 250 Mb/sec.

 

Summary

One can draw the following conclusions from the above analysis.

Data file size and data transfer time are the two main factors in deciding the appropriate interface for data movement between HDFS and Oracle Storage Cloud Service.

There is no additional overhead for data manipulation and processing when using the odcp interface.

Loading Identity Data Into Oracle IDCS: A Broad High-level Survey


Introduction

Oracle Identity Cloud Service (IDCS), Oracle's comprehensive Identity and Access Management platform for the cloud, was released recently. Populating identity data, such as user identities, groups, and group memberships, is one of the most important tasks that is typically needed initially and on an ongoing basis in any identity management system.

IDCS provides multiple ways for uploading identity data. The purpose of this post is to provide a high-level survey of these options. Customers who are starting to use IDCS can use this information to select the mechanism(s) that is (are) best suited for their specific requirements and use-cases.

Please note that no particular method is described in great detail in this article. The goal is to present all the available methods in one place for quick and handy reference. However, links (documentation, tutorials, etc.) where more information can be found are provided. Also, this post doesn't describe the authentication and authorization needed for delegated administrators to be able to perform these operations against the IDCS platform. It is assumed that administrators performing these operations have taken the necessary steps to gain appropriate privileges.

IDCS supports the following methods for loading identity data:

Bulk Identity Data Upload Using CSV Files

Delegated administrators can perform bulk imports of identity data in CSV format from the IDCS Administration Console. The CSV file for importing user profiles should contain the users' attributes. Groups and user-group memberships can be imported by using a CSV file that contains the groups' attributes and a list of their members. Please refer to the following documentation links for more detailed information:

Bulk Identity Data Upload Using REST APIs

Bulk REST endpoints can also be used to manage IDCS resources. Bulk endpoints allow different kinds of requests to be mixed together in a single call, as illustrated below. Please refer to the REST API documentation for more information.
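
As an illustration only, a SCIM-style bulk request body is sketched below; it mixes a user creation and a group creation in one call. The /admin/v1/Bulk endpoint path, the schema URNs, and the attribute names follow the SCIM 2.0 conventions that IDCS is based on, but the exact payload format should be taken from the IDCS REST API documentation; all names and values shown are placeholders.

POST /admin/v1/Bulk
{
  "schemas": ["urn:ietf:params:scim:api:messages:2.0:BulkRequest"],
  "Operations": [
    {
      "method": "POST",
      "path": "/Users",
      "bulkId": "user1",
      "data": {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": "jdoe",
        "name": { "givenName": "John", "familyName": "Doe" },
        "emails": [ { "value": "jdoe@example.com", "type": "work", "primary": true } ]
      }
    },
    {
      "method": "POST",
      "path": "/Groups",
      "bulkId": "group1",
      "data": {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Group"],
        "displayName": "Sales"
      }
    }
  ]
}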

AD ID Bridge

Microsoft Active Directory (AD) is a popular identity data store used by enterprises. Customers interested in synchronizing their on-premise AD with IDCS can use the IDCS Identity (ID) Bridge to perform initial and ongoing (schedule-based) automatic identity data synchronization. Please note that, as of this writing, this is a one-way synchronization, from AD to IDCS, and it does not synchronize user passwords. More information about the IDCS ID Bridge is available at:

OIM Connector for IDCS

OIM customers can use the OIM IDCS connector for bi-directional integration with IDCS. Identity information can be reconciled from IDCS into OIM. Identity information can also be managed in IDCS from OIM using OIM's provisioning capabilities. Other use cases, such as hybrid certification and reporting, are also possible as a result of the integration between OIM and IDCS. More information about IDCS and OIM integration can be found in the following tutorials and videos:

IDCS REST APIs

IDCS exposes all of its identity management capabilities over REST APIs. These APIs are the most generic and flexible way to integrate with IDCS. In fact, all the above IDCS identity management mechanisms use the REST APIs to provide their functionality. IDCS REST APIs can be used to implement custom solutions (for example – custom UIs based on various JavaScript frameworks) that integrate with or make use of IDCS functionality. IDCS REST API documentation is available at:

More IDCS documentation links:

IDCS Getting Started – http://docs.oracle.com/cloud/latest/identity-cloud/index.html

IDCS Tutorials – http://docs.oracle.com/cloud/latest/identity-cloud/identity-cloud-tutorials.htm

IDCS Video Tutorials – http://docs.oracle.com/cloud/latest/identity-cloud/identity-cloud-videos.htm

IDCS Manuals – http://docs.oracle.com/cloud/latest/identity-cloud/identity-cloud-docs.html

IDCS REST APIs – http://docs.oracle.com/cloud/latest/identity-cloud/IDCSA/index.html

Publishing business events from Supply Chain Cloud’s Order Management through Integration Cloud Service


Introduction

In Supply Chain Cloud (SCM) Order Management, as a sales order's state changes or it becomes ready for fulfillment, events can be generated for external systems. Integration Cloud Service (ICS) offers pub/sub capabilities that can be used to reliably integrate SaaS applications. In this post, let's take a close look at these capabilities in order to capture Order Management events for fulfillment and other purposes. The instructions provided in this post are applicable to SCM Cloud R11 and ICS R16.4.1.

Main Article

SCM Cloud Order Management allows registering endpoints of external systems and assigning these endpoints to various business events generated during order orchestration. For more information on business event features and order orchestration in general, refer to the R11 documentation at this link. ICS is Oracle's enterprise-grade iPaaS offering, with adapters for Oracle SaaS and other SaaS applications, as well as native adapters that allow connectivity to any SaaS or on-premise application. To learn more about ICS, refer to the documentation at this link. Figure 1 provides an overview of the solution described in this post.

000

Figure 1 – Overview of the solution

Implementation of the solution requires the following high-level tasks.

  • Download WSDL for business events from SCM cloud.
  • Implement an ICS ‘Basic Publish to ICS’ integration with trigger defined using WSDL downloaded in previous step.
  • Optionally, implement one or more ICS ‘Basic Subscribe to ICS’ integrations for external systems that desire event notification.
  • Configure SCM Cloud to generate events to the ‘Basic Publish to ICS’ endpoint.
  • Verify generation of Business Events.

For the solution to work, network connectivity between SCM Cloud and ICS, and between ICS and external systems, including any on-premise systems, must be enabled. ICS agents can easily enable connectivity to on-premise systems.

Downloading WSDL for business events

Order Management provides two WSDL definitions for integration with external systems: one for fulfillment systems and another for other external systems that wish to receive business events. One example of the use of business events is the generation of invoices by an ERP system upon fulfillment of an order. For the solution described in this post, a Business Event Connector is implemented. To download the WSDLs, follow these steps.

  • Log into SCM Cloud instance.
  • Navigate to ‘Setup and maintenance’ page, by clicking the drop-down next to username on top right of the page.
  • In the search box of ‘Setup and maintenance’ page, type in ‘Manage External Interface Web Service Details’ and click on search button or hit enter.
  • Click on ‘Manage External Interface Web Service Details’ task in search results.
  • On ‘Manage External Interface Web Service Details’ page, click on ‘Download WSDL for external integration’.
  • Two download options are provided as shown in Figure 2.
  • Download ‘Business Event Connector’.

001

Figure 2 – Download Business Event connector WSDL.

Implementing an ICS ‘Basic Publish to ICS’ integration

ICS allows publishing of events through an ICS trigger endpoint. Events published to ICS can be forwarded to one or more registered subscribers. For this solution, the business event connector WSDL downloaded in the previous section is configured on a trigger connection for the ‘Publish to ICS’ integration. These are the overall tasks to build the integration:

  • Create a connection and configure WSDL and security.
  • Create new integration using the previously created connection as trigger and ‘ICS Messaging Service’ as invoke.
  • Activate the Integration and test.

Follow these instructions to configure the integration:

  • Navigate to ‘Designer’ tab and click ‘Connections’ from menu on left.
  • Click on ‘New Connection’. Enter values for required fields.

002

  • Upload the WSDL file previously downloaded from SCM Cloud.

004

  • Configure security by selecting "Username Password Token" as the security policy. Note that the username and password entered on this page are irrelevant for a trigger connection. Since a trigger connection is used to initiate the integration in ICS, an ICS username and password must be provided in the SCM configuration.

005

  • Save the connection and test. Connection is ready for use in integration.
  • Navigate to “Integrations” page. Click “New Integration” to create a new integration.
  • Select “Basic Publish to ICS” pattern for new integration.

006

  • On the integration editor, a "Publish to ICS" flow is displayed. On the left of the flow is the trigger, the entry point into the flow. Drag the connection created previously onto the trigger.

007

 

  • Configure the trigger. The steps are straightforward, as shown in following screenshots.

008

  • Configure SOAP Operation.

009

  • Click ‘Done’ on summary page.

010

  • Drag and drop ‘ICS Messaging Service’ to the right of the integration flow. No mappings are necessary for this integration pattern.
  • Add a business identifier for tracking and save the integration.

011

  • Add a field that could help uniquely identify the message.

012

  • Activate the integration, by clicking on slider button as shown.

013

  • Note the URL of the integration, by clicking on the info icon. This URL will be used by SCM Cloud as an external web service endpoint.

014

ICS integration to receive business events from SCM Cloud is ready for use.

Implementing an ICS ‘Subscribe to ICS’ integration

Subscribing to events published to ICS can be done in a few simple steps. Events can be sent to a target connection, for example, a DB connection or a web service endpoint. Here are the steps to receive events in a web service.

  • Ensure that there is a “Basic Publish to ICS” integration activated and an Invoke connection to receive events is active.
  • Create a new integration in ICS and pick “Basic Subscribe to ICS” pattern. Enter a name and description for the integration.
  • ICS prompts to select one of available “Basic Publish to ICS” integrations. Select an integration and click on “Use”.

015

  • The integration editor shows a flow with "ICS Messaging Service" as the trigger on the left. Drag the invoke connection to the right of the flow. The following screenshot shows how to define a REST connection for the invoke. ICS displays several screens to configure the connection; the steps depend on the type of connection that receives the events.

016

  • Complete request and response mappings.
  • Add a tracking field, save and activate the integration. It is now ready to receive events.

Configure SCM Cloud to generate business events

The final task is to configure SCM Cloud to trigger Business Events. Follow these instructions:

 

  • Log into SCM and navigate to Setup and Maintenance.
  • Search for “Manage External Interface Web Service Details”.
  • Click on “Manage External Interface Web Service Details”.

SCM-config-001

  • Add an entry for the external interface web service. Use the endpoint of the "Basic Publish to ICS" integration. Enter ICS credentials as the username and password.

SCM-config-002

  • Search for “Manage Business Event Trigger Points” and click on result.
  • Select "Hold" as the trigger for business events.
  • Click the "Active" checkbox next to "Hold".
  • Click on the hold and add a connector under "Associated Connectors".
  • Under “Associated Connectors”, “Actions”, select “Add Row”.
  • Select the “SCM_BusinessEvent” external web service added in previous steps.

SCM-config-004

  • Save the configuration and close.
  • SCM Cloud is now configured to send business events.

Verify generation of Business Events

The solution is ready for testing. SCM Cloud and the “Basic Publish to ICS” integration are sufficient to test the solution. If an ICS subscription flow is implemented, ensure that the event has been received in the target system as well.

 

  • Navigate to “Order Management” work area in SCM Cloud.

Test001

  • Select a sales order and apply hold.

Test002

  • Log into ICS and navigate to “Monitoring” and then to “Tracking” page.
  • Verify that the event has been received under “Tracking”.

Test003

ICS has received a SOAP message from Order Management similar to this one:

<Body xmlns="http://schemas.xmlsoap.org/soap/envelope/">
    <results xmlns="http://xmlns.oracle.com/apps/scm/doo/decomposition/DooDecompositionOrderStatusUpdateComposite" xmlns:ns4="http://xmlns.oracle.com/apps/scm/doo/decomposition/DooDecompositionOrderStatusUpdateComposite">
        <ns4:OrderHeader>
            <ns4:EventCode>HOLD</ns4:EventCode>
            <ns4:SourceOrderSystem>OPS</ns4:SourceOrderSystem>
            <ns4:SourceOrderId>300000011154333</ns4:SourceOrderId>
            <ns4:SourceOrderNumber>39050</ns4:SourceOrderNumber>
            <ns4:OrchestrationOrderNumber>39050</ns4:OrchestrationOrderNumber>
            <ns4:OrchestrationOrderId>300000011154333</ns4:OrchestrationOrderId>
            <ns4:CustomerId xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:nil="true"/>
            <ns4:OrderLine>
                <ns4:OrchestrationOrderLineId>300000011154334</ns4:OrchestrationOrderLineId>
                <ns4:OrchestrationOrderLineNumber>1</ns4:OrchestrationOrderLineNumber>
                <ns4:SourceOrderLineId>300000011154334</ns4:SourceOrderLineId>
                <ns4:SourceOrderLineNumber>1</ns4:SourceOrderLineNumber>
                <ns4:OrderFulfillmentLine>
                    <ns4:SourceOrderScheduleId>1</ns4:SourceOrderScheduleId>
                    <ns4:FulfillmentOrderLineId>300000011154335</ns4:FulfillmentOrderLineId>
                    <ns4:FulfillmentOrderLineNumber>1</ns4:FulfillmentOrderLineNumber>
                    <ns4:HoldCode>TD_OM_HOLD</ns4:HoldCode>
                    <ns4:HoldComments>Mani test hold </ns4:HoldComments>
                    <ns4:ItemId>300000001590006</ns4:ItemId>
                    <ns4:InventoryOrganizationId>300000001548399</ns4:InventoryOrganizationId>
                </ns4:OrderFulfillmentLine>
            </ns4:OrderLine>
        </ns4:OrderHeader>
    </results>
</Body>

Summary

This post explained how to publish Order Management events out of Supply Chain Management Cloud and how to use ICS publish and subscribe features to capture and propagate those events. This approach is suitable for R11 of SCM Cloud and ICS R16.4.1. Subsequent releases of these products might offer equivalent or better event-publishing capabilities out of the box. Refer to the product documentation for later versions before implementing a solution based on this post.
