
Common WSDL Issues in ICS and How to Solve Them


Introduction

When using SOAP Web Services, WSDL documents play a very important role, so knowing how to handle them is essential when working with SOA services. This is especially true for Oracle ICS (Integration Cloud Service): a good handle on WSDL basics and troubleshooting pays off because most of the built-in SaaS adapters available in Oracle ICS connect to their applications using SOAP. Salesforce.com, Oracle CPQ and Oracle Sales Cloud are good examples, not to mention the generic SOAP adapter. As their first setup step, most of these adapters require a WSDL document that describes the structural elements necessary to perform SOAP-based message exchanges, such as message types, port types and bindings.

Properly parsing a WSDL in ICS is a critical step for three reasons:

1) It describes how ICS will connect with the application, leveraging the bindings and SOAP addresses defined within it.

2) It allows ICS to discover the business objects and operations that are later used in the mapping phase.

3) For adapters that provide automatic mapping recommendations, the adapter must correctly parse all complex types available in the types section of the WSDL document.

Failing to parse the WSDL document of an application pretty much invalidates any further work in ICS. This blog will present common issues found while handling WSDL documents, and what can be done to solve those issues.

Rule Of Thumb: Correctly Inspect the WSDL

Regardless of which issue you are having with WSDL documents, a best practice is to always inspect the WSDL content. Most people wrongly assume that if the WSDL is accessible via its URL, then it must be valid. The verification process is usually just entering the URL in the browser and checking whether any content is displayed; if it is, the WSDL is accessible and no network restrictions are in place. However, the content shown in the browser can differ significantly from the raw content returned by the WSDL endpoint, so what you see is not necessarily what ICS gets.

Tip: From the ICS perspective, the raw content of the WSDL is what the adapters rely on to generate and build the runtime artifacts. Keep this in mind if you are working with any SOAP-based adapter.

This happens because most modern browsers have built-in formatting features that are applied to the content received from the servers. These features present the content in a much better view for end users, such as removing empty lines, coloring the text or breaking down structured content (such as XML-derived documents) into a tree view. For instance, figure 1 shows a WSDL document opened in Google Chrome, where formatting took place while accessing the content.

Figure 1: Formatted WSDL content being shown in Google Chrome.

Do not rely on what the browser displays; doing so is a serious mistake, since the browser may hide issues in the WSDL. A better way to inspect the WSDL is to access its raw content. This can be accomplished in several ways; from the browser, you can use the “View Page Source” option, which displays the content in raw format and lets you copy-and-paste it into a text editor. Save the content as-is in a file with a .wsdl extension. That file should be your starting point for troubleshooting any WSDL issue.
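If you prefer the command line, the same raw content can be captured without any browser formatting. The sketch below is an illustration only; the URL, file name and any required authentication options are placeholders to replace with your own values:

# Download the WSDL exactly as the server returns it (no browser formatting applied)
curl -s -o myservice.wsdl "https://myserver.example.com/soap/MyService?wsdl"

# Inspect the first few lines of the raw content, including anything before <?xml
head -n 5 myservice.wsdl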

#1 Common Issue: Bad Generated WSDL Documents

Although SOAP-based Web Services are governed by a W3C specification, the choice of technology and how the Web Services are implemented is entirely up to the developer. There are therefore countless ways to implement them, and their WSDLs can also be created using different approaches. A common practice is to have the WSDL generated on-demand, meaning the WSDL is created when its URL is invoked. While this is good practice, since it ensures that the WSDL is always up-to-date with the Web Service implementation, it can also introduce issues on the consumer side.

For example, there are cases where the WSDL is generated with empty lines at the beginning of the document. Issues like this generate parsing errors because, according to the W3C specification, nothing can appear before the XML declaration (i.e.: <?xml). If that happens, make sure those empty lines are removed from the WSDL before using it in the ICS connections page. Figure 2 shows an example of a badly generated WSDL document.

Figure 2: Badly generated WSDL document, with illegal empty lines before the XML declaration.

On the ICS connections page, if you use the WSDL shown in figure 2 and hit the “Test” button, ICS will throw a parsing error. This effectively invalidates the connection, because a connection needs to be 100% complete before it can be used in integrations. Figure 3 shows the error thrown by ICS.

Figure 3: Error thrown by ICS after testing the connection.

To solve this issue, make sure that the generated WSDL has no empty lines before the XML declaration. While this sounds simple, it can be hard to accomplish if the people responsible for the Web Service have no control over how the WSDL is generated. It is not uncommon for the exposed Web Service to be part of a product that cannot easily be changed. In that case, an alternative is to host a modified version of the WSDL on an HTTP Web Server and point ICS to that server instead. As long as the port types keep the same SOAP addresses, this approach works. The drawback is that it introduces additional overhead: another layer to implement, patch and monitor.
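If you do have access to a copy of the WSDL file, removing the leading empty lines can be scripted. This is just a sketch under the assumption that the only problem is blank lines before the XML declaration; the file names are examples:

# Delete every blank line that appears before the first non-blank line of the file
sed '/./,$!d' bad.wsdl > fixed.wsdl

# Verify that the file now starts with the XML declaration
head -c 40 fixed.wsdl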

#2 Common Issue: Non-Well-Formed WSDL Documents

Whether the WSDL is automatically generated or statically defined, it is the responsibility of the service provider to make sure the document is well formed. If the WSDL is not well formed, the ICS parser will not be able to validate it and an error will be thrown. Just like the first common issue, this invalidates the connection, which then cannot be used when building integrations.

This site lists the rules that XML documents must adhere to in order to be considered well formed, and it also provides a validator tool that you can use to make sure a WSDL document is valid.
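A quick local check is also possible with xmllint, which ships with libxml2 on most Linux and macOS systems; the file name below is an example:

# Prints nothing when the document is well formed, otherwise reports the offending line
xmllint --noout myservice.wsdl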

#3 Common Issue: Types in Separate Documents

Some Web Services that have their WSDL automatically generated define the types used within the WSDL in a separate document. This means that the WSDL only references the types by name; the types themselves are defined somewhere else, typically in an XML schema document that the WSDL points to using an import clause. This practice improves the reusability of the element types and allows them to be used in more than one web service definition.

While this practice is great from the service provider's point of view, it can cause issues for the service consumer. If for some reason ICS is not able to completely retrieve the types used in the WSDL, it will not be able to create the business objects and operations for the integration. This can happen if ICS cannot access the referenced URL due to network restrictions such as firewalls, proxies, etc. Figure 4 shows an example of a WSDL that accesses its types using the import clause.

Figure 4: WSDL document using the import clause for the types.

This situation can be tricky to foresee because any error related to this practice only appears when you start building the integration. The connections page will report the connection as “complete” because the test performed there does not establish a physical connection; it only checks whether the WSDL document is valid. When you start building your integration, however, an error might be thrown when the wizard tries to retrieve the business objects for a given operation. If that happens, make sure that every URL used in an import clause is reachable. If the error persists, you will have no choice but to include all types directly in the WSDL, manually.
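A rough way to verify reachability of the imported documents is to pull the schemaLocation (or location) URLs out of the saved WSDL and probe each one. This sketch assumes the URLs are absolute and reachable from the machine where you run it, which may differ from what the ICS runtime itself can reach:

# List every schemaLocation referenced by the WSDL and report the HTTP status of each URL
grep -o 'schemaLocation="[^"]*"' myservice.wsdl | cut -d'"' -f2 | while read url; do
  printf '%s -> ' "$url"
  curl -s -o /dev/null -w '%{http_code}\n' "$url"
done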

Conclusion

Most built-in adapters found in ICS provide native connectivity to SaaS applications using SOAP Web Services. Because of this, being familiar with WSDL documents is essential to get the most out of ICS. This blog explored a few issues commonly found while using WSDL documents in ICS and showed how to solve them.


Automating Data Loads from Taleo Cloud Service to BI Cloud Service (BICS)


Introduction

This article will outline a method for extracting data from Taleo Cloud Service, and automatically loading that data into BI Cloud Service (BICS).  Two tools will be used, the Taleo Connect Client, and the BICS Data Sync Utility.   The Taleo Connect Client will be configured to extract data in CSV format from Taleo, and save that in a local directory.  The Data Sync tool will monitor that local directory, and once the file is available, it will load the data into BICS using an incremental load strategy.  This process can be scheduled to run, or run on-demand.

 

Main Article

This article will be broken into 3 sections.

1. Set-up and configuration of the Taleo Connect Client,

2. Set-up and configuration of the Data Sync Tool,

3. The scheduling and configuration required so that the process can be run automatically and seamlessly.

 

1. Taleo Connect

The Taleo Connect Tool communicates with the Taleo backend via web services and provides an easy-to-use interface for creating data exports and loads.

Downloading and Installing

The Taleo Connect tool can be downloaded from the Oracle Software Delivery Cloud.

a. Search for ‘Oracle Taleo Platform Cloud Service – Connect’, and then select the Platform.  The tool is available for Microsoft Windows and Linux.


 

b. Click through the agreements and then select the ‘Download All’ option.

 


c. Extract the 5 zip files to a single directory.

d. Run the ‘Taleo Connect Client Application Installer’


e. If specific Encryption is required, enter that in the Encryption Key Server Configuration screen, or select ‘Next’ to use default encryption.

f. When prompted for the Product Packs directory, select the ‘Taleo Connect Client Application Data Model’ folder that was downloaded and unzipped in the previous step, and then select the path for the application to be installed into.

 

Configuring Taleo Connect

a. Run the Taleo Connect Client.  By default on Windows, it is installed in the “C:\Taleo Connect Client” directory.  The first time the tool is run, a connection needs to be defined; on subsequent runs this connection will be used by default.

b. Enter details of the Taleo environment and credentials.  Important – the user must have the ‘Integration Role’ to be able to use the Connect Client.

c. Select the Product and correct version for the Taleo environment.  In this example ‘Recruiting 14A’.

d. Select ‘Ping’ to confirm the connection details are correct.


 

Creating Extracts from Taleo

Exporting data with Taleo Connect tool requires an export definition as well as an export configuration.  These are saved as XML files, and can then be run from a command line to execute the extract.

This article will walk through very specific instructions for this use case.  More details on the Connect Client can be found in this article.

1. Create The Export Definition

a. Under the ‘File’ menu, select ‘New Export Wizard’


b. Select the Product and Model, and then the object that you wish to export.  In this case ‘Department’ is selected.


c. To select the fields to be included in the extract, choose the ‘Projections’ workspace tab, as shown below, and then drag the fields from the Entity Structure into that space.  In this example the whole ‘Department’ tree is dragged into the Projections section, which brings all the fields in automatically.

 


d. There are options to Filter and Sort the data, as well as Advanced Options, which include using sub-queries, grouping, joining, and more advanced filtering.  For more information on these, see the Taleo Product Documentation.  In the case of a large transaction table, it may be worth considering building a filter that only extracts the last X period of data, using the LastModifiedDate field, to limit the size of the file created and processed each time.  In this example, the Dataset is small, so a full extract will be run each time.

 


e. Check the ‘CSV header present’ option.  This adds the column names as the first row of the file, which makes it easier to set up the source in the Data Sync tool.


f. Once complete, save the Export Definition with the disk icon, or under the ‘File’ menu.

 

2. Create The Export Configuration

a. Create the Export Configuration, by selecting ‘File’ and the ‘New Configuration Wizard’.


b. Base the export specification on the Export Definition created in the last step.


c. Select the Default Endpoint, and then ‘Finish’.


d. By default the name of the Response, or output file, is generated using an identifier, with the Identity name – in this case Department – and a timestamp.  While the Data Sync tool can handle this type of file name with a wildcard, in this example the ‘Pre-defined value’ is selected so that the export creates the same file each time – called ‘Department.csv’.


e. Save the Export Configuration.  This needs to be done before the schedule and command line syntax can be generated.

f. To generate the operating system dependent syntax to run the extract from a command line, check the ‘Enable Schedule Monitoring’ on the General tab, then ‘Click here to configure schedule’.

g. Select the operating system, and interval, and then ‘Build Command Line’.

h. The resulting code can be Copied to the clipboard.  Save this.  It will be used in the final section of the article to configure the command line used by the scheduler to run the Taleo extract process.


i.  Manually execute the job by selecting the ‘gear’ icon

 


 

j. Follow the status in the monitoring window to the right hand side of the screen.

In this example, the Department.csv file was created in 26 seconds.  This will be used in the next step with the Data Sync tool.


 

2. Data Sync Tool

The Data Sync Tool can be downloaded from OTN through this link.

For more information on installing and configuring the tool, see this post that I wrote last year.  Use this to configure the Data Sync tool, and to set up the TARGET connection for the BICS environment where the Taleo data will be loaded.

 

Configuring the Taleo Data Load

a. Under “Project” and “File Data”, create a new source file for the ‘Department.csv’ file created by the Taleo Connect tool.


b. Under ‘Import Options’, manually enter the following string for the Timestamp format.

yyyy-MM-dd'T'HH:mm:ssX

This is the format that the Taleo Extract uses, and this needs to be defined within the Data Sync tool so that the CSV file can be parsed correctly.


c. Enter the name of the Target table in BICS.  In this example, a new table called ‘TALEO_DEPARTMENT’ will be created.


d. The Data Sync tool samples the data and makes a determination of the correct file format for each column.  Confirm these are correct and change if necessary.


e. If a new table is being created in BICS as part of this process, it is often a better idea to let the Data Sync tool create that table so it has the permissions it requires to load data and create any necessary indexes.  Under ‘Project’ / ‘Target Tables’ right click on the Target table name, and select ‘Drop/Create/Alter Tables’


f. In the resulting screen, select ‘Create New’ and hit OK.  The Data Sync tool will connect to the BICS Target environment and execute the SQL required to create the TALEO_DEPARTMENT target table


g. If an incremental load strategy is required, select the ‘Update table’ option as shown below


h. Select the unique key on the table – in this case ‘Number’


i. Select the ‘LastModifiedDate’ for the ‘Filters’ section.  Data Sync will use this to identify which records have changed since the last load.


In this example, the Data Sync tool suggests a new Index on the target table in BICS.  Click ‘OK’ to let it generate that on the Target BICS database.


 

Create Data Sync Job

Under ‘Jobs’, select ‘New’ and name the job.  Make a note of the Job name, as this will be used later in the scheduling and automation of this process

 


 

Run Data Sync Job

a. Execute the newly created Job by selecting the ‘Run Job’ button


b. Monitor the progress under the ‘Current Jobs’ tab.


c. Once the job completes, go to the ‘History’ tab, select the job, and then in the bottom section of the screen select the ‘Tasks’ tab to confirm everything ran successfully.  In this case the ‘Status Description’ confirms the job ‘Successfully completed’ and that 1164 rows were loaded into BICS, with 0 Failed Rows.  Investigate any errors and make changes before continuing.


 

3. Configuring and Scheduling Process

As an overview of the process, a ‘.bat’ file will be created and scheduled to run.  This ‘bat’ file will execute the extract from Taleo, with that CSV file being saved to the local file system.  The second step in the ‘.bat’ file will create a ‘touch file’.  The Data Sync Tool will monitor for the ‘touch file’, and once found, will start the load process.  As part of this, the ‘touch file’ will automatically be deleted by the Data Sync tool, so that the process is not started again until a new CSV file from Taleo is generated.

a. In a text editor, create a ‘.bat’ file.  In this case the file is called ‘Taleo_Department.bat’.

b. Use the syntax generated in step ‘2 h’ in the section where the ‘Taleo Export Configuration’ was created.

c. Use the ‘call’ command before this command.  Failure to do this will result in the extract being completed, but the next command in the ‘.bat’ file not being run.

d. Create the ‘touch file’ using an ‘echo’ command.  In this example a file called ‘DS_Department_Trigger.txt’ file will be created.


e. Save the ‘bat’ file.

f. Configure the Data Sync tool to look for the Touch File created in step d, by editing the ‘on_demand_job.xml’, which can be found in the ‘conf-shared’ directory within the Data Sync main directory structure.


g. At the bottom of the file in the ‘OnDemandMonitors’ section, change the ‘pollingIntervalInMinutes’ to be an appropriate value. In this case Data Sync will be set to check for the Touch file every minute.

h. Add a line within the <OnDemandMonitors> section to define the Data Sync job that will be Executed once the Touch file is found, and the name and path of the Touch file to be monitored.


In this example, the syntax looks like this

<TriggerFile job="Taleo_Load" file="C:\Users\oracle\Documents\DS_Department_Trigger.txt"/>

 

The Data Sync tool can be configured to monitor for multiple touch files, each that would trigger a different job.  A separate line item would be required for each.

i. The final step is to schedule the ‘.bat’ file to run at a suitable interval.  Within Windows, the ‘Task Scheduler’ can be found beneath the ‘Accessories’ / ‘System Tools’ section under the ‘All Programs’ menu.  In Linux, use the ‘crontab’ command.
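On Linux, the equivalent of the ‘.bat’ file is a small shell script scheduled with cron. The sketch below is illustrative only; the script names, the path of the generated Taleo command and the touch file location are assumptions that must match your own environment and the on_demand_job.xml entry above:

#!/bin/sh
# Contents of taleo_department.sh (names and paths are examples)
# Step 1: run the Taleo extract using the command line generated in step '2 h'
/home/oracle/taleo/scripts/run_department_export.sh
# Step 2: create the touch file that the Data Sync tool is monitoring
echo "extract complete" > /home/oracle/DS_Department_Trigger.txt

A matching crontab entry to run the script every hour could look like:

0 * * * * /home/oracle/taleo/scripts/taleo_department.sh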

 

Summary

This article walked through the steps for configuring the Taleo Connect Client to download data from Taleo and save to a location to be automatically consumed by the Data Sync tool, and loaded to BICS.

 

Further Reading

Taleo Product Documentation

Getting Started with Taleo Connect Client

Configuring the Data Sync Tool for BI Cloud Services

EDI Processing with B2B in hybrid SOA Cloud Cluster integrating On-Premises Endpoints


Executive Overview

SOA Cloud Service (SOACS) can be used to support the B2B commerce requirements of many large corporations. This article discusses a common use case of EDI processing with Oracle B2B within SOA Cloud Service in a hybrid cloud architecture. The documents are received and sent from on-premises endpoints using SFTP channels configured using SSH tunnels.

Solution Approach

Overview

The overall solution is described in the diagram shown here.

An XML file with PurchaseOrder content is sent to a SOACS instance running in Oracle Public Cloud (OPC) from an on-premise SFTP server.

The XML file is received by an FTP Adapter in a simple composite for hand-off to B2B. The B2B engine within SOACS then generates the actual EDI file and transmits it over an SFTP delivery channel back to an on-premise endpoint.

In reality, the endpoint can be any endpoint inside or outside the corporate firewall. Communication with an external endpoint is trivial and hence left out of the discussion here. Using the techniques of SSH tunnels, the objective here is to demonstrate the ease by which any on-premises endpoint can be seamlessly integrated into the SOA Cloud Service hybrid solution architecture.

Our environment involves a SOACS domain on OPC with 2 managed servers. Hence, the communication with an on-premise endpoint is configured using SSH tunnels as described in my team-mate, Christian Weeks’ blog on SSH tunnel for on-premises connectivity in SOA Cloud clusters[1].

If the SOACS domain contains only a single SOACS node, then a simpler approach can also be used to establish the on-premise connectivity via SSH tunneling, as described in my blog on simple SSH tunnel connectivity for on-premises databases from SOA Cloud instance[2].

The following sections walk through the details of setting up the flow for a PurchaseOrder XML document from an on-premise back-end application, like E-Business Suite, to the 850 X12 EDI generated for transmission to an external trading partner.

Summary of Steps

  • Copy the private key of SOACS instance to the on-premise SFTP server
  • Update the whitelist for the SOACS compute nodes to allow traffic flow between the SOACS compute nodes and the on-premise endpoints via the intermediate gateway compute node, referred to as CloudGatewayforOnPremTunnel in the rest of this post. This topic has also been extensively discussed in Christian’s blog[1].
  • Establish an SSH tunnel from the on-premise SFTP Server (OnPremSFTPServer) to the Cloud Gateway Listener host identified within the SOA Cloud Service compute nodes (CloudGatewayforOnPremTunnel). The role of this host in establishing the SSH tunnel for a cluster has been extensively discussed in Christian’s blog[1]. This SSH tunnel, as described, specifies a local port and a remote port. The local port will be the listening port of the SFTP server (default is 22) and the remote port can be any port that is available within the SOACS instance (e.g. 2522).
  • Update FTP Adapter’s outbound connection pool configuration to include the new endpoint and redeploy. Since we have a cluster within the SOA Cloud service, the standard JNDI entries for eis/ftp/HAFtpAdapter should be used.
  • Define a new B2B delivery channel for the OnPremise SFTP server using the redirected ports for SFTP transmission.
  • Develop a simple SOA composite to receive the XML  payload via FTP adapter and hand-off to B2B using B2B Adapter.
  • Deploy the B2B agreement and the SOA composite.
  • Test the entire round-trip flow for generation of an 850 X12 EDI from a PurchaseOrder XML file.


Task and Activity Details

The following sections will walk through the details of individual steps. The environment consists of the following key machines:

  • SOACS cluster with 2 managed servers and all the dependent cloud services within OPC.
  • A compute node within SOACS instance is identified to be the gateway listener for the SSH tunnel from on-premise hosts (CloudGatewayforOnPremTunnel)
  • Linux machine inside the corporate firewall, used for hosting the On-Premise SFTP Server (myOnPremSFTPServer)

I. Copy the private key of SOACS instance to the on-premise SFTP server

When a SOACS instance is created, a public key file is uploaded for establishing SSH sessions. The corresponding private key has to be copied to the SFTP server. The private key can then be used to start the SSH tunnel from the SFTP server to the SOACS instance.

Alternatively, a private/public key can be generated in the SFTP server and the public key can be copied into the authorized_keys file of the SOACS instance. In the example here, the private key for the SOACS instance has been copied to the SFTP server. A transcript of a typical session is shown below.

slahiri@slahiri-lnx:~/stage/cloud$ ls -l shubsoa_key*
-rw------- 1 slahiri slahiri 1679 Dec 29 18:05 shubsoa_key
-rw-r--r-- 1 slahiri slahiri 397 Dec 29 18:05 shubsoa_key.pub
slahiri@slahiri-lnx:~/stage/cloud$ scp shubsoa_key myOnPremSFTPServer:/home/slahiri/.ssh
slahiri@myOnPremSFTPServer's password:
shubsoa_key                                                                                100% 1679        1.6KB/s     00:00
slahiri@slahiri-lnx:~/stage/cloud$

On the on-premise SFTP server, login and confirm that the private key for SOACS instance has been copied in the $HOME/.ssh directory.

[slahiri@myOnPremSFTPServer ~/.ssh]$ pwd
/home/slahiri/.ssh
[slahiri@myOnPremSFTPServer ~/.ssh]$ ls -l shubsoa_key
-rw-------+ 1 slahiri g900 1679 Jan  9 06:39 shubsoa_key
[slahiri@myOnPremSFTPServer ~/.ssh]$
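For the alternative mentioned above (generating a key pair on the SFTP server and adding its public key to the SOACS instance), a minimal sketch looks like this; the key file name is an example and the opc user is assumed for the gateway host:

# On the on-premise SFTP server: generate a new key pair
ssh-keygen -t rsa -f ~/.ssh/onprem_tunnel_key -N ""

# Append the public key to the authorized_keys file on the Cloud Gateway host
cat ~/.ssh/onprem_tunnel_key.pub | ssh opc@CloudGatewayforOnPremTunnel "cat >> ~/.ssh/authorized_keys"

# The tunnel in Step III can then be started with: -i ~/.ssh/onprem_tunnel_key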

II. Create whitelist entries to allow communications between different SOACS compute nodes and on-premise SFTP server

The details about creation of a new security application and rule have been discussed extensively in Christian’s blog[1]. For the sake of brevity, just the relevant parameters for the definition are shown here. These entries are created from the Compute Node Service Console under Network tab.

Security Application
  • Name: OnPremSFTPServer_sshtunnel_sftp
  • Port Type: tcp
  • Port Range Start: 2522
  • Port Range End: 2522
  • Description: SSH Tunnel for On-Premises SFTP Server
Security Rule
  • Name: OnPremSFTPServer_ssh_sftp
  • Status: Enabled
  • Security Application: OnPremSFTPServer_sshtunnel_sftp (as created in last step)
  • Source: Security Lists – ShubSOACS-jcs/wls/ora-ms (select entry that refers to all the managed servers in the cluster)
  • Destination: ShubSOACS-jcs/lb/ora_otd (select the host designated to be CloudGatewayforOnPremTunnel, which could be either the DB or LBR VM)
  • Description: ssh tunnel for On-Premises SFTP Server

III. Create an SSH Tunnel from On-Premise SFTP Server to the CloudGatewayforOnPremTunnel VM’s public IP

Using the private key from Step I, start an SSH session from the on-premise SFTP server host to the CloudGatewayforOnPremTunnel, specifying the local and remote ports. As mentioned earlier, the local port is the standard port for SFTP daemon, e.g. 22. The remote port is any suitable port that is available in the SOACS instance. The syntax of the ssh command used is shown here.

ssh -R :<remote-port>:<host>:<local port> -i <private keyfile> opc@<CloudGatewayforOnPremTunnel VM IP>

The session transcript is shown below.

[slahiri@myOnPremSFTPServer ~/.ssh]$ ssh -v -R :2522:localhost:22 -i ./shubsoa_key opc@CloudGatewayforOnPremTunnel
[opc@CloudGatewayforOnPremTunnel ~]$ netstat -an | grep 2522
tcp        0      0 127.0.0.1:2522              0.0.0.0:*                   LISTEN
tcp        0      0 ::1:2522                         :::*                            LISTEN
[opc@CloudGatewayforOnPremTunnel ~]$

After establishing the SSH tunnel, the netstat utility can confirm that the remote port 2522 is enabled in listening mode within the Cloud Gateway VM. This remote port, 2522 and localhost along with other on-premises SFTP parameters can now be used to define an endpoint in FTP Adapter’s outbound connection pool in Weblogic Adminserver (WLS) console.

IV. Define a new JNDI entry for FTP Adapter that uses the on-premise SFTP server via the SSH  tunnel

From the WLS console, under Deployments, update the FtpAdapter application by defining parameters for the outbound connection pool JNDI entry for clusters, i.e. eis/Ftp/HAFtpAdapter.

The remote port from Step III is used in defining the port within the JNDI entry for the FTP Adapter. It should be noted that the host specified will be CloudGatewayforOnPremTunnel instead of the actual on-premise hostname or address of the SFTP server, since port forwarding with the SSH tunnel is now enabled locally within the SOACS instance in Step III.

It should be noted that SOA Cloud instances do not use any shared storage. So, the deployment plan must be copied to the file systems for each node before deployment of the FTP Adapter application.

The process to update the FtpAdapter deployment is fairly straightforward and follows the standard methodology. So, only the primary field values that are used in the JNDI definition are provided below.

  • JNDI under Outbound Connection Pools: eis/Ftp/HAFtpAdapter
  • Host:CloudGatewayforOnPremTunnel
  • Username: <SFTP User>
  • Password: <SFTP User Password>
  • Port:2522
  • UseSftp: true

V. Configure B2B Metadata

Standard B2B configuration will be required to set up the trading partners, document definitions and agreements. The unique configuration pertaining to this test case involves setting up the SFTP delivery channel to send the EDI document to SFTP server residing on premises inside the corporate firewall. Again, the remote port from Step III is used in defining the port for the delivery channel. The screen-shot for channel definition is shown below.

After definition of the metadata, the agreement for outbound 850 EDI is deployed for runtime processing.

VI. Verification of SFTP connectivity

After the deployment of the FTP Adapter, another quick check of netstat for port 2522 may show additional entries indicating an established session corresponding to the newly created FTP Adapter. The connections are established and disconnected based on the polling interval of the FTP Adapter. Another way to verify the SFTP connectivity is to manually launch an SFTP session from the command line as shown here.

[opc@shubsoacs-jcs-wls-1 ~]$ sftp -oPort=2522 slahiri@CloudGatewayforOnPremTunnel
Connecting to CloudGatewayforOnPremTunnel...
The authenticity of host '[cloudgatewayforonpremtunnel]:2522 ([10.196.240.130]:2522)' can't be established.
RSA key fingerprint is 93:c3:5c:8f:61:c6:60:ac:12:31:06:13:58:00:50:eb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[cloudgatewayforonpremtunnel]:2522' (RSA) to the list of known hosts.
slahiri@cloudgatewayforonpremtunnel's password:
sftp> quit
[opc@shubsoacs-jcs-wls-1 ~]$

While this SFTP session is connected, a quick netstat check on the CloudGatewayforOnPremTunnel host will confirm the established session for port 2522 from the SOACS compute node.

[opc@CloudGatewayforOnPremTunnel ~]$ netstat -an | grep 2522
tcp        0       0 0.0.0.0:2522                       0.0.0.0:*                               LISTEN
tcp        0      0 10.196.240.130:2522         10.196.246.186:14059        ESTABLISHED
tcp        0       0 :::2522                                 :::*                                       LISTEN
[opc@CloudGatewayforOnPremTunnel ~]$

VII. Use the newly created JNDI to develop a SOA composite containing FTP Adapter and B2B Adapter to hand-off the XML payload from SFTP Server to B2B engine

The simple SOA composite diagram built in JDeveloper for this test case is shown below.

The JNDI entry created in step IV (eis/ftp/HAFtpAdapter) is used in the FTP Adapter Wizard session within JDeveloper to set up a receiving endpoint from the on-premises SFTP server. A simple BPEL process is included to transfer the input XML payload to B2B. The B2B Adapter then hands-off the XML payload to the B2B engine for generation of the X12 EDI in native format.


Deploy the composite via EM console to complete the design-time activities. We are now ready for testing.

VIII. Test the end-to-end EDI processing flow

After deployment, the entire flow can be tested by copying a PurchaseOrder XML file in the polling directory for incoming files within the on-premise SFTP server. An excerpt from the sample XML file used as input file to trigger the process, is shown below.

[slahiri@myOnPremSFTPServer cloud]$ more po_850.xml
<Transaction-850 xmlns="http://www.edifecs.com/xdata/200" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" XDataVersion="1.0" Standard="X12" Version="V4010" CreatedDate="2007-04-10T17:16:24" CreatedBy="ECXEngine_837">
     <Segment-ST>
           <Element-143>850</Element-143>
           <Element-329>16950001</Element-329>
      </Segment-ST>
      <Segment-BEG>
           <Element-353>00</Element-353>
           <Element-92>SA</Element-92>
           <Element-324>815455</Element-324>
           <Element-328 xsi:nil="true"/>
           <Element-373>20041216</Element-373>
        </Segment-BEG>
–More–(7%)

The FTP Adapter of the SOA composite from SOACS instance will pick up the XML file via the SSH tunnel and process it in Oracle Public Cloud within Oracle B2B engine to generate the EDI. The EDI file will then be transmitted back to the on-premise SFTP server via the same SSH tunnel.

Results from the completed composite instance should be visible in the Enterprise Manager, as shown below.


Content of the EDI file along with the SFTP URL used to transmit the file can be seen in the B2B console, under Wire Message Reports section.


Summary

The test case described here is a quick way to demonstrate that SOA Cloud Service can easily be used in a hybrid architecture to model common B2B use cases that require access to on-premise endpoints. The EDI generation process and all the business layer orchestration can be done in Oracle Public Cloud (OPC) with SOA Suite. Most importantly, integration with on-premise server endpoints can be enabled as needed via SSH tunnels to provide a hybrid cloud solution.

Acknowledgements

SOACS Product Management and Engineering teams have been actively involved in the development of this solution for many months. It would not have been possible to deliver such a solution to the customers without their valuable contribution.

References

  1. Setting up SSH tunnels for cloud to on-premise with SOA Cloud Service clusters – Christian Weeks, A-Team
  2. SOA Cloud Service – Quick and Simple Setup of an SSH Tunnel for On-Premises Database Connectivity - Shub Lahiri, A-Team

Integrating Oracle Sales Cloud with Oracle Business Intelligence Cloud Service (BICS) – Part 2


Introduction

 

This article provides a fresh approach on the subject of integrating Oracle Sales Cloud with Oracle Business Intelligence Cloud Service (BICS).


Integrating Oracle Sales Cloud with Oracle Business Intelligence Cloud Service (BICS) – Part 1 showcased how to use Oracle Transactional Business Intelligence (OTBI) to extract data from Sales Cloud and load it into BICS.


This article tackles the reverse data movement pattern – loading data from a BICS dashboard into Sales Cloud.


Data is inserted into Sales Cloud using the REST API for Oracle Sales Cloud. This is the more conventional part of the solution, using similar concepts covered in past BICS integration blogs such as:


1)    PL/SQL is used for the ETL.


2)    A Database Stored Procedure is triggered by a Database Function.


3)    The Database Function is referenced in the Modeler using EVALUATE.


4)    The data-load is triggered from a Dashboard using an Action Link.


5)    Dashboard Prompts are used to pass selected values from the Dashboard to the Stored Procedure using Request and Session Variables.


The more ambitious component of this article is replicating the user experience of scraping data from a dynamically filtered Dashboard Analysis Request. Write-back is emulated by replicating what the user views on the Dashboard in a stored procedure SQL SELECT.


1)    The Dashboard Consumer refines the results on the Dashboard with a Prompt that represents a unique record identifier.


2)    The Dashboard Prompt selections are passed to the Stored Procedure SELECT and used in a WHERE CLAUSE to replicate the refinement that the Dashboard Consumer makes on the Dashboard.



Note: When replicating Dashboard SQL in the stored procedure, be cautious of data that has had row level security applied in the Modeler. To avoid erroneous results, all customized permissions must be manually enforced through the stored procedure SQL WHERE clause.


The following steps walk-through the creation of the necessary BICS and PL/SQL artifacts needed to load data from a BICS Dashboard into Sales Cloud. The example provided interprets the contact information from the Dashboard and creates a new matching contact in Sales Cloud. This example could be easily modified to support other REST API methods.


Part A – Configure BICS Dashboard


1)    Create BICS Table

2)    Insert Records into BICS Table

3)    Create Analysis Request

4)    Create Dashboard


Part B – Configure PL/SQL


5)    Review REST API Documentation

6)    Test POST Method

7)    Create Stored Procedure

8)    Test Stored Procedure

9)    Create Function

10)  Test Function


PART C – Configure Action Link Trigger


11)  Create DUMMY_PUSH Table

12)  Create Variable

13)  Reference Variable

14)  Create Model Expression

15)  Create DUMMY_PUSH Analysis Request

16)  Create Action Link

17)  Execute Update

 

Main Article


Part A – Configure BICS Dashboard


Step 1 – Create BICS Table


Create a simple “contacts” table in Oracle Application Express (Apex) -> SQL Workshop -> SQL Commands.

Where CONTACT_KEY is the unique record identifier that will be used to refine the data on the Dashboard. This must be something that the Dashboard Consumer can easily recognize and decipher.

CREATE TABLE BICS_CONTACTS(
FIRST_NAME VARCHAR2(500),
LAST_NAME VARCHAR2(500),
ADDRESS1 VARCHAR2(500),
CITY VARCHAR2(500),
COUNTRY VARCHAR2(500),
STATE VARCHAR2(500),
CONTACT_KEY VARCHAR2(500));


Step 2 – Insert Records into BICS Table


Insert a selection of sample contact records into the contacts table.

For a text version of both SQL snippets click here.

INSERT INTO BICS_CONTACTS(FIRST_NAME,LAST_NAME,ADDRESS1,CITY,COUNTRY,STATE,CONTACT_KEY)
VALUES ('Jay','Pearson','7604 Technology Way','Denver','US','CO','Pearson-Jay');



Step 3 – Create Analysis Request


Add the BICS_CONTACTS table to the Model and join it to another table.

Create an Analysis Request based on the BICS_CONTACTS table.

Add a filter on CONTACT_KEY where Operator = “is prompted”.


Step 4 – Create Dashboard


Create a Dashboard. Add the BICS_Contacts Analysis and a Prompt on CONTACT_KEY. To keep the example simple, a List Box Prompt has been used. Additionally, “Include All Column Values” and “Enable user to select multiple values” are disabled. It is possible to use both these options, with extra manual SQL in the stored procedure. A workaround for passing multiple values to session variables has been previously discussed in Integrating Oracle Social Data and Insight Cloud Service with Oracle Business Intelligence Cloud Service (BICS).


Part B – Configure PL/SQL


Step 5 – Review REST API Documentation


Begin by reviewing the REST API for Oracle Sales Cloud documentation. This article only covers using:

Task: Create a contact
Request: POST
URI: crmCommonApi/resources/<version>/contact

That said, there are many other tasks/requests available in the API that may be useful for various integration scenarios.
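For example, before wiring anything into BICS it can be useful to sanity-check connectivity and browse existing contacts with a simple GET. The sketch below assumes the same base URI as the POST example that follows and that the limit query parameter is supported in your Sales Cloud version:

# Retrieve a handful of contacts to confirm the credentials and URI are correct
curl -u user:pwd -H "Accept: application/json" "https://abcd-fap1234-crm.oracledemos.com:443/crmCommonApi/resources/11.1.10/contacts?limit=5"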

Step 6 – Test POST Method


From Postman:


From Curl:

For a text version of the curl click here.

curl -u user:pwd -X POST -v -k -H "Content-Type: application/vnd.oracle.adf.resourceitem+json" -H "Cache-Control: no-cache" -d@C:\temp\contact.json https://abcd-fap1234-crm.oracledemos.com:443/crmCommonApi/resources/11.1.10/contacts

Where C:\temp\contact.json is:

For a text version of the JSON click here.

{
  "FirstName": "John Barry",
  "LastName": "Smith",
  "Address": [
    {
      "Address1": "100 Oracle Parkway",
      "City": "Redwood Shores",
      "Country": "US",
      "State": "CA"
    }
  ]
}

Confirm Contact was created in Sales Cloud.


Step 7 – Create Stored Procedure


Replace Sales Cloud Server Name, Username, and Password.

For a text version of the code snippet click here.

create or replace PROCEDURE PUSH_TO_SALES_CLOUD(p_selected_records varchar2, o_status OUT varchar2) IS
l_ws_response_clob CLOB;
l_ws_url VARCHAR2(500) := 'https://abc1-fap1234-crm.oracledemos.com:443/crmCommonApi/resources/11.1.10/contacts';
l_body CLOB;
v_array apex_application_global.vc_arr2;
v_first_name VARCHAR2(100);
v_last_name VARCHAR2(100);
v_address1 VARCHAR2(100);
v_city VARCHAR2(100);
v_country VARCHAR2(100);
v_state VARCHAR2(100);
v_status VARCHAR2(100);
BEGIN
v_array := apex_util.string_to_table(p_selected_records, ',');
FOR j in 1..v_array.count LOOP
SELECT FIRST_NAME,LAST_NAME,ADDRESS1,CITY,COUNTRY,STATE
INTO v_first_name, v_last_name, v_address1, v_city, v_country, v_state
FROM BICS_CONTACTS
WHERE CONTACT_KEY = v_array(j);
l_body := '{
"FirstName": "' || v_first_name ||
'","LastName": "' || v_last_name ||
'","Address": [{"Address1": "' || v_address1 ||
'","City": "' || v_city ||
'","Country": "' || v_country ||
'","State": "' || v_state ||
'"}]}';
--dbms_output.put_line('Body:' || dbms_lob.substr(l_body));
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/vnd.oracle.adf.resourceitem+json';
l_ws_response_clob := apex_web_service.make_rest_request
(
p_url => l_ws_url,
p_body => l_body,
p_username => 'User',
p_password => 'Pwd',
p_http_method => 'POST'
);
v_status := apex_web_service.g_status_code;
--dbms_output.put_line('Status:' || dbms_lob.substr(v_status));
END LOOP;
o_status := v_status;
COMMIT;
END;

Step 8 – Test Stored Procedure


For a text version of the code snippet click here.

declare
o_status varchar2(100);
begin
PUSH_TO_SALES_CLOUD('Pearson-Jay',o_status);
dbms_output.put_line(o_status);
end;

RETURNS: 201 – indicating successful creation of contact

Step 9 – Create Function


For a text version of the code snippet click here.

create or replace FUNCTION FUNC_PUSH_TO_SALES_CLOUD
(
p_selected_records IN VARCHAR2
) RETURN VARCHAR
IS PRAGMA AUTONOMOUS_TRANSACTION;
o_status VARCHAR2(100);
begin
PUSH_TO_SALES_CLOUD(p_selected_records,o_status);
COMMIT;
RETURN o_status;
end;

Step 10 – Test Function


For a text version of the SQL click here.

select FUNC_PUSH_TO_SALES_CLOUD('Pearson-Jay')
from dual;

RETURNS: 201 – indicating successful creation of contact


Part C – Configure Action Link Trigger


Quick Re-Cap:


It may be useful to revisit the diagram provided in the intro to give some context to where we are at.

“Part A” covered building the Dashboard shown in #4.

“Part B” covered building items #1 & #2.

“Part C” will now cover the remaining artifacts shown in #3 and #4.


Step 11 – Create DUMMY_PUSH Table


This table's main purpose is to trigger the database function. It must have a minimum of one column and a maximum of one row. It is important that this table has only one row, because the function will be triggered for every row in the table.

For a text version of the SQL click here.

CREATE TABLE DUMMY_PUSH (REFRESH_TEXT VARCHAR2(255));

INSERT INTO DUMMY_PUSH(REFRESH_TEXT) VALUES ('Status:');

 

Step 12 – Create Variable


From the Modeler create a variable called “r_selected_records”. Provide a starting Value and define the SQL Query.


Step 13 – Reference the Variable


On the Dashboard Prompt (created in Part A – Step 4) set a “Request Variable” matching the name of the Variable (created in Part C – Step 12). i.e. r_selected_records


 

Step 14 – Create Model Expression


Add the DUMMY_PUSH table to the Model. Join it to another table.

Add an Expression Column called PUSH_TO_SALES_CLOUD.

Use EVALUATE to call the database function “FUNC_PUSH_TO_SALES_CLOUD” passing through the variable “r_selected_records”.


For a text version of the EVALUATE statement click here.

EVALUATE('FUNC_PUSH_TO_SALES_CLOUD(%1)',VALUEOF(NQ_SESSION."r_selected_records"))



Step 15 – Create DUMMY_PUSH Analysis Request


Add both fields to the Analysis – hiding column headings if desired.


Step 16 – Create Action Link


Add an Action Link to the Dashboard. Choose Navigate to BI Content.


 Check “Run Confirmation” and customize message if needed.


Customize the Link Text and Caption (if desired).


Step 17 – Execute Update


Select the user to insert into Sales Cloud.

*** Important ***

CLICK APPLY

Apply must be hit to set the variable on the prompt!


Click the “Push to Sales Cloud” Action Link.

Confirm the Action


Status 201 is returned indicating the successful creation of the contact.


Confirm contact was created in Sales Cloud.


Further Reading


Click here for the REST API for Oracle Sales Cloud guide.

Click here for the Application Express API Reference Guide – MAKE_REST_REQUEST Function.

Click here for more A-Team BICS Blogs.


Summary


This article provided a set of examples that leverage the APEX_WEB_SERVICE_API to integrate Oracle Sales Cloud with Oracle Business Intelligence Cloud Service (BICS) using the REST API for Oracle Sales Cloud.

The use case shown was for BICS and Oracle Sales Cloud integration. However, many of the techniques referenced could be used to integrate Oracle Sales Cloud with other Oracle and non-Oracle applications.

Similarly, the Apex MAKE_REST_REQUEST example could be easily modified to integrate BICS or standalone Oracle Apex with any other REST web services.

Techniques referenced in this blog could be useful for those building BICS REST ETL connectors and plug-ins.

Integration Cloud Service (ICS) On-Premise Agent Installation


The Oracle On-Premises Agent (aka, Connectivity Agent) is necessary for Oracle ICS to communicate to on-premise resources without the need for firewall configurations or VPN. Additional details about the Agent can be found under New Agent Simplifies Cloud to On-premises Integration. The purpose of this A-Team blog is to give a consolidated and simplified flow of what is needed to install the agent and provide a foundation for other blogs (e.g., E-Business Suite Integration with Integration Cloud Service and DB Adapter). For the detailed online documentation for the On-Premises Agent, see Managing Agent Groups and the On-Premises Agent.

On-Premises Agent Installation

The high-level steps for getting the On-Premises Agent installed on your production POD consist of two activities: 1. Create an Agent Group in the ICS console, and 2. Run the On-Premises Agent installer. Step 2 will be done on an on-premise Linux machine and the end result will be a lightweight WebLogic server instance that will be running on port 7001.

Create an Agent Group

1. Log in to the production ICS console and view the landing page.
2. Verify that the ICS version is 15.4.5 or greater.
3. Scroll down on ICS Home page and select Create Agents. Notice this brings you to the Agents page of the Designer section.
4. On the Agents page click on Create New Agent Group.
5. Provide a name for your agent group (e.g., AGENT_GROUP).
6. Review the Agent page containing new group.

Run the On-Premises Agent Installer

1. Click on the Download Agent Installer drop down on the Agent page, select Connectivity Agent, and save file to an on-premise Linux machine where the agent will be installed/running.
2. Extract the contents of the zip file to obtain cloud-connectivity-agent-installer.bsx.  This .bsx is the installation script that will be executed on the on-premise machine where the agent will reside.  A .bsx is a self-extracting Linux bash script:
3. Make sure the cloud-connectivity-agent-installer.bsx file is executable (e.g., chmod +x cloud-connectivity-agent-installer.bsx) and execute the shell script.  NOTE: It is important to specify the SSL port (443) as part of the host URL.  For example:
./cloud-connectivity-agent-installer.bsx -h=https://<ICS_HOST>:443 -u=[username] -p=[password] -ad=AGENT_GROUP
4. Return to the ICS console and the Agents configuration page.
5. Review the Agent Group.
6. Click on Monitoring and select the Agents icon on the left-hand side.
7. Review the Agent monitoring landing page.
8. Review the directory structure for the agent installation.
As you can see this is a standard WLS installation.  The agent server is a single-server configuration where everything is targeted to the Admin server and is listening on port 7001.  Simply use the scripts in the ${agent_domain}/bin directory to start and stop the server.
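Assuming the standard WebLogic domain scripts are present in the agent domain, starting and stopping the agent server from the command line might look like the following sketch (the domain path is an example):

# Start the agent server in the background and capture its output
cd /u01/agent_domain/bin
nohup ./startWebLogic.sh > agent_server.out 2>&1 &

# Stop the agent server
./stopWebLogic.sh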

We are now ready to leverage the agent for things like the Database or EBS Cloud Adapter.

E-Business Suite Integration with Integration Cloud Service and DB Adapter


Introduction

Integration Cloud Service (ICS) is Oracle's Platform-as-a-Service (PaaS) offering for implementing message-driven integration scenarios. This article introduces the use of ICS for integrating an on-premise E-Business Suite (EBS) instance via the Database Adapter. While EBS in recent releases offers a broad set of integration features like SOAP and REST support (i.e. via the Integrated SOA Gateway), these interfaces are not available in older versions like 11.5.x. In the past, a proven approach has been to use Oracle Fusion Middleware integration products (SOA, OSB etc.) running on-premise in a customer data center to connect to an EBS database via the DB Adapter. Shortly, this capability will also be available in a cloud-based integration solution, as we discuss in this article.

Although we focus on EBS integration here, the DB Adapter in ICS works the same way against any other custom database. The main reason to use an EBS context is the business case shown below, where ICS is connected to Mobile Cloud Service (MCS) to provide a mobile device solution.

Business Case and Architecture

It is not hard to imagine that Oracle customers running EBS 11.5.x might want to add a mobile channel for their end users. One option could be an upgrade to a recent release of EBS. As this will in most cases be a bigger project, an alternative could be the creation of a custom mobile solution via Oracle JET and MCS as shown below. MCS is a PaaS offering and requires access to an underlying database via REST/JSON. This is where ICS appears in the architecture.


In the absence of native SOAP or REST capabilities in the EBS 11.5.x tech stack, integration via ICS closes that gap. Any database access activity (retrieving data, CRUD operations etc.) can run via an ICS/DB Adapter connection to an EBS on-premise database. ICS itself provides a REST/JSON interface for the external interaction with EBS. This external interface is generic and not restricted to MCS as the caller. In our business case, however, ICS with the DB Adapter fulfills the role of a data access layer for a mobile solution.

As shown in the architecture figure above the following components are involved in this end-to-end mobile solution:

  • The DB Adapter uses a local component named ICS Agent, installed on-premise in the EBS data center. This agent communicates with the database via JCA and exchanges data between the DB Adapter in ICS and the database
  • Communication between the ICS Agent and the DB Adapter is set up via Oracle Messaging Service tunneled through HTTPS
  • DB Adapter provides a standard SQL interface for database access
  • Part of the embedded features in ICS are data mapping and transformation capabilities
  • The external REST endpoint in ICS will be made public through REST Adapter in ICS

The ICS configuration and communication shown in the architecture figure above represent a generic approach. In this sample, the mobile solution for EBS 11.5.x makes use of the described data access capabilities as follows (mobile components and JET are not in scope of this document, as we focus on the ICS part here):

  • MCS connects to ICS via a connector or generic REST interface
  • EBS data will be processed and cached in MCS
  • Mobile devices communicate with MCS via REST to render the EBS data for visualization and user interaction

In the remainder of this article we focus purely on the ICS and DB Adapter integration and leave the mobile features out of scope. The technical details of the ICS and DB Adapter implementation itself won't be covered here either, as they will be the main content of another blog. Instead, we show how the implementation features can be used from an Application Integration Developer's perspective.

ICS Configuration Overview

At the beginning of an ICS-based integration there are some configuration activities to be done, such as the creation of connections. This is a one-time (or rather first-time) task to make ICS ready for the creation of integration flows. It is usually not an Application Developer's activity; in most cases a dedicated ICS Administrator will perform the following actions.

At least two connections must be set up for this EBS integration via database communication:

  • Database Adapter pointing to the EBS database – the database connection parameters will be used by the ICS Agent running in-house in the customer's data center
  • REST Adapter to provide a REST interface for external communication

The screenshot below shows a sample configuration page for the DB Adapter connected to an EBS instance. The main parameters are those of a local connection from the ICS Agent to the database: hostname, port, SID.

On this configuration page, a local ICS Agent must also be assigned to this DB Adapter.


In most cases it will make sense to use the EBS database user APPS for this connection, as this credential provides the most universal and context-sensitive access to the EBS data model.


The other connection to set up is a REST interface (referred to as ICS LocalRest in this article) used for inbound requests and outbound responses. As shown in the screenshot below, this is a quite straightforward task without extensive configuration in our case. Variations are possible, especially for Security Policies, Username etc.:

  • Connection Type: REST API Base URL
  • Connection URL: https://<hostname>:<port>/ics
  • Security Policy: Basic Authentication
  • Username: <Weblogic_User>
  • Password: <password>
  • Confirm Password: <password>


After setting up these two connections we are ready to create an integration between the EBS database and any other system connected via REST.

DB Adapter based Integration with EBS

During our work we developed some good practices that are worth sharing. In general, we have had good experience with a top-down approach that looks as follows for the creation of an integration flow:

  • Identify the parameters in the REST call that will become part of the JSON payload (the functionality of this integration point) for the external interface
  • Identify the EBS database objects being involved (tables, views, packages, procedures etc)
  • Create a JSON sample message for inbound and another one for outbound
  • Design the data mapping between inbound/outbound parameters and SQL statement or PLSQL call
  • Create an EBS DB integration endpoint, enter the SQL statement or the call to the PLSQL procedure/function dedicated to performing the database activity
  • Create a local REST integration endpoint to manage the external communication
  • Assign the previously created inbound and outbound sample JSON messages to the request and response action
  • Create a message mapping for inbound parameters to SQL/PLSQL parameters
  • Do the same for outbound parameters
  • Add a tracking activity, save the integration and activate it for an external usage

The DB Adapter is able to handle complex database types and map them to record and array structures in JSON. This means there are no obvious limitations to passing nested data structures to PLSQL packages via JSON.

Here is a sample. In PLSQL we define data types as follows:

TYPE timeCard IS RECORD (
startTime VARCHAR2(20),
stopTime VARCHAR2(20),
tcComment VARCHAR2(100),
tcCategoryID VARCHAR2(40));
TYPE timeCardRec IS VARRAY(20) OF timeCard;

The parameter list of the procedure embeds this datatype in addition to plain scalar parameters:

procedure createTimecard(
userName   in varchar2,
tcRecord   in timeCardRec,
timecardID out NUMBER,
status     out varchar2,
message     out varchar2 );

The JSON sample payload for the IN parameters would look like this:

{
"EBSTimecardCreationCollection": {
   "EBSTimecardCreationInput": {
       "userName": "GEVANS",
       "timeEntries" : [
           {
             "startTime": "2015-08-17 07:30:00",
             "stopTime": "2015-08-17 16:00:00",
             "timecardComment": "Regular work",
             "timecardCategoryID": "31"
           },{
             "startTime": "2015-08-18 09:00:00",
             "stopTime": "2015-08-18 17:30:00",
             "timecardComment": "",
             "timecardCategoryID": "31"
           },{
             "startTime": "2015-08-19 08:00:00",
             "stopTime": "2015-08-19 16:00:00",
             "timecardComment": "Product Bugs Fixing",
             "timecardCategoryID": "31"
           },{
             "startTime": "2015-08-20 08:30:00",
             "stopTime": "2015-08-20 17:30:00",
             "timecardComment": "Customers Demo Preparation",
             "timecardCategoryID": "31"
           },{
             "startTime": "2015-08-21 09:00:00",
             "stopTime": "2015-08-21 17:00:00",
             "timecardComment": "Holiday taken",
             "timecardCategoryID": "33"
           }
           ] }
     }
}

The JSON sample below carries the output information from the PLSQL package back inside the response message:

{
   "EBSTimecardCreationOutput":
   {
       "timecardID": "6232",
       "status": "Success",
       "message": "Timecard with ID 6232 created for User GEVANS”
   }
}

As shown, we can use complex types in the EBS database and create a corresponding JSON structure that can be mapped 1:1 for request and response parameters.

Creating an EBS Integration

To start with the creation of an EBS integration, an Application Developer must log in to the assigned Integration Cloud Service instance with the username and password as provided.

06_Login_ICS

The entry screen after login shows the available activities, which are:

  • Connections
  • Integrations
  • Dashboard

As an Application Developer we will choose Integrations to create, modify or activate integration flows. Connection handling has been shown earlier in this article, and the Dashboard is the place to monitor runtime information.

07_MainScreenICS

To create a new integration flow, choose Create New Integration and then Map My Data. This will create an empty integration where you can connect adapters/endpoints and create data mappings.

08_1_NewIntegration

Enter the following information:

  • Integration Name : Visible Integration name, can be changed
  • Identifier : Internal Identifier, not changeable once created
  • Version :  Version number to start with
  • Package Name (optional) : Enter name if integration belongs to a package
  • Description (optional) : Additional explanatory information about integration

08_2_NewIntegration_Capabilities

The screenshot below shows an integration which is 100% complete and ready for activation. When creating a new integration, both the source and the target side will be empty. The suggestion is to start by creating the source, as marked on the left side in the figure below.

09_LocalRestAdapterIntegrationConfig

As mentioned before, it is good practice to follow a top-down approach. In this case the payload for the REST service is already defined and exists in the form of a JSON sample.

The following information will be requested when running the Oracle REST Endpoint configuration wizard:

  • Name of the endpoint (what do you want to call your endpoint?)
  • Description of this endpoint
  • Relative path of this endpoint like /employee/timecard/create in our sample
  • Action for this endpoint like GET, POST, PUT, DELETE
  • Options to be configured like
    • Add and review parameters for this endpoint
    • Configuration of a request payload
    • Configure this endpoint to receive the response

The sample screenshot below shows a configuration where a POST operation will be handled by this REST endpoint, including the request and response.

10_LocalRestAdapterIntegrationConfig

The next dialog window configures the request parameters, and the JSON sample is taken as the payload file. The payload content will appear later in the mapping dialog as the input structure.

11_LocalRestAdapterIntegrationRequestParam

The response payload is configured similarly to the request payload. As mentioned, the input/output parameters are supposed to be defined in a top-down approach for this endpoint. In the response payload dialog we assign the sample JSON payload structure defined as the output payload for this REST service.

12_LocalRestAdapterIntegrationResponseParam

Finally the summary dialog window appears and we can confirm and close this configuration wizard.

13_LocalRestAdapterIntegrationSummary

The next action is a similar configuration for the target – in our sample the DB Adapter connected to the EBS database.

14_EBSDbAdapterPackageConfig

The DB Adapter configuration wizard starts with a Basic Information page where the name of this endpoint is requested and the general decision has to be made whether the service will run a SQL statement or make a PLSQL procedure/function call.

As shown in the screenshot below, the further dialog for PLSQL-based database access basically starts by choosing the schema, package and procedure/function to be used. For EBS databases the schema name for PLSQL packages and procedures is usually APPS.

15_EBSDbAdapterPackageConfig

After making this choice the configuration is done. Any in/out parameters and return values of the chosen procedure or function become part of the request/response payload and appear in the message mapping dialog later.

16_EBSDbAdapterPackageConfig

In case the endpoint will run a plain SQL statement, just choose Run a SQL Statement in the Basic Information dialog window.

A different dialog window will appear which allows entering a SQL statement that might be a query or even a DML operation. Parameters must be passed in JCA notation with a preceding hash mark (#).
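
For illustration only, a minimal statement using this notation could look like the sketch below; FND_USER is just an example table, and the #userName bind will surface as an element of the generated request schema:

-- #userName becomes an input element of the generated request schema
SELECT user_id, user_name, description
  FROM fnd_user
 WHERE user_name = #userName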

17_EBSDbAdapterSQLValidation

After entering the SQL statement it must be validated by clicking the Validate SQL Query button. As long as validation error messages appear, those must be corrected first in order to finalize this configuration step. Once the statement has been successfully validated, a schema file will be generated.

18_EBSDbAdapterSQLSummary

By clicking on the schema file URL a dialog window shows the generated structure as shown below. The elements of this structure have to be mapped in the transformation component later, once the endpoint configuration is finished.

19_EBSDbAdapterSQLXSDGenerated

Once the endpoint configuration has been finished, the newly created integration contains two transformations – one for the request/inbound and another one for the response/outbound mapping.

20_MessageMapping

The mapping component itself follows the same principles as the comparable XSLT mapping tools in Fusion Middleware's integration products. As shown in the screenshot below, the mapped fields are marked with a green check mark. The sample shows an input structure with a single field (here: userName) and a collection of records.

21_MessageMappingInput

The sample below shows the outbound message mapping. In the corresponding PLSQL procedure three parameters are marked as OUT and will carry the return information in the JSON output message.

22_MessageMappingOutParams

Once finished with the message mappings, the final step for integration flow completion is the addition of at least one tracking field (see the link at the top of the page). This means one field in the message payload has to be identified for monitoring purposes. The completion level will change to 100% afterwards. The integration must be saved, and the Application Developer can return to the integration overview page.

23_IntegrationOverview

The last step is the activation of the integration flow – a straightforward task. Once the completion level of 100% has been reached, the integration flow is ready to be activated.

24_Activate_Timecard

After clicking the Activate button a confirmation dialog appears, asking whether this flow should be traced or not.

25_Activate_Timecard

Once activated, the REST endpoint for this integration is enabled and ready for invocation.

26_IntegrationsOverview

Entering the following URL in a browser window will test the REST interface and return the endpoint metadata:

  • https://<hostname>:<port>/integration/flowapi/rest/<Integration Identifier>/v<version>/metadata

Testing the complete REST integration flow requires a client like SoapUI (or curl, as sketched below the URL) to post a JSON message to the REST service. In this case the URL from above changes by adding the resource path as configured in the REST endpoint wizard:

  • https://<hostname>:<port>/integration/flowapi/rest/<Integration Identifier>/v<version>/employee/timecard/create
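
For a quick command-line check a client such as curl can be used instead of SoapUI. A minimal sketch, assuming Basic Authentication as configured for the REST connection and the sample request payload saved as timecard_request.json (all placeholders to be replaced):

# post the sample timecard payload to the activated integration
curl -u <Weblogic_User>:<password> \
     -H "Content-Type: application/json" \
     -X POST \
     -d @timecard_request.json \
     "https://<hostname>:<port>/integration/flowapi/rest/<Integration Identifier>/v<version>/employee/timecard/create"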

Security Considerations

Earlier in this document we discussed the creation of a DB endpoint in EBS and the authentication as the APPS user. In general it is possible to use other DB users instead. The usage of a higher-privileged user like SYSTEM is not required and also not recommended due to the impact if this connection were ever compromised.

Multiple factors influence the security setup tasks to be done:

  • What are the security requirements in terms of accessed data via this connection?
    • Gathering of non-sensitive information vs running business-critical transactions
    • Common data access like reading various EBS configuration information vs user specific and classified data
  • Does this connection have to provide access to all EBS objects in database (packages, views across all modules) or can it be restricted to a minimum of objects being accessed?
  • Is the session running in a specific user context or is it sufficient to load data as a feeder user into interface tables?

Depending on the integration purpose identified above, the security requirements might range from extremely high to moderate. To restrict user access as far as possible, it would be feasible to create a user with limited access to only a few objects, similar to APPLSYSPUB. Access to PLSQL packages would then be granted on demand.

If database access is required to run in a specific context, the existing EBS features that put a session into a dedicated user or business org context – FND_GLOBAL.APPS_INITIALIZE or MO_GLOBAL.INIT (R12 onward) – must be used. That will probably influence the choice between running a plain SQL statement and calling a PLSQL procedure. With the requirement of a preceding FND_GLOBAL call, even a SELECT statement has to run inside a procedure, and the result values must be declared as OUT parameters as shown previously.
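
As an illustration only (not taken from the sample above), a wrapper procedure following this pattern might look like the sketch below; the responsibility IDs and the queried table are placeholders chosen just to show the structure:

procedure get_timecard_count(
  userId        in  number,   -- FND user id used to set the session context
  respId        in  number,   -- responsibility id (placeholder value)
  respApplId    in  number,   -- responsibility application id (placeholder value)
  timecardCount out number ) is
begin
  -- put the session into the requested EBS user/responsibility context first
  fnd_global.apps_initialize(userId, respId, respApplId);
  -- the SELECT runs inside the procedure; the result is returned via an OUT parameter
  select count(*)
    into timecardCount
    from hxc_timecard_summary          -- example table only, replace with the objects you need
   where created_by = userId;
end get_timecard_count;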

In general, performing end-user authentication is outside the scope of this (EBS) database adapter. In practice the layer on top of ICS must ensure that no unsolicited user access is given. While connection encryption via SSL is supposed to be the standard, there could obviously be a need to create full logical session management for end-user access, including user identification, authentication and session expiration.

Such a deep-dive security discussion was out-of-scope for this blog and should be handled in another article.

For non-EBS databases similar considerations will obviously apply.

Contribution and Conclusion

This blog post was dedicated to giving an overview of the relatively new DB Adapter in ICS. While recent EBS releases benefit from integrating via the EBS Adapter or built-in tools, the older versions probably won't. For them, the DB Adapter will possibly be the preferred method to create cloud-based access to a legacy on-premise EBS database.

At this point I'd like to thank my teammate Greg Mally for his great contribution! We worked and still work closely together in our efforts to provide good practices for ICS adoption by our customers. Greg has recently published his own blog giving a deeper technical look behind the scenes of ICS and the DB Adapter, so it is well worth reading his blog too!

Implementing an SFDC Upsert Operation in ICS


Introduction

While designing SOA services, especially those that represent operations around a business object, a common implementation pattern is upsert. Upsert is a portmanteau of "update" and "insert". The idea behind it is having a single operation that decides which action to take – either update the existing record or insert a new one – based on information available in the message. Having one operation instead of two makes the SOA service interface definition clearer and simpler.

Some SaaS applications offer upsert capabilities in their exposed services, and leveraging these capabilities can considerably decrease the amount of effort required while designing SOA services in an integration platform such as ICS. For instance, if you need to develop an upsert operation and the SaaS application does not have this functionality, you will have to implement that logic using some sort of conditional routing (see Content-Based Router in ICS) or via multiple update and insert operations.

ics_cbr_sample

Figure 1: Implementing upsert using CBR in ICS.

Salesforce.com (or SFDC for short) is one of those SaaS applications that offers built-in support for the upsert operation. This post will show how to leverage this support with ICS.

Setting up External Identifiers in SFDC

Every business object in SFDC can have custom fields. This allows business objects in SFDC to be customized to meet specific customer requirements regarding data models. As part of this feature, SFDC allows any custom field to act as a record identifier for systems outside of SFDC. These systems can identify any record through this custom field instead of using the SFDC internal primary key, which for security reasons is not exposed externally. Therefore, if you need to perform transactions against business objects in SFDC from ICS, you need to make sure that the business object carries a custom field with the External ID attribute set. This is a requirement if you want the upsert operation to work in SFDC.

In order to create a custom field with the External ID attribute, you need to access your SFDC account and click on the setup link in the upper right corner of the screen. Once there, navigate to the left side menu and look for the build section, which is below the administer section. Within that section, expand the customize option and SFDC will list all the business objects that can be customized. Locate the business object that you want to perform the upsert operation on. This blog will use the contact business object as an example.

Go ahead and expand the business object. From the options listed, choose fields. That will bring you to the page that allows field personalization for the selected business object. On this page, navigate to the bottom to access the section in which you can create custom fields, as shown in figure 2.

creating_custom_field_in_sfdc_1

Figure 2: Creating custom fields for the contact business object in SFDC.

To create a new custom field, click on the New button. This will invoke the custom field creation wizard. The first step of the wizard will ask which field type you want to use. In this example we are going to use Text. Figure 3 shows the wizard's step one. After setting the field type, click next.

creating_custom_field_in_sfdc_2

Figure 3: Creating a custom field in SFDC, step one.

The second step is entering the field details. In this step you will need to define the field label, name, length and what special attributes it will have. Set the field name to “ICS_Ext_Field”. The most important attribute is the External ID one. Make sure that this option is selected. Also select Required and Unique since this is a record identifier. Figure 4 shows the wizard’s step two. Click next twice and then save the changes.

creating_custom_field_in_sfdc_3

Figure 4: Creating a custom field in SFDC, step two.

After the custom field creation, the next step is generating the SFDC Enterprise WSDL. This is the WSDL that must be used in ICS to connect to SFDC. The generated WSDL will include the information about the new custom field and ICS will be able to rely on that information to perform the upsert operation.

Creating a REST-Enabled Upsert Integration

In this section, we are going to develop an ICS REST-enabled source endpoint that will perform insertions and updates on the target contact business object, leveraging the upsert operation available in SFDC. Make sure to have two connections configured in ICS: one for the integration source, which is REST-based, and another for the integration target, which should be SFDC-based. You must have an SFDC account to properly set the connection up in ICS.

Create a new integration, and select the Map My Data pattern. From the connections palette, drag the REST-based connection onto the source icon. This will bring up the new REST endpoint wizard. Fill in the fields according to what is shown in figure 5 and click next.

source_wizard_1

Figure 5: New REST endpoint wizard, step one.

Step two of the wizard will ask for the request payload file. Choose JSON Sample and upload a JSON file that contains the following payload:

request_payload_sample

Figure 6: Sample JSON payload for the request.

Click next. Step three of the wizard will ask for the response payload file. Again, choose JSON Sample and upload a JSON file that contains the following payload:

response_payload_sample

Figure 7: Sample JSON payload for the response.

Click next. The wizard will wrap up the chosen options and display them for confirmation. Click on the done button to finish the wizard.

source_wizard_4

Figure 8: New REST endpoint wizard, final step.

Moving further, from the connections palette, drag the SFDC-based connection onto the target icon. That will bring up the new SFDC endpoint wizard. Fill in the fields according to what is shown in figure 9 and click next.

target_wizard_1

Figure 9: New Salesforce endpoint wizard, step one.

Step two of the wizard will ask which operation must be performed in SFDC. You need to choose the upsert operation. To accomplish that, first select the option Core in the operation type field and then select the upsert option in the list of operations field. Finally, select the business object on which you would like to perform upserts, as shown in figure 10.

target_wizard_2

Figure 10: New Salesforce endpoint wizard, step two.

Click next twice; the wizard will then wrap up the chosen options and display them for confirmation. Click on the done button to finish the wizard.

target_wizard_4

Figure 11: New Salesforce endpoint wizard, final step.

Save all the changes made so far in the integration. With the source and target properly configured, we can now start the mapping phase, in which we will configure how the integration will handle the request and response payloads. Figure 12 shows what we have done so far.

integration_before_mapping

Figure 12: Integration before the mapping implementation.

Create a new request mapping in the integration. This will bring up the mapping editor, in which you will perform the upsert implementation. Figure 13 shows how this mapping should be implemented.

request_mapping

Figure 13: Request mapping implementation.

Let's understand the mapping implementation details. The first thing that needs to be done is to set the externalIDFieldName element to the name of the business object field that will be used to identify the record. You must use a valid custom field that has the External ID attribute set; any other field will not work here. To set the value into the field, click on the field link to open the expression editor.

setting_external_field_value

Figure 14: Setting the “externalIDFieldName” using the expression editor.

The best way to set the value is using the concat() XSLT function. Set the first parameter of the concat() function to the custom field name and the second parameter to an empty string.
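
Assuming the custom field ends up in the generated WSDL as ICS_Ext_Field__c (see the note on the suffix below), the expression would look similar to this:

concat( "ICS_Ext_Field__c", "" )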

Keep in mind that the field name in ICS can be different from what you set in SFDC. When the SFDC Enterprise WSDL is generated, a suffix is appended to the custom field names to make them unique. In most cases this suffix is "__c", but the safest way to figure this out is reviewing the WSDL for the field.

The next step is making sure that the custom field referenced in the externalIDFieldName field has a value set. This is necessary because that field will be used by SFDC to decide which action to take. If no value is set in that field, SFDC will create a new record for that business object. Otherwise, if that field has a value, SFDC will try to locate the record and, once found, will update it with the data set in the other fields. In this example, we will populate the custom field with the identifier value from the request payload, as shown in figure 13. Map the remaining fields accordingly. Once you finish the mapping, save the changes and click on the exit mapper button to come back to the integration.

Now create a new response mapping in the integration. This will bring up the mapping editor, in which you will perform the mapping implementation for the response. Figure 15 shows how this mapping should be implemented.

response_mapping

Figure 15: Response mapping implementation.

Simply map the success field from the source to the result field of the target. According to the SFDC documentation, the success field is set to true if the operation is successfully performed on the record, and it is set to false if any issues happen during the operation. Once you finish the mapping, save the changes and click on the exit mapper button to come back to the integration. Figure 16 shows the integration after the mapping.

integration_after_mapping

Figure 16: Integration after the mapping implementation.

Finish the integration implementation by setting the tracking information and optionally mapping any faults from the SFDC connection. Save all the changes and go ahead and activate the integration in ICS. Once activated, you should be able to get the information from the REST endpoint exposed by ICS. Just access the integrations page and click on the exclamation link in the upper right corner of the integration entry.

checking_endpoint_details

Figure 17: Getting the information from REST endpoint exposed by ICS.

Before testing the endpoint, keep in mind that the URL of the REST endpoint does not contain the "metadata" suffix present at the end of the URL shown in figure 17. Remove that suffix before using the URL to avoid any HTTP 403 errors.

Conclusion

The upsert operation is a very handy way to handle insert and update operations within a single API, and it is a feature present in most SaaS applications that expose services for external consumption. SFDC is one of those applications. This blog showed how to leverage the upsert support found in SFDC and the steps required to invoke the upsert operation using the externalIDFieldName element from ICS.

Using JSP on Oracle Compute and Oracle DBaaS – End to End Example


Introduction

Many customers ask for a quick demo of how they would deploy a custom Java application in the Oracle Cloud. A great way to do this is the Oracle Compute Cloud Service, which can easily be combined with the Oracle Database as a Service offering. In this example two VMs will be deployed: one for the application server – GlassFish – and a second DBaaS VM to hold the database. A simple JSP will be created to display data from the database in the client browser, as shown below.

Drawing1

 

Deploying Oracle DBaaS

For this example a simple database will be deployed in the Cloud. To achieve this, first log in to "My Services" from cloud.oracle.com.

image1

Enter your Identity Domain

image2

And provide your username and password.

image3

Scroll down until you find the Oracle Database Cloud Service and click on “Service Console”

image4

For this example select “Oracle Database Cloud Service” and based on your billing preference choose Monthly or Hourly. Finalise your selection by clicking “Next”.

image5

For this example we will use the Pluggable Database Demos and hence will select “Oracle Database 12c Release 1”

image6

 

For this example any edition can be selected.

image7

 

In this screen select a service name, e.g. "clouddb", and fill in the other information as per the screenshot below. Make sure to select the checkbox "Include "Demos" PDB". Once done, click the Edit button next to SSH Public Key.

image8

Download the PuTTY Key Generator (puttygen.exe) from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

After the download, start puttygen.exe, select SSH-2 RSA with 1024 bits, and click Generate. Randomly move your mouse over the blank area to generate the key.

The procedure for other Operating Systems can be found here: http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/javaservice/JCS/JCS_SSH/create_sshkey.html

image9

Once generated, save the private key – it is recommended to use a passphrase in most cases – and copy and paste the public key into a new text file to be saved alongside the private key. Save the keys in a secure location and take a backup, as there will be no access to the VMs if the keys are lost. Note that using the Save public key option will not save the public key in the required format.

image10

Keep the public key in your clipboard, paste it into the "Public key input for VM access" field and finish by clicking "Enter".

image11

Review and Confirm all details in the next screen and click “Create” to start the build of the database.

image12

You will see in the Service Console when the database has been provisioned. Open the detail page and note the public IP address of the database for later use.

image13a

 

Compute Provisioning

Scroll down in the Main Menu to “Oracle Compute Cloud Service” and open the Service Console.

image14

Create a new Compute Instance and give it a meaningful name. For this example the Oracle Linux 6.6 Image with oc1m shape is sufficient. Click Next.

image15

Select the DNS Hostname Prefix carefully as this will be the hostname of the VM that gets provisioned. Select a Persistent Public IP Reservation or choose Auto Generated to have the first available Public IP assigned to the VM.

image16

You can add additional storage; however, this example does not require any.

image17

At the SSH Public Keys step, copy and paste the public key generated above, give it a name and click Next. If you prefer, you can create a separate key for this VM by following the steps outlined earlier.

image18

Review the selection and press “Create” to start the VM provisioning.

 

image19

Once done, the instance will show up in the Service Console. Write down the public IP; it will be required shortly.

image20

 

Connect to the VMs

In this example Putty is used to connect to the VMs. Putty can be downloaded here: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

Start Putty and enter the Public IP of the Database VM in the Hostname field.

image21

Navigate to Connection – SSH – Auth and click browse to select your previously created Private Key.

image22

Head back to the Session Menu – Enter a name for this session – Click Save followed by Open.

image21

For the first connection you will receive a Security Alert – this is expected. Answer with Yes.

image23

Next the Terminal opens and you can simply enter the username opc.

image24

Security Configuration

Validate that the listener is running by executing "lsnrctl status" as the user oracle on the Database VM, as shown below – click on the picture to enlarge. You should see a separate service for the demos PDB.

image25

Open the Compute Service Console and select the Network Tab. Click on Create Security IP List to start.

image26

Enter the IPs of all hosts that you would like to give similar network security settings. For this example it is sufficient to use the Compute VM. Enter a description for the list.

image27

Switch to the Security Rules tab and hit the “Create Security Rule” button.

image28

The application in this example will use port 1521 for the communication with the database; a corresponding Security Application named ora_dblistener is created for each instance as part of the database provisioning. Select the Security IP List created above as the source and the Database VM as the destination for this Security Rule.

image29

Creating this rule will enable communication between the VMs on port 1521.

Testing communication between the compute and Database VM.

A great and simple way to test communication is telnet. Please note you might have to install it on your Compute VM with “yum install telnet” as shown here:

image30

Once installed, the connection can be tested with telnet as shown below. If you receive the "Escape character is …" message, the connection is working.
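
In text form the test boils down to the following command, run from the Compute VM (replace the address with the IP of the Database VM):

telnet <database_vm_ip> 1521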

image31

In order to verify that there are no superseding rules overriding the rule just created, it is useful to disable the rule and test whether communication is still possible.

Disable the rule via the Security Rules screen from the Network section. Click on the context symbol next to the rule and select update from the context menu.

image32

In the Update Security Rule screen, set the status to Disabled and submit the change by clicking the Update button.

image33

The communication between the Compute VM and the Database VM should now be blocked. Try to telnet again – if the rule is set up correctly, the connection will time out.

image34

Reopen the Update Security Rule dialog and set the status to Enabled.

image35

Using telnet, verify that the communication is working again. This confirms that the rule behaves as expected and that there are no superseding rules.

image31

Allow access to Compute VM to the Public Internet

In this example the application will be exposed to the public internet. Consider this carefully when using your own data in the backend.

Create a Security Application from the Network tab. This example uses ports 4848 and 8080. Port 4848 is the administration port for the GlassFish server – this rule should be disabled after the configuration is finished.

image36

Create a Security List with the Inbound Policy of Deny to block all traffic except the explicitly allowed traffic. You can allow packets to travel outbound from the Cloud VM by selecting Permit in the Outbound Policy.

image37

Once the Security List is created, you will need to add the Compute VM to the List. Do this by opening the Service Console for the Instance and click “Add to Security List”.

image38

This opens a drop-down list, where the created Security List needs to be selected. Attach the Security List to the VM.

image39

Combine the Security List, Security Application and Security IP List by creating a new Security Rule. Make sure to select the predefined Security IP List "public-internet" to grant access to every host. The destination has to be the Security List created above to allow access to the Compute VM. Ensure that the correct Security Application is selected to allow access on port 4848 for this example.

image40

Repeat the process for Port 8080 to allow application access.

image41

Create the corresponding Security Rule:

image42

Application Deployment

Download the latest JDK from here: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

Download the latest Glassfish release here:

https://glassfish.java.net/download.html

Download the latest Oracle JDBC Driver here (ojdbc7.jar only)

http://www.oracle.com/technetwork/database/features/jdbc/jdbc-drivers-12c-download-1958347.html

 

Extract both archives to /u01.

sudo su - oracle
tar xvzpf jdk-8u71-linux-x64.tar.gz
unzip glassfish-4.1.1.zip
export PATH=/u01/jdk1.8.0_71/bin:${PATH}
glassfish4/bin/asadmin create-domain clouddomain
glassfish4/bin/asadmin start-domain clouddomain
glassfish4/bin/asadmin --host localhost --port 4848 enable-secure-admin
cp ojdbc7.jar glassfish4/glassfish/domains/clouddomain/lib
glassfish4/bin/asadmin restart-domain clouddomain

image43

You can now login to the Admin Console from your local browser:

image44

Create JDBC Connection

Using the asadmin tool the JDBC Connection is created quickly:

glassfish4/bin/asadmin create-jdbc-connection-pool --restype javax.sql.DataSource --datasourceclassname oracle.jdbc.pool.OracleDataSource --property "user=hr:password=hr:url=jdbc\\:oracle\\:thin\\:@<your-cloud-ip>\\:1521\\/demos.rdb.oraclecloud.internal" CloudDB-Pool
glassfish4/bin/asadmin ping-connection-pool CloudDB-Pool
glassfish4/bin/asadmin create-jdbc-resource --connectionpoolid CloudDB-Pool jdbc/CloudDB

This application is based on this example, only modified to connect to the HR schema in the Demos PDB: https://docs.oracle.com/cd/E17952_01/connector-j-en/connector-j-usagenotes-glassfish-config-jsp.html

Create a folder with the following directory structure:

index.jsp
WEB-INF
   |
   - web.xml
   - sun-web.xml

The code for sun-web.xml is:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE sun-web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Application Server 8.1 Servlet 2.4//EN" "http://www.sun.com/software/appserver/dtds/sun-web-app_2_4-1.dtd">
<sun-web-app>
  <context-root>HelloWebApp</context-root>
  <resource-ref>
    <res-ref-name>jdbc/CloudDB</res-ref-name>
    <jndi-name>jdbc/CloudDB</jndi-name>  
  </resource-ref> 
</sun-web-app>

The code for web.xml is:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
  <display-name>HelloWebApp</display-name>  
  <distributable/>
  <resource-ref>
    <res-ref-name>jdbc/CloudDB</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Shareable</res-sharing-scope>                
  </resource-ref>
</web-app>

The index.jsp contains:

<%@ page import="java.sql.*, javax.sql.*, java.io.*, javax.naming.*" %>
<html>
<head><title>Data from the cloud with JSP</title></head>
<body>
<%
  InitialContext ctx;
  DataSource ds;
  Connection conn;
  Statement stmt;
  ResultSet rs;

  try {
    ctx = new InitialContext();
        ds = (DataSource) ctx.lookup("jdbc/CloudDB");
    conn = ds.getConnection();
    stmt = conn.createStatement();
    rs = stmt.executeQuery("SELECT * FROM DEPARTMENTS");

    while(rs.next()) {
%>
    <h3>Department Name: <%= rs.getString("DEPARTMENT_NAME") %></h3>
    <h3>Department ID: <%= rs.getString("DEPARTMENT_ID") %></h3>
<%    
    }
  }
  catch (SQLException se) {
%>
    <%= se.getMessage() %>
<%      
  }
  catch (NamingException ne) {
%>  
    <%= ne.getMessage() %>
<%
  }
%>
</body>
</html>

Zip up the entire folder and log in to the GlassFish Admin Console.
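
On the machine where the folder was created, the archive can be built, for example, like this (the folder name HelloWebApp is an assumption matching the context root above):

cd HelloWebApp                              # folder containing index.jsp and WEB-INF
zip -r ../HelloWebApp.zip index.jsp WEB-INF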

image44

Select the option Applications in the tree on the left hand side and click on deploy.

image45

Select the previously created zip file and set “Web Application” as Type.

image47

Clicking OK will take you to the Deployed Applications screen, from where you can press "Launch" to open a new browser window. The JSP will show the rows of the departments table from the Cloud DB, proving end-to-end communication.

image48

This concludes this example. It should illustrate how simple it is to deploy a custom application with an Oracle Cloud Database in the backend.

 

Further Reading

 

Oracle Cloud Documentation

https://docs.oracle.com/cloud/latest/

Glassfish

https://glassfish.java.net/


Integration Cloud Service – Promote Integrations from Test to Production (T2P)


The purpose of this blog is to provide simple steps to move Oracle Integration Cloud Service (ICS) integrations between different ICS environments. Oracle ICS provides export and import utilities to achieve integration promotion.

A typical use-case is to promote tested integrations from Test ICS Environment to Production ICS Environment, in preparation for a project go-live. Usually the Connection endpoints used by the integrations will be different on Test and Production Environments.

The main steps involved in code promotion for this typical use-case are as follows

  • Export an integration from Test ICS
  • Import the integration archive on Prod ICS
  • Update Connection details and activate the integration on Prod ICS Environment

Export an integration from Test ICS

Log in to Test ICS
Search and locate the integration on Test ICS
Select ‘Export’ and save the integration archive to the file system.

Step2-BrowseAndExport-Integration-TestICS

 

The integration is saved with a “.iar” extension.

Step3-Save-IAR_new

 

 

 

 

 

 

 

During export, basic information about the connections, like the identifier and connection type, is persisted.

 

Import the integration archive on Prod ICS Environment

Log in to Prod ICS
Navigate to ‘Integrations’
Select ‘Import Integration’ and choose the integration archive file that was saved in the previous step

Step4-Import-Saved-IAR-ProdICS

 

Since connection properties and security credentials are not part of the archive, the imported integration is typically not ready for activation.
An attempt to activate it will fail, and the error message indicates the connection(s) with missing information.

Step6-Incomplete-Connections-Warning

Note that, if the connections used by the archive are already present and complete in Prod ICS, then the imported integration is ready for activation.

 

Update Connection details and activate the integration on Prod ICS Environment

After importing the archive, the user needs to update any incomplete connections before activating the flow.
Navigate to “Connections” and locate the connection to be updated

Step7-Find_incompleConn_Edit

 

Select ‘Edit’ and update the connection properties, security credentials and other required fields, as required in the Prod ICS Environment.
‘Test’ Connection and ensure that connection status shows 100%

Step8-Review-And-Complete-Conn-ICSProd

Note that, the connection identifier and Connection type were preserved during import and cannot be changed.

 

Once the connection is complete, the imported integration is ready to be activated and used on the Prod ICS environment.

Intgn-Ready

 

We have seen above the steps for promoting a completed integration in the T2P use-case.
Note that even incomplete integrations can be moved between ICS environments using the same steps outlined above. This can be useful during development to move integration code reliably between environments.

Also, multiple integrations can be moved between environments using the 'package' export and import. This requires the integrations to be organized within ICS packages.

Export-Import-Packages
Finally, Oracle ICS provides a rich REST API which can be used to automate code promotion between ICS environments.
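
Purely as a hedged sketch, an automated export could look roughly like the call below; the resource path is an assumption and must be verified against the ICS REST API documentation for your ICS version:

# download the integration archive (.iar) from the Test instance
# NOTE: the /icsapis path and version segment are assumptions - check the ICS REST API docs
curl -u <ics_user>:<password> -o MYFLOW_01.00.0000.iar \
     "https://<test-ics-host>/icsapis/v2/integrations/MYFLOW%7C01.00.0000/archive"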

 

Tips for ODI in the Cloud: ODI On-Premise with DBCS


As described in the article Integrating Oracle Data Integrator (ODI) On-Premise with Cloud Services, if you are considering connecting to the Cloud and using Oracle DBCS – Oracle Database Cloud Service – the good news is that you can use ODI on-premise to do the work. David Allan has published a very interesting article (ODI 12c and DBaaS in the Oracle Public Cloud) about connecting ODI to DBCS using a custom driver which performs the SSL tunneling. In my investigations on how to use ODI in the Cloud I have followed David's blog, and my goal here is to share some tips that may be useful when trying to do that.

Connect to DBCS using SSL tunneling driver

When connecting to Oracle Database Cloud Service (DBCS) from ODI it is possible to use the "default" JDBC driver but, in that case, an SSH tunnel must be created manually between the machine where the ODI Agent is running and the DBCS Cloud service.

To avoid performing those manual steps, David Allan has written a Blog on how to use a driver which performs the SSL tunneling.

Here are the steps I have done to make it work, using my ODI 12.1.3 on-premise Agent.

Get ready!

1- Create an OpenSSH Key

When you created your Database Cloud Service instance you had to provide a private key. You can use either PuTTYgen or SSH to convert it to an OpenSSH key, which is the only format supported by the tunneling driver.

2- Download the driver (odi_ssl.jar) from java.net here and save it in any temporary folder.

Install the Driver

1- Stop all ODI processes.

2- Copy odi_ssl.jar into the appropriate directory:

— For ODI Studio (Local, No Agent), place the files into the “userlib” directory

On UNIX/Linux operating systems, go to the following directory

$HOME/.odi/oracledi/userlib

On Windows operating systems, go to the following

%APPDATA%\odi\oracledi\userlib

%APPDATA% is the Windows Application Data directory for the user (usually C:\Documents and Settings\user\Application Data)

— For ODI standalone Agent, place the files into the “drivers” directory:

For ODI 12c: $ODI_HOME/odi/agent/lib

For ODI 11g: $ODI_HOME/oracledi/agent/drivers

— For ODI J2EE Agent, and ODI 12c colocated Agent:

The JDBC driver files must be placed into the Domain Classpath.
For details refer to documentation: http://docs.oracle.com/middleware/1212/wls/JDBCA/third_party_drivers.htm#JDBCA706

Use the Driver

1- Create the properties file with text below and save it (for example c:\dbcs\dbcs.properties)

You need to check the ip of your DBCS instance, from the DBCS console:

sslUser=oracle (DBCS user)
sslHost=<your_dbcs_ip_address>
sslRHost=<your_dbcs_ip_address>
sslRPort=1521 (DBCS Listener Port)
sslPassword=your_private_key_password
sslPrivateKey= <url to OpenSSHKey> (ex: D:\\DBCS\\myOpenSSHKey.ppk)
sslLPort=5656 (Local port used in the JDBC url)

Note that it is possible to define two hosts – one that you SSH to (sslHost) and one where the Oracle listener resides (sslRHost). The DBCS infrastructure today uses the same host for both, but the driver supports different hosts (for example if there is a firewall that you SSH to and the Oracle listener is on another machine).

2- Create a new Data Server under the Oracle Technology

JDBC driver: oracle.odi.example.SSLDriver
JDBC url: jdbc:odi_ssl_oracle:thin:@localhost:5656/YourPDB.YourIdentityDomain

(ex: jdbc:odi_ssl_oracle:thin:@localhost:5656/ODIPDB1.usoraclexxx.oraclecloud.internal)

Property: PropertiesFile = c:\dbcs\dbcs.properties

You can refer to David’s Blog for more details: ODI 12c and DBaaS in the Oracle Public Cloud

Note that this connection can only be used where a direct JDBC connection is sufficient. This means that, if you are planning to use the SQL*Loader utility from ODI, this tunneling driver cannot be used.

Connect to DBCS using Native JDBC driver

The beauty of the previous method is that no extra step is needed in order to connect to DBCS as the tunneling Driver is doing the job for you.

The only limitation is that, if you wish to use a loader utility such as SQL*Loader, the tunnel must be created BEFORE using the native JDBC connection. In that case the connection is not made through the tunneling driver but directly through SQL*Net.

Define a tunnel

Refer to Creating an SSH Tunnel to a Port in the Virtual Machine but with following changes:

— the ip you need is the DBCS one

— the tunnel will be between Local Host 5656 and Remote Host 1521 (or the one defined in your organization as SQL*Net port).

Open the tunnel and you are ready to connect to DBCS safely through SSH.
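
With a command-line SSH client the tunnel from the referenced documentation amounts to something like this (key path, user and IP address are placeholders):

# forward local port 5656 to port 1521 on the DBCS VM
ssh -i /path/to/myOpenSSHKey -L 5656:localhost:1521 oracle@<your_dbcs_ip_address>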

Use the native Driver

Once you have created your tunnel, the ODI Agent can connect to DBCS like any other Oracle database. As the tunnel is up, you can connect directly to localhost:5656.

This step is not mandatory if you only plan to use LKM File to Oracle (SQLLDR), but as the tunnel now exists it is easy to use it in ODI.
Now, let me share some tips as well on how to use that KM in a DBCS environment.

Steps to use LKM File to Oracle (SQLLDR)

Apply Patch

If you are in ODI 12.1.3 then apply Patch 18330647: ODI JOBS FAILS CALLING SQLLDR ON WINDOWS 7, WINDOWS 2008.

Note: if you are already using LKM File to Oracle (SQLLDR) a “copy of” will be created by the Patch. The LKM build must be 45.1 or higher.

This issue is fixed in ODI 12.2.1

Define the tnsnames.ora entry for DBCS

When using LKM File to Oracle (SQLLDR), the connection to DBCS is made directly through the tnsnames.ora and not the ODI Topology.
So, in order to use SQL*Loader, an entry for DBCS must be added to the tnsnames.ora file used by the ODI Agent, for example:

MyDBAAS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 5656))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = YourPDB.YourIdentityDomain)
    )
  )

As we are going through the tunnel the Host=localhost and the Port=5656 (the local port defined for the tunnel).

Do not forget to set “MyDBAAS” in your ODI Topology as the Instance Name of your Oracle Data Server.

It is then possible to use the SQL*Loader utility through LKM File to Oracle (SQLLDR).

Note that, as the connection to DBCS is done through SSH, the performance is not equivalent to an internal network.

Conclusion

Using these methods, it is pretty easy to connect to Oracle Database Cloud Service to load or extract data in the Cloud.

For more ODI best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for ODI.

Acknowledgements

Special thanks to David Allan, Oracle Data Integration Architect, for his help and support.

Using Oracle BI Answers to Extract Data from HCM via Web Services


Introduction

Oracle BI Answers, also known as ‘Analyses’ or ‘Analysis Editor’, is a reporting tool that is part of the Oracle Transactional Business Intelligence (OTBI), and available within the Oracle Human Capital Management (HCM) product suite.

This article will outline an approach in which a BI Answers report is used to extract data from HCM via web services. This provides an alternative to the file-based loader process (details of which can be found here).

This can be used for both Cloud and On-Premise versions of Oracle Fusion HCM.

Main Article

During regular product updates to Oracle HCM, underlying data objects may be changed.  As part of the upgrade process, these changes will automatically be updated in the pre-packaged reports that come with Oracle HCM, and also in the OTBI ‘Subject Areas’ – a semantic layer used to aid report writers by removing the need to write SQL directly against the underlying database.

As a result it is highly recommended to use either a pre-packaged report, or to create a new report based on one of the many OTBI Subject Areas, to prevent extracts subsequently breaking due to the changing data structures.

Pre-Packaged Reports

Pre-packaged reports can be found by selecting ‘Catalog’, expanding ‘Shared Folders’ and looking in the ‘Human Capital Management’ sub-folder.  If a pre-packaged report is used, make a note of the full path of the report shown in the ‘Location’ box below.  This path, and the report name, will be required for the WSDL.

Windows7_x64

Ad-Hoc Reports

To create an ad-hoc report, a user login with at least BI Author rights is required.

a. Select ‘New’ and then ‘Analysis’

Windows7_x64

b. Select the appropriate HCM Subject Area to create a report.

Windows7_x64

c. Expand the folders and drag the required elements into the report.

d. Save the report into a shared location.  In this example this is being called ‘Answers_demo_report’ and saved into this location.

/Shared Folders/Custom/demo

This path will be referenced later in the WSDL.

Edit_Post_‹_ATeam_Chronicles_—_WordPress

Building Web Service Request

To create and test the Web Service, this post will use the opensource tool SoapUI.  This is free and can be downloaded here:

https://www.soapui.org

Within SoapUI, create a new SOAP project. For the Initial WSDL address, use the Cloud or On-Premise URL, appending '/analytics-ws/saw.dll/wsdl/v7'.

For example:

https://cloudlocation.oracle.com/analytics-ws/saw.dll/wsdl/v7

or

https://my-on-premise-server.com/analytics-ws/saw.dll/wsdl/v7

This will list the available WSDLs

 

Calling the BI Answers report is a two-step process:

1. Within SoapUI, expand 'SAWSessionService' and then 'logon'. Make a copy of the example request, then update it to add the username and password for a user with credentials to run the BI Answers report.
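
Once the credentials have been filled in, the logon request looks roughly like the sketch below (SoapUI generates the exact skeleton from the WSDL, so rely on that if element names differ):

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:v7="urn://oracle.bi.webservices/v7">
   <soapenv:Body>
      <v7:logon>
         <v7:name>REPORT_USER</v7:name>
         <v7:password>password</v7:password>
      </v7:logon>
   </soapenv:Body>
</soapenv:Envelope>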

Run that request and a sessionID is returned:

SoapUI_4_6_4

2. In SoapUI expand 'XmlViewService' / 'executeXMLQuery'. Make a copy of the example request. Edit it, insert the BI Answers report name and path into the <v7:reportPath> variable, and the sessionID from the first step into the <v7:sessionID> variable.

Note that while in the GUI the top level of the path was called 'Shared Folders', in the request that is replaced with 'shared'. The rest of the path will match the format from the GUI.

You will notice a number of other options available.  For this example we are going to ignore those.
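
Leaving those options at the values SoapUI generated, the essential parts of the request are the report path and the session ID, roughly as sketched here:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:v7="urn://oracle.bi.webservices/v7">
   <soapenv:Body>
      <v7:executeXMLQuery>
         <v7:report>
            <v7:reportPath>/shared/Custom/demo/Answers_demo_report</v7:reportPath>
            <v7:reportXml></v7:reportXml>
         </v7:report>
         <!-- keep the outputFormat and executionOptions elements exactly as generated -->
         <v7:sessionID>sessionID-returned-by-logon</v7:sessionID>
      </v7:executeXMLQuery>
   </soapenv:Body>
</soapenv:Envelope>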

You can then execute the web service request.  The report returns the data as an XML stream, which can then be parsed by your code.

3

Summary

This post demonstrated a simple method to leverage BI Answers and the underlying OTBI Subject Areas within Oracle HCM, to create and call a report via web service to extract data for a downstream process.

Cloud Security: Federated SSO for Fusion-based SaaS


Introduction

To get you easily started with Oracle Cloud offerings, they come with their own user management. You can create users, assign roles, change passwords, etc.

However, real-world enterprises already have existing Identity Management solutions and want to avoid maintaining the same information in many places. To avoid duplicate identities and the related security risks, like out-of-sync passwords, outdated user information, or rogue or locked user accounts, single sign-on solutions are mandatory.

This post explains how to set up Federated Single Sign-on with Oracle SaaS to enable users present in existing Identity Management solutions to work with the Oracle SaaS offerings without additional user setup. After a quick introduction to Federated Single Sign-on based on SAML, we explain the requirements and the setup of Oracle SaaS for Federated Single Sign-on.

Federated Single Sign-on

Federated Single Sign-on or Federated SSO based on SAML Web Browser Single Sign-on is a widely-used standard in many enterprises world-wide.

The SAML specification defines three roles: the principal (typically a user), the Identity Provider, and the Service Provider. The Identity Provider and the Service Provider form a Circle of Trust and work together to provide a seamless authentication experience for the principal.

SAML Login Flows

The most commonly used SAML login flows are Service Provider Initiated Login and Identity Provider Initiated Login, as shown below.

Service Provider Initiated Login

The Service Provider Initiated Login is the most common login flow and is used without the user explicitly starting it. Pointing the browser to an application page is usually all that is needed.

Here the principal requests a service from the Service Provider. The Service Provider requests and obtains an identity assertion from the Identity Provider and decides whether to grant access to the service.

SAML_IdP_Initiated_Login_0

Identity Provider Initiated Login

SAML allows multiple Identity Providers to be configured for the same Service Provider. Deciding which of these Identity Providers is the right one for the principal is possible but not always easy to set up. The Identity Provider Initiated Login allows the principal to help here by picking the correct Identity Provider as the starting point. The Identity Provider creates the identity assertion and redirects to the Service Provider, which is now able to decide whether to grant access to the service.

SAML_IdP_Initiated_Login_0

Oracle SaaS and Federated SSO

Here Oracle SaaS acts as the Service Provider and builds a Circle of Trust with a third-party, on-premise Identity Provider. This setup applies to all Fusion Applications based SaaS offerings (like Oracle Sales Cloud, Oracle HCM Cloud, or Oracle ERP Cloud) and looks like this.

SaaS_SP_OnPrem_IDP
The setup requires a joint effort of the customer and Oracle Cloud Support.

Scenario Components

The components of this scenario are:

  • Oracle SaaS Cloud (based on Fusion Applications, for example, Oracle Sales Cloud, Oracle HCM Cloud, Oracle ERP Cloud)
  • Any supported SAML 2.0 Identity Provider, for example:
    • Oracle Identity Federation 11g+
    • Oracle Access Management 11gR2 PS3+
    • AD FS 2.0+
    • Shibboleth 2.4+
    • Okta 6.0+
    • Ping Federate 6.0+
    • Ping One
    • etc.

The list of the supported SAML 2.0 Identity Providers for Oracle SaaS is updated regularly, and is available as part of the Fusion Applications Technology: Master Note on Fusion Federation (Support Doc ID 1484345.1).

Supported SAML 2.0 Identity Providers

The Setup Process

To set up this scenario, Oracle Cloud Support and the customer work together to create an operational setup.

Setup of the On-Premise Identity Provider

To start the setup, the on-premise Identity Provider must be configured to fulfill these requirements:

  • It must implement the SAML 2.0 federation protocol.
  • The SAML 2.0 browser artifact SSO profile has been configured.
  • The SAML 2.0 Assertion NameID element must contain one of the following:
    • The user’s email address with the NameID Format being Email Address
    • The user’s Fusion uid with the NameID Format being Unspecified
  • All Federation Identity Provider endpoints must use SSL.

Setup of the Oracle SaaS Cloud Service Provider

Once the on-premise Identity Provider has been configured successfully, the following steps outline the process to request the setup of Oracle SaaS as Service Provider for Federated SSO with the customer's on-premise Identity Provider:

1. Customer: File a Service Request to enable the required Oracle SaaS instance as Service Provider. The Service Request must follow the documented requirements (see Support Doc ID 1477245.1 or 1534683.1 for details).
2. Oracle: Approves the Service Request.
3. Customer: Receives a document that describes how to configure the on-premise Identity Provider for the Service Provider.
4. Customer: When the conformance check has been done successfully, uploads the Identity Provider metadata as an XML file to the Service Request.
5. Oracle: Configures the Service Provider in a non-production SaaS environment. When this is completed, the Service Provider metadata is attached to the Service Request as an XML file for the customer. This file includes all the required information to add the Service Provider as a trusted partner to the Identity Provider.
6. Customer: Downloads the Service Provider metadata file and imports it into the Identity Provider.
7. Oracle: Adds the provided Identity Provider metadata to the Service Provider setup.
8. Oracle: After the completion of the Service Provider setup, publishes a verification link in the Service Request.
9. Customer: Uses the verification link to test the features of Federated SSO. Note: no other operations are allowed during this verification.
10. Customer: When the verification has been completed, updates the Service Request to confirm the verification.
11. Oracle: Finalizes the configuration procedures.
12. Customer: Remains solely responsible for authenticating users.

When Federated SSO has been enabled, only those users whose identities have been synchronized between the on-premise Identity Provider and Oracle Cloud will be able to log in. To support this, Identity Synchronization must be configured (see below).

Identity Synchronization

Federated SSO only works correctly when users of the on-premise Identity Store and of the Oracle SaaS identity store are synchronized. The following sections outline the steps in general. The detailed steps will be covered in a later post.

Users are First Provisioned in Oracle SaaS

The general process works as follows:

1. Set up the extraction process in Oracle SaaS.
2. Download the extracted user data.
3. Convert the data into the on-premise Identity Store format.
4. Import the data into the on-premise Identity Store.

Users are First Provisioned in On-Premise Environment

It is very common that users already exist in on-premise environments. To allow these users to work with Oracle SaaS, they have to be synchronized into Oracle SaaS. The general process works as follows:

1. Extract the user data from the on-premise environment.
2. Convert the data into a supported file format.
3. Load the user data into Oracle SaaS using the supported loading methods.


Setting up Oracle’s Database as a Service (DBaaS) Pluggable Databases (PDBs) to Allow Connections via SID


Introduction

With the release of Oracle Database 12c, the concept of Pluggable Databases (PDBs) was introduced.  Within a Container Database (CDB), one or many PDBs can exist.  Each PDB is a self-contained database with its own SYSTEM, SYSAUX, and user tablespaces.  Each PDB has its own unique service name and essentially functions as a stand-alone database.

Oracle’s Database as a Service (DBaaS) is based on 12c, and uses the same concepts.

Some applications – BI Cloud Service (BICS) as an example – require database connections to be defined using an Oracle SID, and not by a Service Name.  By default, the SID is not externally available for these PDBs, which causes connection issues for these applications.

This article will outline a simple method by which the listener in an Oracle DBaaS environment can be configured to treat the Service Name as a SID, and thus allow these applications to connect to a PDB.

More information can be found in My Oracle Support note 1644355.1.

 

Main Article

The prerequisites for this approach are:

  • A copy of the private key created when the DBaaS environment was set up, and the passphrase used.  The administrator who created the DBaaS instance should have both of these.
  • Port 1521 should be opened through the Compute node for the IPs of the servers or computers that need to connect to the PDB.
  • An SSH tool capable of connecting with a private key file.  In this article PuTTY will be used, a free tool available for download from here

 

Steps

a. From within the DBaaS console, identify the PDB database and its IP address.

Oracle_Database_Cloud_Service

b. Confirm that a connection can be made to the PDB using the service name.  If the connection cannot be made, see these instructions on how to resolve this within the Compute Node.

Windows7_x64

c. Open Putty and Set Up a Connection using the IP of the PDB obtained in step (a) and port 22.

Windows7_x64

d. Expand the ‘Connection’ / ‘SSH’ / ‘Auth’ menu item.  Browse in the ‘Private key file for authentication’ section to the key that the DBaaS administrator provided, and then click ‘Open’ in Putty to initiate the SSH session.

Windows7_x64

e. Login as the user ‘opc’ and enter the passphrase that the DBaaS administrator provided when prompted.

f. Use the following commands to change the user to ‘oracle’, and set the environmental variables:

sudo su - oracle

. oraenv

The correct Oracle SID should be displayed so you can just hit <enter> when prompted.  Only change this if it does not match the SID displayed in the DBaaS console in step (a).

Windows7_x64

g. The next set of commands will change the working directory to the Oracle DB home, take a copy of the existing Listener.ora file, and then stop the listener:

cd $ORACLE_HOME/network/admin

cp listener.ora listener.oraBKP

lsnrctl stop

Windows7_x64

h. The next commands will append the line ‘USE_SID_AS_SERVICE_LISTENER=on’ to the listener.ora file, and then re-start the Listener.

echo USE_SID_AS_SERVICE_LISTENER=on >> listener.ora

lsnrctl start

 

Windows7_x64

i. The final set of commands register the database to the listener.

sqlplus / as sysdba

alter system register;

exit

 

Windows7_x64

j. The Service Name can now be used as a SID for applications that can only connect with a SID.  Use SQL Developer to confirm that  a connection can be made using the Service Name from before – but this time in the SID field:

Windows7_x64
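
As an illustration (the IP address and service name below are assumptions, not values from this environment), a JDBC thin URL using the legacy SID syntax will now also resolve against the PDB:

jdbc:oracle:thin:@203.0.113.10:1521:PDB1.example.oraclecloud.internal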

k. If Apex is used, it may be necessary to make a change within the Apex listener to reference the service name.  Before making the change, test to see if Apex is available by logging in.  If it works, then no change is required.  To make the change, follow steps (d) – (f) from above, and then type the following commands to locate the directory of the Apex listener configuration:

cd $ORACLE_HOME

cd ../../apex_listener/apex/conf/

Make a copy of the apex.xml file, then edit it and change the <entry key="db.sid"> key to be the service name (a sample entry is shown at the end of this step).  Finally, go to the GlassFish Administration from the DBaaS Cloud Service Console (requires port 4848 to be accessible – this can be made available in the Compute Cloud console – see step (b) above):

Oracle_Database_Cloud_Service

Within the ‘Applications’ option, select ‘Reload’ under Apex.
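
For reference, a minimal sketch of the edited apex.xml entry (the service name shown is an assumption; use the value from step (a)):

<entry key="db.sid">PDB1.example.oraclecloud.internal</entry>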

 

Summary
This article walked through the steps to configure the listener in a DBaaS PDB database to allow connections based on the Service Name as a SID option.

Using BICS Data Sync to Extract Data from Oracle OTBI, either Cloud or On-Premise


Introduction

Last year I wrote about configuring the BICS Data Sync Tool to extract data from ‘On-Premise’ data sources and to load those into BI Cloud Services (BICS).  That article can be found here.

In March 2016 the Data Sync Tool added the ability to connect and extract data from Oracle Transactional Business Intelligence (OTBI).  OTBI is available on many Oracle Cloud Services, and also On-Premise products.

This approach opens up many more sources that can now be easily loaded into BICS.  It also allows the Data Sync tool to leverage the Metadata Repository in OTBI (the RPD) to make the creation of an extract a simple process.

 

Main Article

This article will walk through the steps to download and configure the Data Sync tool to extract data from OTBI.  These steps are the same for an on-premise or cloud source, with the only difference being the URL used for the connection.

The Data Sync tool provides 3 approaches for extracting data from OTBI.  The ‘pros’ and ‘cons’ of each will be discussed, and then each method explained in detail.  The 3 approaches are:

1. An extract based on a report created in Analyses (also known as BI Answers)

2. An extract based on the SQL from the ‘Advanced’ tab of an Analyses Report

3. An extract based on a Folder from a Subject Area from the /analytics portal

 

Table of Contents

Which Approach Should You Use ?

Downloading Latest Version of Data Sync Tool

Setup Source Connection for OTBI Environment

Create Data Source Based on BI Analyses Report

Create Data Source Based On Logical SQL

Create Data Source Based on Subject Area Folder

Which Approach Should You Use ?

It is the opinion of this author that the second method, the extract based on SQL, is going to be the most useful approach for regular data updates.  Once created, there is no reliance on a saved analysis, and the true incremental update capability reduces the volume of data needing to be extracted from OTBI, improving performance and load times.

It is also not restricted by the 65,000 maximum row limit that many Cloud and On-Premise OTBI environments impose on reports.

Below is a quick summary of the differences of the 3 approaches.

 

Extract based on a report created in Analyses (also known as BI Answers)

In this approach, a report is created in OTBI and saved.  The Data Sync tool is then configured to run that report and extract the data.

  • Fully leverages the OTBI reporting front-end, allowing complex queries to be created without the need to understand the underlying table structures or joins, including filters, calculated fields, aggregates, etc
  • The select logic / filters in the Analyses report may be changed later, with no need to make changes in the Data Sync tool.  As long as the data fields being returned by the report remain the same, the changes will be transparent to the Data Sync tool. This would allow, for example, a monthly change to a report – perhaps changing the date range to be extracted.
  • For Cloud environments, this approach is limited to 65,000 rows, so should only be used for smaller data sets
  • It is not possible to restrict the data extract programmatically from the Data Sync tool, so true incremental updates are not possible.  For this functionality, one of the next two approaches should be used.

Extract based on the SQL from the ‘Advanced’ tab of an Analyses Report

This approach is very similar to the previous one, but instead of using a saved report, the logical SQL generated by OTBI is used directly.

  • Fully leverages the OTBI reporting front-end, allowing complex queries to be created without the need to understand the underlying table structures or joins, including filters, calculated fields, aggregates, etc
  • Allows for true incremental updates, with an Incremental SQL option that will reduce the amount of data being pulled from OTBI, improving performance and reducing load times
  • Once created, there is no reliance on a saved OTBI analyses report
  • Is NOT limited to 65,000 rows being returned, so can be used for both small, and larger data sets

Extract based on a Folder from a Subject Area from the /analytics portal

This approach bases the extract on a folder within a Subject Area within OTBI.  It allows many such mappings to be created at once.

  • No need to create an OTBI report or logical SQL, or even to log into OTBI.  The extract is set up purely within the Data Sync tool.
  • Allows mappings for multiple Subject Area folders to be created in one step, saving time if many mappings are needed and the Subject Area folders are structured in a meaningful way for BICS tables
  • Only allows Subject Area ‘folders’ to be imported, with no additional joins to other folders.  This approach will be most useful when the Subject Area folders closely mimic the desired data structures in BICS
  • Allows for true incremental updates, with a Filter option that will reduce the amount of data being pulled from OTBI, improving performance and reducing load times
  • Is NOT limited to 65,000 rows being returned

Downloading Latest Version of Data Sync Tool

Versions of the Data Sync Tool prior to 2.0, released in February 2016, do not include this functionality.   The latest version can be obtained from OTN through this link.

For further instructions on configuring Data Sync, see this article.  If a previous version of Data Sync is being upgraded, use the documentation on OTN.

Setup Source Connection for OTBI Environment

No matter which of the 3 approaches is used, an initial Source Connection to OTBI needs to be configured.  If multiple OTBI environments are to be sourced from, then a Source Connection for each should be set up.

The Data Sync tool connects via web-services that many instances of OTBI expose.  This applies to both Cloud and On-Premise versions of OTBI.

To confirm whether your version of OTBI exposes these, take the regular ‘/analytics’ URL that is used to connect to the reporting portal – as demonstrated in this image:

Oracle_BIEE_Home_and_Edit_Post_‹_ATeam_Chronicles_—_WordPress_and_Evernote_Premium

and in a browser try and open the page with this syntax:

https://yourURL.com/analytics-ws/saw.dll/wsdl/v9

If the web-services are available, a page similar to the following will be displayed:

https___casf-test_crm_us1_oraclecloud_com_analytics-ws_saw_dll_wsdl_v9

 

If this does not display, try repeating but with this syntax (using ‘analytics’ instead of ‘analytics-ws’)

https://yourURL.com/analytics/saw.dll/wsdl/v9

If neither of these options displays the XML page, then unfortunately web-services are not available in your environment, or the version of the web-services available is earlier than the 'v9' that the Data Sync tool requires.

Speak with your BI Administrator to see if the environment can be upgraded, or the web-services exposed.
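
The same check can also be made from a command line; a minimal sketch with curl (the hostname is a placeholder, and -k simply skips certificate validation for the test) is:

curl -k "https://yourURL.com/analytics-ws/saw.dll/wsdl/v9"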

 

Defining Connection in the Data Sync Tool

a. In the Data Sync tool, select ‘Connections’, ‘Sources/Targets’, and ‘New’

Windows7_x64

b. Give the connection an appropriate name, and select ‘Oracle BI Connector’ as the connection type.

c. Enter the credentials for a user that has the Consumer rights required to run a report in the OTBI environment.

d. For the URL, first test with this syntax:

https://yourURL.com/analytics-ws

This has fewer layers for the Data Sync tool to traverse, and so may offer slightly improved performance.

If the ‘Test Connection’ option fails – then try the following syntax:

https://yourURL.com/analytics

 

In this case, using the ‘analytics-ws’ version of the syntax, the configuration would look like this:

Windows7_x64

Save the connection, and then use the ‘Test Connection’ button to confirm details are correct.

Windows7_x64

e. For an On-Premise connection, the process would be identical.  Use the URL that is used to connect to the BI Analytics portal, and edit the URL to either use the ‘/analytics-ws’ or ‘/analytics’ path as described above.  In this example screenshot an internal URL is used, with the port number.  Once again, test the connection to confirm it connects.

Windows7_x64

 

Create Data Source Based on BI Analyses Report

a. Log into the Analytics portal for OTBI.

1

b. Create a new Analysis.  In this example, a simple report is created using Customer data from the "Marketing - CRM Leads" Subject Area.  While not used here, a more complex query with filters, calculated fields, aggregations, and fields from multiple subject area folders could easily be created.

Oracle_BI_Answers

c. Save the Analysis.  In this example the report was named ‘DataSyncTest’ and saved in /Shared Folders/Customer.  This path will be used in subsequent steps, although the path format will be slightly different.

3

d. Within the Data Sync tool, create a new ‘Manual Entry’ within the ‘Project’ / ‘Pluggable Source Data’ menu hierarchy:

1

e. Give the extract a Logical Name, used as the Source Name within the Data Sync Tool.  The ‘Target Name’ should be the name of the table you want to load into BICS. If the table doesn’t exist, the Data Sync Tool will create it.

Windows7_x64

f. A message provides some additional guidance on best practice.  Select 'Report' as the source at the bottom of the message box as shown below, and then 'OK' to continue.

Windows7_x64

g. In the next screen, enter the path for the BI Answers analysis from step c.  Notice that the syntax for the Shared Folder is '/shared', which differs from how it is displayed in the OTBI Portal as 'Shared Folder'.  In this example the folder path is:

/shared/Custom/DataSyncTest

 

Windows7_x64

h. The Data Sync tool can identify the field types, but it does not know the correct length to assign to VARCHAR fields in the target.  By default these will be set to a length of 200 and should be checked manually afterwards.  A message will inform you which fields need to be looked at.  Click 'OK' to continue.

Windows7_x64

i. The target table defined in step e will be created.  Go to 'Target Tables / Data Sets', select the target table that was just created, open the 'Table Columns' option, and adjust the VARCHAR lengths as necessary.

Windows7_x64

j. As long as the source report has a unique ID field, and a date field that shows when the record was last updated, the Load Strategy can be changed so that only new or updated data is loaded into the target table in BICS.  This can be changed in the 'Project' / 'Pluggable Source Data' menu hierarchy as shown below:

 

Windows7_x64

k. In this case the 'Customer Row ID' would be selected as the unique key for 'User Key' and the 'Last Update Date' for the 'Filter'.

 

It is important to realize that while only changed or new data is being loaded into BICS, the full set of data needs to be extracted from OTBI each time.  The next two methods also provide the ability to filter the data being extracted from OTBI, which improves performance.

There is also a restriction within some Cloud OTBI environments where the result set is restricted to 65,000 rows or less.  If the extract is going to be larger than this, the other 2 methods should be considered.

You now have a Source and Target defined in BICS and can run the Job to extract data from an OTBI Analyses and load the data into BICS.

Create Data Source Based On Logical SQL

a. While editing the Analysis report within OTBI, select the 'Advanced' tab and scroll down. The SQL used by the report is displayed.  The example below is the same report created earlier.

Windows7_x64

b. Cut and paste the SQL and remove the 'ORDER BY' and subsequent SQL.  Also remove the first row of the select statement ('0 s_0,').  Both of these are highlighted in the green boxes in the image above.

In this case, the edited SQL would look like this:

SELECT
"Marketing - CRM Leads"."Customer"."City" s_1,
"Marketing - CRM Leads"."Customer"."Country" s_2,
"Marketing - CRM Leads"."Customer"."Customer Row ID" s_3,
"Marketing - CRM Leads"."Customer"."Customer Unique Name" s_4,
"Marketing - CRM Leads"."Customer"."Last Update Date" s_5,
DESCRIPTOR_IDOF("Marketing - CRM Leads"."Customer"."Country") s_6
FROM "Marketing - CRM Leads"

c. In the Data Sync tool, Create a ‘Manual Entry’ for the SQL Data Source under the ‘Project’ / ‘Pluggable Source Data’ menu hierarchy.  As before, the Logical Name is used as the Source, and the Target Name should either be the existing table in BICS that will be loaded, or the new table name that the Data Sync Tool is to create.

Windows7_x64

d. Select ‘SQL‘ as the source type

Windows7_x64

e. In the ‘Initial SQL’ value, paste the SQL edited from the ‘Advanced’ tab in the Analyses Report

Windows7_x64

f. As before, a message is displayed reminding you to check the target table and adjust the size of VARCHAR fields as necessary:

Windows7_x64

g. Edit the newly created target and adjust the lengths of the VARCHAR fields as necessary.

Windows7_x64

h. This approach allows for true Incremental Updates of the target data, where only new or updated records from the Source OTBI environment are extracted.  To set up Incremental Updates, go back to ‘Project’, ‘Pluggable Source Data’ and select the source created in the previous step.  In the bottom section under the ‘Edit’ tab, select the ‘Incremental SQL’ box.

Windows7_x64

The Data Sync tool has two variables that can be added to the Logical SQL as an override to reduce the data set extracted from the source.

Those 2 variables are:

‘%LAST_REPLICATION_DATE%’ – which captures the date that the Data Sync job was last run

‘%LAST_REPLICATION_DATETIME%’ – which captures the timestamp that the Data Sync job was last run

As long as there is a suitable DATE or TIMESTAMP field in the source data that can be used to filter records, these variables can be used to reduce the data set pulled from OTBI to just the data that has changed since the last extract was run.

This is an example of the Incremental SQL using the %LAST_REPLICATION_DATE% variable.  The SQL is identical to the ‘Initial SQL’, just with the additional WHERE clause appended to the end.

Windows7_x64

And this is an example of the Incremental SQL using the %LAST_REPLICATION_DATETIME% variable:

Windows7_x64
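
As a textual sketch of the pattern shown in the two screenshots above (treat the exact literal quoting around the variable as an assumption to verify in your environment), the Incremental SQL is simply the Initial SQL with a WHERE clause such as the following appended:

WHERE "Marketing - CRM Leads"."Customer"."Last Update Date" >= TIMESTAMP '%LAST_REPLICATION_DATETIME%'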

i. To utilize the incremental approach, the Load Strategy should also be set to 'Update Table'.  From 'Project' / 'Pluggable Source Data', select the source based on the Logical SQL and, under the 'Edit' tab, change the Load Strategy:

Windows7_x64

Set the ‘User Key’ to a column, or columns, that make the row unique.

Windows7_x64

For the Filter, use a field that identifies when the record was last updated.

Windows7_x64

Data Sync may suggest an index to help load performance.  Click ‘OK’ to accept the recommendation and to create the Index.

Windows7_x64

You now have a Source and Target defined in BICS and can run the Job to extract data from the SQL created from an OTBI Analyses and load the data into BICS.

Create Data Source Based on Subject Area Folder

a. In the Data Sync Tool under the ‘Project’ / ‘Pluggable Source Data’ menu hierarchy, select ‘Data from Object’

b. To save time, use the Filter and enter the full or partial name of the Subject Area from within OTBI that is to be used.  Be sure to use the ‘*’ wildcard at the end.  If nothing is entered in the search field to reduce the objects returned, then an error will be thrown.

c. Select ‘Search’.  Depending on the Filter used, this could take several minutes to return.

1

Windows7_x64

d. Select the Subject Area folder(s) to be used.  If more than one is selected, a separate source and target will be defined for each within the Data Sync tool.

Windows7_x64

e. Some best practices for this method are displayed; hit 'OK'.

Windows7_x64

f. As in the other methods, a list of the fields cast as VARCHAR with a default length of 200 is shown.  Click 'OK'.

Windows7_x64

g. After a few moments a ‘Success’ notification should be received:

Windows7_x64

h. As before, update the lengths of the VARCHAR fields as needed: under 'Target Tables / Data Sets', select the target table created in the previous step, and then alter the lengths in the bottom section under 'Table Columns':

Windows7_x64

i. The Load Strategy can be updated as needed in the same way, and the true ‘Incremental’ option is available if there is a suitable date field in the source.

Windows7_x64

Windows7_x64

NOTE – for this method, if a date is selected as part of the Update Strategy, then that is automatically used to restrict the data extract from the source.  No further action is required to implement true incremental updates.

There is the option to add a filter to further restrict the data.  The example below shows how the "Contact" folder within the "Marketing - CRM Leads" Subject Area could be restricted to pull back only contacts from California.  The value is in the form of a 'Where Clause' using fully qualified names (a sketch of such a value follows the screenshot below).

Windows7_x64
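
As a sketch of the kind of value shown above (the 'State' column name is an assumption used purely for illustration), the filter might look like:

"Marketing - CRM Leads"."Contact"."State" = 'CA'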

You now have a Source and Target defined in BICS and can run the Job to extract data from one or more Subject Areas (separate mapping for each) and load that  data into BICS.

 

There is another method for creating an extract based on a single Subject Area.  This may be preferable if an incremental update approach is required, although it only allows a single mapping to be set up at a time.

a. Under ‘Project’, ‘Pluggable Source Data’, select ‘Manual Entry’.  Select the OTBI DB connection and enter a suitable name for the logical source and target:

Windows7_x64

b. Select ‘Subject.Area.Table’ as the data source, and then click ‘OK’.

Windows7_x64

c. In the Properties, enter the fully qualified name for the Subject Area and Folder (‘Table’), and also the filter with the same format as step (i) above.  Be sure to follow the steps to update the Load Strategy if incremental updates are required.

Windows7_x64

Summary
This article walked through the steps to configure the Data Sync tool to be able to connect and extract data from a cloud or on-premise OTBI environment.  The second method – ‘Extracting Data based on SQL’ – is the recommended approach for most use cases.

For further information on the Data Sync Tool, and also for steps on how to upgrade a previous version of the tool, see the documentation on OTN.  That documentation can be found here.

Integrating Oracle Sales Cloud (OSC) with Oracle Database as a Service (DBaaS) using PL/SQL


Introduction


This article describes how to integrate Oracle Sales Cloud (OSC) with Oracle Database as a Service (DBaaS) using PL/SQL.

The code snippet provided uses the REST API for Oracle Sales Cloud to create a new OSC contact from DbaaS. The PL/SQL uses UTL_HTTP commands to call the REST API for Oracle Sales Cloud.

A sample use case for this code snippet could be: displaying a list of contacts in an Oracle or external application, then allowing the application consumer to select the relevant contacts to push to OSC as potential opportunities.

Alternative OSC integration patterns have been discussed in the previously published articles listed below:


Integrating Oracle Sales Cloud with Oracle Business Intelligence Cloud Service (BICS) – Part 1

Integrating Oracle Sales Cloud with Oracle Business Intelligence Cloud Service (BICS) – Part 2


The primary difference between the past and current articles is:


The prior two articles focused on using the APEX_WEB_SERVICE.

This current article uses UTL_HTTP and has no dependency on Apex.


That said, DBaaS does come with Apex and the previously published solutions above are 100% supported with DbaaS. However, some DbaaS developers may prefer to keep all functions and stored procedures in DbaaS – using native PL/SQL commands through Oracle SQL Developer or SQL*Plus. This article addresses that need.

Additionally, the article explains the prerequisite steps required for calling the REST API for OSC from DbaaS. These steps include configuring the Oracle Wallet and importing the OSC certificate into the wallet.

The techniques referenced in this blog can be easily altered to integrate with other components of the OSC API. Additionally, they may be useful for those wanting to integrate DbaaS with other Oracle and non-Oracle products using PL/SQL.

There are four steps to this solution:


1. Create Oracle Wallet

2. Import OSC Certificate into Wallet

3. Create Convert Blob to Clob Function

4. Run PL/SQL Sample Snippet


Main Article


1. Create Oracle Wallet

 

The Schema Service Database is pre-configured with the Oracle Wallet and 70+ common root CA SSL certificates. It is completely transparent to developers when building declarative web services in APEX or when using APEX_WEB_SERVICE API.

DbaaS, on the other hand, does not come pre-configured with the Oracle Wallet. Therefore, a wallet must be created and the OSC certificate imported into the Wallet.


Using PuTTY (or another SSH and Telnet client) log on to the DbaaS instance as oracle or opc.

If set, enter the passphrase.

If logged on as opc, run:

sudo su - oracle

Set any necessary environment variables:

. oraenv

Create the Wallet

orapki wallet create -wallet . -pwd Welcome1

 

2. Import OSC Certificate into Wallet

 

From a browser (these examples use Chrome) go to the crmCommonApi contacts URL. The screen will be blank.

https://abc1-cloud1234-crm.oracledemos.com/crmCommonApi/resources/11.1.10/contacts

In R10 you may need to use the latest/contacts URL:

https://abc1-cloud1234-crm.oracledemos.com/crmCommonApi/resources/latest/contacts

Click on the lock

Snap1

Click Connection -> Certificate Information

Snap2

Click “Certification Path”. Select “GeoTrust SSL CA – G3”.

Snap3

Click Details -> Copy to File

Snap4

 

Click Next

Snap5

Select "Base-64 encoded X.509 (.CER)"

Snap6

Save locally as oracledemos.cer

Snap7

Click Finish

Snap8

Snap9

Copy the oracledemos.cer file from the PC to the Cloud server. This can be done using SFTP.

Alternatively, follow the steps below to manually create the oracledemos.cer using vi editor and cut and paste between the environments.


Return to PuTTY.

Using the vi editor create the certificate file.

vi oracledemos.cer

Open the locally saved certificate file in NotePad (or other text editor). Select all and copy.

Return to the vi Editor and paste the contents of the certificate into the oracledemos.cer file.

Hit "i" to insert
"Right Click" to paste
Hit "Esc"
Type ":wq" to save
Type "ls -l" to confirm the oracledemos.cer file was successfully created

Run the following command to add the certificate to the wallet.

orapki wallet add -wallet . -trusted_cert -cert /home/oracle/oracledemos.cer -pwd Welcome1

 Confirm the certificate was successfully added to the wallet.

orapki wallet display -wallet . -pwd Welcome1

Snap11

3. Create Convert Blob to Clob Function


I used this v_blobtoclob function created by Burleson Consulting to convert the blob to a clob.

There are many other online code samples using various methods to convert blobs to clobs that should work just fine as well.

This function isn’t actually required to create the OSC contact.

It is however necessary to read the response – since the response comes back as a blob.
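
If you would rather keep everything in a single script, the following is a minimal sketch of an equivalent conversion function (it matches the single-argument v_blobtoclob call used in step 4; it is a stand-in based on the standard DBMS_LOB.CONVERTTOCLOB API, not the linked version):

CREATE OR REPLACE FUNCTION v_blobtoclob (p_blob IN BLOB) RETURN CLOB
IS
  l_clob         CLOB;
  l_dest_offset  INTEGER := 1;
  l_src_offset   INTEGER := 1;
  l_lang_context INTEGER := DBMS_LOB.default_lang_ctx;
  l_warning      INTEGER;
BEGIN
  -- create a temporary CLOB and convert the BLOB contents into it
  DBMS_LOB.createtemporary(l_clob, TRUE);
  DBMS_LOB.converttoclob(
    dest_lob     => l_clob,
    src_blob     => p_blob,
    amount       => DBMS_LOB.lobmaxsize,
    dest_offset  => l_dest_offset,
    src_offset   => l_src_offset,
    blob_csid    => DBMS_LOB.default_csid,
    lang_context => l_lang_context,
    warning      => l_warning);
  RETURN l_clob;
END;
/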

 

4. Run PL/SQL Sample Snippet


Replace the highlighted items with those of your environment:


(a) Wallet Path

(b) Wallet Password

(c) OSC crmCommonAPI Contacts URL

(d) OSC User

(e) OSC Pwd

(f) Blob to Clob Function Name


Run the code snippet in SQL Developer or SQL*Plus.

DECLARE
l_http_request UTL_HTTP.req;
l_http_response UTL_HTTP.resp;
l_response_text VARCHAR2(32766);
l_response_raw RAW(32766);
l_inflated_resp BLOB;
l_body VARCHAR2(30000);
l_clob CLOB;
BEGIN
-- JSON payload for the new contact
l_body := '{"FirstName": "Jay","LastName": "Pearson","Address": [{"Address1": "100 Oracle Parkway","City": "Redwood Shores","Country": "US","State": "CA"}]}';
-- point UTL_HTTP at the wallet created in step 1
UTL_HTTP.set_wallet('file:/home/oracle', 'Welcome1');
l_http_request := UTL_HTTP.begin_request('https://abc1-cloud1234-crm.oracledemos.com:443/crmCommonApi/resources/11.1.10/contacts','POST','HTTP/1.1');
UTL_HTTP.set_authentication(l_http_request,'User','Pwd');
UTL_HTTP.set_header(l_http_request, 'Content-Type', 'application/vnd.oracle.adf.resourceitem+json');
UTL_HTTP.set_header(l_http_request, 'Transfer-Encoding', 'chunked');
UTL_HTTP.set_header(l_http_request, 'Cache-Control', 'no-cache');
utl_http.write_text(l_http_request, l_body);
l_http_response := UTL_HTTP.get_response(l_http_request);
dbms_output.put_line('status code: ' || l_http_response.status_code);
dbms_output.put_line('reason phrase: ' || l_http_response.reason_phrase);
UTL_HTTP.read_raw(l_http_response, l_response_raw, 32766);
DBMS_OUTPUT.put_line('>> Response (gzipped) length: '||utl_raw.length(l_response_raw));
-- the response comes back gzip-compressed; uncompress it and convert to a CLOB for display
l_inflated_resp := utl_compress.lz_uncompress(to_blob(l_response_raw));
DBMS_OUTPUT.put_line('>> Inflated Response: '||dbms_lob.getlength(l_inflated_resp));
l_clob := v_blobtoclob(l_inflated_resp);
dbms_output.put_line(dbms_lob.substr(l_clob,24000,1));
UTL_HTTP.end_response(l_http_response);
END;
/
sho err

Dbms Output should show status code: 201 – Reason Phrase: Created.

However, I have found that status 201 is not 100% reliable.

That is why it is suggested to return the response, so you can confirm that the contact was actually created and get the PartyNumber.

Snap12

Once you have the PartyNumber, Postman can be used to confirm the contact was created and exists in OSC.

https://abc1-abc1234-crm.oracledemos.com:443/crmCommonApi/resources/11.1.10/contacts/345041

Snap13
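
Alternatively, the same check can be done from the command line; a hedged sketch with curl (user, password, and PartyNumber are placeholders) would be:

curl -u 'User:Pwd' "https://abc1-abc1234-crm.oracledemos.com:443/crmCommonApi/resources/11.1.10/contacts/345041"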


Further Reading


Click here for the REST API for Oracle Sales Cloud guide.

Click here for Oracle Database UTL_HTTP commands.

Click here for more A-Team BICS Blogs.

 
Summary

 

This article provided a code snippet of how to use UTL_HTTP PL/SQL commands in DbaaS to create an OSC contact using the REST API for Oracle Sales Cloud.

Additionally, the article provided the prerequisite steps to create an Oracle Wallet and import the OSC certificate. This is required for accessing the OSC API externally – in this case using PL/SQL run in SQL Developer.

The techniques referenced in this blog can be easily altered to integrate with other components of the OSC API. Additionally, they may be useful for those wanting to integrate DbaaS with other Oracle and non-Oracle products using PL/SQL.


Configuring the Remote Data Connector (RDC) for BI Cloud Service (BICS)


Introduction

The BICS Remote Data Connector (RDC) was released in March 2016.  It allows reports and analyses in BI Cloud Service (BICS) to directly connect to an On-Premise Oracle Database.  When a report is run, a SQL request is generated by BICS and sent to the on-premise Weblogic server.  Weblogic sends that request to the on-premise database, and then compresses the results before returning them to BICS, where they are displayed.  This gives customers with large on-premise data sets the ability to use BI Cloud Service without having to push all of that data to the cloud.

There are several prerequisites:

– The on-premise data source must be an Oracle DB.  Future versions of the RDC tool will expand that to other database vendors.

– The BI Admin tool used to create the RPD must be 12c.  Prior versions do not offer the JDBC (JNDI) Data Source option that is required for this process.  Download and install the 12c BI Admin Tool from OTN selecting the ‘Oracle Business Intelligence Developer Client Tool’ option.

– A Weblogic server running in the On-Premise environment.  The latest version of weblogic is available through this link, although prior versions will likely work as well.

– A knowledge of networking, security and firewalls.  The On-Premise weblogic server needs to be accessible externally, and the port defined in the RPD connection needs to correctly route to the weblogic server port.

This article will not go into detail of security, load-balancers, DMZs, firewalls etc.  The assumption is that the knowledge exists to make sure the connection from BICS can be correctly routed to the on-premise Weblogic server.  Some links to help can be found in the ‘Further Reading’ section at the end of the article.

Please note that while this approach can be used with the ‘Schema Service’ version of BICS, once the RPD is uploaded, data stored in the schema service database will not be accessible.  The RPD model will replace the Schema Service Model, and will not be able to connect to schema service data.

Multiple connections and subject areas can be defined in the RPD, so if a customer has both On-Premise connections, and DBaaS connections – those CAN be modeled in the RPD and will be available in BICS once the RPD has been uploaded to BICS.  For more information on defining a connection to DBaaS, see this article.

 

Main Article

Install RDC Application

1. Download the War file application to be installed into Weblogic and save to a file location that is accessible to the server where Weblogic is running.  This War file is available through this link on OTN and is called ‘BICS Remote Data Connector’

2. Log in to Weblogic. Navigate to “Deployments” > “Install“.

Oracle_WebCenter_Portal_11g_R1_PS7__Running_

3. Enter the Path where the WAR file is located, then hit ‘Next’ to continue

Oracle_WebCenter_Portal_11g_R1_PS7__Running_

4. Make sure ‘Install this deployment as an application’ is selected, hit ‘Next’ to continue:

Oracle_WebCenter_Portal_11g_R1_PS7__Running_

5. Select the Server(s) to install the application into.  In this case the AdminServer is selected.  Hit ‘Next’ to continue.

Oracle_WebCenter_Portal_11g_R1_PS7__Running_

6. Make sure ‘DD Only: Use only roles and policies that are defined in the deployment descriptors’, and ‘Use the defaults defined by the deployment’s targets’ are selected in the relevant sections (see below), then hit ‘Finish’ to install the application.

Oracle_WebCenter_Portal_11g_R1_PS7__Running_

7. If successful, a ‘successfully installed’ notification will be received:

Oracle_WebCenter_Portal_11g_R1_PS7__Running_

and you will see the application listed and active:

Oracle_WebCenter_Portal_11g_R1_PS7__Running_

8. The Remote Data Connector has metadata security built in.  To fully verify the application is working, and to connect to it through the BI Admin tool, this security will need to be temporarily disabled.  Shut down the Weblogic Server, and then in the same command shell or shell script used to start weblogic, set this variable:

For Linux:

export DISABLE_RDC_METADATA_SECURITY=1

For Windows:

set DISABLE_RDC_METADATA_SECURITY=1

Then re-start Weblogic.

9. To confirm the Remote Data Connector was installed correctly, navigate to this path:

http://<weblogic-server>:<weblogic-port>/obiee/javads?status

If the steps above have been correctly followed, then the following XML file will appear:

Oracle_WebCenter_Portal_11g_R1_PS7__Running_

Configure Data Source

Multiple data sources can be setup.  Use the following process for each and use a unique name.  The connection to each data source would then be defined in the RPD connection.

1. Within the Weblogic Administration Console, expand ‘Services’ and ‘Data Sources’ and select ‘New’ to create a new data source.

Oracle_WebCenter_Portal_11g_R1_PS7__Running_

2. Select ‘Generic Data Source’ in the options:

14

3. Enter a Name for the Data Source and a JNDI Name, and the database type. In the initial release of the RDC tool, 'Oracle' is the only supported Database type.  In future releases this will be expanded to other vendors.  Note – the JNDI Name forms part of the URL used to access the data source, so try to avoid spaces and other characters that may cause problems with the URL.

Oracle_WebCenter_Portal_11g_R1_PS7__Running_

4. For the Database Driver select the appropriate driver for the On-Premise Oracle Database.  In the initial release, the following options are currently supported:

  • Oracle’s Driver  (Thin) for Instance Connections;
  • Oracle’s Driver  (Thin) for RAC Service-Instance Connections;
  • Oracle’s Driver  (Thin) for Service Connections;

Depending on the version of weblogic, the listed version may be slightly different.  In this case the ‘Oracle’s Driver (Thin) for Service Connections’ is selected.

Oracle_WebCenter_Portal_11g_R1_PS7__Running_

5. Keep the default options for the ‘Supports Global Transactions’ and ‘One-Phase Commit’ and hit ‘Next’

Oracle_WebCenter_Portal_11g_R1_PS7__Running_

6. Enter the appropriate values for the On-Premise Oracle Database, and then ‘Next’

Oracle_WebCenter_Portal_11g_R1_PS7__Running_

7. Make sure the configuration is correct, then ‘Test Connection’.  If the connection is successful, click ‘Next’

Oracle_WebCenter_Portal_11g_R1_PS7__Running_

8. On the final configuration screen, select the Target to deploy the JDBC data source.  In this case we use the AdminServer.  Hit ‘Finish’ to complete the configuration.

Oracle_WebCenter_Portal_11g_R1_PS7__Running_

Download and Deploy Public Key

Within the Service Console for BICS, Select the ‘Database Connections’ tab, and then ‘Get Public Key’

Oracle_BI_Cloud_Service_Console_png_705×245_pixels

Save the key on the weblogic server in the following path: $DOMAIN_HOME/rdc_keys/<deployment_name>

The <deployment_name> is ‘obi-remotedataconnector’ by default.

Using the prebuilt Webcenter VM available on OTN – the path would be:

/oracle/domains/webcenter/rdc_keys/obi-remotedataconnector

If the path doesn’t exist, create it – and then save the Public Key there.

 

 

Set up RPD Connection and Publish RPD to BICS

This process works best when an RPD is first created against the local On-Premise Database and tested On-Premise to confirm that joins, calculations, and subject areas are working as desired.

1. Open the 12c Admin Tool but Do NOT Open the Existing On-Premise RPD.  Under the ‘File’ menu, select ‘Load Java Datasources’.  If this option is not available, the Admin Tool is not the correct version.  Download from OTN.

Windows7_x64

2. Enter the Host Name / IP address, port, and user that can connect to the weblogic server where the RDC was installed. NOTE – this hostname or IP does not need to be available externally.  This is just used to load the Java Datasource in the RPD for this step.

Windows7_x64

3. A ‘Success’ notification should be received.  If it’s not, check the previous steps.

Windows7_x64

4. Open the RPD, and right click on the Connection Pool and select ‘Properties’.  In the ‘General’ tab of the resulting properties screen, update the Data source name to have the syntax:

<WebLogicServer>:port/obiee/javads/<datasourcename>

The value used for the Weblogic server needs to resolve from the BICS server in the Oracle Cloud to the on-premise Weblogic host.  If the customer has a resolvable name (www.oracle.com, for instance) and the correct firewall rules have been put in place for a request to be routed to the internal weblogic server, then that would suffice.  Otherwise, use an IP address that will be routed to the weblogic server.  The hosts file on the BICS server is not accessible to have an entry added.
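
For example (hostname, port, and data source name below are assumptions for illustration), a completed value might look like:

www.example.com:7001/obiee/javads/OnPremOracleDS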

Windows7_x64

5. Go to the ‘Miscellaneous’ tab and make sure ‘Use SQL Over HTTP’ is set to ‘True’.  The value in ‘Required Cartridge Version’ may be missing.  Do not change that.

Windows7_x64

6. Note – you will not be able to import tables using this connection.  That should be done while the standard oracle RPD connection is in place to the On-Premise Oracle Database.

7. Save the RPD

8. In the BI Cloud Service Console, select ‘Snapshots and Models’

Windows7_x64

 

9. Select the ‘Replace Data Model’ option

Windows7_x64

10. Browse to the RPD and enter the Repository password.  Then click ‘OK’.

Windows7_x64

11. Reports can now be written in BICS to connect to the On-Premise data source.

 

Summary

This article walked through the steps to download, configure, and deploy the Remote Data Connector for BICS.

 

Further Reading

Quick Start Guide for Remote Data Connector:   BICS Remote Data Connector getting started guide

Configure Weblogic to use SSL: http://docs.oracle.com/cd/E13222_01/wls/docs81/secmanage/ssl.html

Configure Plug-in for proxying requests from Oracle HTTP Server to Weblogic: http://docs.oracle.com/cd/E28280_01/web.1111/e37889/oracle.htm#PLGWL551

Integrating Oracle Data as a Service for Customer Intelligence (DaaS for CI) with Oracle Business Intelligence Cloud Service (BICS)


Introduction

 

This article provides a set of sample code snippets for loading data from Oracle Data as a Service for Customer Intelligence (DaaS for CI) into an Oracle Business Intelligence Cloud Service (BICS) Schema Service Database.


“Oracle Data as a Service for Customer Intelligence (DaaS for CI) enables you to analyze your own unstructured text as well as public social data, so you can translate insights into actions that benefit you and your customers. DaaS for CI provides a comprehensive feed of social media and news data from over 40 million sites worldwide, with new sites being added every day. These sites include social networks, blogs, video sharing sites, forums, news, and review sites.”
See Using Oracle Data as a Service for Customer Intelligence for more information on DaaS for CI.


The code snippets utilize the DaaS for CI Enriched Social Data Feed API to retrieve the data. The DaaS for CI data is then inserted into a BICS schema service database table using PL/SQL.


“The Enriched Social Data Feed API provides access to public social data from millions of users that has been gathered, cleaned, and analyzed by Oracle. Based on topic requirements that you define, you use the API to retrieve files containing data that is ready for your use in your Business Intelligence systems. Using the Enriched Social Data Feed alongside existing BI tools such as Oracle Business Intelligence Enterprise Edition, for example, you can visualize in detail the reaction of consumers to an advertising campaign, and much more. The files that are created and pre-analyzed, and that you retrieve using the Enriched Social Data Feed API, are often referred to as your data feed.”
See “Create Your Enriched Social Data Feed API Requests” for more info on the API.


Diagram 1 illustrates the Enriched Social Data Feed API returning a list of message URLs/paths. This list is commonly referred to as the "data feed". Each link is a reference to a .gz file. Contained inside each .gz file is an .xml file displaying information for that individual message. Diagram 2 shows an example of the contents of the message .xml file. This article will walk through the process of looping through each data feed .gz file and parsing the data contained in the .xml message file into a relational database table.

Diagram 1: Data Feed:

FeedEx


Diagram 2: Contents of Data Feed Message .xml file.

XML


The intended audience for this article is BICS developers. That said, the code snippets provided can be run in any Oracle database configured with Apex (independently of BICS). With appropriate code adjustments the PL/SQL can be converted to run outside of Apex on an Oracle on-prem or DbaaS schema (i.e. using the UTL_HTTP package). Additionally, the principal concepts covered can also be used as a starting point for DaaS for CI integrations with non-Oracle relational databases. General themes and concepts covered in the article are also beneficial for those wanting to integrate DaaS for CI with other products using alternative programming languages.


Main Article

Step One – Review Documentation


Click here for the DaaS for CI documentation home page.

This article focuses on the Enriched Social Data Feed API.

The Enriched Social Data Feed API documentation can be accessed from the home page by clicking on “Create Your Enriched Social Data Feed API Requests“.


Step Two – Create Table


Run the below code snippet to create the SOCIAL_DATA_FEEDS table that will be used to store the DaaS for CI data in BICS.


Click here for a text version of the create table code snippet.

CREATE TABLE SOCIAL_DATA_FEEDS
("FNAME" VARCHAR2(1000 BYTE),
"PUB_DATE" VARCHAR2(30 BYTE),
"DESCRIPTION" VARCHAR2(40 BYTE),
"RELEASE" VARCHAR2(80 BYTE),
"COUNT" NUMBER,
"MSG_ID" NUMBER,
"MSG_PUBLISHED_ON" DATE,
"MSG_SOURCE" VARCHAR2(90 BYTE),
"MSG_SOURCE_TYPE" VARCHAR2(40 BYTE),
"MSG_LINK" VARCHAR2(120 BYTE),
"MSG_TITLE" VARCHAR2(200 BYTE),
"MSG_SOURCE_ID" VARCHAR2(200 BYTE),
"MSG_AUTH_NAME" VARCHAR2(100 BYTE),
"MSG_AUTH_FOLLOWERS" NUMBER,
"MSG_AUTH_FRIENDS" NUMBER,
"MSG_AUTH_KLOUT" NUMBER,
"MSG_AUTH_SRC_ID" VARCHAR2(200 BYTE),
"TOPIC_ID" NUMBER,
"TOPIC_NAME" VARCHAR2(50 BYTE),
"SNIPPET_ID" NUMBER,
"SNIPPET_READABILITY" VARCHAR2(40 BYTE),
"SNIPPET_SUBJECTIVITY" VARCHAR2(40 BYTE),
"SNIPPET_TONALITY" NUMBER,
"SNIPPET_ANCHOR" VARCHAR2(40 BYTE),
"SNIPPET_TEXT" VARCHAR2(1000 BYTE),
"DIMENSION_ID" NUMBER,
"DIMENSION_NAME" VARCHAR2(60 BYTE)
) ;


Step Three – Insert Records


This step provides the “insert” code snippet that loads the DaaS for CI data into BICS.


The high-level steps of the insert stored procedure are:


a)    Loop through each URL/path in the data feed.


-> The DaaS for CI data feed is retrieved using the Enriched Social Data Feed API.

-> Each .gz file is read in as a blob using The Oracle Apex API APEX_WEB_SERVICE MAKE_REST_REQUEST_B function.


b)    Open / Uncompress each .gz file -> to retrieve the XML file.


-> The UTL_COMPRESS Oracle PL/SQL database function is used to open / uncompress each .gz file


c)    Map the XPath for each XML field.


-> The XMLTable Oracle PL/SQL database function is used to parse the XML for each field.


d)    Data is inserted into a relational database table.


-> All fields are inserted into the SOCIAL_DATA_FEEDS table.


Click here for a text version of the insert records code snippet.

create or replace procedure SP_UNCOMPRESS_AND_INSERT(p_username varchar2,p_password varchar2,p_url varchar2)
IS
v_blob BLOB;
v_uncompress_blob BLOB;
v_xml XMLTYPE;
BEGIN
v_blob := apex_web_service.make_rest_request_b
(
p_url => p_url,
p_http_method => 'GET',
p_username => p_username,
p_password => p_password
);
v_uncompress_blob := utl_compress.lz_uncompress(v_blob);
INSERT INTO SOCIAL_DATA_FEEDS
SELECT
p_url,
m.pub_date,
m.description,
m.release,
m.count,
p.msg_id,
TO_DATE(SUBSTR(p.msg_published_on ,0,17), 'YYYY-MM-DD HH24:MI:SS') msg_published_on ,
p.msg_source,
p.msg_source_type,
p.msg_link,
p.msg_title,
--p.msg_body,
p.msg_source_id,
p.msg_auth_name,
p.msg_auth_followers,
p.msg_auth_friends,
p.msg_auth_klout,
p.msg_auth_src_id,
t.topic_id,
t.topic_name,
s.snippet_id,
s.snippet_readability,
s.snippet_subjectivity,
s.snippet_tonality,
s.snippet_anchor,
s.snippet_text,
d.dimension_id,
d.dimension_name
FROM
XMLTable(xmlnamespaces('http://www.collectiveintellect.com/schemas/messages' as "ci" ),'/ci:messages'
passing xmltype(v_uncompress_blob, nls_charset_id('AL32UTF8'))
columns
pub_date varchar2(30) path 'pub_date',
description varchar2(40) path 'description',
release varchar2(80) path 'release',
count number path 'count',
post xmltype path 'posts/post'
) (+) m ,
XMLTable( '/post' passing m.post
columns
msg_id number path 'message_id',
msg_published_on varchar2(50) path 'published_on',
msg_source varchar2(90) path 'source',
msg_source_type varchar2(40) path 'source_type',
msg_link varchar2(120) path 'link',
msg_title varchar2(200) path 'title',
--msg_body varchar2(1000) path 'body',
msg_source_id varchar2(200) path 'message_source_generated_id',
msg_auth_name varchar2(100) path 'author/name',
msg_auth_followers number path 'author/followers_count',
msg_auth_friends number path 'author/friends_count',
msg_auth_klout number path 'author/klout_score',
msg_auth_src_id varchar2(200) path 'author/source_generated_id',
topic xmltype path 'topics/topic'
) (+) p,
XMLTable( '/topic' passing p.topic
columns
topic_id number path '@id',
topic_name varchar2(50) path '@name',
snippet xmltype path 'snippets/snippet'
) (+) t,
XMLTable( '/snippet' passing t.snippet
columns
snippet_id number path 'id',
snippet_readability varchar2(40) path 'readability',
snippet_subjectivity varchar2(40) path 'subjectivity',
snippet_tonality number path 'tonality',
snippet_anchor varchar2(40) path 'anchor',
snippet_text varchar2(1000) path 'text',
dim xmltype path 'dimensions/dimension'
) (+) s,
XMLTable( '/dimension' passing s.dim
columns
dimension_id number path 'id',
dimension_name varchar2(60) path 'name'
) (+) d;
commit;
END;


Step Four – Get Data Feed Link


Replace the items highlighted in yellow with those from your environment.


a) Max Age = The age in seconds of the oldest file to be returned. The value can be up to 604800 (one week).

b) URL = Data Feed Link index.xml.

c) Username = API Key

d) Password = Customer XID


This code snippet reads through the Data Feed list of files and calls the SP_UNCOMPRESS_AND_INSERT stored procedure for each one.


Click here for a text version of the get data feed link code snippet.

CREATE OR REPLACE PROCEDURE RUN_CI_INSERT
IS
l_ws_response_clob CLOB;
l_ws_response_xml XMLTYPE;
l_link VARCHAR2(500);
l_max_days VARCHAR(100) := '259200'; --last 3 days
l_ws_url VARCHAR2(500) := 'https://{CustomerKey:ApiKey}:@data.collectiveintellect.com/feeds/{Feed_ID}/index.xml?max_age=' || l_max_days;
l_username VARCHAR2(100) := 'API Key';
l_password VARCHAR2(100) := 'Customer XID';
BEGIN
DELETE FROM SOCIAL_DATA_FEEDS;
l_ws_response_clob := apex_web_service.make_rest_request
(
p_url => l_ws_url,
p_http_method => 'GET',
p_username => l_username,
p_password => l_password
);
l_ws_response_xml := XMLTYPE.createXML(l_ws_response_clob);
FOR R IN
(
select (feed_link)
from
xmltable
('/batches/batch/url'
passing l_ws_response_xml
columns
feed_link VARCHAR2(500) path 'text()'
)
)
LOOP
SP_UNCOMPRESS_AND_INSERT(l_username,l_password,R.feed_link);
END LOOP;
END;


Step Five – Run Job


Click here for a text version of the run job code snippets.


a) Create the job.

BEGIN
cloud_scheduler.create_job(
job_name => 'LOAD_CI_DATA',
job_type => 'STORED_PROCEDURE',
job_action => 'RUN_CI_INSERT',
start_date => '01-MAR-16 07.00.00.000000 AM -05:00',
repeat_interval => 'FREQ=DAILY',
enabled => TRUE,
comments => 'Loads CI Data into SOCIAL_DATA_FEEDS');
END;

b) Run the job.

BEGIN
CLOUD_SCHEDULER.RUN_JOB(JOB_NAME => 'LOAD_CI_DATA');
END;


c) Audit the job.

Check the progress of currently running jobs:
SELECT * FROM USER_SCHEDULER_RUNNING_JOBS
WHERE JOB_NAME = 'LOAD_CI_DATA';

Displays log information about job runs, job state changes, and job failures:
SELECT * FROM USER_SCHEDULER_JOB_LOG
WHERE JOB_NAME = 'LOAD_CI_DATA';

Displays detailed information about job runs, job state changes, and job failures:
SELECT * FROM USER_SCHEDULER_JOB_RUN_DETAILS
WHERE JOB_NAME = 'LOAD_CI_DATA';

Displays information about scheduled jobs:
SELECT * FROM USER_SCHEDULER_JOBS
WHERE JOB_NAME = 'LOAD_CI_DATA';


d) Disable / re-enable the job

BEGIN
CLOUD_SCHEDULER.DISABLE('LOAD_CI_DATA');
END;

BEGIN
CLOUD_SCHEDULER.ENABLE('LOAD_CI_DATA');
END;


Step Six – Review Data

SELECT * FROM SOCIAL_DATA_FEEDS;
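
Beyond the simple select above, the loaded table supports basic analysis directly in SQL. A small example (using only columns created in Step Two) that shows message counts and average snippet tonality per topic:

SELECT topic_name,
       COUNT(DISTINCT msg_id) AS messages,
       ROUND(AVG(snippet_tonality), 2) AS avg_tonality
FROM social_data_feeds
GROUP BY topic_name
ORDER BY messages DESC;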


Further Reading


Click here for more A-Team BICS Blogs.

Click here for the DaaS for CI documentation home page.

Click here for the Enriched Social Data Feed API documentation.

Click here for the Application Express API Reference Guide – MAKE_REST_REQUEST_B Function.


Summary


This article provided a set of sample code snippets that leverage the APEX_WEB_SERVICE API to integrate Oracle Data as a Service for Customer Intelligence (DaaS for CI) with an Oracle Business Intelligence Cloud Service (BICS) Schema Service Database.

The DaaS for CI Enriched Social Data Feed API was used to retrieve the data feed. The results from the data feed were then loaded into a BICS schema service database using PL/SQL and Apex functions.

The code snippets provided can be run in any Oracle database configured with Apex (independently of BICS). With appropriate code adjustments the PL/SQL can be converted to run outside of Apex on an Oracle on-prem or DbaaS schema (i.e. using the UTL_HTTP package).

Themes and concepts covered in the article are also beneficial for those wanting to integrate DaaS for CI with other products using alternative programming languages.

Uploading files to Oracle Document Cloud Service using SOA


This blog provides a quick tip for implementing file upload into Oracle Document Cloud Service (DOCS) using Java in Oracle SOA and Oracle SOA Cloud Service (SOACS).

The DOCS upload REST service requires POSTing a multipart form, a feature that is currently unavailable in the REST cloud adapter. The POST request to upload a file contains two body parts: the first is a JSON payload and the second contains the actual file content.

 

The request format looks as shown here in the Oracle Documents Cloud Service REST API Reference.

Content-Type: multipart/form-data; boundary=---1234567890
-----1234567890
Content-Disposition: form-data; name="parameters"
Content-Type: application/json
{
"parentID":"FB4CD874EF94CD2CC1B60B72T0000000000100000001"
}
-----1234567890
Content-Disposition: form-data; name="primaryFile"; filename="example.txt"
Content-Type: text/plain

<File Content>
-----1234567890--

 

The section below shows a Java embedded block of code that can be used within a BPEL process to achieve the file upload. This can be used in Oracle SOA and SOACS BPEL composites. A valid DOCS endpoint, credentials for authorization, and a GUID of the folder location for the file upload are required to execute this REST call.
In this sample, a PDF document is being uploaded into DOCS. The media type should be changed appropriately for other content formats.
Also, it is recommended to access the authorization credentials from a credential store when developing for production deployments. This section is only intended as a demo.

 

com.sun.jersey.api.client.Client client = com.sun.jersey.api.client.Client.create();
com.sun.jersey.api.client.WebResource webResource = client.resource("https://xxxx-yyyy.documents.zome.oraclecloud.com/documents/api/1.1/files/data");
// Basic authentication with the DOCS user credentials
com.sun.jersey.api.client.filter.HTTPBasicAuthFilter basicAuth = new com.sun.jersey.api.client.filter.HTTPBasicAuthFilter("username", "password");
client.addFilter(basicAuth);

// Multipart form with two body parts: the "parameters" JSON part carrying the target folder GUID,
// and the "primaryFile" part carrying the file content
com.sun.jersey.multipart.FormDataMultiPart multiform = new com.sun.jersey.multipart.FormDataMultiPart();
String DocCSFolderID = "{\"parentID\" : \"F1B2DDE55E4606D2B4718FDE2C1A41A800FD957B38C9\"}";
com.sun.jersey.multipart.FormDataBodyPart formPart = new com.sun.jersey.multipart.FormDataBodyPart("parameters", DocCSFolderID, javax.ws.rs.core.MediaType.APPLICATION_JSON_TYPE);
com.sun.jersey.multipart.file.FileDataBodyPart filePart = new com.sun.jersey.multipart.file.FileDataBodyPart("primaryFile", new java.io.File("C:\\temp\\SampleDoc.pdf"), javax.ws.rs.core.MediaType.APPLICATION_OCTET_STREAM_TYPE);
multiform.bodyPart(formPart);
multiform.bodyPart(filePart);

// POST the multipart form and capture the JSON response returned by DOCS
String response = webResource.type(javax.ws.rs.core.MediaType.MULTIPART_FORM_DATA_TYPE).accept(javax.ws.rs.core.MediaType.APPLICATION_JSON_TYPE).post(String.class, multiform);
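
If you need to inspect the HTTP status before trusting the payload, the same POST can be made with Jersey’s ClientResponse instead of mapping the result straight to a String. The following is a minimal sketch under the same assumptions as the snippet above (the endpoint host, credentials, and folder GUID are placeholders); consult the DOCS REST API reference for the exact fields returned in the JSON body.

// Variation of the call above: POST the multipart form and check the HTTP status before using the body.
// The endpoint host, credentials, and folder GUID are placeholders, as in the snippet above.
com.sun.jersey.api.client.ClientResponse clientResponse = webResource.type(javax.ws.rs.core.MediaType.MULTIPART_FORM_DATA_TYPE).accept(javax.ws.rs.core.MediaType.APPLICATION_JSON_TYPE).post(com.sun.jersey.api.client.ClientResponse.class, multiform);
int status = clientResponse.getStatus();              // DOCS returns a 2xx status on a successful upload
String body = clientResponse.getEntity(String.class); // JSON metadata describing the uploaded file
if (status >= 400) {
    throw new RuntimeException("DOCS upload failed with HTTP " + status + ": " + body);
}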

Note that the REST cloud adapter can be used to interact with DOCS for most of the REST API defined here. The above exception applies only to file upload and a few other operations that require multipart forms. The REST cloud adapter is being enhanced to add multipart/form-data support in the near future.
Once that is available, the file upload into Oracle Document Cloud Service can also be achieved using the adapter within Oracle Integration Cloud Service (ICS), Oracle SOA, and SOACS.

 

Round Trip On-Premise Integration (Part 1) – ICS to EBS


One of the big challenges with adopting Cloud Services Architecture is how to integrate the on-premise applications when the applications are behind the firewall. A very common scenario that falls within this pattern is cloud integration with Oracle E-Business Suite (EBS). To address this cloud-to-ground pattern without complex firewall configurations, DMZs, etc., Oracle offers a feature with the Integration Cloud Service (ICS) called Connectivity Agent (additional details about the Agent can be found under New Agent Simplifies Cloud to On-premises Integration). Couple this feature with the EBS Cloud Adapter in ICS and now we have a viable option for doing ICS on-premise integration with EBS. The purpose of this A-Team blog is to detail the prerequisites for using the EBS Cloud Adapter and walk through a working ICS integration to EBS via the Connectivity Agent where ICS is calling EBS (EBS is the target application). The blog is also meant to be an additional resource for the Oracle documentation for Using Oracle E-Business Suite Adapter.

The technologies at work for this integration include ICS (Inbound REST Adapter, Outbound EBS Cloud Adapter), Oracle Messaging Cloud Service (OMCS), the ICS Connectivity Agent (on-premise), and Oracle EBS R12. The integration is a synchronous (request/response) call to EBS where a new employee is created via the EBS HR_EMPLOYEE_API. The flow consists of a REST call to ICS with a JSON payload containing the employee details. These details are then transformed in ICS from JSON to XML for the EBS Cloud Adapter. The EBS adapter then sends the request to the on-premise connectivity agent via OMCS. The agent makes the call to EBS, and the results are passed back to ICS via OMCS. The EBS response is transformed to JSON and returned to the invoking client. The following is a high-level view of the integration:

ICSEBSCloudAdapter-Overview-001

Prerequisites

1. Oracle E-Business Suite 12.1.3* or higher.
2. EBS Configured for the EBS Cloud Adapter per the on-line document: Setting Up Oracle E-Business Suite Adapter from Integration Cloud Service.
3. Install the on-premise Connectivity Agent (see Integration Cloud Service (ICS) On-Premise Agent Installation).

* For EBS 11 integrations, see another A-Team Blog E-Business Suite Integration with Integration Cloud Service and DB Adapter.

Create Connections

1. Inbound Endpoint Configuration.
a. Start the connection configuration by clicking on Create New Connection in the ICS console:
ICSEBSCloudAdapter-Connections_1-001
b. For this blog, we will be using the REST connection for the inbound endpoint. Locate and Select the REST Adapter in the Create Connection – Select Adapter dialog:
ICSEBSCloudAdapter-Connections_1-002
c. Provide a Connection Name in the New Connection – Information dialog:
ICSEBSCloudAdapter-Connections_1-003
d. The shell of the REST Connection has now been created. The first set of properties that needs to be configured is the Connection Properties. Click on the Configure Connectivity button and select REST API Base URL for the Connection Type. For the Connection URL, provide the ICS POD host since this is an incoming connection for the POD. A simple way to get the URL is to copy it from the browser location of the ICS console being used to configure the connection:
ICSEBSCloudAdapter-Connections_1-004
e. The last set of properties that needs to be configured is the Credentials. Click on the Configure Credentials button and select Basic Authentication for the Security Policy. The Username and Password for the basic authentication will be a user configured on the ICS POD:
ICSEBSCloudAdapter-Connections_1-005
f. Now that we have all the properties configured, we can test the connection. This is done by clicking on the Test icon at the top of the window. If everything is configured correctly, the message The connection test was successful! is displayed:
ICSEBSCloudAdapter-Connections_1-006
2. EBS Endpoint Connection
a. Create another connection, but this time select Oracle E-Business Suite from the Create Connection – Select Adapter dialog:
ICSEBSCloudAdapter-Connections_2-001
b. Provide a Connection Name in the New Connection – Information dialog:
ICSEBSCloudAdapter-Connections_2-002
c. Click on the Configure Connectivity button. For the EBS Cloud Adapter there is only one property, the Connection URL. This URL is the hostname and port where the EBS metadata provider has been deployed. This metadata is provided by Oracle’s E-Business Suite Integrated SOA Gateway (ISG), and the setup/configuration of ISG can be found under the Prerequisites for this blog (item #2). The best way to see if the metadata provider has been deployed is to access the WADL using a URL like the following: http://ebs.example.com:8000/webservices/rest/provider?WADL where ebs.example.com is the hostname of your EBS metadata provider machine. The URL should provide something like the following:
<?xml version = '1.0' encoding = 'UTF-8'?>
<application name="EbsMetadataProvider" targetNamespace="http://xmlns.oracle.com/apps/fnd/soaprovider/pojo/ebsmetadataprovider/" xmlns:tns="http://xmlns.oracle.com/apps/fnd/soaprovider/pojo/ebsmetadataprovider/" xmlns="http://wadl.dev.java.net/2009/02" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:tns1="http://xmlns.oracle.com/apps/fnd/rest/provider/getinterfaces/" xmlns:tns2="http://xmlns.oracle.com/apps/fnd/rest/provider/getmethods/" xmlns:tns3="http://xmlns.oracle.com/apps/fnd/rest/provider/getproductfamilies/" xmlns:tns4="http://xmlns.oracle.com/apps/fnd/rest/provider/isactive/">
   <grammars>
      <include href="http://ebs.example.com:8000/webservices/rest/provider/?XSD=getinterfaces_post.xsd" xmlns="http://www.w3.org/2001/XMLSchema"/>
      <include href="http://ebs.example.com:8000/webservices/rest/provider/?XSD=getmethods_post.xsd" xmlns="http://www.w3.org/2001/XMLSchema"/>
      <include href="http://ebs.example.com:8000/webservices/rest/provider/?XSD=getproductfamilies_post.xsd" xmlns="http://www.w3.org/2001/XMLSchema"/>
      <include href="http://ebs.example.com:8000/webservices/rest/provider/?XSD=isactive_post.xsd" xmlns="http://www.w3.org/2001/XMLSchema"/>
   <include href="http://ebs.example.com:8000/webservices/rest/provider/?XSD=getinterfaces_get.xsd" xmlns="http://www.w3.org/2001/XMLSchema"/>
      <include href="http://ebs.example.com:8000/webservices/rest/provider/?XSD=getmethods_get.xsd" xmlns="http://www.w3.org/2001/XMLSchema"/>
      <include href="http://ebs.example.com:8000/webservices/rest/provider/?XSD=getproductfamilies_get.xsd" xmlns="http://www.w3.org/2001/XMLSchema"/>
      <include href="http://ebs.example.com:8000/webservices/rest/provider/?XSD=isactive_get.xsd" xmlns="http://www.w3.org/2001/XMLSchema"/>
   </grammars>
   <resources base="http://ebs.example.com:8000/webservices/rest/provider/">
      <resource path="getInterfaces/{product}/">
         <param name="product" style="template" required="true" type="xsd:string"/>
         <method id="getInterfaces" name="GET">
            <request>
               <param name="ctx_responsibility" type="xsd:string" style="query" required="false"/>
               <param name="ctx_respapplication" type="xsd:string" style="query" required="false"/>
               <param name="ctx_securitygroup" type="xsd:string" style="query" required="false"/>
               <param name="ctx_nlslanguage" type="xsd:string" style="query" required="false"/>
               <param name="ctx_language" type="xsd:string" style="query" required="false"/>
               <param name="ctx_orgid" type="xsd:int" style="query" required="false"/>
               <param name="scopeFilter" type="xsd:string" style="query" required="true"/>
               <param name="classFilter" type="xsd:string" style="query" required="true"/>
            </request>
            <response>
               <representation mediaType="application/xml" type="tns1:getInterfaces_Output"/>
               <representation mediaType="application/json" type="tns1:getInterfaces_Output"/>
            </response>
         </method>
      </resource>
      <resource path="getInterfaces/">
         <method id="getInterfaces" name="POST">
            <request>
               <representation mediaType="application/xml" type="tns1:getInterfaces_Input"/>
               <representation mediaType="application/json" type="tns1:getInterfaces_Input"/>
            </request>
            <response>
               <representation mediaType="application/xml" type="tns1:getInterfaces_Output"/>
               <representation mediaType="application/json" type="tns1:getInterfaces_Output"/>
            </response>
         </method>
      </resource>
      <resource path="getMethods/{api}/">
         <param name="api" style="template" required="true" type="xsd:string"/>
         <method id="getMethods" name="GET">
            <request>
               <param name="ctx_responsibility" type="xsd:string" style="query" required="false"/>
               <param name="ctx_respapplication" type="xsd:string" style="query" required="false"/>
               <param name="ctx_securitygroup" type="xsd:string" style="query" required="false"/>
               <param name="ctx_nlslanguage" type="xsd:string" style="query" required="false"/>
               <param name="ctx_language" type="xsd:string" style="query" required="false"/>
               <param name="ctx_orgid" type="xsd:int" style="query" required="false"/>
               <param name="scopeFilter" type="xsd:string" style="query" required="true"/>
               <param name="classFilter" type="xsd:string" style="query" required="true"/>
            </request>
            <response>
               <representation mediaType="application/xml" type="tns2:getMethods_Output"/>
               <representation mediaType="application/json" type="tns2:getMethods_Output"/>
            </response>
         </method>
      </resource>
      <resource path="getMethods/">
         <method id="getMethods" name="POST">
            <request>
               <representation mediaType="application/xml" type="tns2:getMethods_Input"/>
               <representation mediaType="application/json" type="tns2:getMethods_Input"/>
            </request>
            <response>
               <representation mediaType="application/xml" type="tns2:getMethods_Output"/>
               <representation mediaType="application/json" type="tns2:getMethods_Output"/>
            </response>
         </method>
      </resource>
      <resource path="getProductFamilies/">
         <method id="getProductFamilies" name="GET">
            <request>
               <param name="ctx_responsibility" type="xsd:string" style="query" required="false"/>
               <param name="ctx_respapplication" type="xsd:string" style="query" required="false"/>
               <param name="ctx_securitygroup" type="xsd:string" style="query" required="false"/>
               <param name="ctx_nlslanguage" type="xsd:string" style="query" required="false"/>
               <param name="ctx_language" type="xsd:string" style="query" required="false"/>
               <param name="ctx_orgid" type="xsd:int" style="query" required="false"/>
               <param name="scopeFilter" type="xsd:string" style="query" required="true"/>
               <param name="classFilter" type="xsd:string" style="query" required="true"/>
            </request>
            <response>
               <representation mediaType="application/xml" type="tns3:getProductFamilies_Output"/>
               <representation mediaType="application/json" type="tns3:getProductFamilies_Output"/>
            </response>
         </method>
      </resource>
      <resource path="getProductFamilies/">
         <method id="getProductFamilies" name="POST">
            <request>
               <representation mediaType="application/xml" type="tns3:getProductFamilies_Input"/>
               <representation mediaType="application/json" type="tns3:getProductFamilies_Input"/>
            </request>
            <response>
               <representation mediaType="application/xml" type="tns3:getProductFamilies_Output"/>
               <representation mediaType="application/json" type="tns3:getProductFamilies_Output"/>
            </response>
         </method>
      </resource>
      <resource path="isActive/">
         <method id="isActive" name="GET">
            <request>
               <param name="ctx_responsibility" type="xsd:string" style="query" required="false"/>
               <param name="ctx_respapplication" type="xsd:string" style="query" required="false"/>
               <param name="ctx_securitygroup" type="xsd:string" style="query" required="false"/>
               <param name="ctx_nlslanguage" type="xsd:string" style="query" required="false"/>
               <param name="ctx_language" type="xsd:string" style="query" required="false"/>
               <param name="ctx_orgid" type="xsd:int" style="query" required="false"/>
            </request>
            <response>
               <representation mediaType="application/xml" type="tns4:isActive_Output"/>
               <representation mediaType="application/json" type="tns4:isActive_Output"/>
            </response>
         </method>
      </resource>
      <resource path="isActive/">
         <method id="isActive" name="POST">
            <request>
               <representation mediaType="application/xml" type="tns4:isActive_Input"/>
               <representation mediaType="application/json" type="tns4:isActive_Input"/>
            </request>
            <response>
               <representation mediaType="application/xml" type="tns4:isActive_Output"/>
               <representation mediaType="application/json" type="tns4:isActive_Output"/>
            </response>
         </method>
      </resource>
   </resources>
</application>

 

If you don’t get something like the above XML, here are some general troubleshooting steps:
1. Log in to the EBS console.
2. Navigate to Integrated SOA Gateway, then Integration Repository.
3. Click on the “Search” button on the right.
4. Enter “oracle.apps.fnd.rep.ws.service.EbsMetadataProvider” in the “Internal Name” field.
5. Click “Go” (if this doesn’t list anything, you are missing a patch on the EBS instance; please follow Note 1311068.1).
6. Click on “Metadata Provider”.
7. Click on the “REST Web Service” tab.
8. Enter “provider” as is in the “Service Alias” field and click the “Deploy” button.
9. Navigate to the “Grants” tab and give grants on all methods.
If the WADL shows that the metadata provider is deployed and ready, the Connection URL is simply the host name and port where the metadata provider is deployed. For example, http://ebs.example.com:8000
ICSEBSCloudAdapter-Connections_2-003
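
As a side note, the WADL check described above does not have to be done in a browser. The following is a minimal standalone Java sketch that fetches the WADL and prints the HTTP status and raw content; the host and port are placeholders for your EBS instance, and depending on how ISG is secured the request may additionally require authentication.

// Minimal standalone check of the EBS metadata provider WADL.
// The host and port are placeholders (assumptions); add authentication if your ISG setup requires it.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class CheckEbsMetadataProvider {
    public static void main(String[] args) throws Exception {
        URL wadlUrl = new URL("http://ebs.example.com:8000/webservices/rest/provider?WADL");
        HttpURLConnection conn = (HttpURLConnection) wadlUrl.openConnection();
        conn.setRequestMethod("GET");
        System.out.println("HTTP status: " + conn.getResponseCode()); // expect 200 when the provider is deployed
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // raw WADL content, similar to the XML shown above
            }
        }
        conn.disconnect();
    }
}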
d. The next set of properties that needs to be configured is the Credentials. Click on the Configure Credentials button and select Basic Authentication for the Security Policy. The Username and Password for the basic authentication will be a user configured on the on-premise EBS environment that has been granted privileges to access the EBS REST services:
ICSEBSCloudAdapter-Connections_2-004
NOTE: The Property Value for Username in the screenshot above shows the EBS sysadmin user. This will most likely not be the user that has grants on the EBS REST service. If you use the sysadmin user here and your integration (created later) fails at runtime with a “Responsibility is not assigned to user” error from EBS, either the grants on the EBS REST service have not been created or a different EBS user needs to be specified for this connection. Here is an example error you might get:
<ISGServiceFault>
    <Code>ISG_USER_RESP_MISMATCH</Code>
    <Message>Responsibility is not assigned to user</Message>
    <Resolution>Please assign the responsibility to the user.</Resolution>
    <ServiceDetails>
        <ServiceName>HREmployeeAPISrvc</ServiceName>
        <OperationName>CREATE_EMPLOYEE</OperationName>
        <InstanceId>0</InstanceId>
    </ServiceDetails>
</ISGServiceFault>
e. Finally, we need to associate this connection with the on-premise Connectivity Agent that was configured as a Prerequisite. To do this, click on the Configure Agents button and select the agent group that contains the running on-premise Connectivity Agent:
ICSEBSCloudAdapter-Connections_2-005
f. Now that we have all the properties configured, we can test the connection. This is done by clicking on the Test icon at the top of the window. If everything is configured correctly, the message The connection test was successful! is displayed:
ICSEBSCloudAdapter-Connections_2-006
3. We are now ready to construct our cloud-to-ground integration using ICS and the connections that were just created.

Create Integration

1. Create New Integration.
a. Navigate to the Integrations page of the Designer section. Then click on Create New Integration:
ICSEBSCloudAdapter-CreateIntegration_1-001
b. In the Create Integration – Select a Pattern dialog, locate the Map My Data pattern and select it:
ICSEBSCloudAdapter-CreateIntegration_1-002
c. Give the new integration a name and click on Create:
ICSEBSCloudAdapter-CreateIntegration_1-003
2. Configure Inbound Endpoint.
a. The first thing we will do is create our inbound endpoint (the entry point to the ICS integration). In the Integration page that opened from the previous step, locate the Connections section and find the REST connection configured earlier. Drag and drop that connection onto the inbound (left-hand) side of the integration, labeled “Drag and Drop a Trigger”:
ICSEBSCloudAdapter-CreateIntegration_2-001
b. Since the focus of this blog is on the EBS Adapter, we will not go into the details of setting up this endpoint. The important detail for this integration is that the REST service defines both the request and the response in JSON format:

Example Request:

{
  "CREATE_EMPLOYEE_Input": {
    "RESTHeader": {
      "Responsibility": "US_SHRMS_MANAGER",
      "RespApplication": "PER",
      "SecurityGroup": "STANDARD",
      "NLSLanguage": "AMERICAN",
      "Org_Id": "204"
    },
    "InputParameters": {
      "HireDate": "2016-01-01T09:00:00",
      "BusinessGroupID": "202",
      "LastName": "Sled",
      "Sex": "M",
      "Comments": "Create From ICS Integration",
      "DateOfBirth": "1991-07-03T09:00:00",
      "EMailAddress": "bob.sled@example.com",
      "FirstName": "Robert",
      "Nickname": "Bob",
      "MaritalStatus": "S",
      "MiddleName": "Rocket",
      "Nationality": "AM",
      "SocialSSN": "555-44-3333",
      "RegisteredDisabled": "N",
      "CountryOfBirth": "US",
      "RegionOfBirth": "Montana",
      "TownOfBirth": "Missoula"
    }
  }
}

Example Response:

{
  "CreateEmployeeResponse": {
    "EmployeeNumber": 2402,
    "PersonID": 32871,
    "AssignmentID": 34095,
    "ObjectVersionNumber": 2,
    "AsgObjectVersionNumber": 1,
    "EffectiveStartDate": "2016-01-01T00:00:00.000-05:00",
    "EffectiveEndDate": "4712-12-31T00:00:00.000-05:00",
    "FullName": "Sled, Robert Rocket (Bob)",
    "CommentID": 1304,
    "AssignmentSequence": null,
    "AssignmentNumber": 2402,
    "NameCombinationWarning": 0,
    "AssignPayrollWarning": 0,
    "OrigHireWarning": 0
  }
}
ICSEBSCloudAdapter-CreateIntegration_2-002
3. Configure Outbound Endpoint.
a. Now we will configure the endpoint to EBS. In the Integration page, locate the Connections section and find the E-Business Suite adapter connection configured earlier. Drag and drop that connection onto the outbound (right-hand) side of the integration, labeled “Drag and Drop an Invoke”:
ICSEBSCloudAdapter-CreateIntegration_3-001
b. The Configure Oracle E-Business Suite Adapter Endpoint configuration window should now be open. Provide a meaningful name for the endpoint and press Next >. If the window hangs or errors out, check to make sure the connectivity agent is running and ready. This endpoint is dependent on the communication between ICS and EBS via the connectivity agent.
ICSEBSCloudAdapter-CreateIntegration_3-002
c. At this point, the adapter has populated the Web Services section of the wizard with Product Family and Product metadata from EBS. For this example, the Product Family will be Human Resources Suite and the Product will be Human Resources. Once those are selected, the window will be populated with API details.
ICSEBSCloudAdapter-CreateIntegration_3-003
d. Next to the API label is a text entry field that can be used to search the list of APIs. This demo uses the HR_EMPLOYEE_API, which can be found by typing Employee in the text field and selecting Employee from the list:
ICSEBSCloudAdapter-CreateIntegration_3-004
e. The next section of the configuration wizard is Operations. This contains a list of all operations for the API, including operations that have not yet been deployed in the EBS Integration Repository. If you select an operation and see a warning message indicating that the operation has not been deployed, you must go to the EBS console, deploy that operation in the Integration Repository, and provide the appropriate grants.
ICSEBSCloudAdapter-CreateIntegration_3-005
f. This demo will use the CREATE_EMPLOYEE method of the HR_EMPLOYEE_API. Notice that there is no warning when this method is selected:
ICSEBSCloudAdapter-CreateIntegration_3-006
g. The Summary section of the configuration wizard shows all the details from the previous steps. Click on Done to complete the endpoint configuration.
ICSEBSCloudAdapter-CreateIntegration_3-007
h. Check point – the ICS integration should look something like the following:
ICSEBSCloudAdapter-CreateIntegration_3-008
4. Request/Response Mappings.
a. The mappings for this example are very straightforward in that the JSON was derived from the EBS input/output parameters, so the relationships are fairly intuitive. Also, the number of data elements has been minimized to simplify the mapping process. It is also a good idea to provide a Fault mapping:

Request Mapping:

ICSEBSCloudAdapter-CreateIntegration_4-001

Response Mapping:

ICSEBSCloudAdapter-CreateIntegration_4-002

Fault Mapping:

ICSEBSCloudAdapter-CreateIntegration_4-003
5. Set Tracking.
a. The final step to getting the ICS Integration to 100% is to Add Tracking. This is done by clicking on the Tracking icon at the top right-hand side of the Integration window.
ICSEBSCloudAdapter-CreateIntegration_5-001
b. In the Business Identifiers For Tracking window, drag-and-drop fields that will be used for tracking purposes. These fields show up in the ICS console in the Monitoring section for the integration.
ICSEBSCloudAdapter-CreateIntegration_5-002
c. There can be up to 3 fields used for the tracking, but only one is considered the Primary.
ICSEBSCloudAdapter-CreateIntegration_5-003
6. Save (100%).
a. Once the Tracking is configured, the integration should now be at 100% and ready for activation. This is a good time to Save all the work that has been done thus far.
ICSEBSCloudAdapter-CreateIntegration_6-001

Test Integration

1. Make sure the integration is activated, then open the endpoint URL page by clicking on the Information (“i”) icon.
ICSEBSCloudAdapter-Test-001
2. Review the details of this page since it contains everything needed for the REST client that will be used for testing the integration.
ICSEBSCloudAdapter-Test-002
3. Open a REST test client and provide all the necessary details from the endpoint URL. The important details from the page include:
Base URL: https://[ICS POD Host Name]/integration/flowapi/rest/HR_CREATE_EMPLOYEE/v01
REST Suffix: /hr/employee/create
URL For Test Client: https://[ICS POD Host Name]/integration/flowapi/rest/HR_CREATE_EMPLOYEE/v01/hr/employee/create
REST Method: POST
Content-Type: application/json
JSON Payload:
{
  "CREATE_EMPLOYEE_Input": {
    "RESTHeader": {
      "Responsibility": "US_SHRMS_MANAGER",
      "RespApplication": "PER",
      "SecurityGroup": "STANDARD",
      "NLSLanguage": "AMERICAN",
      "Org_Id": "204"
    },
    "InputParameters": {
      "HireDate": "2016-01-01T09:00:00",
      "BusinessGroupID": "202",
      "LastName": "Demo",
      "Sex": "M",
      "Comments": "Create From ICS Integration",
      "DateOfBirth": "1991-07-03T09:00:00",
      "EMailAddress": "joe.demo@example.com",
      "FirstName": "Joseph",
      "Nickname": "Demo",
      "MaritalStatus": "S",
      "MiddleName": "EBS",
      "Nationality": "AM",
      "SocialSSN": "444-33-2222",
      "RegisteredDisabled": "N",
      "CountryOfBirth": "US",
      "RegionOfBirth": "Montana",
      "TownOfBirth": "Missoula"
    }
  }
}
The last piece that is needed for the REST test client is authentication information. Add Basic Authentication to the header with a username and password for an authorized ICS user. The user that will be part of the on-premise EBS operation is specified in the EBS connection that was configured in ICS earlier. The following shows what all this information looks like using the Firefox RESTClient add-on:
ICSEBSCloudAdapter-Test-003
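
If you prefer to script this test call rather than use a browser add-on, the same request can be made with a few lines of Java. The following is a minimal sketch; the ICS POD host, the ICS user credentials, and the payload file name are placeholders, and the payload itself is the JSON shown above.

// Minimal sketch of the test call. The ICS host, credentials, and payload file name are placeholders (assumptions).
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;
import java.util.Scanner;

public class InvokeCreateEmployee {
    public static void main(String[] args) throws Exception {
        String endpoint = "https://ics-pod-host/integration/flowapi/rest/HR_CREATE_EMPLOYEE/v01/hr/employee/create";
        String payload = new String(Files.readAllBytes(Paths.get("create_employee.json")), StandardCharsets.UTF_8);

        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        // Basic authentication with an authorized ICS user (placeholder credentials).
        String auth = Base64.getEncoder().encodeToString("icsuser:icspassword".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }

        System.out.println("HTTP status: " + conn.getResponseCode());
        try (Scanner scanner = new Scanner(conn.getInputStream(), "UTF-8").useDelimiter("\\A")) {
            System.out.println(scanner.hasNext() ? scanner.next() : ""); // JSON response from the integration
        }
    }
}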
4. Before we test the integration, we can log in to the EBS console as the HRMS user. Then, navigating to Maintaining Employees, we can search for our user Joseph Demo by his last name. Notice that nothing comes up for the search:
ICSEBSCloudAdapter-Test-004
5. Now we send the POST from the RESTClient and review the response:
ICSEBSCloudAdapter-Test-005
6. We can compare what was returned from EBS to ICS in the EBS application. Here are the search results for the employee Joseph Demo:
ICSEBSCloudAdapter-Test-006
7. Here are the details for Joseph Demo:
ICSEBSCloudAdapter-Test-007
8. Now we return to the ICS console and navigate to the Tracking page of the Monitoring section. The integration instance shows up with the primary tracking field of Last Name: Demo
ICSEBSCloudAdapter-Test-008
9. Finally, by clicking on the tracking field for the instance, we can view the details:
ICSEBSCloudAdapter-Test-009

Hopefully this walkthrough of how to do an ICS integration to an on-premise EBS environment has been useful. I am looking forward to any comments and/or feedback you may have. Also, keep an eye out for the “Part 2” A-Team Blog that will detail EBS business events surfacing in ICS to complete the ICS/EBS on-premise round trip integration scenarios.

Some Great News for Oracle MAF Developers


Introduction

This week started nicely for Oracle Mobile Application Framework (MAF) developers as the new MAF 2.3  release has been made available. Details about this new release can be found in this blog from MAF product management. We will end the week with even better news: a new version of the A-Team Mobile Persistence Accelerator (AMPA) has been released, and Oracle has decided that AMPA will be productized and integrated with the next version of MAF. Read on for more info!

Main Article

For those of you who are new to AMPA, let me first explain what it is. AMPA is a lightweight persistence and data synchronization framework that works on top of Oracle MAF, and is available on GitHub under an open source license. AMPA eases the consumption of RESTful services and provides a complete persistence layer that allows you to use the mobile application in offline mode. You can read and write data while not connected to the internet, and synchronize any pending data changes later when you are online again. The design-time wizards that are integrated with JDeveloper enable you to build (generate) a first-cut mobile application with offline capabilities within minutes, without any Java coding. Does this sound too good to be true? Well, then you might want to check out the video Building Oracle MAF Application with Offline Sync against Oracle Mobile Cloud Service that hopefully convinces you of the power of AMPA.

While the video shows you how to build an application consuming Oracle MCS REST services, AMPA can be used with any RESTful services. For example, the tutorial Consuming and persisting REST/JSON services with Oracle MAF and the A-Team Mobile Persistence Accelerator shows you how to create RESTful services using JPA/EclipseLink technology and then consume these REST service in a MAF application using AMPA.

There is also an AMPA Overview Presentation that provides a comprehensive overview of all features.

New AMPA 12.2.1 Release

A new release of AMPA, build 12.2.1.0.68, is now available. This release works only with MAF 2.3 and JDeveloper 12.2.1. The focus of this release has been on enhanced Oracle MCS integration, but there is much more. Here is a list of the main new features:

  • Support for MCS Analytics in Offline Mode: You can register MCS analytics events in offline mode, AMPA will batch them up and sync them when you are online again.
  • Support for MCS Storage: AMPA now provides easy read and write access to MCS storage collections, including offline support, you can add or modify files in an MCS storage collection while offline, the files will be synced later when you are online again.
  • Support for MCS Device registration: you can register or unregister a device with MCS push notifications service using a simple Java method call.
  • Support for parallel REST calls: Background REST calls used to be executed sequentially to prevent multiple background threads from writing REST results to the SQLite DB. The code has now been re-organized to allow the REST calls to be executed in parallel while the DB write actions are still done sequentially. This behavior can be controlled using the new property enable.parallel.rest.calls in mobile-persistence-config.properties.
  • Choose Target Project for Java Generation: You can choose the target project for the data object classes and service classes generated by the AMPA wizard. By choosing the ApplicationController project as target, you can share the same instance of a class across features and you can access the classes in application lifecycle listener methods.
  • Support for ADF Business Components Describe: The Business Objects from Rest Service wizard now includes an option to use the ADF BC Describe metadata format to discover data objects and associated CRUD resources.
  • Various enhancements to Business Objects from Rest Service wizard: the wizard now includes a Runtime Options panel which allows you to set persistence options that previously could only be set by directly modifying the persistence-mapping XML file. In addition, the Data Object Attributes panel now includes the payload attribute name, and the Resource Details panel includes fields to set the “attributes to exclude” attribute and the option to delete local rows prior to executing the REST resource.
  • Global Synchronization of Offline Transactions: When performing a data sync operation, AMPA will now sync all pending transactions for all data objects. In previous versions, the data synchronization happened in the context of an entity CRUD service, and it only synchronized the data object of the entity CRUD service and its child data objects (if applicable). For backwards compatibility, the synchronize method on the entity CRUD service is still supported; however, it will now synchronize all data objects. As a result, the EL expression to check for pending data sync actions is no longer data object specific; you can now use the following expressions: #{applicationScope.ampa_hasDataSyncActions} and #{applicationScope.ampa_dataSyncActionsCount}. In addition, when navigating to the reusable data sync feature, you no longer have to set the data object (entity) class for which you want to see the pending sync actions.
  • Ability to Force Offline Mode: You can now force AMPA to behave as if the device is in offline mode while in reality the device is online. This might, for example, be handy if you want to batch up multiple transactions, and/or prevent network traffic when the device is on a low-bandwidth network connection. To force/unforce offline mode, you can call the boolean method oracle.ateam.sample.mobile.controller.bean.Connectivity.forceOffline, and/or include an application-scoped managed bean that uses this class and then use a setPropertyListener to set this boolean property using the expression #{Connectivity.forceOffline}. If you use the MAF User Interface Generator, this managed bean is automatically created, and the generated pages have a menu option to toggle force offline mode.
  • Easy Database Search on Multiple Attributes: A new method on DBPersistenceManager class allows you to pass in an attribute key-value map to search on. AMPA will generate the SQL SELECT statement using the = operator to match the attribute value and multiple attributes will be combined using the AND operator.

While this list is impressive, probably the best new feature is not mentioned here. If you believe in the saying “A framework is as good as its documentation”, then we have to admit that AMPA was a pretty lousy framework in the past…. With the new comprehensive Developer’s Guide we dare to say that AMPA has become an outstanding framework. A-Team has put a lot of effort into putting this guide together, so hopefully you will find it useful. And if it does not contain the information you are looking for, you can still post your AMPA questions on the MAF discussion forum. Please mention AMPA in the title of your post; this makes it easier for A-Team to identify the AMPA-related posts we should be answering.

Full release notes are available on the GitHub wiki.

AMPA Productization

Oracle has decided that AMPA will be productized and integrated into Oracle MAF as the Client Data Model (CDM). CDM will be fully supported by Oracle Support as part of Oracle MAF. The MAF release that includes CDM is expected later this calendar year (2016). The current AMPA 12.2.1 production release is the baseline for this integration. The first release of MAF that includes CDM will be very similar to AMPA 12.2.1; changes will be mostly limited to those imposed by productization requirements. Oracle will try to make the migration path from AMPA to CDM as smooth as possible, but some code changes (such as changes of package names) are inevitable. Oracle MAF product management will follow up later with a more detailed statement of direction.

 
