
Using Oracle Mobile Cloud Service with React JS


Introduction

Oracle’s Mobile Cloud Service (MCS) can be used with any client development tool, whether it is a mobile framework or a web-based framework. To speed up development for the most popular development tools, MCS comes with a Software Development Kit (SDK) for native Android, iOS and Windows development, as well as a JavaScript SDK for hybrid JavaScript-based development. Oracle’s A-Team has gained experience with the MCS JavaScript SDK in combination with various JavaScript toolkits. In a previous post, we discussed how to use the SDK when building Oracle JET applications. In this post we will share code samples, tips and best practices for building a web application using React JS. We assume you have a basic knowledge of React JS and jQuery. Check out their websites for tutorials if you are new to these technologies.
In a different post we explain how to use the MCS JS SDK in a hybrid mobile application using Ionic and Angular.

Main Article

In this article, we will explain how to connect to your custom REST APIs defined in MCS and how to leverage the various MCS platform services like storage and analytics. Note that all platform services can be accessed through the Oracle MCS REST API.
This article explains how to connect to MCS using the JavaScript SDK for Mobile Cloud Service, which abstracts authentication and the platform APIs.

 

Downloading and Configuring the JavaScript SDK

The JavaScript SDK for MCS comes in two flavours: one for JavaScript applications and one for Cordova applications. The MCS Cordova SDK is a superset of the MCS JS SDK as it provides a few more capabilities that depend on Cordova, such as methods for registering a device for push notifications. Since we are creating a web app, we choose the JavaScript SDK.

To download an SDK for MCS, login to your MCS instance and open the hamburger menu. Click on “Applications”. Click on the “SDK Download” link on the right hand side. This takes you to a page where you can select your target platform, and download the SDK.

[Screenshot: SDK download page in MCS]

After downloading the file, we unzip it and copy over the following files:

  • Copy mcs.js and mcs.min.js into a new mcs subfolder of your project
  • Copy the oracle_mobile_cloud_config.js file to the scripts folder.

We add the mcs.js file to the index.html file, above the existing script tags that include the React libraries.

 

We will create a separate JS class, McsService.js, in the scripts folder that will contain all the MCS-related code:

function McsService(){
  var mcs_config = {
    "logLevel": mcs.logLevelInfo,
    "mobileBackends": {
      "HR": {
        "default": true,
        "baseUrl": "https://mobileportalsetrial.yourdomain.mobileenv.us2.oraclecloud.com:443",
        "applicationKey": "a4a8af19-38f8-4306-9ac6-adcf7a53deff",
        "authorization": {
          "basicAuth": {
            "backendId": "e045cc30-a347-4f7d-a05f-4d285b6a9abb",
            "anonymousToken": "QVRFQU1ERVZfTUVTREVWMV9NT0JJTEVfQU5PTllNT1VTX0FQUElEOnByczcuYXduOXRlUmhp"
          }
        }
      }
    }
  };

  // initialize MCS mobile backend
  mcs.MobileBackendManager.setConfig(mcs_config);
  this.mbe = mcs.MobileBackendManager.getMobileBackend('HR');
  this.mbe.setAuthenticationType("basicAuth");
}

The structure of the mcs_config variable can be copied from the oracle_mobile_cloud_config.js file. This file includes all the declarative SDK configuration settings. Since we are using basic authentication in our app, we left out the configurations required when using OAuth, Facebook or Single Sign-On (SSO) authentication. The configuration is defined for a backend called HR. This name doesn’t have to match the name of the mobile backend defined in MCS, although it can if you wish. A description of all possible configuration settings can be found in the chapter JavaScript Applications in the MCS Developer’s Guide.

You can find the values of the mcs_config settings baseUrl, backendId and anonymousToken on the overview of your mobile backend in MCS:

[Screenshot: mobile backend settings showing the baseUrl, backendId and anonymousToken]

 

 

We could have done the SDK configuration directly in oracle_mobile_cloud_config.js as well, and added a reference to this file in index.html. The downside of that approach is that we “pollute” the global space with another global variable mcs_config. In addition, we prefer to make the McsService self-contained, including the required configuration settings. If you prefer to keep the configuration in a separate file, then it is better to create a JSON file that holds the config object and read the content of this file into your mcs_config variable within the McsService.js file.
After you have copied the mcs_config variable, you can remove the oracle_mobile_cloud_config.js file from your project.
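
As an illustration of that alternative, here is a minimal sketch that loads the configuration from a hypothetical scripts/mcs_config.json file using jQuery (which our sample app already includes). A synchronous request keeps the constructor simple; you could instead initialize the backend in the success callback and defer any MCS calls until then.

function McsService() {
  var self = this;
  // Read the SDK configuration from a separate JSON file (hypothetical name: scripts/mcs_config.json).
  $.ajax({
    url: "scripts/mcs_config.json",
    dataType: "json",
    async: false,
    success: function (mcs_config) {
      mcs.MobileBackendManager.setConfig(mcs_config);
      self.mbe = mcs.MobileBackendManager.getMobileBackend("HR");
      self.mbe.setAuthenticationType("basicAuth");
    }
  });
}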

 

Authenticating Against MCS

MCS basic authentication provides two ways to authenticate: a “named” authentication using a username and password, and a so-called anonymous authentication which uses the anonymousToken that we specified in the mcs_config variable. Anonymous authentication can be convenient during initial development when you have not yet set up a user realm for your mobile backend, or when you want to use another authentication mechanism for your app that is unrelated to MCS.

We first add functions to our mcsService to support both ways of authentication and to be able to logout:

McsService.prototype.authenticate = function(username,password,success,failure) {
  this.mbe.Authorization.authenticate(username, password
    , function(statusCode,data) {success(data)}
    , failure);
};

McsService.prototype.authenticateAnonymous = function(success,failure) {
  this.mbe.Authorization.authenticateAnonymous(
    function (statusCode, data) {
      success(data)
    }
    , failure);
};

McsService.prototype.logout = function(success,failure) {
  this.mbe.Authorization.logout(
    function(statusCode,data) {success(data)}
    , failure);
};

Now we can create a login component in React.JS:

var LoginButton = React.createClass({
  doLogin: function(){
    mcsService.authenticate(this.state.username,this.state.password,function(data){
        console.log("Success");
    },
    function(status,data){
      console.log("Error authenticating: " + data);
    })
  },
  render:function(){
    return (
      <button onClick={this.doLogin}>Login</button>
    )
  }
});

Also make sure you initialize the mcsService at the top of the app.js so the service is initialized only once:

var mcsService = new McsService();
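
If you want to try things out before wiring up the login component, a minimal sketch (using the authenticateAnonymous function we added above) could authenticate anonymously right after the service is created; the console messages are just for illustration:

// Anonymous authentication using the anonymousToken from mcs_config; convenient
// while no user realm has been set up for the mobile backend yet.
mcsService.authenticateAnonymous(
  function(data) {
    console.log("Anonymous authentication succeeded");
  },
  function(statusCode, data) {
    console.log("Error authenticating anonymously: " + statusCode);
  });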

Invoking the HR Custom API

For this demo, we have created a basic HR REST API to manage departments. The API looks like this:

  • GET /departments – List of departments with id and name attributes
  • POST /departments – Add a new department
  • GET /department/{id} – All department attributes and the list of employees in the department
  • PUT /department/{id} – Update a department
  • DELETE /department/{id} – Delete a department

 

To access our custom API endpoints through the SDK, we add the following function to our McsService object:

McsService.prototype.invokeCustomAPI = function(uri,method,payload,success,failure) {
  this.mbe.CustomCode.invokeCustomCodeJSONRequest(uri , method , payload
    , function(statusCode,data) {success(data)}
    , failure);
};
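
For example, adding a new department through the POST endpoint could look like the sketch below; the payload attributes (id and name) are an assumption based on the endpoint descriptions above:

// Create a new department via POST /departments of our custom HR API.
mcsService.invokeCustomAPI("hr/departments", "POST", {id: 280, name: "Innovation"},
  function(data) {
    console.log("Department created");
  },
  function(statusCode, data) {
    console.log("Error creating department: " + statusCode);
  });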

Next step is to bind an endpoint to a component:

Binding endpoints to components

If we want to create a component that lists all the departments, we want to bind the component to the corresponding endpoint.

We first define the component that will contain the data and render the global list:

var DepartmentBox = React.createClass({
  getInitialState:function(){
    return {data:[]}
  },
  componentDidMount:function(){
    mcsService.invokeCustomAPI(this.props.url,"GET",null,function(data){
      this.setState({data:data});
    }.bind(this))
  },
  render: function(){
    return (
      <div className="departmentBox">
        <h1>Departments</h1>
        <DepartmentList data={this.state.data}/>
        <DepartmentDetails/>
      </div>
    );
  }
});

The componentDidMount function is a React lifecycle method that is called once per component, when the component is attached to the DOM. This ensures that the data is fetched asynchronously and as early as possible.

Once the data is retrieved, we assign it to the component state (line 7) which will trigger a re-render of the component.
Notice that the actual URL is not hard-coded but provided in the properties of the component. We pass it in during the initial render:

ReactDOM.render(
  <DepartmentBox url="hr/departments" />,
  document.getElementById("content")
);

This technique allows you to easily see what data will be presented in which components.
By making the URL a property instead of hard-coding it, you also increase the reusability of the component.

The next component we need is the one that will actually display the department list:

var DepartmentList = React.createClass({
  render: function(){
    var depNodes = this.props.data.map(function(dep){
      return (
        <Department key={dep.id} dep={dep}/>
      )
    })
    return (
      <div className="departmentList">
        {depNodes}
      </div>
    );
  }
});

And the department component:

 

var Department = React.createClass({
  render:function(){
    return (
      <div className="department">
        <h4>{this.props.dep.name}</h4>
      </div>
    )
  }
});

Using the MCS Storage Service

MCS includes a file storage service where you can store or cache mobile application objects, such as text, JSON, or images. You can define a storage collection and then add files to such a collection. For our sample application, we store the employee images in an MCS collection named HR. The storage ID of each image includes a reference to the employee ID, so we can easily link each employee with his/her photo in the HR collection:

[Screenshot: the HR storage collection with the employee images]

You can upload files to a storage collection using the MCS UI as shown above. When you use the MCS UI, the ID of the storage object is system-generated. This is inconvenient if you want to associate MCS storage objects with the data of your systems of record that you expose through MCS. Fortunately, if you use the PUT method of the storage REST API, you can add new files and determine the storage ID yourself. For example, you can use a CURL command to upload an image to a storage collection like this:

curl -i -X PUT  -u steven.king:AuThyRJL!  -H "Oracle-Mobile-Backend-ID:bcda8418-8c23-4d92-b656-9299d691e120" -H "Content-Type:image/png"  --data-binary @FayWood.png https://mobileportalsetrial1165yourdomain.mobileenv.us2.oraclecloud.com:443/mobile/platform/storage/collections/HR/objects/EmpImg119

To use the storage service in our app, we first add functions to read storage objects in our McsService:

McsService.prototype.getCollection = function(collectionName,success,failure) {
  this.mbe.Storage.getCollection(collectionName, null
    , function(collection) {success(collection)}
    , failure);
};

McsService.prototype.getStorageObjectFromCollection = function(collection,storageId,success,failure) {
  collection.getObject(storageId
    , function(storageObject) {success(storageObject)}
    , failure
    ,'blob');
};

McsService.prototype.getStorageObject = function(collectionName,storageId,success,failure) {
  //  This is the officially documented way, but fires redundant REST call:
  //  return getCollection(collectionName).then( function (collection) {
  //      return getStorageObjectFromCollection(collection,storageId)
  //  })
  var collection = new mcs._StorageCollection({id:collectionName,userIsolated:false},null,this.mbe.Storage);
  return this.getStorageObjectFromCollection(collection,storageId,success,failure);
};

The “official” way for accessing a storage object is through its collection. This can be done by calling the getCollection method on the Storage object of the SDK. This returns the collection object which holds the metadata of all the objects inside the collection. On this collection object we can then call methods like getObject, postObject and putObject.  In our app, we want to prevent this additional REST call since we are not interested in the collection as a whole. This is why in line 19 we programmatically instantiate the collection object without making a REST call. As indicated  by the underscore, the _StorageCollection constructor function was intended to be “private” but this use case has been identified as valid and it will be made public in the next version of the SDK.

To be able to show the images with the employees, we first define an Employee component that renders the employee name and image:

var Employee = React.createClass({
  render:function(){
    return (
      <div>
        <h5>{this.props.firstName} {this.props.lastName}</h5>
        <img src={this.state.empImg}/>
      </div>
    )
  }
});

Just as we did with the departments, we can use the componentDidMount on that component to load the employee image in an asynchronous way:

var Employee = React.createClass({
  getInitialState:function(){
    return {empImg:null}
  },
  componentDidMount:function(){
    mcsService.getStorageObject("HR","EmpImg"+this.props.empId,function(data){
      var url = URL.createObjectURL(data);
      this.setState({empImg:url});
    }.bind(this))
  },
  render:function(){
    return (
      <div>
        <h5>{this.props.firstName} {this.props.lastName}</h5>
        <img src={this.state.empImg}/>
      </div>
    )
  }
});

Using MCS Analytics Service

The MCS Analytics platform service is a very powerful tool to get detailed insight into how your mobile app is used. From your mobile app you can send so-called system events like startSession and endSession to get insight into session duration, location, etc. Even better, you can send custom events to get very specific information about how the app is used, for example which pages are accessed for how long, which data is viewed the most, etc.

To support the MCS analytics events, we add the following functions to our McsService:

McsService.prototype.logStartSessionEvent = function() {
  this.mbe.Analytics.startSession();
}

McsService.prototype.logEndSessionEvent = function() {
  this.mbe.Analytics.endSession();
}

McsService.prototype.logCustomEvent = function(eventName, properties) {
  var event = new mcs.AnalyticsEvent(eventName);
  event.properties = properties;
  this.mbe.Analytics.logEvent(event);
}

McsService.prototype.flushAnalyticsEvents = function() {
  this.mbe.Analytics.flush();
}

When you log an event, the event is not yet sent to the MCS server. You can batch up multiple events and then flush them to the server, which is more efficient because all events are then sent in one REST call. If you log a custom event and you didn’t log a startSession event before, the SDK will automatically create a startSession event first.
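
As an illustration, a minimal sketch of logging the system events could look like this; where exactly you hook these calls (here plain window load/unload handlers) depends on your app:

// Log a session start when the app is loaded...
window.addEventListener("load", function() {
  mcsService.logStartSessionEvent();
});

// ...and log the session end and flush all batched events when the user leaves.
window.addEventListener("beforeunload", function() {
  mcsService.logEndSessionEvent();
  mcsService.flushAnalyticsEvents();
});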

In our app, we are going to log a custom event when the user selects a department to display its details. Therefore we add an onClick event on the Department component:

var Department = React.createClass({
  clicked: function(){
    mcsService.logCustomEvent('ViewDepartment',{user:this.state.userName,department:this.props.dep.name});
    mcsService.flushAnalyticsEvents();
  },
  render:function(){
    return (
      <div className="department">
        <h4><a onClick={this.clicked}>{this.props.dep.name}</a></h4>
      </div>
    )
  }
});

The name of the event is ViewDepartment and we send the user name and department name as properties with the event. If you check the REST request payload that is sent to MCS, you can see how the SDK eases your life: the required context object with device information, the startSession event and the custom event itself are all included in the payload:

[Screenshot: analytics REST request payload]

In the MCS user interface, we can navigate to the custom events analytics page, and get some nice graphs that represent our ViewDepartment event data:

[Screenshot: custom event analytics graphs for the ViewDepartment event]

Making Direct REST Calls to MCS

At the end of the day, every interaction between a client app using the MCS SDK and MCS results in a REST call being made. The MCS SDK provides a nice abstraction layer which makes your life easier and can save you a lot of time as we have seen with the payload required to send an MCS analytics event. However, there might be situations where you want to make a direct REST call to MCS, for example:

  • to call your custom API with some custom request headers
  • to get access to the raw response object returned by the REST call
  • to call a brand new REST API not yet supported by the JavaScript SDK, like the Locations API.

In such a case, the SDK can still help you by providing the base URL and the Authorization and oracle-mobile-backend-id HTTP headers. Here are two functions you can add to your mcsService to expose this data:

McsService.prototype.getHttpHeaders = function() {
  return this.mbe.getHttpHeaders();
}

McsService.prototype.getCustomApiUrl = function(customUri) {
  return this.mbe.getCustomCodeUrl(customUri);
}
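
As an illustration, a direct GET call to our departments endpoint with an extra request header might look like the sketch below. It uses jQuery's $.ajax and assumes that getHttpHeaders returns a plain object of header name/value pairs; the custom header name is made up:

var headers = mcsService.getHttpHeaders();
headers["My-Custom-Header"] = "some value"; // hypothetical custom request header

$.ajax({
  url: mcsService.getCustomApiUrl("hr/departments"),
  type: "GET",
  headers: headers,
  success: function(data, textStatus, jqXHR) {
    // jqXHR exposes the raw response, e.g. jqXHR.getAllResponseHeaders()
    console.log("Departments: " + JSON.stringify(data));
  },
  error: function(jqXHR) {
    console.log("Error calling MCS: " + jqXHR.status);
  }
});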

Going mobile with React

So far, this article has shown how to use React JS in a web-based application. However, with some minor tweaks, we can easily build hybrid applications using Cordova and React JS.

By using a framework like Reapp.io it is very easy to combine Cordova and React.

When using Cordova to build a hybrid application with React, you just have to make sure that you are using the Cordova-based JavaScript SDK. This gives you even more functionality than the JavaScript SDK used in this article, such as support for push notifications.

Conclusion

The MCS JavaScript SDK provides an easy and fast way to connect to your custom APIs defined in MCS as well as the various MCS platform services. In this article we provided tips and guidelines for using the MCS authorization, storage and analytics services in the context of a React app. If you are using another JavaScript toolkit, most of the guidelines and code samples (with some minor tweaks) should still be useful. If you are using Oracle JET, we provide some Oracle JET-specific guidelines.

Keep an eye on Oracle A-Team Chronicles home page, or if you are specifically interested in articles around Oracle MCS, there is also a list of MCS-related articles.

 

 


HTTPS and trust in Oracle Public Cloud


The shift to cloud computing offers a huge number of benefits, but also introduces some potential risks; the most obvious of these is the need to enable integrations – and by implication, the need to transmit sensitive data – across public networks. Fortunately, we already have a pretty good set of standards and techniques for doing this, with the HTTPS (Secure HTTP) protocol taking care of the grunt work of encryption and server authentication at the transport layer. The purpose of this article is to explore the usage of HTTPS across the various services that comprise the Oracle Public Cloud.

A brief re-introduction to HTTPS

There is nothing new or particularly special about the HTTPS protocol. As with most secure protocols, it’s really just the combination of a few building blocks: HTTP as the application-level protocol for hypertext transfer, Transport Layer Security (TLS) as the underlying transport-level security mechanism and a healthy dose of public key cryptography to enable server authentication and key exchange. Here’s a really good primer, in case you want to learn more.

TLS authentication can be either one-way (where only the server presents a certificate) or two-way (where the client also has a certificate to prove its identity). For the purposes of this discussion, though, we will stick to one-way TLS, since that is what is used in the vast majority of OPC use cases.

There are two major use cases that are important here, each involving a different type of HTTPS client. If the use case is a direct front-end (UI) interaction with an OPC service, then the client is a web browser. In the case of a back-end (server-to-server) call, it’s another service (whether on-premise or in the cloud) acting as an HTTP client in the context of a service invocation.

During the TLS handshake, the server sends a certificate to the client, containing some attributes to identify itself as well as a public key. The intention here is that the client should inspect the certificate in order to verify the server’s identity, but the whole process only works if the client knows that it can trust the certificate in the first place. This is why a certificate will always be accompanied by a digital signature that asserts its validity. The client makes a decision as to whether it trusts the certificate based on whether it trusts the entity that signed the certificate (and hence vouched for the server’s identity). Each client starts out by trusting a number of root CA certificates, and any other certificate signed directly or indirectly by one of these trusted CA certificates will itself be trusted. All of the well-known CAs are included in this pre-configured trusted list, meaning you can easily and quickly purchase a certificate from one of them and know that you won’t have any “trust issues” in the future.

The other option is to roll your own; to use commonly-available utilities such as OpenSSL to generate self-signed certificates. In this case, there is no mutually-trusted 3rd party between client and server, hence you’ll need to take steps to configure each client to explicitly trust the certificate chain that the server presents. I say “chain” because the typical way to do this is to generate a single self-signed root certificate and then use that certificate to sign all of your server certificates. That ends up being a far more effective way to manage the explicit configuration of trust, since you don’t need to import each individual server certificate into your client’s trust store. Even with this latter approach, trying to manage explicit trust of self-signed certificates is a nightmare that does not work at all well when the clients are web browsers. For securing point-to-point HTTP communication between two services, though, it becomes more feasible (and in some ways more cost effective) to use self-signed certificates and explicitly-configured trust.

So what about Oracle Public Cloud?

Oracle’s Public Cloud offering is, of course, vast and comprises a plethora of useful services, divided into a number of different major categories (SaaS, PaaS and IaaS). Since the main focus of this article is to explore the intricacies of HTTPS in Oracle Public Cloud – and specifically how a client (be it browser or service consumer) can establish a secure connection to an OPC end-point – it follows that we need to understand which of the above certificate models are used.

There are two broad categories here:

SaaS and Non-Compute based PaaS

All of Oracle’s Software as a Service (SaaS) offerings, including Sales Cloud, HCM Cloud, Marketing Cloud, ERP Cloud and many others, fall into this first category. Also in this category are those Platform as a Service (PaaS) offerings that are non-compute based, including things like Integration Cloud Service, SOA Cloud Service, Mobile Cloud Service and Document Cloud Service.

Note: Non-compute based PaaS offerings are those that do not allow access to the underlying virtual machines. These offerings are accessed and configured only through web-based consoles linked to from the OPC “My Services” console and all HTTPS end-points are served through a tenant-specific sub-domain of the main OPC URL space.

Services in this category are configured to use a certificate signed by a trusted 3rd-party CA (VeriSign). Looking at a specific example of an ICS tenant URL and following the certificate chain, we see that it looks as follows:

Connected to peer host  integrationtrial7892-caoracletrial93012.integration.us2.oraclecloud.com
Retrived 3 certificates
Certificate 1
Issued to: CN=*.integration.us2.oraclecloud.com, O=Oracle Corporation, L=Redwood Shores, ST=California, C=US
Issued by: CN=Symantec Class 3 Secure Server CA - G4, OU=Symantec Trust Network, O=Symantec Corporation, C=US

Certificate 2
Issued to: CN=Symantec Class 3 Secure Server CA - G4, OU=Symantec Trust Network, O=Symantec Corporation, C=US
Issued by: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU="(c) 2006 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US

Certificate 3
Issued to: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU="(c) 2006 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US
Issued by: OU=Class 3 Public Primary Certification Authority, O="VeriSign, Inc.", C=US

The implication is that virtually any HTTPS client attempting to connect to an end-point presented by a service within this category will be able to do so without requiring any additional configuration. Trust is pre-configured since the VeriSign CA certificate above is included in most client trust stores by default.

IaaS and Compute based PaaS

This category includes all of the Infrastructure as a Service (IaaS) offerings within the Oracle Compute Cloud, as well as those PaaS services – such as Java Cloud Service and Database Cloud Service – that expose and provide access to the underlying Oracle Compute virtual machines.

Instances of these service offerings will not be provisioned along with automatically-trusted certificates, mainly due to the “bring your own host name” policy that applies to such instances. When an instance of, say, JCS is provisioned, the customer is provided with a public IP address and would then need to assign a host name (within their own domain) to that public IP. Since the certificate used by whatever web tier is deployed to that instance is always tied to the host name, it follows that the customer must obtain and install their own certificate as well. Again here, there are three options:

1. Purchase a certificate from a 3rd-party certificate authority. This is the simplest and most recommended option (although obviously not the cheapest) since the certificate obtained in this way would be automatically trusted by clients. There will most likely be no need for any further configuration in order to enable a trusted HTTPS connection to the service from either web browser or REST clients.

2. Use your own self-signed certificate authority. This option saves on the cost of purchasing a certificate, but does result in the need for explicitly configuring trust on each client. In this case, your organisation essentially becomes its own CA. Clients will need to be configured to trust the self-signed root certificate (this is done by importing that certificate into each browser and HTTP client trust store), but since all server certificates are chained back to this one self-signed certificate, there is no further configuration required on a per server basis.

3. Generate a self-signed certificate per server. This option requires explicit trust configuration of every client for every server. It is perhaps only suited for development environments due to the complexity involved.

We’ve spoken a lot about needing to explicitly configure certificate trust when self-signed certificates are used. At a high-level, this process will involve the following steps:
1 – EITHER generate a self-signed server certificate, OR generate a key-pair and a certificate request, which your CA will use to issue a server certificate
2 – install the certificate and configure the server to use that certificate for HTTPS
3 – obtain the root CA certificate and import this into the client’s trust store
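
To make step 3 concrete for a back-end (server-to-server) client, here is a minimal Node.js sketch of explicitly trusting a self-signed root CA when calling an HTTPS endpoint; the host name and the rootCA.pem file are placeholders:

var https = require("https");
var fs = require("fs");

var options = {
  hostname: "myservice.example.com",  // placeholder host secured with a self-signed chain
  port: 443,
  path: "/",
  method: "GET",
  ca: fs.readFileSync("rootCA.pem")   // explicitly trust our own root CA certificate (PEM)
};

https.request(options, function(res) {
  console.log("TLS handshake succeeded, HTTP status: " + res.statusCode);
}).on("error", function(err) {
  // Without the ca option this would fail with an error such as SELF_SIGNED_CERT_IN_CHAIN.
  console.log("TLS/connection error: " + err.message);
}).end();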

In a forthcoming post, I will share more explicit details regarding the configuration of some popular OPC service combinations to enable trust where it is not pre-configured.

Building Oracle ATG Commerce with Maven


Introduction

This article will provide an overview of using Maven to build code and create modules for Oracle ATG Commerce.

A sample that builds the Commerce Reference Store for Oracle ATG Commerce 11.2 is available at https://github.com/oracle/atg-crs-11.2-maven

 

Understanding Oracle ATG Commerce modules, and their relationship to Maven

In order to effectively use Maven, or any other build tool, with Oracle ATG Commerce, it is important to understand how ATG modules are structured and used by the product.

ATG Module layout

Assume we have a module named ATeamSample. The following is an example directory structure, starting at the root of the ATG installation ($ATG_ROOT)

  • $ATG_ROOT/ATeamSample
    • config – this is where configuration/properties files are located
    • lib – this is where class files, jar files, and other items meant for the classpath are located
    • META-INF – this is where the module MANIFEST.MF exists, and is the core of what defines an ATG module
      • MANIFEST.MF – The MANIFEST.MF file is what tells the ATG product that this is a module, and defines how it is structured

The MANIFEST.MF file contains entries that are read by the ATG product, and tell the product how to load this module.

Here is an example MANIFEST.MF for our ATeamSample module

ATG-Product: ATeamSample
ATG-Class-Path: lib/classes.jar
ATG-Config-Path: config/config.jar
ATG-Required: DAS

The purpose of each line is as follows:

  • ATG-Product – This defines the name this module will be referred to by in the ATG product. When you pass modules to runAssembler, or load modules in your application server, the MANIFEST is parsed to look for a matching ATG-Product value
  • ATG-Class-Path – This tells the product where to load classpath artifacts from
  • ATG-Config-Path – This tells the product where to load configpath artifacts from
  • ATG-Required – This tells the product what other ATG modules must be loaded for this module to function properly. This line is used to calculate module dependencies.

 

There are additional fields that can be added to the MANIFEST.MF file. Refer to the product documentation for more details.

 

Maven layout

The structure of a Maven project is different from the layout used for ATG modules.

A sample layout of a Maven project to be used with Oracle ATG Commerce is as follows:

  • src/main – this is the root of all other folders
    • module – contains the META-INF/MANIFEST.MF used by ATG modules
    • config – contains the ATG module configs, which will be jar’d and copied to your module’s config directory in ATG_ROOT
    • configlayers – contains config layers that runAssembler will pull in if they are specified
    • java – contains source code, which will be compiled, then jar’d; the jar is copied to your module’s lib directory under ATG_ROOT
    • liveconfig – contains properties for your ATG liveconfig layer
    • j2ee-apps – These trees contain war files

Mapping Maven to an ATG module

Using the EStore module in the Commerce Reference Store sample application, the following shows a mapping of Maven directories to ATG Module directories.

The sample build process available on github shows you how to automatically compile, jar, and move files from Maven to the appropriate ATG directory structure.

[Image: mapping of Maven directories to ATG module directories]

Best practices with Maven

The following best practices are demonstrated in the sample build process available on github.

Use a parent pom to control artifacts

Define artifact dependencies in as few locations as possible. Ideally, a single parent pom will define the specific version of the artifacts all your modules utilize. This gives you a central location to control what version of which artifact is being used, and helps prevent clutter across poms that can result in conflicting dependencies, like pulling in multiple versions of the same library.

Tag libraries

Pull tag libraries in at build time with Maven.

Instead of including a tag library in your Maven builds, and possibly having the exact same tag library checked in to your source repository in multiple locations, you can pull the tag libraries directly into your build at build time.

This allows you to centrally manage your tag libraries in a single location, and easily update the version of a tag library your build is using.

Here is a sample scenario seen on actual projects in the past.

Multiple custom ATG modules were checked into source control. The DSP tag library was used by several of these modules. This resulted in dspjspTaglib1_0.jar and dspjspTaglib1_0.tld being checked in to source control multiple times, in multiple locations. There were actually many product tag libraries in multiple locations, all checked in the same way.

When it came time to upgrade/patch the ATG product, it was necessary to manually locate every product tag library checked into source control, and manually update it to the new version. It took several iterations, and a good deal of time to get them all updated.

By allowing Maven to pull in the tag libraries you want at build time, you add your new taglib to your Maven repository, update your pom file to point at the new version, and rebuild your code. The new version is automatically pulled in.

Third party libraries

Similar to the tag libraries, 3rd party libraries can be pulled in at build time.

In the Maven to ATG mapping image above, commons-codec-1.3.jar is shown. Instead of checking this into source control, and embedding it in your Maven module, it can be pulled in at build time when it is needed.

Identify your build version

Custom fields can be added to the ATG Modules MANIFEST.MF

These fields are visible in /dyn/admin, and can optionally be accessed through JSPs in your running application.

Maven can be used to automatically add items like a build number, and/or build timestamp to your MANIFEST when you build your code.

This is useful to keep track of exactly what version of code is running in what environment.

A note on using IDEs, such as Eclipse.

When following the above suggestions of using a parent pom and allowing Maven to manage dependencies, the same process will work in an IDE. You are not losing development functionality by allowing Maven to manage your dependencies.

 

Integration with Oracle Developer Cloud Service

The Maven samples in GitHub and the best practices mentioned here also translate to Oracle’s Developer Cloud Service.

By following the layout and dependency management techniques outlined here, you can easily move a local project to Developer Cloud Service.

Integrating Oracle Mobile Cloud Service with Oracle IoT Cloud Service


Introduction

The Oracle Internet of Things Cloud Service (IoTCS) allows you to set up an integration with the Oracle Mobile Cloud Service (MCS) to process and analyze the data received from your mobile devices that are connected to IoTCS. This article explains how you can create a custom API in MCS that exposes this device data so you can easily build a mobile or web app on top of it.

Main Article

This article discusses two techniques to store the device data in MCS:

  • Using the MCS storage service
  • Using the MCS database service

While only the first technique is documented in the IoTCS Developer’s Guide, you will see that the second technique is actually easier to implement and also results in better performance.

Using the MCS Storage Service

This technique is partially explained in the chapter Integrating Oracle Mobile Cloud Service with Oracle IoT Cloud Service of the IoTCS developer’s guide. Please read this chapter first if you are new to IoTCS and MCS and want to learn how to set up an MCS integration in IoTCS, and a mobile backend and storage collection in MCS. The key point is that the URL property in the Connection tab of your MCS integration points to the MCS storage API POST endpoint:

[Screenshot: MCS integration Connection tab with the URL pointing to the storage API POST endpoint]

IoTCS will call this endpoint to add a JSON file with the device message to the specified MCS storage collection (named “IoT” in the above example). If you go to the storage collection in the MCS web user interface, you can see how the collection is populated with these JSON files:

[Screenshot: the IoT storage collection populated with JSON message files]

The content of each JSON file will look similar to this (actual payload data attributes will vary based on device and message type):

[
  {
    "id": "6748cf3a-a2cc-467e-a823-b1761bcb3b3f",
    "clientId": "e6ce7146-e326-4628-bebd-fda7a69e07f4",
    "source": "AAAAAAQ4EO8A-AM",
    "destination": "",
    "priority": "HIGHEST",
    "reliability": "BEST_EFFORT",
    "eventTime": 1468583693714,
    "sender": "",
    "type": "ALERT",
    "properties": {},
    "direction": "FROM_DEVICE",
    "receivedTime": 1468583685315,
    "sentTime": 1468583689020,
    "payload": {
      "format": "urn:com:oracle:iot:device:hvac:alert:unabletoconnect",
      "description": "Unable to connect alert",
      "severity": "SIGNIFICANT",
      "data": {
        "unable_to_connect": true
      }
    }
  }
]

So far so good, we have all the messages stored in MCS. The next step is to create a custom API to expose these device messages and optionally apply some filtering, aggregations and/or transformations as needed by the client app that you want to build on top of this API.

If you are new to building a custom API with MCS, you might want to check out the article series Creating a Mobile-Optimized API Using MCS.

We create a simple iot custom API, with one GET endpoint /messages:

[Screenshot: the iot custom API with the GET /messages endpoint]

After we have downloaded the implementation scaffold we are ready to implement the endpoint. Now things become less trivial: we need to loop over all the files in our IoT storage collection, retrieve the content of each file, and merge that together into one JSON array of messages that we return as the response. (In reality you might want to apply some additional filters, aggregations and transformations, but that is beyond the scope of this article.)

Let’s start with checking out the section Calling MCS APIs from Custom Code in the MCS developer’s guide. After some general info that applies to all MCS API’s, there is a specific section on Accessing the Storage API from Custom Code. We can learn from this section that we first need to call storage.getAll to get the metadata of all objects (JSON files in our case) in the storage collection. We can then loop over the result array of this call and make a call to storage.getById to get the content of each file. Every REST call in MCS is made in an asynchronous way and to speed up performance, we should make all the storage.getById calls in parallel, and once the last call is finished, merge the results of each call.

A so-called promise provides access to the result of such an asynchronous request, and every JavaScript promise library includes functionality to make multiple asynchronous requests in parallel and then do some processing when all requests are finished. MCS internally uses the bluebird promise library and we recommend using the same library for your custom API implementations. To install this library, go to the root directory of your custom API implementation that was created when unzipping the scaffold zip file (this directory should have a file named package.json). In this directory, execute the following command:

npm install bluebird --save

This command creates a subdirectory called node_modules which in turn contains a directory named bluebird. We are now ready to code the implementation in our main JavaScript file. Here it is:

var Promise = require("bluebird");

module.exports = function (service) {
    service.get('/mobile/custom/iot/messages', function (req, res) {
        req.oracleMobile.storage.getAll("IoT").then(
          function (result) {
              var items = JSON.parse(result.result).items;
              var promises = [];
              items.forEach(function (item) {
                  var promise = req.oracleMobile.storage.getById("IoT", item.id, {outType: 'json'});
                  promises.push(promise);
              });
              return promises;
          })
          .then(function (promises) {
              return Promise.all(promises);
          })
          .then(function (result) {
              var itemsContent = [];
              result.forEach(function (result) {
                  itemsContent.push(result.result);
              });
              res.send(200, itemsContent);
          })
          .catch(function (err) {
              res.send(500, err);
          })
    });
};

If you are new to the concept of promises this code might look a bit intimidating, but we will explain line by line what is going on:

  • at line 1 we make the bluebird promise library available for use in our JavaScript file
  • at line 5 we make the REST call to get all metadata of all files in the collection
  • at lines 7-12 we process the response from the storage.getAll call and create an array of promises where we use the id of the file included in the metadata to construct the proper storage.getById promise (every MCS REST call returns a promise)
  • at line 13, we return the array of promises, so it is passed in as argument into the next then statement at line 15.
  • at line 16, we execute all storage.getById calls in parallel using the Promise.all command.
  • at lines 19-22 we loop over the result array produced by the Promise.all command. Each result includes the complete REST response, not just the actual file content. To get the file content, we need to get the value of the result property of the REST response, which is why we push result.result onto the array that we send as the response at line 23.
  • at lines 25-27 we catch any unexpected error and set the error message as the response

That’s it, if we now call this endpoint using the MCS tester page it will return an array of all device messages together.

Using the MCS Database Service

The MCS integration functionality in IoTCS is not really aware that it is calling the MCS storage API through the URL property. All it knows is that it needs to call this REST endpoint with the POST method, include the mobile backend id request header parameter, and send the content of the device message in the request body. In other words, we can also provide a custom API endpoint (or even a non-MCS endpoint that would simply ignore the mobile backend id request header param) that supports the POST method and then write our own logic to store the message content in a database table using the MCS database API.

First, we need to create the database table that will hold all the device messages. We can do this by navigating to the database management API pages in the MCS web interface. The database management API is a bit hard to find. It is included at the bottom of the API page where you have a “film strip” of Platform APIs. Many other pages have the same platform APIs film strip, but these other pages do not include the database management API. So, from the MCS dashboard page, click on APIs, scroll to the bottom, then browse to the right in the film strip and the database management icon should appear:

[Screenshot: the Database Management API in the platform APIs film strip]

We then click on the POST Create a Table link, which brings us to a page where we enter “id” in the Oracle-Mobile-Extra-Fields and the following JSON payload in the body field to create the table:

{
  "name" : "IOT_Messages",
  "columns": [
    {
      "name": "content", "type": "string"
    }
  ]
}

Then we enter authentication details, choose a mobile backend and click the Test Endpoint button which will create a very simple table with just one column that will hold the message content. In our custom API we now create the same /messages endpoint as we did when using the storage service, but this time we add both a GET method and a POST method:

service.post('/mobile/custom/iot/messages', function (req, res) {
    var messages = req.body;
    var rows = [];
    messages.forEach(function (message) {
        rows.push({content: JSON.stringify(message)});
    })
    req.oracleMobile.database.insert('IOT_Messages', rows).then(
      function (result) {
          res.send(result.statusCode, result.result);
      },
      function (error) {
          res.send(500, error.error);
      });

});

service.get('/mobile/custom/iot/messages', function (req, res) {
    req.oracleMobile.database.getAll("IOT_Messages")
      .then(function (result) {
          var rows = JSON.parse(result.result).items;
          var messages = [];
          rows.forEach(function (row) {
              messages.push(JSON.parse(row.content));
          })
          res.send(result.statusCode, messages);
      })
      .catch(function (error) {
          res.send(500, error.error);
      });
});

 

In the POST method we loop over the array sent as request body (which typically only contains one message), then stringify the JSON message, and create a row JSON object with the content attribute value set to the stringified device message. Then we call the database.insert method to insert the rows array.

The GET method has become much simpler compared to the implementation using the storage service, we now need just one REST call to retrieve all the rows from the database table. We loop over all rows, and create an array of the values of the content column. We convert the stringified message back to JSON to prevent a response payload that includes escape characters for all quotes.

The last step is to configure IoTCS to use our custom API rather than the storage API. This is as simple as changing the URL field in the Connection tab:

[Screenshot: MCS integration Connection tab with the URL pointing to the custom API endpoint]

Conclusion

Both the MCS storage API and the MCS database API can be used to store device messages. When using the storage API, you do not have to write custom code to store the messages, but the custom code to retrieve the messages is much more complex and involves a separate REST call for each message. A-Team recommends using the database API because the code is simpler and requires only one REST call to retrieve all messages sent by IoTCS.

 

Oracle Service Cloud – Outbound Integration Approaches


Introduction

This blog is part of the series of blogs the A-Team has been running on Oracle Service Cloud (Rightnow).

In the previous blogs we went through various options for importing data into Service Cloud. In this article I will first describe two main ways of subscribing to outbound events, as data is created/updated/deleted in Rightnow. These notifications are real-time and meant only for real-time or online use-cases.
Secondly, I will briefly discuss a few options for bulk data export.

This blog is organized as follows :

  1. Event Notification Service (ENS) – The recently introduced mechanism for receiving outbound events
    • a. Common Setup Required – for using ENS
    • b. Registering a Generic Subscriber with ENS
    • c. Using Integration Cloud Service – the automated way of subscribing to ENS
  2. Rightnow Custom Process Model(CPM) – The more generic, PHP-cURL based outbound invocation mechanism
  3. Bulk Export
    • a. Rightnow Object Query Language (ROQL) and ROQL based providers
    • b. Rightnow Analytics Reports
    • c. Third-party providers

1. The Event Notification Service

Since the May 2015 release, Rightnow has a new feature called the Event Notification Service, documented here.
This service currently allows any external application to subscribe to Create/Update/Delete events for Contact, Incident and Organization objects in Service Cloud. More objects/features may be added in upcoming releases.

I will now demonstrate how to make use of this service to receive events. Essentially there are two ways: using the Notification Service as is (the generic approach), or via Integration Cloud Service (ICS).

a. Common Setup

In order to receive event notifications, the following steps have to be completed in the Rightnow Agent Desktop. These steps need to be completed for both the generic and the ICS approaches below.

  1. In the Agent Desktop go to Configuration -> Site Configuration-> Configuration Settings. In the Search page that comes up, in the ‘Configuration Base’ section select ‘Site’ and click Search.
  2. In the ‘Key’ field enter ‘EVENT%’ and click Search.
  3. Set the following keys:
    • EVENT_NOTIFICATION_ENABLED – Set it to ‘Yes’ for the Site. This is the global setting that enables ENS.
    • EVENT_NOTIFICATION_MAPI_USERNAME – Enter a valid Service Cloud username.
    • EVENT_NOTIFICATION_MAPI_PASSWORD – Enter the corresponding password.
    • EVENT_NOTIFICATION_MAPI_SEC_IP_RANGE – This can be used for specifying whitelisted subscriber IP Addresses. All IPs are accepted if kept blank.
    • EVENT_NOTIFICATION_SUBSCRIBER_USERNAME – Enter the Subscriber service’s username. ENS sends these credentials as part of the outgoing notification, in the form of a WS-Security Username-Password token.
    • EVENT_NOTIFICATION_SUBSCRIBER_PASSWORD – Enter the password.

[Screenshot: the event notification configuration settings in Agent Desktop]

b. Registering a Generic Subscriber

Now that the Event Notifications have been enabled, we need to create a subscriber and register it. The subscriber endpoint should be reachable from Rightnow, and in most cases any publicly available endpoint should be good.

For the purpose of this blog I defined a generic subscriber by creating a Node.js based Cloud9 endpoint accessible at https://test2-ashishksingh.c9users.io/api/test . It’s a dummy endpoint that accepts any HTTP POST and prints the body on the Cloud9 terminal. It doesn’t require any authentication either.
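
For reference, a minimal sketch of such a dummy subscriber, using Node.js with Express and the body-parser module (not the exact code running on Cloud9, just an illustration):

var express = require("express");
var bodyParser = require("body-parser");

var app = express();
// ENS posts a SOAP/XML message, so capture the raw body as plain text.
app.use(bodyParser.text({ type: "*/*" }));

// Accept any HTTP POST on /api/test and print the notification payload to the console.
app.post("/api/test", function(req, res) {
  console.log("Received event notification:");
  console.log(req.body);
  res.status(200).end();
});

app.listen(8080, function() {
  console.log("Dummy ENS subscriber listening on port 8080");
});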

In order to register this endpoint, the following steps must be followed:

  1. Rightnow manages subscriptions by using an object called ‘EventSubscription’. By instantiating this object an ‘endpoint’ can be registered as a subscriber, to listen to an object (Contact/Organization/Incident) for a particular operation (Create/Update/Delete). The object also tracks the username/password to be sent out to the endpoint as part of the notification.
  2. In order to create an EventSubscription object the usual Connect Web Services Create operation can be used. Below is a sample XML request payload for the Create operation, that registers a Contact Update event to the Cloud9 endpoint.
  3. <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v1="urn:messages.ws.rightnow.com/v1_3" xmlns:v11="urn:base.ws.rightnow.com/v1_3">
       <soapenv:Body>
          <v1:Create>
             <v1:RNObjects xmlns:ns4="urn:objects.ws.rightnow.com/v1_3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="ns4:EventSubscription"> <!--specify the subscription object-->
    			<ns4:EndPoint>https://test2-ashishksingh.c9users.io/api/test</ns4:EndPoint> <!--endpoint info-->
    			<ns4:EventType>
    				<ID id="2" xmlns="urn:base.ws.rightnow.com/v1_3" /> <!--1=Create,2=Update,3=Delete-->
    			</ns4:EventType>
    			<ns4:IntegrationUser>
    				<ID id="1" xmlns="urn:base.ws.rightnow.com/v1_3" /> <!--1 = the seeded SUSCRIBER_USERNAME and PWD above-->
    			</ns4:IntegrationUser>
    			<ns4:Name>TestContactSubscription</ns4:Name>  <!--Name of the subscription-->
    			<ns4:ObjectShape xsi:type="Contact"/>   <!--Name of the object to subscribe-->
    			<ns4:Status>
    				<ID id="1" xmlns="urn:base.ws.rightnow.com/v1_3" /> <!--1=Active,2=Paused,3=Inactive-->
    			</ns4:Status>
             </v1:RNObjects>
          </v1:Create>
       </soapenv:Body>
    </soapenv:Envelope>

     
    Note : The OWSM security policy username_token_over_ssl_client_policy can be used to invoke the web service, passing valid Rightnow credentials. However, the SOAP Security Header shouldn’t contain a TimeStamp element. Rightnow will discard the requests containing a Timestamp element in the SOAP Header.

  4. That’s it. The endpoint is now registered, and whenever a contact is updated, Rightnow will invoke the registered endpoint with details. The message sent out is an XML SOAP message that contains object/event details and conforms to the Rightnow Event WSDL available at https:///cgi-bin/.cfg/services/soap?wsdl=event . This message also contains the SUBSCRIBER_USERNAME/PWD in the SOAP Header, in the form of a WS-Security UsernameToken. For now our Cloud9 endpoint doesn’t care about validating the Username token.
  5. In order to test, let’s update a Contact in Agent Desktop.
  6. Voila! We see the corresponding EventNotification XML message in the Cloud9 console.

    For reference I have attached the formatted XML message here.

c. Using ICS Service Cloud Adapter

The Oracle Integration Cloud Service (ICS), the tool of choice for SaaS integrations, automates all of the steps in 1.b above into a simple GUI-based integration definition.
Below are the steps for receiving Rightnow events in ICS. It is assumed that the reader is familiar with ICS and knows how to use it.
Please note that the steps in 1.a still need to be followed, and this time the SUBSCRIBER_USERNAME/PWD ‘Configuration Setting’ should be the ICS account’s username/password.

  1. Create and save an Oracle Rightnow connection in ICS.
  2. Create an Integration by the name ‘receive_contacts’. For this blog I chose the ‘Publish to ICS’ integration type.
  3. Open the integration and drag the Rightnow connection on the source side. Name the endpoint and click ‘Next’.
  4. On the ‘Request’ page select ‘Event Subscription’, and select the desired event. Click Next.
  5. On the ‘Response’ page select ‘None’, although you could select a callback response if the use case required so. Click Next.
  6. Click ‘Done’. Complete the rest of the integration and activate it.
  7. During activation ICS creates an endpoint and registers it as an EventSubscription object, as described in 1.b above. But all of that happens in the background, providing a seamless experience to the user.
  8. If a Contact is updated in Agent Desktop now, we’d receive it in ICS.

2. Rightnow Custom Process Model

As discussed above, the Event Notification Service supports only the Contact, Organization and Incident objects. But sometimes use cases may require Custom Objects or other Connect Common Objects. In such cases Service Cloud’s Custom Process Model feature can be used for outbound events. I will now describe how to use it.

First, a few key terms:

  • Object Event Handler : A PHP code snippet that is executed whenever Create/Update/Delete events occur in the specified Rightnow objects. The snippet is used to invoke external endpoints using the cURL library.
  • Process Designer / Custom Process Model (CPM) : A component of the Rightnow Agent Desktop that is used to configure Object Event Handlers.

Below are the steps :

  1. Using any text editor, create a file called ContactHandler.php (or any other name) with the following code. The code basically defines a Contact create/update handler, loads the PHP cURL module and invokes a web service I wrote using Oracle BPEL. I have provided explanation at various places in the code as ‘[Note] :’
  2. <?php
    /**
     * CPMObjectEventHandler: ContactHandler // [Note] : Name of the file.
     * Package: RN
     * Objects: Contact // [Note] : Name of the object.
     * Actions: Create, Update // [Note] : Name of the operations on the object above for which the PHP code will be executed
     * Version: 1.2 // [Note] : Version of the Rightnow PHP API
     * Purpose: CPM handler for contact create and update. It invokes a web service.
     */
    use \RightNow\Connect\v1_2 as RNCPHP;
    use \RightNow\CPM\v1 as RNCPM; 
    /**
     * [Note] : Below is the main code, defining the handler class for the CPM . Like java, the class name should match the file name, and it implements the ObjectEventHandler class. The 'use' statements above define aliases for the \RightNow\Connect\v1_2 'package' .
     */
    class ContactHandler implements RNCPM\ObjectEventHandler
    {
        /**
         * Apply CPM logic to object.
         * @param int $runMode
         * @param int $action
         * @param object $contact
         * @param int $cycles
         */
    // [Note] : Below is the actual function that gets executed on Contact Create/Update.
        public static function apply($runMode, $action, $contact, $cycle)
        {
            if($cycle !== 0) return ;
    		// [Note] : The snippet below declares the URL and the XML Payload to be invoked
                $url = "http://10.245.56.67:10613/soa-infra/services/default/RnContact/bpelprocess1_client_ep?WSDL" ;
                $xml = '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
            <soap:Header>
                    <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" mustUnderstand="1">
                <wsse:UsernameToken>
                    <wsse:Username>HIDDEN</wsse:Username>
                    <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">HIDDEN</wsse:Password>
                </wsse:UsernameToken>
            </wsse:Security>
            </soap:Header>
            <soap:Body>
                    <ns1:process xmlns:ns1="http://xmlns.oracle.com/Application6/RnContact/BPELProcess1">
                            <ns1:input>'.$contact->Name->First.' '.$contact->Name->Last .'</ns1:input>
            </ns1:process>
        </soap:Body>
    </soap:Envelope>' ;
      
    
                $header[0]= "Content-Type: text/xml;charset=UTF-8";
                $header[1]= 'SOAPAction: "process"';
    			
    			// [Note] :The invocation requires and makes use of the cURL module.
                load_curl();
                $curl = curl_init();
                curl_setopt_array($curl,array(
                  CURLOPT_URL => $url,            
                  CURLOPT_HEADER => 0,
                  CURLOPT_HTTPHEADER => $header,  
                  CURLOPT_FOLLOWLOCATION => 1, 
                  CURLOPT_RETURNTRANSFER => 1,
                  CURLOPT_CONNECTTIMEOUT => 20,
                  CURLOPT_SSL_VERIFYPEER => 0,
                  CURLOPT_SSL_VERIFYHOST => 0,
     
                ));
                curl_setopt($curl,CURLOPT_POST,TRUE);
                curl_setopt($curl,CURLOPT_POSTFIELDS, $xml);
                $content = curl_exec($curl);
        }
    }
    /**
     * CPM test harness
     */
    // [Note] : These are unit test functions, needed by the RN PHP framework.
    class ContactHandler_TestHarness
            implements RNCPM\ObjectEventHandler_TestHarness
    {
        static $contactOneId = null;
        static $contactTwoId = null;
        /**
         * Set up test cases.
         */
        public static function setup()
        {
            // First test
            $contactOne = new RNCPHP\Contact;
            $contactOne->Name->First = "First";
            $contactOne->save();
            self::$contactOneId = $contactOne->ID;
            // Second test
            $contactTwo = new RNCPHP\Contact;
            $contactTwo->Name->First = "Second";
            $contactTwo->save();
            self::$contactTwoId = $contactTwo->ID;
        }
        /**
         * Return the object that we want to test with. You could also return
         * an array of objects to test more than one variation of an object.
         * @param int $action
         * @param class $object_type
         * @return object | array
         */
        public static function fetchObject($action, $object_type)
        {
            $contactOne = $object_type::fetch(self::$contactOneId);
            $contactTwo = $object_type::fetch(self::$contactTwoId);
            return array($contactOne, $contactTwo);
        }
        /**
         * Validate test cases
         * @param int $action
         * @param object $contact
         * @return bool
         */
        public static function validate($action, $contact)
        {
            echo "Test Passed!!";
            return true;
        }
        /**
         * Destroy every object created by this test. Not necessary since in
         * test mode and nothing is committed, but good practice if only to
         * document the side effects of this test.
         */
        public static function cleanup()
        {
            if (self::$contactOneId)
            {
                $contactOne = RNCPHP\Contact::fetch(self::$contactOneId);
                $contactOne->destroy();
                self::$contactOneId = null;
            }
            if (self::$contactTwoId)
            {
                $contactTwo = RNCPHP\Contact::fetch(self::$contactTwoId);
                $contactTwo->destroy();
                self::$contactTwoId = null;
            }
        }
    }
    ?>
  3. Log on to Agent Desktop. Click on Configuration -> Site Configuration -> Process Designer, and click ‘New’.
  4. 11

  5. Upload the ContactHandler.php file and check the ‘Execute Asynchronously’ checkbox; the cURL library (loaded via load_curl()) is available to asynchronous CPMs only.
  6. 12

  7. Click ‘Save’ on the Home Ribbon, and then click the ‘Test’ button. Clicking Test executes the ‘validate’ function in the code. Make sure it executes without errors and that the output looks OK.
  8. 13

  9. Click ‘OK’, then ‘Yes’, and then Save again. Now go to the Contact object under OracleServiceCloud, assign the newly created ContactHandler to the Create and Update events, and then Save again.
  10. 14

  11. Now click ‘Deploy’ on the Ribbon to upload and activate all the changes on the Rightnow server.
  12. 15

  13. To test, create a new contact called ‘John Doe’ in Service Cloud; the BPEL process is instantiated as a result.
  14. 16

This ends our discussion on configuring and consuming outbound real-time events. Before moving on to bulk data export, it must be noted that Rightnow event subscribers and CPMs are inherently transient. Thus, durable subscriptions are not available, although for error scenarios Rightnow does have a retry mechanism with exponential back-off.
If durability is a key requirement then the subscriber must be made highly available and durability must be built into the subscriber design, for example by persisting messages in a queue immediately upon receiving them.

3. Bulk Export

So far we have discussed various ways of receiving real-time events/notifications from Rightnow. These can be used for online integration scenarios, but not for bulk-export use cases.
We’ll now discuss a few options for bulk export:

a. ROQL

ROQL, or Rightnow Object Query Language, is the simplest tool for extracting data, using SQL-like queries against Rightnow.
ROQL can be executed using Connect Web Services, Connect REST Services, and Connect PHP Services.

ROQL comes in two flavors, Object Query and Tabular Query:

  • Object Query : Rightnow objects are returned in response to the query. This is the simpler form of query, available in the SOAP API as the QueryObjects operation, or in the REST API as the ?q= URL parameter.
  • Tabular Query : Tabular queries are more advanced queries, which support clauses such as ORDER BY and USE, aggregate functions, limits on returned items, pagination, etc. These are available in the SOAP API as the queryCSV operation, or in the REST API as the queryResults resource.

Between the two, Tabular Query is the more efficient way of extracting data, as it returns the required dataset in a single database query. Two great resources to get started on tabular queries are the A-Team blogs here and here. They explain how to use SOAP and REST-based tabular queries to extract data from Service Cloud and import it into Oracle BI Cloud Service.
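
As an illustration, both styles can be exercised through the Connect REST API with simple curl calls. This is only a sketch: the site host and credentials are placeholders, and the interface version (v1.3 below) and field names should be checked against the Connect REST API documentation for your site.

# Object query: returns Incident objects matching the ROQL condition in the q parameter
curl -G -u 'user:password' \
  --data-urlencode "q=id > 12278" \
  "https://mysite.custhelp.com/services/rest/connect/v1.3/incidents"

# Tabular query: returns a tabular result set via the queryResults resource
curl -G -u 'user:password' \
  --data-urlencode "query=SELECT I.ID, I.Subject, I.UpdatedTime FROM Incidents I WHERE I.UpdatedTime > '2016-01-01T00:00:00Z' LIMIT 1000" \
  "https://mysite.custhelp.com/services/rest/connect/v1.3/queryResults"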

b. Analytics Report

For more advanced querying needs, Rightnow Analytics Reports can be defined in the Agent Desktop and executed using the SOAP RunAnalyticsReport operation or the REST analyticsReportResults resource.
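
As a sketch of the REST option (the host, credentials and report name are placeholders, and the request body should be verified against the Connect REST API documentation for your version), the resource can be invoked with a POST such as:

curl -u 'user:password' -X POST \
  -H "Content-Type: application/json" \
  -d '{"lookupName": "My Incidents Report"}' \
  "https://mysite.custhelp.com/services/rest/connect/v1.3/analyticsReportResults"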

c. Third Party Providers

A number of third-party providers, including the Progress ODBC and JDBC drivers, also allow bulk extraction of Rightnow data. These providers internally use the same ROQL-based approach, but provide a higher level of abstraction by automating pagination and other tasks.

Conclusion

In this blog we looked at a couple of ways to receive outbound events from Service Cloud, and how ICS can be used to seamlessly receive the events in a UI-driven fashion.
We also saw how PHP-based Rightnow Custom processes can be used as object triggers.
Finally, we saw a few options available for bulk data export from Rightnow, using ROQL, Rightnow Analytics and third-party providers.

Configuring HTTPS between Integration Cloud Service and Java Cloud Service


In a previous post, I discussed some general topics relating to the usage of HTTPS and certificates within Oracle Public Cloud. In this follow up piece, I will work through a concrete example and explain how to set up a Java Cloud Service instance in such a way that Integration Cloud Service can consume a service deployed to that platform over HTTPS.

The use case we have in mind here is a simple one. A custom REST-based service is deployed to WebLogic on JCS (I’ll use a simple Servlet that returns a JSON payload). An integration defined in Integration Cloud Service uses the REST adaptor to invoke that service over HTTPS. Since JCS is an example of a compute-based PaaS service, it is provisioned by default without an external hostname and with a self-signed certificate mapped to the Load Balancer IP Address. This is different to the ICS instance, which is accessible via an *.oraclecloud.com hostname with an automatically-trusted certificate. The first thing we will do is configure JCS to use a hostname that we provide, rather than the IP address. We’ll then look at how to provision a certificate for that instance and then finally, how to configure ICS.

I’ve used a JCS instance based on WebLogic 12.1.3 and Oracle Traffic Director 11.1.1.9 for this post. Exact steps may differ somewhat for other versions of the service.

Configuring JCS with your own hostname

I’ve deployed my simple Servlet to WebLogic via the console and for now, the only option available to me is to access it via the IP address of the JCS Load Balancer. We can see from the screenshots below that my web browser first prompts me to accept the self-signed certificate before accessing the end point, which is not what we want to happen:

AccessByIP

I’ve added a DNS entry mapping that IP (140.86.13.181) to an A record within my domain as below:

DNSEntry

And I also add this hostname (jcs-lb.securityateam.org.uk) in the OTD console on my JCS instance:

AddHostnameToOTD

I can now access the service with the hostname, but the certificate issue remains:

AccessByHostname

Configuring JCS with a certificate

We need to configure our JCS instance with a certificate that matches our hostname. There are two options:

1. Buy a certificate from a 3rd-party Certificate Authority

This option is preferable for production as the configuration of clients is far simpler. There is, generally, a cost associated, though. I’ve opted to use a trial certificate and have performed the following steps:

The first step (which is the same for either option) is to generate a Certificate Signing Request. When we do this, OTD generates a keypair and includes the public key in the request, which is to be sent to a CA for signing. Note how we use the hostname of our server as the common name (CN) in the request.

CertRequest OTD-CSR
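
For readers who want to see what such a request contains, or who need to generate a CSR outside OTD, an equivalent request could be created with openssl. This is only a sketch (in this article the key pair is generated and kept inside OTD, and the organisation and country values below are illustrative):

openssl req -new -newkey rsa:2048 -nodes \
  -keyout jcs-lb.key -out server-cert.csr \
  -subj "/CN=jcs-lb.securityateam.org.uk/O=Example Org/C=GB"
# Review the request contents before sending it to the CA
openssl req -in server-cert.csr -noout -text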

I copy the CSR and paste in to the CA website and obtain my certificate, which is emailed to me once issued.

FreeTrialCert

Along with the server certificate itself, I receive a number of root and intermediate CA certificates, which I install into OTD as CA Certificates before importing my new server certificate.

InstallCACert

I deployed the configuration and restarted OTD (just to be safe), before copying the Base64-encoded server certificate I was sent and importing that into OTD.

InstallServerCert

The last step here is to modify my HTTPS listener in OTD to use my new certificate, as below. Once that is done, I can successfully connect to the server over SSL using my hostname.

AddCertToLisn GoodConnection

2. Obtain a certificate from your own (self-signed) Certificate Authority

Many organisations that use TLS certificates widely for internal communication security will have their own in-house Certificate Authority. These organisations have weighed up the costs and benefits and decided that it makes more sense to sign all of their server certificates in house – and to deal with the pain of configuring clients to manually trust this CA – than it does to buy a certificate for each server.

Most of the configuration steps from a JCS/OTD perspective are the same. I am going to use my colleague Chris Johnson’s simple yet awesome Simple CA script to create a self-signed CA certificate. I create a certificate signing request from within the OTD console as before, and then use an openssl command like the below to create the certificate based on my CSR.

openssl x509 -req -in server-cert.csr  -out server-cert.cer -CA ca.crt -CAkey ca.key -CAcreateserial -CAserial ca.serial -days 365 -sha256

That command uses the CSR I created (which I saved as “server-cert.csr”) and generates a certificate, signed by the CA certificate (“ca.crt”) created by Chris’s script. The output is in “server-cert.cer” and I can validate the contents as below:

ValidateSelfSigned
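
For reference, the checks shown above can be performed with standard openssl commands, using the file names mentioned earlier in this section:

# Display the subject, issuer and validity dates of the signed certificate
openssl x509 -in server-cert.cer -noout -subject -issuer -dates
# Verify that the certificate chains back to the self-signed CA
openssl verify -CAfile ca.crt server-cert.cer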

Now I repeat the steps above; first importing the self-signed CA certificate into OTD as a trusted certificate, then importing my server certificate and finally updating my listener to use the new certificate.

SelfSigned

One important change, though, is that I can no longer hit the REST endpoint directly with my browser, since, once again, the “Unknown Issuer” exception prevents my browser from establishing a secure connection. Because the CA cert that signed my server certificate is not trusted by the browser, I need to manually import this certificate into the browser trust store before I can access the URL.

FFImportCA

Connecting to JCS from ICS

Within our Integration Cloud Service console, we’re going to create a new Connection to our REST end-point on JCS. The steps that we need to follow will depend on which of the two options above we’ve gone with. Let’s do the simpler one first.

1. Connecting when JCS is using a certificate from a 3rd-party CA

ICS ships with a set of pre-configured trusted CA certificates, as you can see here:

ICSTrustedCA

As long as the SSL certificate that you have installed in your JCS instance has been signed by one of the pre-configured trusted CA’s in this list, then you don’t need to do anything more in order to configure the HTTPS connection using the ICS REST Adapter.

ICS-Success

2. Connecting when JCS is using a certificate from a self-signed CA

I’ve now changed my OTD listener back to the certificate signed by the self-signed CA. Here’s what happens when I test the connection in ICS:

ICS-Fail

The error message is a rather familiar one, especially to those who are used to configuring Java environments to connect to un-trusted certificates:

Unable to test connection "JCSREST_ROTTOTEST". [Cause: CASDK-0003]: 
 -  CASDK-0003: Unable to parse the resource, https://jcs-lb.securityateam.org.uk/simple/users. Verify that URL is reachable, can be parsed and credentials if required are accurate
  -  sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
   -  PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    -  unable to find valid certification path to requested target

This is, in fact, exactly the same exception I get when using a simple Java test client to connect to that end-point:

Java-Fail
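
For that standalone Java client, the equivalent fix is to add the CA certificate to a trust store used by the JVM. The sketch below assumes a JDK 8 layout and the default cacerts password; the alias is illustrative:

# Import the self-signed CA into the JVM's default trust store
keytool -importcert -alias simpleca -file ca.crt \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit
# Alternatively, import into a dedicated trust store and point the client at it:
#   java -Djavax.net.ssl.trustStore=mytruststore.jks MyTestClient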

Fortunately, the fix is quite simple. All I need to do is to manually import the self-signed CA cert into ICS as a trusted issuer and I can then successfully connect to the REST endpoint.

ICSImportSS
ICSImportedSS

Once I perform the above step, I am able to successfully connect from ICS to JCS once more.

Using ODI Loading Knowledge Modules on the Oracle Database Cloud Service (DBCS)


Introduction

 

This article discusses how to use Loading Knowledge Modules (LKMs) in Oracle Data Integrator (ODI) to upload data into instances of the Oracle Database Cloud Service (DBCS).  LKMs are re-usable code templates within the ODI framework that can be used in ODI mappings to perform data upload operations.

On cloud computing, Oracle offers Platform as a Service (PaaS), which provides a shared and elastically scalable platform for the consolidation of existing applications and the development of new applications.  Under PaaS, Oracle offers data management services such as the Oracle Database Cloud Service (DBCS), which offers the power and flexibility of the Oracle database in the cloud.

In the data integration space, ODI offers a variety of LKMs for the Oracle technology that can be used for both on-premise Oracle databases and instances of DBCS.  This article discusses three LKMs that can be used to exchange data between instances of DBCS.  Also, the article discusses how to upload data from on-premise datastores such as Oracle databases and text files into instances of DBCS.  Finally, the article extends the usability of these three LKMs to Amazon Web Services (AWS), which supports the Oracle database as well.

 

Using ODI Loading Knowledge Modules on Oracle Database Cloud Service (DBCS)

 

In ODI, loading knowledge modules (LKMs) are required in the following use cases:

  • Different Technologies – The source datastore and the target datastore are from different technologies.  For instance, an LKM is required when loading data from a file into an Oracle table, or when loading data from a Teradata table into an Oracle table.
  • Different Data Servers – The source datastore and the target datastore are from the same technology, but they are not located on the same data server.  For instance, an LKM is required when loading data between two Oracle tables that are located on different databases.
  • Different Database Instances – On cloud computing, LKMs are required when uploading data from an on-premise data server into a cloud data service, or when both the source and the target datastores are from the same cloud service, but each datastore is located in a different instance of the service or hosted on separate services.

ODI offers a variety of LKMs for the Oracle technology.  For instance, when loading data between Oracle databases, ODI offers the LKM Oracle to Oracle (datapump), and the LKM Oracle to Oracle (DBLINK), among others.  The LKM Oracle to Oracle (datapump) uses the Oracle Data Pump technology to upload data – in parallel – between two Oracle databases.  This technology offers the fastest method for uploading data between two Oracle databases.  The LKM Oracle to Oracle (DBLINK) uses the Oracle DBLINK technology to connect two Oracle databases and perform the data upload operation. For external sources such as text files, ODI offers – among others – the LKM File to Oracle (EXTERNAL TABLE).  This LKM uses the Oracle External Table technology to upload text files into the Oracle database.

These three LKMs can be used on both on-premise Oracle databases and instances of DBCS.  The following sections of this article illustrate some examples.

 

LKM Oracle to Oracle (datapump) on DBCS

 

Figure 1 below illustrates an example of how the LKM Oracle to Oracle (datapump) can be used on DBCS.

 

Figure 1 - LKM Oracle to Oracle (datapump) on DBCS

Figure 1 – LKM Oracle to Oracle (datapump) on DBCS

Figure 1 above illustrates four instances of cloud services on the Oracle Public Cloud (OPC).  Instance A, Instance C, and Instance D are all DBCS instances.  Instance A contains a database that is used as a metadata repository – the ODI repository is located on this database. Instance B is an instance of the Java Cloud Service (JCS) – the ODI agent is located on this instance.  Instance C contains a database that is used as a source database.  Instance D contains a database that is used as a target database.

Using this example, the ODI agent launches an ODI mapping that uses the LKM Oracle to Oracle (datapump) to export data from the source database and import it into the target database.  The datapump export operation is performed by the source database, and the datapump files are created on Instance C.  The datapump files are then copied from Instance C to Instance D using the ODI tool called OdiScpPut.  This ODI tool uses the Secure Copy (SCP) protocol to copy files between two data servers.  The datapump import operation is performed by the target database, located on Instance D.

In Figure 1 above, the LKM Oracle to Oracle (datapump) has been customized with a new step that invokes the OdiScpPut tool.  Alternatively, the OdiScpPut tool can be invoked from an ODI procedure or ODI package.

Oracle Cloud services such as JCS and DBCS require a public/private key pair to access the cloud instances of these services via a secure shell (SSH).  When using the ODI OdiScpPut tool to copy files from on-premise data servers to instances of DBCS, or between instances of DBCS, a private key is required.  For additional information on how to create an SSH public/private key pair for Oracle Cloud services, go to “Creating SSH Keys for Use with Oracle Cloud Services.”
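
Conceptually, the copy step performed by OdiScpPut is equivalent to a secure copy such as the one below. This is a sketch only: the key location, dump file paths and target host are illustrative, and in the mapping this step is executed by the ODI tool itself.

scp -i /home/oracle/.ssh/dbcs_private_key \
  /u01/app/oracle/dpdump/EXP_SRC_*.dmp \
  opc@instance-d-host:/u01/app/oracle/dpdump/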

 

LKM Oracle to Oracle (DBLINK) on DBCS

 

The LKM Oracle to Oracle (DBLINK) can also be used to copy data between two instances of DBCS.  Figure 2 below shows an example.

 

Figure 2 - LKM Oracle to Oracle (DBLINK) on DBCS

Figure 2 – LKM Oracle to Oracle (DBLINK) on DBCS

In this example, shown in Figure 2 above, the ODI agent launches an ODI mapping that uses the LKM Oracle to Oracle (DBLINK) to select data from the source database (DBCS Instance C) and insert it into the target database (DBCS Instance D).  In this example, the data transfer is performed by the DBLINK technology.  The database link is created by the LKM at runtime, or it can be configured by a database cloud administrator prior to the execution of the ODI mapping.

When the ODI agent connects to a database instance of DBCS, ODI uses the Oracle JDBC driver to establish the connection.  This JDBC connection should be secured.  To secure this connection, the Oracle JDBC driver should first create an SSH tunnel between the host – where the agent is located – and the database instance.  To learn how to establish a secured connection between the ODI agent and DBCS, go to “Connect ODI to Oracle Database Cloud Service (DBCS)” – this article describes how to use a customized version of the Oracle JDBC driver to establish secured connections between the ODI agent and instances of DBCS.
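
For reference, the tunnel that protects this JDBC traffic is conceptually the same as one opened with a standard SSH client. The sketch below uses placeholder values; the article referenced above describes the driver-based approach that ODI actually uses.

# Forward local port 1521 to the listener on the DBCS node, using the instance's private key
ssh -i /path/to/dbcs_private_key -N -L 1521:localhost:1521 opc@dbcs-instance-ip
# The agent's JDBC URL then points at localhost:1521 instead of the remote listener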

 

LKM Oracle to Oracle (Datapump & DBLINK) – On-Premise to DBCS

 

LKM Oracle to Oracle (datapump) and LKM Oracle to Oracle (DBLINK) can also be used with ETL architectures that have both on-premise Oracle databases and instances of DBCS.  Figure 3 below shows an example of how these two LKMs can be used to upload data from on-premise Oracle databases into several instances of DBCS.

 

Figure 3 - LKM Oracle to Oracle (Datapump & DBLINK) – On-Premise to DBCS

Figure 3 – LKM Oracle to Oracle (Datapump & DBLINK) – On-Premise to DBCS

Figure 3 above shows two environments:  an on-premise environment, and an OPC environment.  The on-premise environment has two components:  an ODI agent, and an Oracle database server.  The ODI agent, Agent A, is a J2EE agent, but it can be a standalone or collocated agent instead.  The database server, Datastore A, has an Oracle database, which is used as an online transaction processing (OLTP) database.

On the OPC environment, there are four instances of the Oracle cloud services:  three instances of DBCS, and one instance of JCS.  The DBCS instances called Instance A, Instance C, and Instance D contain each a database to host – respectively – an ODI repository, an operational datastore (ODS), and a data warehouse.  Instance B, the JCS instance, hosts the ODI agent, Agent B.

Using this example, Agent A, the on-premise agent, launches an ODI mapping that uses the LKM Oracle to Oracle (datapump) to copy data from the OLTP database (on-premise) to the ODS database (cloud).  The OLTP database performs the datapump export operation and creates the datapump files on Datastore A.  The ODI OdiScpPut tool securely copies the datapump files from Datastore A (on-premise) to Instance C (cloud).  On Instance C, the ODS database performs the datapump import operation.  Notice that this entire operation is orchestrated by the on-premise agent, Agent A.  Then, Agent B, the agent on the JCS instance, launches another ODI mapping that uses the LKM Oracle to Oracle (DBLINK) to copy data from the ODS database to the warehouse database.  Notice that both agents, Agent A and Agent B, use the same ODI repository, located on Instance A.

 

 

LKM File to Oracle (EXTERNAL Table) – On-Premise to DBCS

 

LKMs can also be used to upload text files from on-premise data servers to DBCS instances.  Figure 4 below shows an example.

 

Figure 4 - Using ODI Loading Knowledge Modules - On-Premise to DBCS – External Tables

Figure 4 – Using ODI Loading Knowledge Modules – On-Premise to DBCS – External Tables

Figure 4 above shows two environments:  an on-premise environment, and an OPC environment.  The on-premise environment has an ODI agent, and a data server.  The data server, File Server A, has text files that represent operational data.  The OPC environment includes two instances of DBCS, and one instance of JCS.  The DBCS instances called Instance A and Instance C contain each a database to host the ODI repository, and the warehouse database, respectively.  Instance B, the JCS instance, hosts the ODI agent, Agent B.

Using this example, Agent A, the on-premise agent, launches an ODI procedure that uses the ODI OdiScpPut tool to copy the operational text files from File Server A (on-premise) to Instance C (cloud).  On the OPC environment, Agent B launches an ODI mapping that uses the LKM File to Oracle (EXTERNAL TABLE) to upload the text files into the warehouse database.  Notice that the upload operation is done by the warehouse database via Oracle external tables – Agent B only orchestrates the upload operation.  Both agents, Agent A and Agent B, use the same ODI repository, located on Instance A.

 

Using ODI Loading Knowledge Modules on Amazon Web Services (AWS)

 

The use of LKMs can be extended to other cloud services such as the Amazon web services (AWS).  Figure 5 below illustrates an example.

 

Figure 5 - Using ODI Loading Knowledge Modules - Amazon Web Services (AWS)

Figure 5 – Using ODI Loading Knowledge Modules – Amazon Web Services (AWS)

In Figure 5 above, the ODI agent is located on an instance of the Amazon Elastic Compute Cloud (EC2).  The LKM Oracle to Oracle (datapump) can be used in ODI mappings to perform data upload operations between two instances of the Oracle database located on the Amazon Relational Database Service (RDS).  The LKM Oracle to Oracle (DBLINK) can be used in ODI mappings to perform data upload operations between two Oracle databases, one located on Amazon RDS and the other on Amazon EC2.  Also, the LKM File to Oracle (EXTERNAL TABLE) can be used in ODI mappings to upload text files into Oracle databases located on Amazon EC2.  In that case, the text files and the Oracle database are both located on the same Amazon EC2 instance.  The ODI agent only orchestrates the executions – the actual data upload operations are done by the Oracle tools.

 

Conclusion

 

ODI Loading Knowledge Modules are re-usable code templates within the ODI framework that perform data upload operations for both on-premise data servers and cloud data services.  This article presented an overview of how to use ODI LKMs to upload data into instances of the Oracle database as a service (DBaaS).

For more Oracle Data Integrator best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for Oracle Data Integrator (ODI).

 

ODI Related Articles

Integrating Oracle Data Integrator (ODI) On-Premise with Cloud Services

Connect ODI to Oracle Database Cloud Service (DBCS)

ODI 12c and DBaaS in the Oracle Public Cloud

Using Oracle Data Pump with Oracle Data Integrator (ODI)

Oracle Platform as a Service (PaaS)

Infrastructure as a Service (IaaS)

Oracle Storage Cloud Service (SCS)

Applications as a Service (SaaS)

Oracle Database Cloud Service (DBCS)

Using Oracle Database Schema Cloud Service

Oracle Exadata Cloud Service (ExaCS)

Loading Data into the Oracle Database in an Exadata Cloud Service Instance

Working with Files in Oracle Data Integrator (ODI)

 

Oracle JCS Switchover Configuration


Introduction

This article outlines a configuration option for Oracle Java Cloud Service (JCS) in the Oracle Cloud to support switchover between datacenters to allow for ongoing availability. The configuration allows easy switchover between datacenters in case of scheduled maintenance and other outages. In this example the primary datacenter is US2 and secondary is EM2.

This example has been intentionally kept simple to allow easy understanding of the concept. The WebLogic configuration that is stored on the file system will be synced between the two datacenters using rsync. For simplicity the entire WebLogic DOMAIN_HOME will be synchronized; this allows all configuration changes in the WebLogic environment to be replicated. Oracle Data Guard is used to sync application data as well as the configuration data that is stored in the database. Only static application data should be stored on the file system; all other data has to be placed in the database to allow switchover without data loss. The usage of DNS aliases on all nodes makes this solution work with minimal setup effort and no manual reconfiguration requirements in case of a switchover.

For this configuration the deployment has to be symmetrical to allow simplified switchover and failover. This means the same number of VMs, clusters, etc. has to be deployed on both sides.

The following VMs are involved in this example – please note these are anonymised IP addresses.

Short Name Purpose Internal IP External IP Data Centre
Loadbalancer HAProxy Server 10.0.0.1 129.0.0.1 US2
WL Node 1 WebLogic Server 10.148.1.1 129.0.1.1 US2
WL Node 2 WebLogic Server 10.196.1.2 140.0.1.2 EM2
DB Node 1 Database Server 10.148.2.1 129.0.2.1 US2
DB Node 2 Database Server 10.196.2.2 140.0.2.2 EM2

 

This is the simplified architecture diagram for this example:

Drawing1

Creating Storage Container

In this example two individual storage containers are used to allow separate backups of both environments. More details can be found in this document.

Deploying Oracle Cloud – Database as a Service (DBaaS)

After creating the Storage Container the next step is to deploy DBaaS for both environments as a backend for the JCS services. A separate DBaaS service with the same name, e.g. “jcs-db”, has to be created in both datacenters. For this example the “Oracle Database Cloud Service” option with Oracle Database 12c Release 1 and Enterprise Edition or higher is recommended. The DBaaS Virtual Image option requires additional steps that are not discussed in this article. Oracle Database Standard Edition is not supported. For more details on deploying DBaaS see here.

The service name has to be identical between the two instances – here “jcs-db”. It is mandatory to reference the storage container that has been created in the corresponding data center. Please note the format “instance-id_domain/container” where instance is the name of the Oracle Storage Cloud Service instance, id_domain is the name of the identity domain, and container is the name of the container – e.g. Storage-rdb/jcsContainer. Select the SSH public key carefully and make sure to take a secure backup of the private key. More details can be found here.

 

image2

Continue with the same deployment in the second identity domain. Make sure to select the same Service Name for the database, as shown below.

image3

 

Deploying Oracle Cloud – Java Cloud Service (JCS)

In the next step create a separate JCS Services in both datacenters. For more details on deploying JCS see here.

For this example choose Service Level “Oracle Java Cloud Service” and a Billing Frequency based on your preference.  The Oracle Java Cloud Service – Virtual Image option requires additional steps that are not discussed in this article. Oracle WebLogic Server 12c (12.2.1.0) with Enterprise Edition is used – however the procedure can be applied to Oracle WebLogic 11g and 12c.

The Service Name can be chosen freely; however, it has to be the same across both deployments. The same goes for the database configuration. It is possible to select a different PDB if required, as long as it is the same in both instances.

image4

The second instance has to be created identically to the first. Make sure to select the correct storage container based on what was created earlier – all other settings should remain the same.

image5

Load Balancer Configuration

This example uses the open source proxy HAProxy as an exemplary load balancer because of its simple configuration. In an enterprise production deployment this should be replaced by an enterprise-grade DNS-based load balancer or a similar mechanism.

To install HAProxy simply log in as the opc user and run the command below on the Oracle Cloud VM where you want to run HAProxy. For production use this should not be one of the JCS or DB VMs running the workload.

sudo yum -y install haproxy

To configure HAProxy simply append the following lines to /etc/haproxy/haproxy.cfg. This configuration routes all traffic hitting port 5555 on the HAProxy VM to the JCS application servers based on their availability, load balanced round-robin. This is Layer 7 load balancing – further details can be found here.

frontend http
   bind 0.0.0.0:5555
   default_backend jcs-backend

backend jcs-backend
   balance roundrobin
   mode http
   option httpchk GET /sample-app/index.html
   server jcs-1 10.148.1.1:8001 check
   server jcs-2 10.196.1.2:8001 check

Then start the HAProxy Service:

service haproxy start
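
It is also worth validating the configuration file before (re)starting the service, and enabling it to start automatically after a reboot, for example:

# Check the configuration file for syntax errors
haproxy -c -f /etc/haproxy/haproxy.cfg
# Start HAProxy automatically after a reboot
chkconfig haproxy on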

Network Configuration

Open the Oracle Compute Service console, navigate to Network, and create a new Security IP List for the Load Balancer:

image8

Continue by creating a security application for the JCS app that is supposed to be accessed via the load balancer. In this example the JCS Sample App is used.

image9

Finish by creating a security rule – note that the Destination Security List jcs/wls/ora_ms is created during JCS provisioning. This target list represents the WebLogic Managed Server.

image10

Should the HAProxy be deployed on Oracle Compute, make sure to open up the configured port. To achieve this create a new Security Application:

image11

Then create a new Security Rule to allow Traffic to the HAProxy from the Clients. For simplicity public-internet is chosen in this example. Generally this should be minimized to the actual client IPs that will be using the application.

image12

Next validate that you can open the sample-app via the load balancer.

image13

To test that the load balancing is working properly, log in to the Administration Console of the WebLogic instances and stop the sample application on the first instance – validate that the application is still available, and then repeat the test for the next instance.

image14

Creating host alias

The DNS configuration is crucial to get the environment to switch properly between the datacenters. The JCS aliases for the nodes need to resolve to the local internal IP so that WebLogic binds to the correct interfaces. Since the deployments are in different identity domains, make sure to include the aliases in the /etc/hosts files as shown below.

Ensure that the database traffic is only routed to the corresponding database in the same datacenter by including the DNS alias in the /etc/hosts of the WebLogic VM.

The /etc/hosts should look similar to this:

WL Node 1:

image15

WL Node 2:

image16

 

Enable Configuration Synchronizations

SSH equivalence

In order to achieve secure communication between the WebLogic VMs it is recommended to establish SSH equivalence between the nodes. To do this, log in to the first instance, execute the following commands and copy the public key as shown below.

sudo su -
ssh-keygen 
cat ~/.ssh/id_rsa.pub

The copied key needs to be pasted in the ~/.ssh/authorized_keys of the opc user on the second WebLogic server (WL Node 2). For more details see here.

Make sure the SSH connections from the root user to the opc user are working between the two instances in both directions.

sudo su -
ssh opc@10.196.1.2
sudo su -
ssh opc@10.148.1.1
exit

RSYNC Configuration

Before the synchronization can be enabled, all servers and the Node Manager should be stopped on the secondary site. Log in as the opc user, switch to the oracle user and execute the following. In this example this is done on WL Node 2.

/u01/app/oracle/middleware/oracle_common/common/bin/wlst.sh
nmConnect('weblogic', 'Welcome1#', '127.0.0.1', '5556','jcs_domain','/u01/data/domains/jcs_domain')
nmKill('jcs_doma_server_1')
nmKill('jcs_doma_adminserver')
exit()
kill -9 $(cat /u01/data/domains/jcs_domain/nodemanager/nodemanager.process.id)

Next remove the current configuration from the node so that the configuration from the primary site can be copied without issues.

mv /u01/data/domains/jcs_domain /u01/data/domains/jcs_domain.bkp

Then configure a cron job to enable copying the data between the sites. The domain directory should be copied into a staging directory – here /u01/data/domains/jcs_stage. This is done to avoid problems with partially completed copies. More information about cron jobs and the crontab can be found here. Alternatively other scheduling tools can be used. To edit the crontab, simply run the following as the opc user:

crontab -e

The crontab should look like the following, to sync the data from WL Node 1 to WL Node 2 every 10 minutes.

*/10       *       *       *       *      sudo rsync -avz --exclude "*.log" --log-file=/u01/data/domains/rsync.log --rsync-path="sudo rsync" opc@129.0.1.1:/u01/data/domains/jcs_domain /u01/data/domains/jcs_stage

Database Configuration

The database configuration for this example is similar to the configuration described in this white paper. Simply follow the steps outlined in the section Appendix A: Case Study Details, matching the service names to what has been configured in this example, as shown in the table below.

Node Service Name Unique Name
DB1 JCS JCS
DB2 JCS JCSEM2
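
Before executing a switchover it is advisable to confirm that the Data Guard broker configuration is healthy. A minimal check from Database Node 1 could look like the following (database unique names as per the table above):

. oraenv
ORACLE_SID = [ORCL] ? ORCL
dgmgrl /
show configuration
show database jcsem2
exit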

Switchover Execution

The following diagram shows the basic steps that are carried out as part of a switchover.

Untitled

On WL Node 1 the following steps have to be carried out as user oracle to stop the WebLogic Server.

/u01/app/oracle/middleware/oracle_common/common/bin/wlst.sh
nmConnect('weblogic', 'Welcome1#', '127.0.0.1', '5556','jcs_domain','/u01/data/domains/jcs_domain')
nmKill('jcs_doma_server_1')
nmKill('jcs_doma_adminserver')
exit()
kill -9 $(cat /u01/data/domains/jcs_domain/nodemanager/nodemanager.process.id)

Add a # symbol in front of the crontab rsync job to disable further syncing of data to the remote site.

On Database Node 1 the following steps have to be carried out:

. oraenv
ORACLE_SID = [ORCL] ? ORCL
dgmgrl /
switchover to jcsem2
exit

On WebLogic Node 2 the following steps have to be carried out to start the environment. Make sure to review the rsync log prior to executing these steps.

cp -r /u01/data/domains/jcs_stage/jcs_domain /u01/data/domains
cd /u01/data/domains/jcs_domain/nodemanager
/u01/data/domains/jcs_domain/bin/startNodeManager.sh &
/u01/app/oracle/middleware/oracle_common/common/bin/wlst.sh
nmConnect('weblogic', 'Welcome1#', '127.0.0.1', '5556','jcs_domain','/u01/data/domains/jcs_domain')
nmStart('jcs_doma_adminserver')
nmStart('jcs_doma_server_1')
exit()

Due to the failover configuration of the Load Balancer no other change is required. To enable WL Node 2 to be the Primary node for the Synchronization simply disable the cron job on WL Node 2 and enable the following job on WL Node 1.

*/10       *       *       *       *      sudo rsync -avz --exclude "*.log" --log-file=/u01/data/domains/rsync.log --rsync-path="sudo rsync" opc@10.148.1.1:/u01/data/domains/jcs_domain /u01/data/domains/jcs_stage

In case of a switchback the steps have to be carried out in reverse order.

Next Steps

As discussed in the introduction this example is intentionally kept simple. One great option is to implement the copy jobs in Enterprise Manager. This allows for monitoring of the process and corrective actions. This is described in the Enterprise Manager Administration Guide. The switchover process can also be executed from Enterprise Manager.


Using SSH for Data Sync Loads to BI Cloud Service (BICS)


For other A-Team articles about BICS and Data Sync, click here

Introduction

The Data Sync tool can be used to load both on-premise and cloud data sources into BI Cloud Service (BICS).  When the target is the standard schema service database, an HTTPS connection is used through the BICS API, and the uploaded data is encrypted in transit.

While Oracle Compute allows for security lists to limit the IP addresses that can connect to the DBaaS database, the data that is transmitted between the host running Data Sync and the DBaaS database may not be encrypted and in theory could be intercepted.  Future versions of Data Sync will provide the ability to use an inbuilt SSH connection to secure the data being passed.  This article walks through a simple method to provide that functionality now.

The process involves creating an SSH tunnel from the environment running Data Sync, to the DBaaS environment, and pushing the data to be loaded through that encrypted connection.

 

Main Article

The prerequisites for this approach are:

  • A copy of the key created when the DBaaS environment was set up, and the passphrase used.  The administrator who created the DBaaS instance should have both of those.
  • An SSH tool capable of connecting with a Private Key file and creating the tunnel.  In this article Putty will be used, which is a free tool available for download from here, although any SSH tool capable of creating a tunnel could be used instead.

 

Steps

a. From within the DBaaS console, identify the target database, its IP address, and Service Name.

NOTE – if only a SID is available for the database in question, see this article for steps on how to make the SID available as a Service Name.  Follow those steps first.

Oracle_Database_Cloud_Service_Details

b. Open Putty and Set Up a Connection using the IP of the DBaaS database obtained in step (a) and port 22.

Cursor

c. Expand the ‘Connection’ / ‘SSH’ / ‘Auth’ menu item.  Browse in the ‘Private key file for authentication’ section to the key that the DBaaS administrator provided.

Windows7_x64

d. Select the ‘Tunnels’ sub-section within SSH.  This is where the tunnel is created.  A port on the local machine is entered in the ‘Source Port’.  This must be a port that is not already in use.  In the destination, enter the IP address of the DBaaS target database and the port.  This should be in the format:

host:port

Once entered, hit ‘Add’ to save the tunnel settings.

In this example, the local port 9999 is used, with the destination being port 1521 on the DBaaS host.  Putty will take any traffic sent to port 9999 on the local machine, push that through the SSH tunnel, and then direct that traffic to port 1521 on the destination machine.

Screenshot_8_25_16__4_52_PM

e. Return to the ‘Session’ section, give the session a name and save it.  Then hit ‘Open’ to start the connection to the DBaaS host.

Screenshot_8_25_16__4_29_PM

f. For the ‘Login as’ user, enter ‘opc’ and when prompted for the ‘Passphrase’, use the passphrase for the SSH Key.  If the connection is successful, then a command prompt should appear after these have been entered:

Cursor

g. The Putty session must be left open for SSH communication to use the tunnel.
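
As an alternative to Putty on Linux or macOS, the same tunnel could be created with the standard OpenSSH client, assuming the private key is available in OpenSSH (PEM) format. The values below are placeholders matching the example above:

# Forward local port 9999 to port 1521 on the DBaaS host, keeping the session open
ssh -i /path/to/privateKey -N -L 9999:<DBaaS_IP>:1521 opc@<DBaaS_IP>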

h. In Data Sync create a new connection.  Use the Service Name for the DBaaS database, but for the host, point to the machine where the Putty session is running (in this case localhost).  Use the local Port defined in step (d) above.

Screenshot_8_25_16__4_55_PM

i. Test the connection.

Cursor

j. Create a new Job in Data Sync by selecting ‘New’ under the ‘Jobs’ main menu.

Cursor

k. For the ‘TARGET’ Data Source, Override with the Connection created in step (h) so that the job will load data to the new DBaaS target using the SSH connection.

Cursor

 

Summary
This article walked through the steps to configure Data Sync to use an SSH tunnel to load data.

For other A-Team articles about BICS and Data Sync, click here

Retrieve and Update Task Payload with PCS REST API


PCS has a number of REST APIs you can use to search for the tasks assigned to a user or group, retrieve the payload of a specific task, and update that payload.  In PCS 16.3.5, a new Oracle form technology was introduced; hence, there are some changes to the REST API.

Prior to PCS 16.3.5, Frevvo WebForms was the main form technology used in PCS; when you use the PCS REST API to retrieve the payload of a specific task, it always returns an XML-formatted payload.  From PCS 16.3.5 onwards, if you are using the new Oracle WebForms, you have the option to specify whether you want the payload returned in JSON or XML format.

Use Case:  You want to retrieve a list of tasks that were assigned to a user or a group, and update the payload of a specific task.

To implement this use case, you need to use the following PCS REST APIs:

  1. Retrieve a list of tasks

To retrieve a list of tasks that were assigned to a user or a group, you need to use the HTTP GET method on the /tasks REST endpoint e.g. https://example.com/bpm/api/3.0/tasks. The content type and accept header parameter must be set to “application/json”.

The tasks API supports several optional query parameters that make searching for tasks easier.  To find the specific task that you want to update, you can use a combination of the keyword and assignment parameters.  Using the keyword parameter allows you to search for a task by its title, so when you are designing your Human Task, provide a meaningful and searchable title, for example “Loan Application Approval for #120-002-00”.  Below are some of the useful parameters that can be used for searching:

  • priority – Task priority from 1 (highest) to 5 (lowest), or Any (default)
  • dueDateFrom – Start of due date range in the format yyyy-mm-dd hh:mm:ss
  • dueDateTo – End of due date range in the format yyyy-mm-dd hh:mm:ss
  • assignment – Task assignees – MY, GROUP, MY_AND_GROUP (default), MY_AND_GROUP_ALL, REPORTEES, OWNER, REVIEWER, PREVIOUS, ALL, ADMIN
  • keyword – Keyword in the task title

UpdateTaskPayload_1
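
For illustration, a search like the one above could also be issued with curl, assuming basic authentication and the example host used in this article (the keyword value is a placeholder):

curl -u 'user:password' \
  -H "Content-Type: application/json" -H "Accept: application/json" \
  "https://example.com/bpm/api/3.0/tasks?assignment=MY_AND_GROUP&keyword=Loan%20Application%20Approval"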

  2. Fetch Task Payload

From the response of the tasks list REST API, locate the task ID and the URL of the task that you are interested in; these are present in the “items[]” array (the number field holds the task ID).  Once you have the task ID, you can use the HTTP GET method on this URL: http://example.com/bpm/api/3.0/tasks/{taskID}/payload to fetch the payload associated with that task.  The content type must be set to “application/json”.

In some integration systems, you will need to set the accept header explicitly for a specific media type for the response.  In Oracle Service Bus (OSB), the outbound request header can be set using the transport header action.

UpdateTaskPayload_2

For example, if you want the response to be an XML representation, you will need to set the Accept header value to “*” or “*/*” to indicate all media types.

UpdateTaskPayload_3

If you want the response to be a JSON representation, you will need to set the Accept header to “application/json”.

UpdateTaskPayload_4

For Basic Forms (Frevvo Webforms) the response to the get Payload call is always an XML representation of the Payload.

For Oracle New WebForms, the response will be in JSON format if you set the Accept header setting to “application/json”.

UpdateTaskPayload_5

However, if the task is associated with a Frevvo webform and you have added the Accept header with the application/json value, the response will be a 404 error.

UpdateTaskPayload_6

If you require the response to be in XML for New WebForms, make sure you remove the Accept: application/json header setting.

UpdateTaskPayload_7
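
The same behaviour can be reproduced outside OSB with curl. This is a sketch only; the task ID and credentials are placeholders:

# JSON representation (New WebForms only)
curl -u 'user:password' -H "Accept: application/json" \
  "https://example.com/bpm/api/3.0/tasks/<taskId>/payload"
# XML representation (Basic Forms, or New WebForms without the JSON Accept header)
curl -u 'user:password' -H "Accept: */*" \
  "https://example.com/bpm/api/3.0/tasks/<taskId>/payload"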

  3. Update Payload of the task

Basic Forms (Frevvo Webforms)

After you have retrieved the payload for a specific task, you can modify it and send it back to PCS to update the payload of that task.  Use the same URL as for fetching the task payload (e.g. http://example.com/bpm/api/3.0/tasks/{taskID}/payload), but with the HTTP POST method.

For Basic Forms, the task update REST API requires an XML payload as part of the POST request; the XML payload is wrapped in a JSON request.  You will also need to set the Content-Type to “application/json”.

Note: Each " (double quote) has been escaped with a \ (backslash).  An alternative is to use single quotes instead of double quotes.  Also ensure that there are no newline characters within the payload.

For example:  Updating Task Payload associated with Basic Form

{
"xmlPayload": "<payload xmlns=\"http://xmlns.oracle.com/bpel/workflow/task\"><Address xsi:type=\"def:Address\" xmlns:ns1=\"http://xmlns.oracle.com/bpm/forms/schemas/Address\" xmlns:def=\"http://xmlns.oracle.com/bpm/forms/schemas/Address\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"><ns1:addressType>Street</ns1:addressType><ns1:city>SYD</ns1:city><ns1:country>Australia</ns1:country><ns1:currentAddress>true</ns1:currentAddress><ns1:endDate>2016-08-24</ns1:endDate><ns1:floorNumber/><ns1:housingSituation>Own</ns1:housingSituation><ns1:months>1</ns1:months><ns1:postCode>2001</ns1:postCode><ns1:startDate>2005-08-03</ns1:startDate><ns1:state>New South Wales</ns1:state><ns1:streetName>David</ns1:streetName><ns1:streetNumber>100</ns1:streetNumber><ns1:streetType>option1</ns1:streetType><ns1:unitNumber/><ns1:years>10</ns1:years></Address></payload>"
}

UpdateTaskPayload_8
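
For reference, a request like the one above could be sent with curl by saving the JSON wrapper shown earlier to a file (the task ID and credentials are placeholders):

curl -u 'user:password' -X POST \
  -H "Content-Type: application/json" \
  -d @payload.json \
  "https://example.com/bpm/api/3.0/tasks/<taskId>/payload"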

New WebForms

If the task is associated with New WebForms, you can update the payload by using either a JSON payload or an XML payload.

For example:  Updating Task Payload using JSON Payload

UpdateTaskPayload_9

For example:  Updating Task Payload using XML Payload

UpdateTaskPayload_10

Loading Oracle Service Cloud (RightNow) Data into BICS with Data Sync


For other A-Team articles about BICS and Data Sync, click here

Introduction

Version 2.2 of the Data Sync tool was released in September 2016 and added the ability to connect to a number of different data sources via JDBC.

Setting up these Data Sources in Data Sync varies by the source.  Rather than create a single article to cover them all, 3 have been created.  Select the appropriate article to learn more about the steps to set up that Data Source.

This article covers the steps to extract data from Oracle Service Cloud (RightNow) and load that into BI Cloud Service (BICS).

 

Data Source Article With Details
Service Cloud / RightNow this article
SalesForce link to article
Greenplum, Hive, Impala, Informix, MongoDB, PostgreSQL, Redshift, Sybase, and instructions for other JDBC data sources where a JDBC driver is available link to article

 

 

Main Article

Downloading Latest Version of Data Sync Tool

Be sure to download and install the latest version of the Data Sync Tool from OTN through this link.

For further instructions on configuring Data Sync, see this article.  If a previous version of Data Sync is being upgraded, use the documentation on OTN.

Data Load Methods

The Data Sync tool offers 3 different methods of extracting data from Oracle Service Cloud:

  1. Loading the data from a report created in the Oracle Service Cloud Report Explorer desktop tool
  2. Loading the data directly from individual objects available in Oracle Service Cloud
  3. Loading the data directly from a ROQL query

Each method will be discussed during this article.

Setting up the Oracle Service Cloud Connection in Data Sync

1. In the Data Sync tool, create a new Connection:

Cursor

2. In the ‘Connection Type’ selection, use ‘Oracle Service Cloud (RightNow)’, enter a suitable user name and password, and the URL for the RightNow environment:

Cursor

 

3. ‘Test’ the connection to confirm connectivity:

Screenshot_8_1_16__4_36_PM

 

Data Loading

As mentioned, there are 3 different ways that data can be loaded into BICS from Oracle Service Cloud using the Data Sync Tool.  Each will be discussed in detail in this section.

In each method discussed, filters can be used to restrict the data set size.

 

Method 1: Loading data from a report created in the Oracle Service Cloud ‘Report Explorer’ tool

This method provides the ability to access all available objects in Oracle Service Cloud.  It queries the database tables in Service Cloud directly, rather than the semantic layer that the other methods work against.  This means that only the fields that are needed are extracted, and only the tables needed are traversed by the query, making the query against Service Cloud more efficient and potentially better performing.

The process involves creating 2 reports in Oracle Service Cloud.

The first contains the data that will be extracted and loaded into BICS (the ‘Data Report’); the second provides metadata about that data to help Data Sync run incremental loads and to ‘chunk’ the data into more manageable extracts (the ‘Helper Report’).

1. In the Oracle Service Cloud ‘Report Explorer’ create the ‘Data Report’ pulling in the fields that will be required in BICS.  In this example some fields are pulled from the ‘incidents’ table.

Cursor

2. To enable the Data Sync tool to chunk data into more manageable extracts, and to filter data on dates for incremental updates, two filters MUST be defined in each ‘Data Report’.

One filter, which should be based on an ID, is used by Data Sync to count how many rows will be returned.  From this it can decide how to break the extract down into smaller chunks.  For instance, a query returning 2 million rows may take some time; going across network and cloud environments, the connection may be interrupted, or a time-out or memory usage threshold could be reached.

Data Sync will, by default, break those 2 million rows into 200 ‘chunks’ of 10,000 rows of data (this value can be changed).  The first connection will request the first 10,000 rows based on this Filter ID field.  Once loaded, the next 10,000 rows will be requested, and so on.  This puts less strain on the Service Cloud, and reduces the possibility of interruptions or time-outs.

The second filter should be based on a date field that can be used to help identify new or changed records since the previous load.

The names of these 2 filters are important.  They have to be named:

INCREMENTAL_LOAD – for the date based filter, and

RANGE_COUNT – for the ID based filter.

In the Filter Expression editor in Service Cloud, the filters will look like this:

Screenshot_8_5_16__2_17_PM

In this example, the INCREMENTAL_LOAD filter references the ‘incidents.updated’ field.  Confirm that the ‘Make this filter selectable at run time’ remains checked:

Cursor

For the RANGE_COUNT filter, the ‘incidents.i_id’ field is used from the ‘incidents’ table.

Cursor

3. Save the main ‘Data’ Report, and then create the ‘Helper’ Report

Cursor

 

This needs to have 3 fields.

One calculates the minimum value of the ID that will be extracted by the report, another calculates the maximum value, and a third calculates the total number of rows that will be returned – the count.

The names of these fields are important.  Use ‘MIN_VALUE’, ‘MAX_VALUE’, and ‘COUNT’.

Below are the 3 field definitions used in this example report, using the ‘Min’, ‘Max’ and ‘Count’ functions available in the Service Cloud Report Explorer.

Zoom_Participant_ID__32___Meeting_ID__584-903-655

Zoom_Participant_ID__32___Meeting_ID__584-903-655

Zoom_Participant_ID__32___Meeting_ID__584-903-655

As before, a filter is required.  For the ‘helper’ report, only the INCREMENTAL_LOAD filter is required.  That way the Data Sync tool can request the metadata only for the date range that it needs to extract for.

 

Cursor

Use the SAME date field as in the Data report – in this case the ‘incidents.updated’ field.

Run the helper report to confirm it works.  In this example it is showing that the data to be returned has 1,494 rows (COUNT), with the minimum ID in the data being 12278 (MIN_VALUE) and the maximum ID being 15750 (MAX_VALUE).

Screenshot_8_5_16__2_22_PM

Save the report.

4. Service Cloud creates a unique ID for each report.  That ID will be needed in Data Sync to reference the 2 reports.

To find the ID, right click on the report and select ‘View Definition’:

 

Cursor

 

In this example the ‘Data’ report, ‘Incidents_Data’, has an internal ID of 100804.

Screenshot_8_3_16__11_13_AM

Do the same for the ‘helper’ report.  In this case, the internal ID is 100805.

These IDs will be needed in the next step.

Cursor

5. In the Data Sync tool, under ‘Project’, ‘Pluggable Source Data’, select ‘Manual Entry’.

Cursor

Choose a Logical Name for the data source, and a Target Name.  Note the Target Name will be the new table created in BICS.  If the table already exists, be sure to enter its name correctly.  Make sure the DB_Connection is set to the Service Cloud / RightNow connection created earlier.

Cursor

In the ‘Message’ box that appears next, make sure ‘Analytics Reports’ is selected in the ‘Data from:’ selector.  The message will be updated to display additional information about this import method.

Cursor

In the final screen, the ‘Data’ report ID from step (4) needs to be entered as the ‘Analytics Report ID’, and the ‘Helper’ report ID as the ‘Helper Analytics Report ID’.  The ‘Numeric ID’ needs to be the logical name of the field in the ‘Data’ report that contains the main ID for the report.  In this case, that field is ‘Incident ID’.  Be aware that this value is case sensitive and needs to exactly match the name of the report field.  The final field, ‘Maximum number of rows to read at a time’, is the ‘chunking’ size.  By default this is 10,000.  This can be changed if needed.

Cursor

6. To set up ‘Incremental’ loads, select the data source that was just created, and in the attributes section, select the value in the ‘Load Strategy’ box.  This will bring up the various load strategies allowed.

Cursor

Select ‘Update table’

Cursor

For the User Key, select the ID that can be used to identify unique records for updating.  In this example ‘incident ID’ is used.

Cursor

For the Filter, use the date column that will identify changed data.  In this example ‘Date Last Updated’ is used:

Cursor

7. Run the job and confirm it works.

 

Method 2 – Loading the data directly from individual objects available in Oracle Service Cloud

For cases where an object exposed in the semantic layer of Service Cloud contains all the data that is needed, this approach may be the best one.

1. In Data Sync, select ‘Project’ / ‘Pluggable Source Data’ and then ‘Data from Object(s)’

Cursor

Leave ‘Discover objects by listing’ selected and click ‘OK’.

2. In the next screen, make sure ‘RightNow’ is selected as the Source, and then hit the ‘Search’ button to pull back a list of all the objects available.

Select the Object(s) that are to be included (in this case ‘Billing Payments’), and then click the ‘Import’ button.

Cursor

A message will be displayed providing more details of this method.  Click ‘OK’.

3. Select the data source that was created, and select the ‘Pluggable Attributes’ section.

4. Three options are shown.  The ‘Numeric Column’ and ‘Maximum number of rows to read at a time’ are mandatory.

The ROQL Query Condition field is optional.  This field can be used to filter the data returned.  For instance, if the Billing.Payments object contains many years of history, but for BICS we are only interested in data changed from 2014 onwards, then a ROQL condition of updatedtime > '2014-01-01T00:00:00Z' may be used to restrict the data returned.  This has nothing to do with incremental loading.  This filter will be used every time a job is run, so no data from before this date will ever be extracted from Service Cloud.

The ‘Numeric Column’ needs to be an ID field from the Billing.Payments object.  In this case there is a field called ‘id’.  This is case sensitive.

The final column is the ‘chunking’ size to be used.  This defaults to 10,000, but can be changed if required.

Cursor

5. As in the previous load example, to set up incremental updates, go to the ‘Edit’ tab, and select ‘Update table’ as the Load Strategy:

Cursor

and select the appropriate value for the unique ‘User Keys’ and date value for the ‘Filter’ to allow Data Sync to identify rows changed since the last extract.

6. Run the job to confirm it works.

 

Method 3 – Loading the data directly from a ROQL query

ROQL stands for ‘RightNow Object Query Language’.  It has some similarities to SQL and is the query language used to query the semantic reporting layer in Service Cloud.

In this example the following ROQL query will be used.

SELECT * FROM incidents WHERE updatedtime > '2014-01-01T00:00:00Z'

 

1. In the Data Sync tool, select ‘Project’, ‘Pluggable Source Data’ and then ‘Manual Entry’:

Cursor

Enter a logical name for the data source, and a target name.  This should be the existing BICS table that will be loaded, or the name of the new table that will be created:

Cursor

Make sure the ‘Data From’ box is set to ‘ROQL’ then hit ‘OK’:

Cursor

2. In addition to the ROQL query (the ‘ROQL Tabular Query’), a statement is required to calculate the MAX, MIN, and COUNT of the identity field (in this case ID).  The name of the Query Object – in this case ‘incidents’ – and the Numeric Column – in this case ‘id’ – are also required.  NOTE – these last two are case sensitive.

The chunking size (‘Maximum number of rows to read at a time’) can be adjusted if necessary.

Cursor

Click ‘OK’, and the Pluggable Source Data object is created.

3. As before, to set up incremental loads, select the Data Source, then update the load strategy.

Cursor

And select an appropriate key (an ID) and a filter (an update date / time).

4. Run the job to confirm it works.

Summary
This article walked through the steps to configure the Data Sync tool to be able to connect and extract data from Oracle Service Cloud / RightNow.  It covered 3 different approaches.

NOTE – Service Cloud has inbuilt restrictions for extracting data.  These restrictions are intended to protect the underlying database by preventing a single query from using up too many database resources.  The Data Sync tool has built-in automatic error handling to accommodate this.  If the error is encountered while requesting data, then the Data Sync tool will recursively retry the data request, adding further filters to reduce the data set being returned.  At the time of writing, this recursive error-handling is built into methods (2) and (3) outlined in the article.  It will shortly (within a few weeks) be added for Method (1) as well.

For further information on the Data Sync Tool, and also for steps on how to upgrade a previous version of the tool, see the documentation on OTN.  That documentation can be found here.

For other A-Team articles about BICS and Data Sync, click here

Loading SalesForce Data into BICS with Data Sync


For other A-Team articles about BICS and Data Sync, click here

Introduction

Version 2.2 of the Data Sync tool was released in September 2016 and added the ability to connect to a number of different data sources via JDBC.

Setting up these Data Sources in Data Sync varies by the source.  Rather than create a single article to cover them all, 3 different articles have been created.

Select the link in the right column of the table below to view the appropriate article.

This article covers the steps to extract data from SalesForce and load that into BI Cloud Service (BICS).

 

Data Source – Article With Details

  • SalesForce – this article
  • Service Cloud / RightNow – link to article
  • Greenplum, Hive, Impala, Informix, MongoDB, PostgreSQL, Redshift, Sybase, and instructions for other JDBC data sources where the JDBC driver is available – link to article

 

 

Main Article

Downloading Latest Version of Data Sync Tool

Be sure to download and install the latest version of the Data Sync Tool from OTN through this link.

For further instructions on configuring Data Sync, see this article.  If a previous version of Data Sync is being upgraded, use the documentation on OTN.

Setting up Salesforce Connection

1. Obtain a Security Token from Salesforce. If you have this already, skip to step 2.

a. In the Salesforce GUI select ‘My Settings’ under the drop down beneath the user name as shown:

 

passkey1

 

b. Expand the ‘Personal’ menu item on the left of the page and select ‘Reset Security Token’.  Complete the process and the token – a set of alphanumeric characters – will be e-mailed to the user.

 

Passkey2

2. In the Data Sync tool, create a new connection for SalesForce.

a. Choose the ‘Generic JDBC’ driver, enter the username and password, and edit the ‘URL’.  The URL format should be:

jdbc:oracle:sforce://login.salesforce.com;SecurityToken=##SECURITYTOKEN##

replacing ##SECURITYTOKEN## with the Salesforce token from step (1).

passkey3

b. In the ‘JDBC Driver (Optional)’ field, enter:

com.oracle.bi.jdbc.sforce.SForceDriver

A_CONNECTION2

c. In the ‘Schema Table Owner (Optional)’ section – select the Salesforce schema that houses the data required, and then click ‘OK’.

Cursor

d. Save and test the connection.

If this error is encountered:

Failure connecting to “SalesForce”! [Oracle DataDirect][SForce JDBC Driver][SForce]UNSUPPORTED_CLIENT: TLS 1.0 has been disabled in this organization. Please use TLS 1.1 or higher when connecting to Salesforce using https.

Then changes to the security settings in SalesForce will be required. To do this:

a. In the search box of SalesForce – type ‘Critical Updates’ and then select the ‘Critical Updates’ object that appears below.

Critical_Updates_2

b. Select the ‘Deactivate’ option for ‘Require TLS 1.1 or higher for HTTPS connections’.

 

Critical_Updates

3. Return to Data Sync. Testing should now be successful.

Connection_Successful

Making ‘Audit’ Columns Available in SalesForce Tables

SalesForce has Audit / Metadata columns available on many tables.  These are especially useful for incremental loads as they provide ‘CREATED BY’, ‘CREATED DATE’, ‘MODIFIED BY’, ‘MODIFIED DATE’ columns, among others.  HOWEVER – by default, the JDBC driver configuration does not return these fields, so some editing needs to be done.

The SalesForce connection, described above, must be created before making these changes.

1. Shutdown Data Sync Tool

Exit out of Data Sync completely, by closing the GUI, and then selecting ‘Exit’ out of the Data Sync application from the task bar:

Cursor

2. Find Files Related to SalesForce JDBC Configuration

When the SalesForce connection is created in Data Sync, a number of files are created in the root of the Data Sync directory by the JDBC driver.  These files will be named after the user created in the connection string.  For instance, if the user connecting to SalesForce is JSMITH@GMAIL.COM, then the files will be named JSMITH.app.log, JSMITH.config etc.  In the example below, the username was ‘SALESFORCE’.

3. Delete the ‘map’ file

In the Data Sync directory, find the files listed below. DELETE the xxxx.SFORCE.map file.  In this case ‘salesforce.SFORCE.map’. This file contains metadata including tables and columns available in SalesForce.  This needs to be deleted so that it can be rebuilt with the audit columns included.

Cursor

4. Update the ‘config’ file

In your favourite text editor, open the XXXX.config file, in this example ‘salesforce.config’ and find this entry:

auditcolumns=none

and update to:

auditcolumns=all

 

The updated file should look something like this:

Cursor

Save the file.
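
For reference, steps 3 and 4 can also be done from a shell on the machine where Data Sync is installed.  The following is a minimal sketch, assuming the ‘salesforce’ connection user from this example and that Data Sync lives in /home/oracle/datasync – adjust the path and file names to your environment:

cd /home/oracle/datasync
# step 3: delete the metadata map file so it is rebuilt with the audit columns
rm salesforce.SFORCE.map
# step 4: switch auditcolumns from 'none' to 'all' in the config file
sed -i 's/^auditcolumns=none/auditcolumns=all/' salesforce.config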

5. Confirm Audit Columns are now visible

Open the Data Sync tool.  Go to ‘Connections’ and ‘Test’ the SalesForce connection you had created previously.  This will regenerate the .MAP file, this time containing all of the audit columns.

When tables are imported (see the next section ‘Loading SalesForce Data into BICS’), the following audit columns should be available.  NOTE – the ‘SYS_ISDELETED’ field data type is not recognized by Data Sync and comes in as ‘UNKNOWN-UNSUPPORTED’.  This needs to be changed in the Target tables.  It can be changed to a VARCHAR with a length of 1.

Cursor

NOTE – if new fields are added to SalesForce, then steps 1-3 should be repeated so that the ‘MAP’ file can be refreshed and the new fields made available in the Data Sync tool.

 

Loading SalesForce Data into BICS

As it does with many data sources, the Data Sync tool provides several methods to load data in from the source.  Two will be covered here: importing data from a SalesForce Object, and importing data with a SQL query.

 

Import Data from a SalesForce Object

1. Select ‘Project’ / ‘Pluggable Source Data’ and then ‘Data From Object’:

 

Weds_1

2. Make sure ‘Discover objects by listing’ is selected, then hit ‘OK’

Weds_2

3. Select SalesForce Object(s) to Load

If the SalesForce object name is known, or even the first letters of the name, then that may be entered in the ‘Filter’ box.  For example, to return all SalesForce objects that begin with the letters ACCE, enter ‘ACCE*’ (without the quote marks) in the filter.  To see all objects, leave the default ‘*’.  Then hit ‘Search’.  That will bring up a list of the available SalesForce objects matching the Filter criteria.

Select the Object(s) required by checking the ‘Import Definition’ check box, and then hit ‘Import’.

In this example the filter was left as default, and the ACCOUNT table was selected.

Cursor

4. Best Practice

Data Sync will give some recommendations.  Read those, then select ‘OK’

You may get a warning for ‘unsupported datatypes’.  Click ‘OK’.

Whether a warning is received or not, it is good practice to confirm that all field data types have been identified correctly, and only those that are truly not nullable have that flag set.  This will prevent errors when running a Job.

a. Go to ‘Project’, ‘Target Tables/Data Sets’ and select the target table created in the last step.  In this example ‘ACCOUNT’, and then in the lower window pane, select ‘Table Columns’.

b. Go through each table column to confirm that the data type is set correctly, and in cases where the Data Type is listed as ‘UNKNOWN-UNSUPPORTED’ change that.  In this case that can be changed to a VARCHAR with length of 1.  NOTE – a field needs to be in an ‘ACTIVE’ state before it can be edited.

c. Go through each field and confirm that only fields that are truly non-nullable have the ‘Nullable’ box unchecked.  If a field has the ‘Nullable’ flag unchecked, but contains null values, then the load will fail.

Cursor

Selecting Objects from SalesForce using this method will bring back every field from the Object, whether it is required or not.  This can result in a lot of data being extracted and loaded into BICS.  Performance can be improved by excluding fields that are not required in BICS, so it is good practice to go through and inactivate any such fields.

d. With the Pluggable Data Source created in the previous steps still selected, check the ‘Inactive’ column to remove fields not required.  In this example the CLEANSTATUS, CUSTOMERPRIORITY, DANDBCOMPANYID and DESCRIPTION fields are marked as ‘Inactive’.  When the Data Sync tool reads from SalesForce, it won’t select these columns, so the extract will be smaller and will download faster, as will the upload into BICS.

Cursor

e. Be sure to hit the ‘Save’ button after making any changes.

NOTE – SalesForce has some field names that are considered Reserved Words in Oracle.  For instance, many SalesForce tables have the column ‘ROWID’ which is an Oracle DB reserved word.

The Data Sync tool will automatically rename ROWID to ORARES_ROWID, as shown in the ‘mapping’ sub-select of the ACCOUNT table:

Cursor

5. Incremental loads

a. To set up Incremental Loads, go to ‘Project’ / ‘Pluggable Source Data’, and select the Pluggable Data Object created in the previous steps.  Then in the lower window pane select ‘Edit’ and then click on the value in the ‘Load Strategy’ box to open up the Load Strategy options.

b. Select ‘Update table’

Screenshot_8_9_16__12_46_PM

c. Choose a suitable key field (or fields) for the user key, and a date field for the filter.  The Audit fields, described in the section ‘Making ‘Audit’ Columns Available in SalesForce Tables’, may make good candidates for the date filter field.

6. Run the job to test it.

 

Import SalesForce Data with SQL Code

1. Select ‘Project’ / ‘Pluggable Source Data’ and then ‘Manual Entry’

Enter a logical name for the pluggable source data object that will be created.  The Target Name will be the table that Data Sync creates in BICS.  If the plan is to load an existing table, enter that name here.

In the DB Connection make sure the ‘SalesForce’ connection is selected:

Cursor

2. In the drop down for ‘Data from’ select ‘Query’, then OK.

Cursor

3. In the Query Override, enter the SQL statement.

This could be in the form of a select *, for instance

select * from account

or a select that specifies the field names for a single table

select accountnumber, accountsource, annualrevenue, billingstate, description, industry, rowid, sys_lastmodifieddate from account

or a select that joins multiple tables and includes calculations (in this case to find the last modified date of the data from 2 tables, and to create a unique ID)

select
contact.accountid,
contact.email,
contact.lastname,
contact.firstname,
case when contact.sys_lastmodifieddate > opportunity.sys_lastmodifieddate then contact.sys_lastmodifieddate else opportunity.sys_lastmodifieddate end as lastmoddate,
contact.rowid + opportunity.rowid as uniqueid,
opportunity.amount,
opportunity.description,
opportunity.campaignid,
opportunity.expectedrevenue
from contact, opportunity
where opportunity.accountid=contact.accountid

Cursor

4. Check the newly created target’s data type.

As before, go to the ‘Target Tables / Data Sets’ created from the new Pluggable Data Source and make sure that the Data Type is correct and there are no ‘Unknown/ Unsupported’ data types.  Also adjust the ‘Nullable’ column so that only columns that are truly Not Null remain unchecked.

Cursor

5. Set up Incremental Updates

As before, change the Load Strategy to ‘Update table’ and select a suitable key field (or fields) for the user key, and a date field for the filter.

Cursor

6. Run the job to test it.

 

Summary
This article walked through the steps to configure the Data Sync tool to be able to connect and extract data from SalesForce.

For further information on the Data Sync Tool, and also for steps on how to upgrade a previous version of the tool, see the documentation on OTN.  That documentation can be found here.

For other A-Team articles about BICS and Data Sync, click here

Loading Data from Generic JDBC Sources into BICS


For other A-Team articles about BICS and Data Sync, click here

Introduction

Version 2.2 of the Data Sync tool was released September 2016 and added the ability to connect to a number of different data sources via JDBC.

Setting up these Data Sources in Data Sync varies by the source.  Rather than create a single article to cover them all, 3 have been written.  Select the appropriate article to learn more about the steps to set up that Data Source:

This article will cover a number of the JDBC drivers that come pre-loaded with the Data Sync tool (Greenplum, Hive, Impala, MongoDB, PostgreSQL, Redshift, Sybase) as well as how to set up connections to other data sources for which a JDBC driver is available.

 

Data Source – Article With Details

  • Greenplum, Hive, Impala, MongoDB, PostgreSQL, Redshift, Sybase, and directions for other JDBC data sources where the driver is available – this article
  • Service Cloud / RightNow – link to article
  • SalesForce – link to article

Downloading Latest Version of Data Sync Tool

Be sure to download and install the latest version of the Data Sync Tool from OTN through this link.

For further instructions on configuring Data Sync, see this article.  If a previous version of Data Sync is being upgraded, use the documentation on OTN.

 

Setting up JDBC Connection

The following JDBC drivers are included with version 2.2 of the Data Sync tool.  To connect to any of these data sources, no additional JDBC drivers need to be downloaded, and the connection can be set up immediately.

For databases not in this list, see the section in this document ‘Setting up Connection for a Different JDBC Driver’.

 

Database:     GreenPlum

Driver:           com.oracle.bi.jdbc.greenplum.GreenplumDriver

URL:              jdbc:oracle:greenplum://$hostname:$port;DatabaseName=$databasename;

 

Database:     Hive

Driver:           com.oracle.bi.jdbc.hive.HiveDriver

URL:              jdbc:oracle:hive://$hostname:$port;DatabaseName=$databasename;

 

Database:     Impala

Driver:           com.oracle.bi.jdbc.impala.ImpalaDriver

URL:              jdbc:oracle:impala://$hostname:$port;DatabaseName=$databasename;

 

Database:     Mongodb

Driver:           com.oracle.bi.jdbc.mongodb.MongoDBDriver

URL:              jdbc:oracle:mongodb://$hostname:$port;DatabaseName=$databasename;

 

Database:     Postgres

Driver:           com.oracle.bi.jdbc.postgresql.PostgreSQLDriver

URL:              jdbc:oracle:postgresql://$hostname:$port;DatabaseName=$databasename;

 

Database:     Redshift

Driver:           com.oracle.bi.jdbc.redshift.RedShiftDriver

URL:              jdbc:oracle:redshift://REDSHIFT_ENDPOINT:$port;DatabaseName=$databasename;

 

Database:     SalesForce

Driver:           com.oracle.bi.jdbc.sforce.SForceDriver

URL:              jdbc:oracle:sforce://login.salesforce.com;SecurityToken=xxxxxxxxxxxxxxxxxxxxxxx

 

Database:     Sybase

Driver:           com.oracle.bi.jdbc.sybase.SybaseDriver

URL:              jdbc:oracle:sybase://$hostname:$port;DatabaseName=$databasename;

 

1. Create a Connection

Within the ‘Connections’ section, select ‘New’

Screenshot_8_12_16__4_20_PM

In the detail box

a. Enter a Name for the connection

b. Select ‘Generic JDBC’ as the Connection Type

c. If the database requires a Username and Password, enter those in the ‘User’ and ‘Password’ boxes

d. In the ‘URL’ box, enter the connection details for the database.  Use the format from the list above, replacing the $hostname, $port and $databasename values as appropriate.

e. In the ‘JDBC Driver’ box, cut and paste the text from the relevant ‘Driver’ column.

2. Test the connection and confirm it works

3. Define the Data Schema

It is good practice to pick the source database schema that contains the data to be imported.

In the details of the connection – in this case a Greenplum connection – select the schema box to bring up a list of available schemas and choose the one that contains the data.  If multiple schemas are needed, create a new connection for each one and name each appropriately.

Cursor

4. Enter Source Database specific record separators

This only applies in some cases where some of the field names in the source database contain spaces.  This can cause a problem when Data Sync generates the select statement.  In these cases database specific separators need to be entered.  If this applies:

a. Select the Connection, and the ‘Advanced Properties’ and enter a value, or values into ‘Enclose object names’

Cursor
If, for instance, a double quote character needs to enclose the start and end of the field name (as in the case of SalesForce), then just entering a single " in the value box will work. If different characters start and end a field name, for instance if a field needs to be surrounded by [ ] (as in the case of MS Access), then enter the characters separated by a comma, so [,]

 

Setting up Connection for a Different JDBC Driver

Data Sync has the ability to work with many other generic JDBC drivers.  This section describes how to set up such a connection.  In this case the example of MS Access is used.

1. Close the Data Sync tool, and ‘Exit’ out completely from the option in the task bar.

Cursor

2. Locate the JDBC driver for the data source and download it to the environment where Data Sync is installed.  In this example, the UCanAccess driver, available here, was downloaded.

3. Create a new folder in the /lib directory within the Data Sync folder and save the JDBC driver file(s) in there.  Data Sync will read all files within the /lib directory, so adding folders helps organize the new drivers.  In this case a new folder called ‘Access’ was created, and the UCanAccess driver and support files copied there.

Cursor
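
The same step can be done from the command line – a sketch assuming Data Sync is installed in /home/oracle/datasync and the UCanAccess driver and its support jars were downloaded to /tmp (paths and file names are illustrative):

# create a sub-folder for the new driver and copy the driver plus its support jars into it
mkdir -p /home/oracle/datasync/lib/Access
cp /tmp/UCanAccess/*.jar /home/oracle/datasync/lib/Access/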

4. Open Data Sync and create a new connection

Cursor

5. Select the ‘Connection Type’ to be ‘Generic JDBC’

Cursor

6. Use the documentation for the JDBC driver to figure out the JDBC Driver and URL details.  In this case the format for the JDBC driver is

net.ucanaccess.jdbc.UcanaccessDriver

And the JDBC URL:

jdbc:ucanaccess://FULL FILE PATH TO ACCESS DATABASE

7. Work through steps 2-4 from the ‘Setting up JDBC Connection’ section above.

If issues are encountered with the connection, do due diligence testing to try to determine whether the issue is with the driver or with Data Sync.  If possible, locate and try a different driver for the same database source.  While many generic JDBC drivers should work, there are some that may not.  At the current time, the only drivers the tool is certified to work with are the ones that are packaged with it.  If it’s ascertained that the issue is with the Data Sync tool, and you have access to an Oracle employee, have them open a bug against product ‘10432’ and the ‘DATASYNC’ component; otherwise create a low priority SR.  In both cases provide details of the driver and error.  The DEV team can then look at the issue, and possibly add support in the next release of the Data Sync tool.

Summary
This article walked through the steps to configure the Data Sync tool to be able to connect and extract data via generic JDBC connections to data sources.

For further information on the Data Sync Tool, and also for steps on how to upgrade a previous version of the tool, see the documentation on OTN.  That documentation can be found here.

For other A-Team articles about BICS and Data Sync, click here

Uploading CSV files to BICS Visual Analyzer and Data Visualization Cloud Service


Introduction

This post details a new feature introduced in Version 2.1 of the Oracle BICS Data Sync tool.

Currently BICS Visual Analyzer (VA) and Data Visualization Cloud Service (DVCS) users may upload Microsoft Excel Workbooks (.XLSX) but not Comma Separated Values (CSV) files. This is an issue for use cases that use a CSV file produced from a data extraction utility, particularly if the CSV file is updated regularly.

This post provides an easy way to upload CSV files to BICS or DVCS as a Data Set for the above use case.

The Data Sync tool also provides the following advantages:

* The ability to schedule periodic loads of the file(s)
* The ability for the Data Sync Load to be triggered by the successful completion of an event.

Prerequisites

If necessary, download and install the Oracle BICS Data Sync utility from the Oracle Technology Network http://www.oracle.com/technetwork/middleware/bicloud/downloads/index.html and the accompanying installation documentation BICS Data Sync Getting Started Guide

Other A-Team Chronicles Blogs detail how to perform the installation, for example: Configuring the Data Sync Tool for BI Cloud Service (BICS)

Steps

Create a BICS/DVCS Target Connection

If you already have a BICS or DVCS connection, proceed to Create a Project and/or a Job below.

The Data Sync installation may have created a connection named Target with a connection type of Oracle (BICS). If so, edit this one or create a new one. As shown in the figure below, enter the User and Password for the BICS or DVCS you want to upload to. Enter the URL of the service and click on Test Connection.

Note: The connection type of Oracle (BICS) is the correct type for DVCS also.

P1

Note: The URL is the URL shown in your browser minus the “/va” and everything following. An example is shown in the figure below.

P2

Create a Project and/or a Job

If you already have a project that contains a job whose primary target is the BICS or DVCS connection, proceed to Create a File Data Task below.

From the Menu Bar, select File > Projects > Create a New Project, enter a name and click OK as shown below.

P3.JPG

P4.JPG

Create a File Data Task

Under the Menu Bar, select the Project group, select the new project name, select the File Data tab below the Project group and click New as shown below.

P5.JPG

Select the CSV File Location, accept the File Name, assign a Logical Name (with no spaces) and click Next as shown below.

p6

Edit or accept the Import Options and click Next as shown below.

Note: This step imports only the column metadata in the file (data type, length, etc.) and not the actual data. The sampling size is usually sufficient.

p7

Check the Create New box, enter a Data Set name, select Data Set for the Output Option and click Next as shown below.

p8

Edit or accept the Map Columns settings and click OK as shown below.

p9

Update and Run the Data Sync Job and Review the Results

Under the Menu Bar, select the Jobs group, select the Jobs tab below the Jobs group, right-click on the job name and click Update Job as shown below.

p10

To the right of the Jobs group, click on Run Job as shown below.

p11

The job should run quickly. Select the History tab and the job will show completed. Click on the Tasks tab below the job status line and the task will show the number of records uploaded as shown below.

p12

View the Cloud Service Data Set

Log into the BICS VA or DVCS, click on New Project, select Data Sets as the Source and the uploaded Data Set created from the CSV file will be displayed as shown below.

p13

Summary

This post describes a method of using the Oracle Data Sync utility to upload a CSV file to either BICS or DVCS as a Data Set that may be used in VA / DVCS projects.

Additional information on Data Sync, including the scheduling and triggering Data Sync jobs, may be found on OTN at http://www.oracle.com/technetwork/middleware/bicloud/downloads/index.html.

For more BICS best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for BICS.

 

 

Identity and Cloud Security A-Team at Oracle Open World


I just wanted to let everyone know that Kiran and I will be presenting with our good friend John Griffith from Regions Bank at Oracle Open World next week.

Our session is Oracle Identity Management Production Readiness: Handling the Last Mile in Your Deployment [CON6972]

It will take place on Wednesday, Sep 21, 1:30 p.m. – 2:15 p.m.  at  Moscone West – 2020.

In this session we will be discussing tips and techniques for a successful deployment of Oracle Identity Management. Learn about best practices for performance testing and tuning of Oracle Identity Manager and Oracle Access Manager, setting up production-ready monitoring, and failover and disaster recovery testing.

I encourage everyone to come by and participate.

Also we will be at Open World throughout the week and are always happy to have a conversation on Identity, Access, and Cloud Security with any and all comers.

Hope to see you there!


Integration Design Pattern – Synchronous Facade for Asynchronous Interaction


Introduction

In this blog, we will explore a Hybrid Message Interaction pattern, which combines the characteristics of traditional Synchronous Request-Reply and Asynchronous patterns. We will also see, the need for such a design pattern and how it can be implemented using Oracle SOA Suite.

Need for this Design Pattern

A Hybrid Synchronous-Asynchronous message exchange pattern is a requirement that pops up often in architectural discussions at customer engagements. The below discussion summarizes the need for such a design pattern.

Consider the following scenario:

A Web Client end-user fills in a form and submits a request. This is a blocking request and the client waits for a reply. The process is expected to reply to the user within a short period and let us assume that the client times out after 30 seconds. In a happy path, the backend systems are responsive and the user receives the response within 30 seconds. This is shown in the below ‘Synchronous – Happy Path’ diagram.

pic1

Now, consider if a backend delay or system outage prevents a response from the website within 30s. All that the client receives is a Timeout Error. No further information is available and the user can only refill and resubmit the form, another time! This is shown in the below ‘Synchronous – Not so Happy Path’ diagram.

pic2

The above solution is designed for the happy path. It can even be optimized for high throughput in the happy path scenario. But for the negative path, it is desirable to have more responsive and user friendly behavior. We should note that any design for the negative scenario would inherently be Asynchronous in nature, as we do not want the user to wait for the delayed response. Rather, the user should be notified by other means whenever such a response is available.

Even though the percentage of requests that end up in the delayed scenario may be quite less, it is still desirable that such requests are handled in a more user friendly manner. We also want to achieve the Asynchronous error handling without sacrificing the high performance of a purely synchronous interaction.

The next section proposes such a design.

The Hybrid Design Pattern

Consider an alternative to the earlier discussed design, where the website is more resilient against such error conditions.

In this design, whenever any backend delay or system outage prevents a response from the website within 30s, then the website provides the user with an acknowledgement ID at the end of 30s. This acknowledgement ID is something that the user can use to track the status of his request offline when he or she calls up the customer care.

Meanwhile the website, after acknowledging the client in 30s, continues to wait for a response from the backend. When the actual response is eventually available, the user is notified promptly via email, SMS or any other notification channel. The ‘Hybrid Sync-Async Interaction’ diagram below depicts this design.  Note that the happy path remains synchronous, delivering timely responses immediately to the UI user.

Pic3_Sync_Async_Hybrid

The above use case has a mixture of Synchronous and Asynchronous characteristics. The interaction from the client side for the 30s interval is purely synchronous, whereas the delayed message processing is asynchronous and may involve the use of queues, database tables or other means of persisting messages.

Implementation using Oracle SOA Suite

Oracle SOA Suite provides the tools to implement such a hybrid message interaction. We will explore the SOA Suite implementation and example source code.

This implementation assumes that the backend legacy systems are represented as a pair of request and response queues.  In reality these could be any complex systems or integration with multiple systems over different protocols and interfaces.

SOA Suite Components

The implementation is divided up in 2 BPEL components.

The SyncWrapperProcess provides the synchronous interface to the UI client. It manages the synchronous blocking call by the client and is responsible for passing the backend responses back as synchronous replies, or propagating an appropriate fault to the UI client. This process is implemented using the “Synchronous BPEL Process” template.

Pic4_SyncWrapperComposite

The AsyncInteractionProcess acts as the Asynchronous client to the Backend legacy systems. It is responsible for orchestrating multiple requests with the back end and also for managing the timing of responses to the SyncWrapper. This process is implemented using the “Asynchronous BPEL Process” template.

Pic5_AsyncInteractionComposite

The below Cross Functional Flow Chart shows the division of responsibilities between the components. It also depicts the interactions with the UI Client and the Backend Legacy system. Note that the Backend systems are represented here as the black box behind the pair of queues.

 

pic6_SyncFacade_v0.1

 

Screenshots of the SyncWrapper and AsyncInteraction BPEL processes shed more light on the BPEL orchestration and activities required within these processes. The activities and orchestration logic are use-case specific and we will refrain from deep diving into these BPEL processes themselves.

Pic7_SyncWrapperBPEL

Pic8_AsyncInteractionBPEL

However, what is of more importance is to delve into some finer aspects of the implementation which make the hybrid interaction solution possible.

Transaction Boundaries

  1. The SyncWrapper process is configured with transaction attribute – requiresNew. This is required to start the transaction for the overall interaction. Also the assumption here is that there isn’t a need for transaction propagation from the UI client.
  • config.transaction = ‘requiresNew’ in composite.xml of SyncWrapperProject

 

  2. The AsyncInteractionProcess is configured to participate in the transaction initiated by SyncWrapper. Also the deliveryPolicy is set to sync. This ensures that the same thread from the SyncWrapper is used for the invocation of AsyncInteractionProcess and that any fault can be propagated back to the client during the sync interaction period.
  • config.transaction = ‘required’ in composite.xml of AsyncInteractionProject
  • config.oneWayDeliveryPolicy = ‘sync’ in composite.xml of AsyncInteractionProject

 

Cluster-awareness:

One of the main pitfalls of using Asynchronous activities like ‘receive’ and ‘onMessage’ within a Synchronous BPEL process is the issue of cluster awareness.  In multi-node clustered environments, there is no guarantee that the callback messages arrive on the same server where the sync UI client is waiting for a response.

The below diagram depicts the case when the callback message arrives at the origin node and hence the client receives the response message.

pic9_ClusterAwareness_2

On the contrary, the diagram below shows the case when the callback message reaches a Synchronous process instance on a node where the client is not listening. This causes the client, which is listening on node 1, to time out!

pic10_ClusterAwareness_2

In this implementation, the backend response message could arrive on any of the distributed queue members and this can rehydrate the AsyncProcess on any server in a cluster. But we want the callback to the Sync process to arrive on the specific server where the request originated. This can be achieved by setting the wsa.replyToAddress during the ‘invoke’ from the Sync to the Async BPEL process. Below are the required settings.

  • replyToAddress = Sync process’s service endpoint url of originating server.
  • faultToAddress = Sync process’s service endpoint url of originating server.

pic11_BpelProperties

 

Care should be taken to ensure that the value of the CallbackURL is computed at runtime and it resolves to the Service endpoint url of the SyncWrapperProcess on the originating server.  Please refer to the Java_Embedding block in the SyncWrapperProcess [source code provided here] for one way of obtaining the correct endpoint url. This can potentially be resolved using other methods as well.

OnAlarms

The AsyncInteractionProcess uses 2 ‘onAlarm’ handlers for executing the timebound activities.

The first ‘onAlarm’ is configured for the communications scope which encloses the ‘invoke’ and ‘receive’ activities with the Legacy backend systems. It triggers after elapse of 30s and is designed to send a synchronous acknowledgement message back to the UI client. Let us call this the ‘Comms_onAlarm’.

The second ‘onAlarm’ is set for the main overall scope of the Asynchronous process. This triggers after a sufficiently long wait, such as 3 minutes. This onAlarm block is responsible for aborting the process instance so as not to leave lingering long-running process instances as a result of missing responses from the backend system.  This is the ‘InstanceCleanup_onAlarm’.

Note that any response messages received after the ’InstanceCleanup_onAlarm’ are effectively orphaned callback messages. These cannot be recovered from the BPEL console and will remain in the DLV_MESSAGE table of SOAINFRA until they are purged.  This should be borne in mind when setting the value for ‘InstanceCleanup_onAlarm’.

 

Sync Timeout settings

The ‘Comms_onAlarm’ should be small enough to complete within the BPEL SyncMaxWaitTime  and JTA Transaction Timeout periods. If this is not set correctly, the end user may receive a Webservice Fault or a Transaction Timeout fault instead of the acknowledgement message response.

  • SyncMaxWaitTime < BPEL EJB Timeout
  • onAlarm(1) duration < SyncMaxWaitTime

References

  •  JDeveloper Project /  Source code of the project is provided here

Installing Data Sync in Compute for Cloud to Cloud Loading into BICS


For other A-Team articles about BICS and Data Sync, click here

Introduction

The Data Sync tool provides the ability to extract from both on-premise, and cloud data sources, and to load that data into BI Cloud Service (BICS), and other relational databases.  In some use cases, both the source databases, and the target, may be in ‘the Cloud’.  Rather than run the Data Sync tool ‘On-Premise’ to extract data down from the cloud, only to load it back up again, this article outlines an approach where the Data Sync tool is installed and run in an Oracle Compute Instance in the Cloud.  In this way all data movement and processing happens in ‘the cloud’ and no on-premise install is required.

 

Main Article

In this example Data Sync will be installed into its own Instance in Oracle Compute.

In theory you could install into any existing compute instance, for example JCS, DBCS, etc, although there the Data Sync tool would be sharing the same file system as other applications.  This could, for example, be a problem in the case of a restore where files may be overwritten.  Where possible, it is therefore recommended that a separate Compute Instance is created for Data Sync.

Create Compute Instance

1. In Compute, choose a suitable Image, Shape and Storage for the planned workload.  It is recommended to give Data Sync at least 8 GB of memory.  It is suggested NOT to select the ‘minimal’ image as that will require additional packages to be loaded later.

2. In this example the OL-6.6-20GB-x11-RD image was used, along with a general purpose oc4 shape with 15 GB of memory and 20GB of storage:

Oracle_Compute_Cloud_Service_-_Instance_Creation

3. Once created, obtain the Public IP from the instance.

Oracle_Compute_Cloud_Service_-_Instance_Details

 

Create SSH Session and Install VNC

We will set up an SSH connection and a VNC session on the Compute Instance for Data Sync to run in. When the user disconnects from the session, Data Sync will continue to operate.  It will also allow multiple developers to connect to VNC and share the same session from anywhere in the world.

There are many SSH tools; in this case the free Windows tool, Putty, will be used, although other tools can be configured in a similar manner.  Putty can be downloaded from here.

1. Open Putty and set up a Connection using the Public IP of the Instance obtained in the previous section and port 22.

Cursor

2. Expand the ‘Connection’ / ‘SSH’ / ‘Auth’ menu item.  Browse in the ‘Private key file for authentication’ section to the Private Key companion to the Public Key used in the creation of the Compute Instance in the previous section.

Windows7_x64

3. Return to the ‘Session’ section, give the session a name and save it.  Then hit ‘Open’ to start the connection to the Compute Instance.

Cursor

4. For the ‘Login as’ user, enter ‘opc’ and when prompted for the ‘Passphrase’, use the passphrase for the SSH Key.

If the connection is successful, then a command prompt should appear after these have been entered:

Cursor

5. As the opc user, edit sshd_config.

sudo vi /etc/ssh/sshd_config

Uncomment all instances of X11Forwarding and change the following word to be ‘yes’

Screenshot_9_29_16__5_17_PM

6. Save the file, and then restart sshd by running the following command:

sudo /etc/init.d/sshd restart

7. Switch to the Oracle user

sudo -su oracle

8. Run the following command to prevent the Window Manager from displaying a lock screen:

gconftool-2 -s -t bool /apps/gnome-screensaver/lock_enabled false

9. Start VNC server with the following command:

vncserver :1 -depth 16 -alwaysshared -geometry 1200x750 -s off

10. Figure out which port VNC is using

We’re going to use SSH port forwarding.  To do this, we need to confirm the port that is being used by VNC.

Typically the port is 5900 + N, where N is the display number.

In the screenshot below when VNC was started, it shows the screen is number 1 (the value after the ‘:’ in “d32f4d : 1” ) so in this case the port is 5901.  This will typically be the port number, but if other VNC sessions are already running, then it may be different.

To test this, run this command:

netstat -anp | grep 5901

This should confirm the process listening on that port – in this case, VNC:

Cursor

11. Exit the putty session by typing ‘exit’ and return once to exit the oracle user, and ‘exit’ and return again to exit the putty session.

 

Create SSH Tunnel and Start VNC Session

1. Create the SSH Tunnel

Open putty again and load the saved session from earlier.  Open the ‘Connection’ / ‘SSH’ / ‘Tunnel’ menu item.

We need to create an SSH tunnel to forward VNC traffic from the local host to port 5901 on the Compute Instance.

In this example we enter the Local Port also as 5901, and then in the Destination, the IP address of the Compute Instance, followed by a ‘:’ and then 5901.  Select ‘Add’ to set up the tunnel.

Cursor

2. Return back to the top ‘Session’ menu and ‘Save’ the session again to capture the changes, then Open the session again and connect as ‘opc’ and enter the passphrase.

Cursor

3.  If a VNC client is not installed on the user’s machine, download one.  In this case the free viewer from RealVNC which can be downloaded from here is being used.

4. Open VNC viewer and for the target, enter ‘localhost:5901’.  VNC will attempt to connect to the local port 5901, which will then be redirected by SSH to port 5901 on the target.

Cursor

Anytime a VNC session is going to be used, the putty session must be open (although some VNC tools will also set up the SSH session for you, in which case you can use that if preferred).
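
For reference, on a Mac or Linux client the same tunnel can be opened with the standard OpenSSH client instead of Putty – a sketch assuming the private key file and the instance’s public IP (placeholders to be replaced):

# forward local port 5901 to port 5901 on the Compute Instance, logging in as opc
ssh -i /path/to/private_key -L 5901:localhost:5901 opc@<public-ip-of-instance>

Leave that session open and point the VNC viewer at localhost:5901, exactly as with Putty.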

5. Enter the VNC password and the session will be connected.  If there is an error message within the VNC session stating ‘Authentication is Required to set the network proxy used for downloading packages’, then click ‘Cancel’ to ignore it.

 

Install Data Sync Software in Compute Instance

1. Within the connected VNC session, open a Terminal session

Screenshot_9_30_16__2_24_PM

2. To turn on copy and paste between the client and the VNC session, enter:

vncconfig -nowin &

 

3. Download the Data Sync and JDK Software

Open Firefox within the VNC session and download the required software.

Data Sync can be found here:  http://www.oracle.com/technetwork/middleware/bicloud/downloads/index.html

JDK downloads can be found here:  http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

For the JDK, select one of the Linux x64 versions.

4. Plan where to install the software.

Take a look at the file system and see which location makes the most sense in your scenario.  In this example we are using the /home/oracle directory with a sub-directory we created called ‘datasync’.  Depending on the configuration of the Compute Instance and its storage, there may be better choices.

5. Extract both the JDK and Data Sync software to that directory.

Screenshot_9_30_16__2_43_PM

6. Edit the ‘config.sh’ file to point to the location of the JDK

Screenshot_9_30_16__2_49_PM
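
A command-line sketch of steps 5 and 6, assuming the /home/oracle/datasync directory used in this example – the archive names and JDK version are illustrative, so adjust them to the files actually downloaded:

cd /home/oracle/datasync
# step 5: extract the Data Sync and JDK archives (adjust paths if they extract into their own sub-folders)
unzip ~/Downloads/BICSDataSync.zip
tar xzf ~/Downloads/jdk-8u102-linux-x64.tar.gz
# step 6: edit config.sh so it points at the extracted JDK, e.g. a line such as
#   JAVA_HOME=/home/oracle/datasync/jdk1.8.0_102
vi config.sh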

7. Start Data Sync by running

./datasync.sh

 

Then go through the standard steps for setting up and configuring the Data Sync tool.

For more information on setting up Data Sync, see this article.

For information on setting up Data Sync to source from Cloud OTBI environments, see this article.

Other Data Sync documentation can be found here.

 

Once the VNC session has been set up, then other users can also connect.  They will just need to complete the following steps from above:

Create SSH Session and Install VNC, Steps 1, 2 & 3

Create SSH Tunnel and Start VNC Session, Steps 1 & 2

 

Summary

This article walked through the steps to create a Compute Instance, accessible through VNC over SSH, and then to install Data Sync into that instance for loading scenarios where an on-premise footprint is not required.

For other A-Team articles about BICS and Data Sync, click here.

Cloud Security: Seamless Federated SSO for PaaS and Fusion-based SaaS


Introduction

Oracle Fusion-based SaaS Cloud environments can be extended in many ways. While customization is the standard activity to set up a SaaS environment for your business needs, chances are that you want to extend your SaaS for more sophisticated use cases.

In general this is not a problem and Oracle Cloud offers a great number of possible PaaS components for this. However, user and login experience can be a challenge. Luckily, many Oracle Cloud PaaS offerings use a shared identity management environment to make the integration easier.

This article describes how the integration between Fusion-based SaaS and PaaS works in general and how easily the configuration can be done.

Background

At the moment, Oracle Fusion-based SaaS comes with its own identity management stack. This stack can be shared between Fusion-based SaaS offerings like Global Human Capital Management, Sales Cloud, Financials Cloud, etc.

On the other hand, many Oracle PaaS offerings use a shared identity management (SIM-protected PaaS) and can share it if they are located in the same data center and identity domain. If done right, integration of SIM-protected PaaS and Fusion-based SaaS for Federated SSO can be done quite easily.

Identity Domain vs Identity Management Stack

In Oracle Cloud environments the term identity is used for two different parts and can be quite confusing.

  • Identity Domain – Oracle Cloud environments are part of an Identity Domain that governs service administration, for example, start and restart of instances, user management, etc. The user management always applies to the service administration UI but may not apply to the managed environments.
  • Identity Management Stack – Fusion-based SaaS has its own Identity Management Stack (or IDM Stack) and is also part of an Identity Domain (for managing the service).

Federated Single Sign-On

As described in Cloud Security: Federated SSO for Fusion-based SaaS, Federated Single Sign-on is the major user authentication solution for Cloud components.

Among its advantages are a single source for user management, single location of authentication data and a chance for better data security compared to multiple and distinct silo-ed solutions.

Components

In general, we have two component groups we want to integrate:

  • Fusion-based SaaS Components – HCM Cloud, Sales Cloud, ERP Cloud, CRM Cloud, etc.
  • SIM-protected PaaS Components – Developer Cloud Service, Integration Cloud Service, Messaging Cloud Service, Process Cloud Service, etc.

Each component group should share the Identity Domain. For seamless integration both groups should be in the same Identity Domain.

Integration Scenarios

The integration between both component groups follows two patterns. The first pattern shows the integration of both component groups in general. The second pattern is an extension of the first, but allows the usage of a third-party Identity Provider solution. The inner workings for both patterns are the same.

Federated Single Sign-On

This scenario can be seen as a “standalone” or self-contained scenario. All users are maintained in the Fusion-based IDM stack and synchronized with the shared identity management stack. The SIM stack acts as the Federated SSO Service Provider and the Fusion IDM stack acts as the Identity Provider. Login of all users and for all components is handled by the Fusion IDM stack.

SaaS-SIM-1

Federated Single Sign-On with Third Party Identity Provider

If an existing third-party Identity Provider should be used, the above scenario can be extended as depicted below. The Fusion IDM stack will act as a Federation Proxy and redirect all authentication requests to the third-party Identity Provider.

SaaS-SIM-IdP-2

User and Role Synchronization

User and Role synchronization is the most challenging part of Federated SSO in the Cloud. Although a manageable part, it can be really challenging if the number of identity silos is too high. The lower the number of identity silos the better.

User and Role Synchronization between Fusion-based SaaS and SIM-protected PaaS is expected to be available in the near future.

Requirements and Setup

To get the seamless Federated SSO integration between SIM-protected PaaS and Fusion-based SaaS these requirements have to be fulfilled:

  • All Fusion-based SaaS offerings should be in the same Identity Domain and environment (i.e., sharing the same IDM stack)
  • All SIM-based PaaS offerings should be in the same Identity Domain and data center
  • Fusion-based SaaS and SIM-based PaaS should be in the same Identity Domain and data center

After all, these are just a few manageable requirements which must be mentioned during the ordering process. Once this is done, the integration between Fusion-based SaaS and SIM-protected PaaS will be done automatically.

Integration of a third-party Identity Provider is still an on-request, Service Request based task (see Cloud Security: Federated SSO for Fusion-based SaaS). When requesting this integration adding Federation SSO Proxy setup explicitly to the request is strongly recommended!

Note: The seamless Federated SSO integration is a packaged deal and comes with a WebService level integration setting up the Identity Provider as the trusted SAML issuer, too. You can’t get the one without the other.

References

Using Process Cloud Service REST API Part 1


The Process Cloud Service (PCS) REST API provides an avenue to build UI components for workflow applications based on PCS. The versatility that comes with REST enables modern web application frameworks and, just as easily, mobile applications. The API documentation is available here. Notice the endpoints are organized into eight categories. We’ll be focusing on the process and task categories.

API-categories

Exploring the API

The API documentation contains samples using cURL, which is useful for ad hoc command line calls. More comprehensive and easier-to-use tools like Postman and SoapUI are recommended. The PCS REST API WADL (Web Application Description Language) is available and can be imported into a Postman collection or SoapUI project. Most modern browsers such as Chrome, Firefox, Microsoft Internet Explorer and Edge have developer tools that can be useful when debugging web applications with REST calls.
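
For a quick ad hoc check from the command line, a cURL call of this general shape can be used – a minimal sketch assuming a PCS host name, basic authentication and the /ic/api/process/v1 base path; check the exact endpoint paths and query parameters against the API documentation for your version:

# list the tasks visible to the authenticated user
curl -u jdoe:Welcome1 -H "Accept: application/json" "https://mypcs-mydomain.process.us2.oraclecloud.com/ic/api/process/v1/tasks"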

A Simple PCS Application

In order to explore the API we will need a simple PCS application with a basic workflow and task form. We’ll build a workflow with a message start which means it will have a SOAP Web Service binding. We’ll create a string parameter on the binding and pass that incoming string to a submit task, then to an approve task and end the flow.

simple-workflow

PCS Composer

Login to PCS and select Develop Processes from the row of buttons on the welcome page.

PCS-welcome

that will take you to PCS Composer where you can select Create New Application

CreateApplication

name the application APItest1

ApplicationName

Create a new “message start” process

MessageStart

name the process APItestProc

ProcessName

The process modeler opens where you can drop activities in swimlanes, route flow lines and set data associations. We need a simple process data object to contain the string value passed in the starting message. Click the Data Objects button

CreateDataObject

and add a new process data object named doSimpleString of type string.

doSimpleString

Open the property sheet for the Start event

StartProperties

and select Define Interface to add the string parameter.

DefineInterface

Name the argument argInputString and click OK.

ArgInputString

Next change the End event to None since there is no need for asynchronous call back when the process completes.

EndNone

We need a Submit task for the Submitter role and Approve task for the Approver role. They are in the Human section of the palette on the right hand side of Composer.

SubmitandApprove

 

Drag and drop the tasks onto the flow line as shown

TasksAdded

add a second swimlane using the large plus button

AddLane

create new Approver and Submitter roles and assign them to the swimlanes

AddRole

re-arrange the Approve Task and End event in the Approver swimlane

SwimlaneAssign

One simple form with a textbox to hold the string value will be used for both tasks. Open the property sheet for the Submit Task and click the plus sign to create a New Web Form. The Basic Form (frevvo) will be phased out, so it is a good idea to use New Web Forms for new development.

NewWebForm

name the form wfSimple and select Open Immediately and then the Create button.

FormOpenImmediately

Add the textbox control to the form by dragging and dropping from the palette on the right of the form designer.

DropTextBox

Enter nString for the control name and String for the label.

TextBoxNameandLabel

That’s all we need; save the form and return to the process model. Open the property sheet for the Approve Task and assign the wfSimple form by clicking on the lookup button and selecting the wfSimple form.

ApproveFormAssign

Finally, do the data association for the Start event and both tasks. Click the stack icon next to the Start event and select Open Data Association.

StartDataAssociation

Associate the argInputString from the Start event with the doSimpleString process data object.

DAStart

On the Submit Task Input, associate the doSimpleString process data object with the wfSimple textbox (wfSimple.string). Also remove the default association of the form data object with the form (wfSimpleDataObject->wfSimple).

DASubmitInput

On the Submit Task Output, associate the wfSimple textbox with the doSimpleString process data object. Also remove the default association for the form data object, but leave the task outcome association.

DASubmitOutput

Repeat essentially the same as above for the Approve Task Input

DAApproveInput

and Approve Task Output

DAApproveOutput

The APItest1 application is now complete and ready to Validate, Publish and Deploy.

ValidateProcess

Publish

PublishProcess

Deploy: choose either the menu or the Deploy button on the top right

DeployProject

the Deployment tab opens, click the big Deploy new version button in the middle

DeployProjectTab

select Last Published Version

DeployLastPublished

leave the Customize step as is

DeployNoCustomize

reValidate

DeployRevalidate

enter a version number, say 1.0

DeployDeploy

and done.

DeploySuccess

To make the Web Service call that invokes a message start process, the WSDL URL is needed. To quickly find and copy it, go to the Composer Management page.

ComposerManagement

select Web Services from the Actions drop down list for APItest1

ApplicationWebServices

and copy the link address.

ApplicationCopyLinkAddress

Save the link somewhere; we’ll need the WSDL to define the service call.

The last step in the deployment is to assign user(s) to the application roles for APItest1. Open the Administration page in PCS Workspace (you must be logged in with a privileged user account).

WorkspaceAssignRoles

Add one or more users or groups to each of the APItest1 roles, in particular APItest1.Approver and APItest1.Submitter.

RoleAssignment-tuser1

Run the Workflow

The process is now active and the endpoint available to send the start message. Using the WSDL URL copied earlier, create a SOAP project in SoapUI and set up the request as shown. Use the same user in Basic Auth that you assigned to the application roles above.

SoapUI-ProcessStart

Go into PCS Workspace using the same user login and select Work on Tasks.

WorkOnTasks-tuser1

There will be a task assigned, waiting at the Submit Task activity in the process flow.

TaskAssigned

Open the task and you will see the task form with its single textbox, labeled String, containing the string value that was passed in from the SoapUI call.

SubmitTaskwithSoapUIstartmessage

Edit the contents of the textbox, changing the string value, and click the Submit button on the form.

SubmitTaskwithNewString

The process flow will move to the Approve Task, and the new string value will be displayed in the textbox.

ApproveTaskwithNewString

Click the Approve button; the flow will move to the End event, and the completed process will be listed in the tracking view in Workspace.

CompletedProcessTracking

Instance 10002 of the process shows as complete. During execution, while the workflow is waiting at the Submit Task or the Approve Task, it is listed as In Progress in the tracking view.

Using the REST API

REST has exploded in popularity for a very good reason: ease of use. Compared to XML Web Services, REST APIs are simpler, more direct, more versatile and easier to consume on the client side. Since using a REST API only involves HTTP methods (GET, PUT and POST) against URL endpoints, any HTTP-enabled environment can be used. The command-line tool cURL, with its associated library libcurl, is great for ad hoc access to a REST API. SoapUI, a popular application for testing SOAP Web Services, also supports REST projects. More recently, Postman, a Chrome application, has become popular for working with REST. For our exploration we’ll mainly use Postman, and SoapUI when we’re doing XML Web Services. To leverage the PCS REST API WADL, we’ll import it into SoapUI, export a Swagger version and then import that into a Postman collection. Normally Postman should be able to import WADL directly, but there seems to be a problem doing that, hence the workaround.

The Import WADL button is on the SoapUI New REST Project dialog.

ImportWADL

the WADL URL will normally be http://&lt;your PCS Server&gt;/bpm/api/4.0/application.wadl; enter it in the location field and click OK.

WADL_URL

Now right-click the project (it will be named application) and select Export Swagger from the menu.

SoapUIexportSwagger

 

In the Export Swagger dialog, select application, set the folder to store the export and set your server as the URL base.

SwaggerBase

Open Postman and import the Swagger file into a collection. Let’s start by getting a list of process definitions. Select the GET process-definitions call, set the interfaceFilter parameter to ‘all’ and showProcessInstancesCount to ‘false’, and set the Basic Auth username and password.

Postman-get-process-defintions

click Send; the response will look something like the one below. Notice the processDefId.

Postman-process-defintions-result
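The same request can be made from any HTTP-capable client, not just Postman. As a point of comparison, below is a minimal browser-side sketch using the standard fetch API; the host name pcshost:7003 and the tuser1:welcome1 credentials are placeholders for your own PCS server and user, and when the page is served from another origin the same CORS considerations discussed later apply.

// Hedged sketch: list process definitions, equivalent to the Postman call above.
// Replace pcshost:7003 and tuser1:welcome1 with your own PCS server and credentials.
var baseUrl = "http://pcshost:7003/bpm/api/4.0";
var authHeader = "Basic " + btoa("tuser1:welcome1");

fetch(baseUrl + "/process-definitions?interfaceFilter=all&showProcessInstancesCount=false", {
  method: "GET",
  headers: { "Authorization": authHeader, "Accept": "application/json" }
})
  .then(function (response) { return response.json(); })
  .then(function (json) { console.log(json); })              // inspect the processDefId values here
  .catch(function (err) { console.error("Request failed: " + err); });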

Let’s start a new process instance: send a start message from the SoapUI SOAP project with the string “Process tracking test”.

SoapUI-start-process-tracking

Note the instance number in PCS Workspace.

process-tracking-number

Send a request from Postman with processId set to that instance number.

process-tracking-number-postman-send

The response will look like

process-tracking-postman-result

Notice the Submit Task has been assigned to the user in the task list.

TaskAssigned-tracking

Let’s make a general task query for all assigned tasks.

task-query-postman

the result looks like

task-query-postman-result

Note the task number for the assigned Submit Task, 200007. Let’s get the payload with a tasks/id/payload call.

task-query-payload

Note the payload shows the string value we set in the start message for this instance in SoapUI.

Let’s change the payload; the REST call is a POST. The body is JSON constructed from the XML payload above. Copy the payload and assemble it into the JSON body shown below. Note that the double quotes inside the payload string need to be backslash-escaped. Change the payload string to something new so the update can be tracked in PCS Workspace.

task-payload-update-post

you should get a 200 response

task-payload-update-response
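If you would rather build that body programmatically than hand-escape the quotes, JSON.stringify does the escaping for you. The snippet below is only a sketch: the payload XML and the property name in the request body are assumptions, and should be taken from what the GET tasks/id/payload call returned for your instance and from the API documentation.

// Hedged sketch: build and send the JSON body for the payload update POST.
// The XML string and the 'payload' property name are assumptions; use the XML
// returned by the GET call, with the string value changed so the update is visible.
var payloadXml = '<payload xmlns="http://xmlns.example.com/simple"><doSimpleString>Updated via REST</doSimpleString></payload>';

// JSON.stringify escapes the embedded double quotes automatically.
var body = JSON.stringify({ payload: payloadXml });

$.ajax({
  type: "POST",
  url: "http://pcshost:7003/bpm/api/4.0/tasks/200007/payload",
  headers: { 'Authorization': "Basic dHVzZXIxOndlbGNvbWUx" },
  contentType: "application/json",
  data: body,
  success: function () { console.log("Payload updated"); },
  error: function (xhr) { console.log("Update failed: " + xhr.status + " " + xhr.statusText); }
});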

 

Check that the string has changed by viewing the task in PCS Workspace.

task-payload-update-validate

Now let’s take action on the task by “pressing” the Submit button via a REST call. Use the PUT call shown below with a JSON body containing the SUBMIT action and your user identity.

submit-approve-task

The response shows the outcome of the SUBMIT action.

submit-approve-task-response
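The same action can of course be scripted. The sketch below mirrors the Postman call from JavaScript; the body property names (action and identity here) are assumptions based on the screenshot above and should be verified against the tasks endpoint documentation.

// Hedged sketch: act on a task (the REST equivalent of pressing Submit).
// The body property names are assumptions; verify them against the PCS REST API docs.
$.ajax({
  type: "PUT",
  url: "http://pcshost:7003/bpm/api/4.0/tasks/200007",
  headers: { 'Authorization': "Basic dHVzZXIxOndlbGNvbWUx" },
  contentType: "application/json",
  data: JSON.stringify({ action: "SUBMIT", identity: "tuser1" }),
  success: function (json) { console.log("Outcome: " + JSON.stringify(json)); },
  error: function (xhr) { console.log("Action failed: " + xhr.status + " " + xhr.statusText); }
});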

Looking at the audit diagram in the task history, we see that the workflow has moved from the Submit Task to the Approve Task and that the flow state is In Progress.

submit-approve-task-new-state

The audit diagram is of course available via a REST call. Use the processId, 10006 in this example, and the GET processes/processId/audit call as shown below.

process-get-audit

A nice feature of Postman is that it honors the MIME type of the response data, image/png in this case, and displays it accordingly.

process-get-audit-response

The REST API in Web Applications

The simplest web application is an HTML page. We’ll look at the mechanics of calling the API from a basic page here, and in Part 2 we’ll go deeper into using modern UI frameworks for web applications and mobile applications.

Start with the basic HTML shown below and copy it to a file called APITest1.html.

<!DOCTYPE html>
<html>
  <body>
    <h1>PCS REST API Test</h1>
    <p>Part 1, use the process-definitions call to get the list of processes</p>
    <input type="button" value="Get Process List">
    <br><br>
    <div id="response"></div>
    <br><br>
   <p>Part 2, Retrieve a Process Instance</p>
    <input type="button" value="Get Process Instance">
    <br><br>
    <div id="resptwo"></div>
    <br><br>
   <p>Part 3, Retrieve Task List</p>
    <input type="button" value="Get Task List">
    <br><br>
    <div id="respthree"></div>
    <br><br>
   <p>Part 4, Retrieve Task Payload</p>
    <input type="button" value="Get Task Payload">
    <br><br>
    <div id="respfour"></div>
    <br><br>
   <p>Part 5, Retrieve the Audit Diagram</p>
    <input type="button" value="Get Audit Diagram">
    <br><br>
    <img src="">
    <br><br>
  </body>
</html>

Opening the page in a browser

APITest-base-page

We’ll use jQuery (https://jquery.com) to make AJAX calls and access elements of the page. Add the following head section to load the jQuery library.

  <head>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.0/jquery.min.js"></script>
  </head>

also add a bit of style with

  <style>
   input {width:300px;}
   h1    {color: blue;}
  </style>

Next, add JavaScript functions to the button clicks and use AJAX to make the REST calls and write the responses into the document. Most responses will be JSON objects, which we’ll just stringify and insert into the document for now. The process-definitions call looks just as it did in Postman: an HTTP GET on the bpm/api/4.0/process-definitions URL. Set an Authorization header for Basic Auth using the base64 encoding of “username:password”. Insert the following in the head section of your HTML.

  <script type="text/javascript">
    function getProcessList()
    {
      $.ajax(
      {
        type: "GET",
        url: "http://pcshost:7003/bpm/api/4.0/process-definitions",
        headers: {'Authorization': "Basic dHVzZXIxOndlbGNvbWUx"},
        contentType: "application/json",
        dataType: "json",
        success: function(json){$("#response").html(JSON.stringify(json));},
        failure: function(errMsg) {
          alert(errMsg);
        },
        error: function(xhr){
          alert("An error occurred: " + xhr.status + " " + xhr.statusText);
        }
      });
    }
  </script>

and add the onClick call to getProcessList() on the first button

    <input type="button" value="Get Process List" onClick="getProcessList()">
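A quick aside on the Authorization header: the value Basic dHVzZXIxOndlbGNvbWUx is simply the base64 encoding of tuser1:welcome1. Rather than hardcoding the encoded string, you could build it in the browser with btoa, as in the small sketch below; keep in mind that Basic Auth over plain HTTP exposes the credentials, so outside of a sandbox use HTTPS.

    // Build the Basic Auth header value from a username and password.
    // btoa is available in all modern browsers.
    function basicAuthHeader(username, password) {
      return "Basic " + btoa(username + ":" + password);
    }

    // Usage in the AJAX calls: headers: {'Authorization': basicAuthHeader("tuser1", "welcome1")}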

Load the HTML file into Chrome (or any other browser) and also open developer tools for the browser. Click on the Get Process List button.

ajax-process-list-response

The JSON response is loaded into the document at the <div> below the button. Notice there are two HTTP method calls, OPTIONS (not shown) and GET; this is the CORS preflight, which is a topic for another day.

Add the four JavaScript functions shown below to the script section just after getProcessList(). Remember to fix the server name in the URLs and the username:password in the Authorization headers. Notice that for the last function, getAuditDiagram, we forgo jQuery AJAX and use XMLHttpRequest (xhr) directly.

    function getProcessInstance()
    {
      $.ajax(
      {
        type: "GET",
        url: "http://pcshost:7003/bpm/api/4.0/processes/10006",
        headers: {'Authorization': "Basic dHVzZXIxOndlbGNvbWUx"},
        contentType: "application/json",
        dataType: "json",
        success: function(json){$("#resptwo").html(JSON.stringify(json));},
        failure: function(errMsg) {
          alert(errMsg);
        },
        error: function(xhr){
          alert("An error occurred: " + xhr.status + " " + xhr.statusText);
        }
      });
    }

    function getTaskList()
    {
      $.ajax(
      {
        type: "GET",
        url: "http://pcshost:7003/bpm/api/4.0/tasks?status=ASSIGNED&assignment=MY_AND_GROUP",
        headers: {'Authorization': "Basic dHVzZXIxOndlbGNvbWUx"},
        contentType: "application/json",
        dataType: "json",
        success: function(json){$("#respthree").html(JSON.stringify(json));},
        failure: function(errMsg) {
          alert(errMsg);
        },
        error: function(xhr){
          alert("An error occurred: " + xhr.status + " " + xhr.statusText);
        }
      });
    }

    function getTaskPayload()
    {
      $.ajax(
      {
        type: "GET",
        url: "http://pcshost:7003/bpm/api/4.0/tasks/200008/payload",
        headers: {'Authorization': "Basic dHVzZXIxOndlbGNvbWUx"},
        contentType: "application/xml",
        dataType: "xml",
        success: function(xml){$("#respfour").html($(xml).text());},
        failure: function(errMsg) {
          alert(errMsg);
        },
        error: function(xhr){
          alert("An error occurred: " + xhr.status + " " + xhr.statusText);
        }
      });
    }

    function getAuditDiagram()
    {
      var image = document.images[0];
      var oReq = new XMLHttpRequest();
      oReq.open("GET", "http://pcshost:7003/bpm/api/4.0/processes/10006/audit", true);
      oReq.responseType = "blob";
      oReq.setRequestHeader("Authorization", "Basic dHVzZXIxOndlbGNvbWUx");
      oReq.onreadystatechange = function () {
                                  if (oReq.readyState == oReq.DONE) {
                                    image.src = window.URL.createObjectURL(oReq.response);
                                  }
                                }
      oReq.send();
    }

Last of all, add the onClick calls to the four remaining buttons

<input type="button" value="Get Process Instance" onClick="getProcessInstance()">

<input type="button" value="Get Task List" onClick="getTaskList()">

<input type="button" value="Get Task Payload" onClick="getTaskPayload()">

<input type="button" value="Get Audit Diagram" onClick="getAuditDiagram()">

Reload the HTML file in your browser and test all the buttons.

ajax-all-functions
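As a closing aside, the first four functions differ only in the URL, the response handling and the target element. If you want to trim the repetition, the shared AJAX options could be factored into a small helper like the hedged sketch below; this is an optional refactor, not part of the sample above.

    // Optional refactor (sketch): shared GET helper for the JSON-returning calls.
    function pcsGet(path, targetSelector) {
      $.ajax({
        type: "GET",
        url: "http://pcshost:7003/bpm/api/4.0/" + path,   // fix the server name as before
        headers: {'Authorization': "Basic dHVzZXIxOndlbGNvbWUx"},
        contentType: "application/json",
        dataType: "json",
        success: function(json){ $(targetSelector).html(JSON.stringify(json)); },
        error: function(xhr){ alert("An error occurred: " + xhr.status + " " + xhr.statusText); }
      });
    }

    // Example: pcsGet("process-definitions", "#response");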

Summary

Access to Process Cloud Service is fast and easy using the REST API. We’ve only scratched the surface here, but the mechanics and tools remain the same for exploring the full API. In Part 2 we’ll take a look at the next step beyond a simple HTML page: modern UI frameworks and mobile applications.

Recreating an Oracle Middleware Central Inventory in the Oracle Public Cloud


Introduction

This post provides a simple solution for recreating an Oracle Middleware software central inventory. One rare use case is when a server is lost and a new server is provisioned. The Middleware home may be on a storage device that can be reattached, e.g. /u01. However, the central inventory may have been on a storage volume that was also lost, e.g. /home.

Note: Although the concepts are the same, the steps are slightly different when using a Windows operating system.  This post refers to Linux/Unix operating systems.

Example of software that can be impacted

Without a central inventory, software may not function correctly, especially the OPatch utility, which is used to apply patches. An example of an OPatch error is below:

$ cd /u01/app/oracle/MW/oracle_common/OPatch

$ ./opatch lsinventory

Inventory load failed… OPatch cannot load inventory for the given Oracle Home.

Possible causes are:

   Oracle Home dir. path does not exist in Central Inventory

   Oracle Home is a symbolic link

   Oracle Home inventory is corrupted

LsInventorySession failed: OracleHomeInventory gets null oracleHomeInfo

OPatch failed with error code 73

The following steps will recreate a functioning central inventory.

Determine if the Central Inventory Location exists

First, find an instance of the inventory location pointer file. These files are named oraInst.loc; there is one in each product’s directory, and the master pointer file resides in the /etc directory. The content of these files specifies the central inventory’s location. An example is below:

$ cat /etc/oraInst.loc

inventory_loc=/home/oracle/oraInventory

inst_group=oinstall

If the master inventory location pointer file does not exist, view the contents from one of the product files.

In this example, imagine that a BI home has been lost, although similar steps apply to other Oracle products. By default OBIEE is installed in a directory named Oracle_BI1 under the Middleware home. If the Middleware home is /u01/app/oracle/MW, then the path to the OBIEE product pointer file is /u01/app/oracle/MW/Oracle_BI1/oraInst.loc. An example is below:

$ cat /u01/app/oracle/MW/Oracle_BI1/oraInst.loc

inventory_loc=/home/oracle/oraInventory

inst_group=oinstall

In these examples, the central inventory location is /home/oracle/oraInventory. Test to see if it exists with the ls command:

$ ls /home/oracle/oraInventory

Recreate the Central Inventory Location and/or Pointer File

If either the central inventory location or the pointer file is missing, find an instance of the createCentralInventory.sh script. This script needs to be run by the root user or a user with sudo root privileges, e.g. opc. It will create whichever of the items are missing and assign the correct privileges. If the script does not exist, an example of its contents may be viewed here.

Disclaimer: The contents of this script may change as versions change. Make sure that these contents are correct for the version of software you are using.

After running the script or commands, resume working with the user ID that owns the Oracle software. Usually this is the oracle user.

Attach a Product home for each Product to the Central Inventory

The central inventory itself is in a directory named ContentsXML under the central inventory location directory, for example:  /home/oracle/oraInventory/ContentsXML.

This post assumes the central inventory is missing. However, it may exist and be corrupted.

If it exists, rename it to something different, for example:

$ mv /home/oracle/oraInventory/ContentsXML /home/oracle/oraInventory/ContentsXML.bad

Each product contains a script named attachHome.sh. This script will be in the oui/bin directory under the product home. For example, the OBIEE location would be /u01/app/oracle/MW/Oracle_BI1/oui/bin/attachHome.sh

For each product home, including the oracle_common directory, run its attachHome script. For example, if the products installed are OBIEE and ODI, then the following three commands are run:

$ /u01/app/oracle/MW/oracle_common/oui/bin/attachHome.sh

$ /u01/app/oracle/MW/Oracle_BI1/oui/bin/attachHome.sh

$ /u01/app/oracle/MW/Oracle_ODI1/oui/bin/attachHome.sh

Test Each Product

To ensure correct results, test each product, including the oracle_common directory, using OPatch. For example:

$ cd /u01/app/oracle/MW/oracle_common/OPatch

$ ./opatch lsinventory

Summary

This post provided a simple solution for recreating an Oracle Middleware software central inventory for cases where it has been lost or damaged.

For more BICS and BI best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for BICS.
