
Integrating Oracle GoldenGate Cloud Service (GGCS) with Oracle Business Intelligence Cloud Service (BICS)


Introduction

This article provides an overview of how to use Oracle GoldenGate Cloud Service (GGCS) to populate or load on-premises data into Oracle Business Intelligence Cloud Service (BICS). Both GGCS and BICS are Platform as a Service (PaaS) offerings that run in the Oracle Public Cloud (OPC).

For GGCS to be integrated with BICS, the following prerequisites must be met:

  • BICS has to be provisioned with Database Cloud Service (DBCS), not Schema as a Service, as its data repository
  • GGCS has to be provisioned and attached to the DBCS used by BICS as its data repository
  • The DBCS used by BICS as its data repository and GGCS must be in the same domain

The high-level steps for integrating On-Premises data with BICS via GGCS are as follows:

  • Configure and Start GGCS Oracle GoldenGate Manager on the OPC side
  • Configure and Start SSH proxy server process on the On-Premises side
  • Configure and Start On-Premises OGG Extract process for the tables to be moved to BICS DBCS
  • Configure and Start On-Premises OGG Extract Data Pump process
  • Configure and Start GGCS Replicat process on the OPC side to deliver data into BICS database

The following assumptions have been made during the writing of this article:

  • The reader has a general understanding of Windows and Unix platforms.
  • The reader has basic knowledge of Oracle GoldenGate products and concepts.
  • The reader has a general understanding of Cloud Computing Principles.
  • The reader has basic knowledge of Oracle Cloud Services.
  • The reader has basic knowledge of Oracle GoldenGate Cloud Service (GGCS).
  • The reader has basic knowledge of Oracle Business Intelligence Cloud Service (BICS).
  • The reader has a general understanding of Network Computing Principles.

Main Article

GoldenGate Cloud Service (GGCS)

GoldenGate Cloud Service (GGCS) is a cloud-based real-time data integration and replication service that provides seamless and easy data movement from various on-premises relational databases to databases in the cloud with sub-second latency, while maintaining data consistency and offering fault tolerance and resiliency.

Figure 1: GoldenGate Cloud Service (GGCS) Architecture Diagram

ggcs_architecture_01

Business Intelligence Cloud Service (BICS)

Oracle Business Intelligence Cloud Service (BICS) is a robust platform designed for customers who want to simplify the creation, management, and deployment of analyses through interactive visualizations, data model designs, reports, and dashboards. It extends customer analytics by enhancing data while ensuring consistency and maintaining governance through standard definitions, advanced calculations, and predictive analytical functions.

Figure 2: On-Premises to Business Intelligence Cloud Service (BICS) Architecture Diagram

GGCS_BICS_Architecture

As illustrated in Figure 2, there are various tools for moving data between on-premises systems and Business Intelligence in the cloud. Oracle Data Integrator (ODI), Data Sync for BICS, direct upload via secure file transfer, and Remote Data Connector (RDC) are just some of the ways to access on-premises data and integrate or move it into the BICS platform in the cloud.

For near real-time integration of on-premises data with Business Intelligence Cloud Service (BICS), the GoldenGate replication platform is the tool to use. This article presents an overview of how to configure replication and load data from on-premises systems into BICS via GoldenGate Cloud Service (GGCS).

 

Oracle GoldenGate Replication

The high-level steps for GoldenGate replication between the On-Premises (source) database and BICS (target) via GGCS are as follows:

  • Configure and Start GGCS Oracle GoldenGate Manager on the OPC side
  • Configure and Start SSH proxy server process on the On-Premises side
  • Configure and Start On-Premises OGG Extract process for the tables to be moved to BICS DBCS
  • Configure and Start On-Premises OGG Extract Data Pump process
  • Configure and Start GGCS Replicat process on the OPC side to deliver data into BICS database

GGCS Oracle GoldenGate Manager

To start configuring Oracle GoldenGate on the GGCS instance, the manager process must be running. Manager is the controller process that instantiates the other Oracle GoldenGate processes such as Extract, Extract Data Pump, Collector and Replicat processes.

Connect to GGCS Instance through ssh and start the Manager process via the GoldenGate Software Command Interface (GGSCI).

[oracle@ogg-wkshp db_1]$ ssh -i mp_opc_ssh_key opc@mp-ggcs-bics-01

[opc@bics-gg-ggcs-1 ~]$ sudo su - oracle
[oracle@bics-gg-ggcs-1 ~]$ cd $GGHOME

Note: By default, the “opc” user is the only one allowed to ssh to the GGCS instance. We need to switch to the “oracle” user via the “su” command to manage the GoldenGate processes. The environment variable $GGHOME is pre-defined in the GGCS instance and points to the directory where GoldenGate was installed.

[oracle@bics-gg-ggcs-1 gghome]$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.160517 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_160711.1401_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Jul 12 2016 02:21:38
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2016, Oracle and/or its affiliates. All rights reserved.

GGSCI (bics-gg-ggcs-1) 1> start mgr

Manager started.

GGSCI (bics-gg-ggcs-1) 2> info mgr

Manager is running (IP port bics-gg-ggcs-1.7777, Process ID 79806).

Important Note: By default, GoldenGate processes don’t accept any remote connections. To enable connections from other hosts via the SSH proxy, we need to add an ACCESSRULE to the Manager parameter file (MGR.prm) to allow connectivity through the public interface of the GGCS instance.

Here’s the MGR.prm file used in this example:

--###############################################################
--## MGR.prm
--## Manager Parameter Template
-- Manager port number
-- PORT <port number>
PORT 7777
-- For allocate dynamicportlist. Here the range is starting from
-- port n1 through n2.
Dynamicportlist 7740-7760
-- Enable secrule for collector
ACCESSRULE, PROG COLLECTOR, IPADDR 129.145.1.180, ALLOW
-- Purge extract trail files
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPHOURS 24
-- Start one or more Extract and Replicat processes automatically
-- after they fail. AUTORESTART provides fault tolerance when
-- something temporary interferes with a process, such as
-- intermittent network outages or programs that interrupt access
-- to transaction logs.
-- AUTORESTART ER *, RETRIES <x>, WAITMINUTES <y>, RESETMINUTES <z>
-- This is to specify a lag threshold that is considered
-- critical, and to force a warning message to the error log.
-- The LAGREPORT parameter specifies the interval at which Manager
-- checks for Extract/Replicat lag.
-- LAGREPORTMINUTES <x>
-- LAGCRITICALMINUTES <y>
-- Reports down processes
-- DOWNREPORTMINUTES <n>
-- DOWNCRITICAL

Start the SSH Proxy Server On-Premises

By default, the only access allowed to GGCS is via ssh. To allow GoldenGate processes on the on-premises side to communicate with the GoldenGate processes on the GGCS instance, we need to run an SSH proxy (SOCKS) server on the on-premises side.

Start the SSH proxy via the following ssh command:

[oracle@ogg-wkshp db_1]$ ssh -i keys/mp_opc_ssh_key -v -N -f -D 127.0.0.1:8888 opc@129.145.1.180 > ./dirrpt/socks.log 2>&1

Command Syntax: ssh -i <private_key_file> -v -N -f -D <listening_ip_address>:<listening_tcp_port_address> <user>@<ggcs_host> > <output_file> 2>&1

SSH Command Options Explained:

  1. -i = Private key file
  2. -v = Verbose mode
  3. -N = Do not execute a remote command; mainly used for port forwarding
  4. -f = Run the ssh process in the background
  5. -D = Run as local dynamic application-level port forwarding; ssh acts as a SOCKS proxy server on the specified interface and port
  6. listening_ip_address = Host name or IP address on which the SOCKS proxy will listen (127.0.0.1 is the loopback address)
  7. listening_tcp_port_address = TCP/IP port number to listen on
  8. 2>&1 = Redirect stdout and stderr to the output file

Verify that the SSH SOCKS proxy server has started successfully: check the socks proxy output file via the “cat” utility and look for the messages “Local connections to … forwarded …” and “Local forwarding listening on … port …”. Make sure it is connected to the GGCS instance and listening on the right IP address and port.

[oracle@ogg-wkshp db_1]$ cat ./dirrpt/socks.log

OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to 129.145.1.180 [129.145.1.180] port 22.
debug1: Connection established.
debug1: identity file keys/mp_opc_ssh_key type 1
debug1: loaded 1 keys
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
debug1: match: OpenSSH_5.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_4.3
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host ‘129.145.1.180’ is known and matches the RSA host key.

debug1: Authentication succeeded (publickey).
debug1: Local connections to 127.0.0.1:8888 forwarded to remote address socks:0
debug1: Local forwarding listening on 127.0.0.1 port 8888.
debug1: channel 0: new [port listener]
debug1: Entering interactive session.

Configure On-Premises Oracle GoldenGate

For our test, we shall use the following tables for source (On-Premises) and target (GGCS delivering to BICS DBCS):

CREATE TABLE ACCTN
(
ACCOUNT_NO NUMBER (10,0) NOT NULL
, BALANCE NUMBER (8,2) NULL
, PREVIOUS_BAL NUMBER (8,2) NULL
, LAST_CREDIT_AMT NUMBER (8,2) NULL
, LAST_DEBIT_AMT NUMBER (8,2) NULL
, LAST_CREDIT_TS TIMESTAMP NULL
, LAST_DEBIT_TS TIMESTAMP NULL
, ACCOUNT_BRANCH NUMBER (10,0) NULL
, CONSTRAINT PK_ACCTN
PRIMARY KEY
(
ACCOUNT_NO
)
USING INDEX
)
;
CREATE TABLE ACCTS
(
ACCOUNT_NO NUMBER (10,0) NOT NULL
, FIRST_NAME VARCHAR2 (25) NULL
, LAST_NAME VARCHAR2 (25) NULL
, ADDRESS_1 VARCHAR2 (25) NULL
, ADDRESS_2 VARCHAR2 (25) NULL
, CITY VARCHAR2 (20) NULL
, STATE VARCHAR2 (2) NULL
, ZIP_CODE NUMBER (10,0) NULL
, CUSTOMER_SINCE DATE NULL
, COMMENTS VARCHAR2 (30) NULL
, CONSTRAINT PK_ACCTS
PRIMARY KEY
(
ACCOUNT_NO
)
USING INDEX
)
;
CREATE TABLE BRANCH
(
BRANCH_NO NUMBER (10,0) NOT NULL
, OPENING_BALANCE NUMBER (8,2) NULL
, CURRENT_BALANCE NUMBER (8,2) NULL
, CREDITS NUMBER (8,2) NULL
, DEBITS NUMBER (8,2) NULL
, TOTAL_ACCTS NUMBER (10,0) NULL
, ADDRESS_1 VARCHAR2 (25) NULL
, ADDRESS_2 VARCHAR2 (25) NULL
, CITY VARCHAR2 (20) NULL
, STATE VARCHAR2 (2) NULL
, ZIP_CODE NUMBER (10,0) NULL
, CONSTRAINT PK_BRANCH
PRIMARY KEY
(
BRANCH_NO
)
USING INDEX
)
;
CREATE TABLE TELLER
(
TELLER_NO NUMBER (10,0) NOT NULL
, BRANCH_NO NUMBER (10,0) NOT NULL
, OPENING_BALANCE NUMBER (8,2) NULL
, CURRENT_BALANCE NUMBER (8,2) NULL
, CREDITS NUMBER (8,2) NULL
, DEBITS NUMBER (8,2) NULL
, CONSTRAINT PK_TELLER
PRIMARY KEY
(
TELLER_NO
)
USING INDEX
)
;

Start On-Premises Oracle GoldenGate Manager

[oracle@ogg-wkshp db_1]$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.10 21604177 23004694_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Apr 29 2016 01:06:03
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (ogg-wkshp.us.oracle.com) 1> start mgr

Manager started.

GGSCI (ogg-wkshp.us.oracle.com) 2> info mgr

Manager is running (IP port ogg-wkshp.us.oracle.com.7809, Process ID 8998).

Configure and Start Oracle GoldenGate Extract Online Change Capture process 

Before we can configure the Oracle GoldenGate Extract Online Change Capture process, we need to enable supplemental logging for the schema/tables we need to capture on the source database via the GGSCI utility.

Enable Table Supplemental Logging via GGSCI:

GGSCI (ogg-wkshp.us.oracle.com) 1> dblogin userid tpcadb password tpcadb

Successfully logged into database.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 2> add schematrandata tpcadb

2017-02-04 11:59:20 INFO OGG-01788 SCHEMATRANDATA has been added on schema tpcadb.
2017-02-04 11:59:20 INFO OGG-01976 SCHEMATRANDATA for scheduling columns has been added on schema tpcadb.

Note: The GGSCI “dblogin” command logs the GGSCI session into the database. Your GGSCI session must be connected to the database before you can execute the “add schematrandata” command.

Create an Online Change Data Capture Extract Group (Process)

For this test, we will name our Online Change Data Capture Extract group ETPCADB.

-> Register the Extract group with the database via GGSCI:

GGSCI (ogg-wkshp.us.oracle.com) 1> dblogin userid tpcadb password tpcadb

Successfully logged into database.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 2> register extract etpcadb database

Extract ETPCADB successfully registered with database at SCN 3112244.

-> Create/Add the Extract Group in GoldenGate via GGSCI:

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 3> add extract etpcadb, integrated, tranlog, begin now

EXTRACT added.

Note: To edit/create the Extract Configuration/Parameter file, you need to execute “edit param <group_name>” via the GGSCI utility.

GGSCI (ogg-wkshp.us.oracle.com) 1> edit param etpcadb

Here’s the Online Change Capture Parameter (etpcadb.prm) file used in this example:

EXTRACT ETPCADB
userid tpcadb, password tpcadb
EXTTRAIL ./dirdat/ea
discardfile ./dirrpt/etpcadb.dsc, append
TABLE TPCADB.ACCTN;
TABLE TPCADB.ACCTS;
TABLE TPCADB.BRANCH;
TABLE TPCADB.TELLER;

Add a local extract trail to the Online Change Data Capture  Extract Group via GGSCI

GGSCI (ogg-wkshp.us.oracle.com) 1> add exttrail ./dirdat/ea, extract etpcadb

EXTTRAIL added.

Start the Online Change Data Capture  Extract Group via GGSCI

GGSCI (ogg-wkshp.us.oracle.com) 2> start extract etpcadb

Sending START request to MANAGER …
EXTRACT ETPCADB starting

Check the Status of Online Change Data Capture  Extract Group via GGSCI

GGSCI (ogg-wkshp.us.oracle.com) 4> dblogin userid tpcadb password tpcadb

Successfully logged into database.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 5> info extract etpcadb detail

EXTRACT ETPCADB Last Started 2017-02-04 12:43 Status RUNNING
Checkpoint Lag 00:00:03 (updated 00:00:07 ago)
Process ID 18259
Log Read Checkpoint Oracle Integrated Redo Logs
2017-02-04 12:50:52
SCN 0.3135902 (3135902)
Target Extract Trails:
Trail Name Seqno RBA Max MB Trail Type
./dirdat/ea 0 1418 100 EXTTRAIL
Integrated Extract outbound server first scn: 0.3112244 (3112244)
Integrated Extract outbound server filtering start scn: 0.3112244 (3112244)
Extract Source Begin End
Not Available 2017-02-04 12:39 2017-02-04 12:50
Not Available * Initialized * 2017-02-04 12:39
Not Available * Initialized * 2017-02-04 12:39
Current directory /u01/app/oracle/product/12cOGG/v1212110
Report file /u01/app/oracle/product/12cOGG/v1212110/dirrpt/ETPCADB.rpt
Parameter file /u01/app/oracle/product/12cOGG/v1212110/dirprm/etpcadb.prm
Checkpoint file /u01/app/oracle/product/12cOGG/v1212110/dirchk/ETPCADB.cpe
Process file /u01/app/oracle/product/12cOGG/v1212110/dirpcs/ETPCADB.pce
Error log /u01/app/oracle/product/12cOGG/v1212110/ggserr.log

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 6> info all

Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING ETPCADB 00:00:09 00:00:03

Configure and Start Oracle GoldenGate Extract Data Pump process 

For this test, we will name our GoldenGate Extract Data Pump group PTPCADB.

Create the Extract Data Pump Group (Process) via GGSCI

The Extract Data Pump group process will read the trail created by the Online Change Data Capture Extract (ETPCADB) process and send the data to the GoldenGate processes running on the GGCS instance via the SSH SOCKS proxy server.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 7> add extract ptpcadb, exttrailsource ./dirdat/ea

EXTRACT added.

Note: To edit/create the Extract Configuration/Parameter file, you need to execute “edit param <group_name>” via the GGSCI utility.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 8> edit param ptpcadb

Here’s the Extract Data Pump Parameter (ptpcadb.prm) file used in this example:

EXTRACT PTPCADB
RMTHOST 129.145.1.180, MGRPORT 7777, SOCKSPROXY 127.0.0.1:8888
discardfile ./dirrpt/ptpcadb.dsc, append
rmttrail ./dirdat/pa
passthru
table TPCADB.ACCTN;
table TPCADB.ACCTS;
table TPCADB.BRANCH;
table TPCADB.TELLER;

Add the remote trail to the Extract Data Pump Group via GGSCI

The remote trail is the output file location on the remote side (the GGCS instance) that the Extract Data Pump writes data to; it is read by the Replicat delivery process, which applies the data to the target database, in this case the DBCS used as the data repository of BICS.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 9> add rmttrail ./dirdat/pa, extract ptpcadb

RMTTRAIL added.

Start the Extract Data Pump Group via GGSCI

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 10> start extract ptpcadb

Sending START request to MANAGER …
EXTRACT PTPCADB starting

Check the Status of Extract Data Pump Group via GGSCI 

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 11> info extract ptpcadb detail

EXTRACT PTPCADB Last Started 2017-02-04 13:48 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:03 ago)
Process ID 29285
Log Read Checkpoint File ./dirdat/ea000000
First Record RBA 0
Target Extract Trails:
Trail Name Seqno RBA Max MB Trail Type
./dirdat/pa 0 0 100 RMTTRAIL
Extract Source Begin End
./dirdat/ea000000 * Initialized * First Record
./dirdat/ea000000 * Initialized * First Record
Current directory /u01/app/oracle/product/12cOGG/v1212110
Report file /u01/app/oracle/product/12cOGG/v1212110/dirrpt/PTPCADB.rpt
Parameter file /u01/app/oracle/product/12cOGG/v1212110/dirprm/ptpcadb.prm
Checkpoint file /u01/app/oracle/product/12cOGG/v1212110/dirchk/PTPCADB.cpe
Process file /u01/app/oracle/product/12cOGG/v1212110/dirpcs/PTPCADB.pce
Error log /u01/app/oracle/product/12cOGG/v1212110/ggserr.log

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 13> info all

Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING ETPCADB 00:00:10 00:00:08
EXTRACT RUNNING PTPCADB 00:00:00 00:00:02

Configure and Start GGCS Oracle GoldenGate Delivery Process

Connect to the GGCS instance through ssh and use the GoldenGate Software Command Interface (GGSCI) utility to configure the GoldenGate delivery process.

[oracle@ogg-wkshp db_1]$ ssh -i mp_opc_ssh_key opc@mp-ggcs-bics-01

[opc@bics-gg-ggcs-1 ~]$ sudo su - oracle
[oracle@bics-gg-ggcs-1 ~]$ cd $GGHOME

Note: By default, the “opc” user is the only one allowed to ssh to the GGCS instance. We need to switch to the “oracle” user via the “su” command to manage the GoldenGate processes. The environment variable $GGHOME is pre-defined in the GGCS instance and points to the directory where GoldenGate was installed.

[oracle@bics-gg-ggcs-1 gghome]$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.160517 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_160711.1401_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Jul 12 2016 02:21:38
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2016, Oracle and/or its affiliates. All rights reserved.

Configure GGCS Oracle GoldenGate Replicat Online Delivery process

Configure the Replicat Online Delivery group that reads the trail file the Data Pump writes to and delivers the changes into the BICS DBCS.

Before configuring the delivery group, make sure that the GGSCI session is connected to the database via the GGSCI “dblogin” command.

GGSCI (bics-gg-ggcs-1) 1> dblogin useridalias ggcsuser_alias

Successfully logged into database BICSPDB1.
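Note: The useridalias refers to an entry in the GoldenGate credential store on the GGCS instance. If such an alias has not been created yet, it can typically be added via GGSCI along the following lines (the database user and connect string shown are illustrative; ADD CREDENTIALSTORE is only needed if no credential store exists yet):

GGSCI (bics-gg-ggcs-1) 1> add credentialstore
GGSCI (bics-gg-ggcs-1) 2> alter credentialstore add user c##ggadmin@BICSPDB1 alias ggcsuser_alias
Password: ********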

Create/add the Replicat delivery group. In this example, we will name our Replicat delivery group RTPCADB.

GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 2> add replicat rtpcadb, integrated, exttrail ./dirdat/pa

REPLICAT (Integrated) added.

Note: To edit/create the Replicat Delivery Configuration/Parameter file, you need to execute “edit param <group_name>” via the GGSCI utility.

GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 3> edit param rtpcadb

Here’s the GGCS Replicat Online Delivery Parameter (rtpcadb.prm) file used in this example:

REPLICAT RTPCADB
useridalias ggcsuser_alias
--Integrated parameter
DBOPTIONS INTEGRATEDPARAMS (parallelism 2)
DISCARDFILE ./dirrpt/rtpcadb.dsc, APPEND Megabytes 25
ASSUMETARGETDEFS
MAP TPCADB.ACCTN, TARGET GGCSBICS.ACCTN;
MAP TPCADB.ACCTS, TARGET GGCSBICS.ACCTS;
MAP TPCADB.BRANCH, TARGET GGCSBICS.BRANCH;
MAP TPCADB.TELLER, TARGET GGCSBICS.TELLER;

Start the GGCS Replicat Online Delivery process via GGSCI

GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 3> start replicat rtpcadb

Sending START request to MANAGER …
REPLICAT RTPCADB starting

Check the Status of GGCS Replicat Online Delivery process via GGSCI 

GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 4> info replicat rtpcadb detail

REPLICAT RTPCADB Last Started 2017-02-04 17:12 Status RUNNING
INTEGRATED
Checkpoint Lag 00:00:00 (updated 00:00:45 ago)
Process ID 80936
Log Read Checkpoint File ./dirdat/pa000000000
First Record RBA 0
INTEGRATED Replicat
DBLOGIN Provided, no inbound server is defined
Inbound server status may be innacurate if the specified DBLOGIN connects to a different PDB than the one Replicat connects to.
Current Log BSN value: (no data)
Integrated Replicat low watermark: (no data)
(All source transactions prior to this scn have been applied)
Integrated Replicat high watermark: (no data)
(Some source transactions between this scn and the low watermark may have been applied)
Extract Source Begin End
./dirdat/pa000000000 * Initialized * First Record
./dirdat/pa000000000 * Initialized * First Record
Current directory /u02/data/gghome
Report file /u02/data/gghome/dirrpt/RTPCADB.rpt
Parameter file /u02/data/gghome/dirprm/rtpcadb.prm
Checkpoint file /u02/data/gghome/dirchk/RTPCADB.cpr
Process file /u02/data/gghome/dirpcs/RTPCADB.pcr
Error log /u02/data/gghome/ggserr.log

At this juncture, you now have a complete replication platform that integrates data between On-Premises and the BICS DBCS via GGCS; any changes you make to the captured tables on-premises will be replicated to the BICS Database Cloud Service via this GGCS replication platform.
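For example, a minimal end-to-end check might look like the following (the row values are illustrative): insert a row into one of the captured tables on the on-premises source database,

INSERT INTO TPCADB.BRANCH (BRANCH_NO, OPENING_BALANCE, CURRENT_BALANCE, CREDITS, DEBITS, TOTAL_ACCTS, ADDRESS_1, ADDRESS_2, CITY, STATE, ZIP_CODE)
VALUES (100, 5000, 5000, 0, 0, 0, '500 Main St', NULL, 'Redwood City', 'CA', 94065);
COMMIT;

then confirm the change was captured and applied via GGSCI on each side,

GGSCI (ogg-wkshp.us.oracle.com) 1> stats extract etpcadb, totalsonly *
GGSCI (bics-gg-ggcs-1) 1> stats replicat rtpcadb, totalsonly *

and finally query the target schema on the BICS DBCS:

SELECT * FROM GGCSBICS.BRANCH WHERE BRANCH_NO = 100;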

Summary

This article walked through the steps to configure the Oracle GoldenGate data integration tool to capture data from an on-premises database and deliver it, via GoldenGate Cloud Service (GGCS), to Business Intelligence Cloud Service (BICS) using Database Cloud Service (DBCS) as its data repository.

For further information on other ways to integrate or move data from On-Premises to BICS, check the following A-Team articles:

 


IDCS and Weblogic Federation with Virtual Users and Groups


Introduction

Federation is a well-known pattern and has been discussed at length on this blog. Almost every vendor or cloud provider out there supports Federation and it’s been around for quite some time now.

In this blog post, I will talk about Federation again, but this time in combination with Weblogic’s Virtual Users and Groups.

What that means, in practical terms, is that users and groups won’t have to be synchronized between the Identity Provider (Oracle Identity Cloud Service) and the Service Provider (Weblogic).

This approach presents a great advantage when integrating web applications running in Weblogic with Oracle Identity Cloud Service (IDCS), since we don’t have to worry about keeping IDs in sync, and administrators can concentrate user and group management in a single place: IDCS.


In the following topics, we will demonstrate how to implement this use case; please read on.

Configuration

Configure Weblogic as the Service Provider (SP).

Go to “Security Realms > Providers > Authentication”.

Create a new SAML2IdentityAsserter provider.


Go to “Security Realms > Providers > Authentication”.

Create a new SAMLAuthenticator.


Reorder the SAMLAuthenticator and SAML2IdentityAsserter. Move them to the top, as shown below.


Click on SAMLAuthenticator, and set its control flag to “SUFFICIENT”.


Click on the DefaultAuthenticator and set its Control Flag to “OPTIONAL”.


Restart all servers in the domain.

Repeat the steps below for each managed server hosting applications that will be federated with IDCS.

Go to Servers > MANAGED_SERVER > Configuration > Federation Services > SAML 2.0 Service Provider.

Enter the following:

  • Enabled: checked
  • Preferred Binding: POST
  • Default URL: https://HOST:PORT/FederationSampleApp

The default URL is the landing page of the Federated application.

HOST:PORT is the host and port of the managed server running the sample application.

The configuration should look like the picture below.

Click Save.


Go to Servers > MANAGED_SERVER > Configuration > Federation Services > SAML 2.0 General.

Fill in the information as shown in the picture below.

The field “Published Site URL” must be in the format https://HOST:PORT/saml2.

The field “Entity ID” is the unique identifier of the Service Provider; it will be used later in the IdP configuration.

HOST:PORT is the host and port of the managed server running the sample application.


Configure IDCS as Identity Provider (IdP)

Login to IDCS Admin Console.

Go to Applications and click “Add”.

From the list, choose “SAML Application”.


Enter the following:

  • Name: Federation Sample Application
  • Description: Sample application to showcase WLS Virtual Users/Groups.
  • Application URL: https://HOST:PORT/FederationSampleApp

HOST:PORT is the host and port of the managed server running the sample application.

For Application URL use the main page on the application deployed in WLS. Click “Next”.

In the General panel, enter the following:

  • Entity ID: FederationDomain
  • Assertion Consumer URL: https://HOST:PORT/saml2/sp/acs/post
  • NameID Format: Email address
  • NameID Value: Primary Email

Entity ID must match the value used in the Service Provider configuration.

HOST:PORT is the host and port of the managed server running the sample application.


In the Advanced Settings panel, enter the following:

  • Signed SSO: Assertion
  • Include Signing Certificate in Signature: checked
  • Signature Hashing Algorithm: SHA-256
  • Enable Single Logout: checked
  • Logout Binding: POST
  • Single Logout URL: https://HOST:PORT/FederationSampleApp/logout
  • Logout Response URL: https://HOST:PORT/FederationSampleApp

Single Logout URL is the logout URL of the sample application.

HOST:PORT is the host and port of the managed server running the sample application.


In the Attribute Configuration section, add one Group Attribute, with the following information:

  • Name: Groups
  • Format: Basic
  • Condition: All Groups

Name must be “Groups” and format must be “Basic” so the SAML Identity Asserter can pick up the groups attributes when the SAML Assertion is posted back to WLS.


Click Finish, then Activate the application.


Open the application page, go to “SSO Configuration” tab and click “Download IDCS Metadata” and save the XML file (IDCSMetadata.xml).


Assign users to your application in IDCS

Users need to be assigned to applications in the IdP (IDCS) before they can authenticate to those apps.

We do it by assigning individual users to the application in the “Users” tab.

Open the application page and go to “Users” tab.

Click “Assign Users”.


Select the users that should have access to the application.


Configure the Identity Provider Partner in WLS

Upload the “IDCSMetadata.xml” file to the <DOMAIN_HOME> folder where the WLS Managed Server is running, for example: “/u01/oracle/domains/FederationDomain”

Login to WLS Admin Console, go to Security Realms > Providers > Authentication and click on the SAML2IdentityAsserter.


Go to the Management tab and click “New”, and select “New Web Single Sign-On Identity Provider Partner”.


Enter the following information for the IdP Partner:

Name: IDCS-IdP

Choose the IDCSMetadata.xml and click “OK” button.


Click on the “IDCS-IdP” partner from the Identity Provider Partners list.


Fill in the following information:

  • Enabled: checked
  • Virtual User: checked
  • Redirect URIs: /FederationSampleApp/protected/*
  • Process Attributes: checked

The “Redirect URIs” are all the URIs that should be protected by the SAML SSO policy, that is, every URI that should trigger the SAML SSO flow and/or require authorization.


Click “Save”.

This is the key point of this use case: by enabling “Virtual User” and “Process Attributes” we allow users that are defined only in the IdP (IDCS) to log in to our application.

Testing the setup

The sample application (FederationSampleApp) deployment descriptor is configured to allow access to resources under <APP_CONTEXT_PATH>/protected/* to users that belong to the “FederationSampleAppMembers” group.

You can modify the deployment descriptor to add the groups you already have created in IDCS or you can create a new group called “FederationSampleAppMembers”.

Web.xml:

<web-resource-collection>
  <web-resource-name>ProtectedPages</web-resource-name>
  <url-pattern>/protected/*</url-pattern>
</web-resource-collection>
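For context, a web-resource-collection like the one above would normally sit inside a security-constraint that ties the protected URLs to a role; a minimal sketch, assuming the allowedGroups role name used in weblogic.xml below, could look like this:

<security-constraint>
  <web-resource-collection>
    <web-resource-name>ProtectedPages</web-resource-name>
    <url-pattern>/protected/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>allowedGroups</role-name>
  </auth-constraint>
</security-constraint>

<security-role>
  <role-name>allowedGroups</role-name>
</security-role>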

Weblogic.xml:

<security-role-assignment>
  <role-name>allowedGroups</role-name>
  <principal-name>FederationSampleAppMembers</principal-name>
</security-role-assignment>

To create a new Group in IDCS called “FederationSampleAppMembers”, log in to IDCS admin console and go to Groups. Click Add, and provide the group name.


Assign users to the “FederationSampleAppMembers” group. These users will have access to the sample application deployed in Weblogic.


Deploy the sample application in your Weblogic domain and target it to the managed server(s) we configured to federate with IDCS.

Open a browser and go to https://HOST:PORT/FederationSampleApp/

You should see the sample application main page, which is not protected by any security constraint.


If you click any of the links, the SAML SSO flow is triggered, and you will be redirected to the IdP (IDCS) for authentication.


Once you provide your credentials, the IdP (IDCS) will validate them and create a SAMLResponse containing a SAML Assertion that will be posted back to the Service Provider (WLS).

We can inspect the SAMLResponse that was posted to our SP (WLS), by using Chrome Dev Tools.


Decoding the SAML Assertion we can see that the interesting pieces are:

The authenticated Subject

<saml:Subject>
  <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">paulo.pereira@oracle.com</saml:NameID>
  <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
    <saml:SubjectConfirmationData InResponseTo="_0x81d7ccb7b40001c2b13366d827ab79bf" NotOnOrAfter="2017-01-11T23:56:58Z" Recipient="https://HOST:PORT/saml2/sp/acs/post"/>
  </saml:SubjectConfirmation>
</saml:Subject>

Assertion Attributes (additional attributes we configured to include group membership).

<saml:Attribute Name="Groups" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
  <saml:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" xsi:type="xs:string">SalesMembers</saml:AttributeValue>
  <saml:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" xsi:type="xs:string">FederationSampleAppMembers</saml:AttributeValue>
  <saml:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" xsi:type="xs:string">Teste1</saml:AttributeValue>
</saml:Attribute>

If we navigate to the Principals page in the sample application, we see that Weblogic created the principals that correspond to our Assertion’s authenticated subject and the groups contained in the Assertion’s additional attributes.


The magic here happens because we configured our Identity Asserter (SAML2IdentityAsserter) to use virtual users and process SAML attributes. That means the Identity Asserter, when working with a SAML Authenticator (the SAML Authenticator must run with its control flag set to “SUFFICIENT” and must be invoked before other authenticators), will create principals that do not correspond to any user or group in the ID stores configured in WLS.

For more information on the SAML Authentication Provider and SAML Identity Asserter, consult the documentation here.

The other key piece of the solution happens on the IDCS side. We configured our application to generate a SAML assertion that includes the user’s groups as additional attributes of the assertion.

That way, we can “propagate” down to WLS the authenticated user and his group membership.

Conclusion

Configuring Federation between IDCS and Weblogic server with virtual users and groups makes it much easier for applications to integrate with IDCS as a single source of Identity administration. The approach discussed here has the advantage of eliminating the user/groups synch between Service Providers (applications) and the IdP (IDCS). Also, legacy applications or new ones that use standard Java Container Security can leverage this use case with minimal changes – if any at all – since the authorization is already defined in the application’s deployment descriptors.

Applications that need to obtain additional user profile information can also be registered with IDCS as OAuth clients and consume IDCS APIs to obtain the logged in user information, but that is material for another blog post…

ICS Connectivity Agent – Update Credentials


When installing the Connectivity Agent, there are several mandatory command-line arguments that include a valid ICS username (-u=[username]) and password (-p=[password]). These arguments are used to verify connectivity with ICS during installation and are also stored (agent credentials store) for later use by the agent server. The purpose of storing them is to allow the running agent to make a heartbeat call to ICS. This heartbeat is used to provide status in the ICS console regarding the state of the Connectivity Agent. This blog will detail some situations/behaviors relating to the heartbeat that cause confusion when the ICS console contradicts observations on the Connectivity Agent machine.
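For reference, the installer invocation takes roughly the following shape; the script name and anything beyond the -u/-p arguments are illustrative assumptions, so consult the agent installation documentation for the exact syntax:

./installAgent.sh -u=ics.agent.user@example.com -p='MyICSPassword1' [other mandatory arguments]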

Confusing Behaviors/Observations

The following is a real-world series of events that occurred for an ICS subscriber. Their agent had been started and running for quite a while. The ICS console was used to monitor the health of the agent (i.e., the green icon which indicates the agent is running). Then out of the blue, the console suddenly showed the agent was down (i.e., the red icon):

AgentCredUpdate-01

The obvious next step was to check on the agent machine to make sure the agent was running. When looking through the standard out that was being captured, it shows that the agent was in fact still running:

AgentCredUpdate-02

Further investigation showed that the agent server logs did not indicate any problems. In an attempt to resolve this strange scenario, the agent server was bounced … but it failed to start with the following:

AgentCredUpdate-03

Although the -u and -p command-line parameters contained the correct credentials, the startAgent.sh indicated an error code of 401 (i.e., unauthorized). This error was very perplexing since the agent had been started earlier with the same command-line arguments. After leaving the agent server down for a while, another start was kicked off to demonstrate the 401 problem. Interestingly enough, this time the agent started successfully and went to a running state. However, the ICS console was still showing that the agent was down with no indication of problems on the Connectivity Agent machine. Another attempt was made to bounce the agent server and it again failed to start with a 401.

At this point, the diagnostic logs were downloaded from the ICS console to see if there was any indication of problems on the ICS side. When analyzing the AdminServer-diagnostic.log, it showed many HTTP authentication/authorization failure messages:

AgentCredUpdate-04

At this point it was determined that the password for the ICS user associated with the Connectivity Agent had been changed without notifying the person responsible for managing the agent server. The series of odd behaviors were all tied to the heartbeat. When the ICS user password was changed, the running agent still had the old password. It was the repeated heartbeat calls with invalid credentials that caused the user account to be locked out in ICS. When a user account is locked, it is not accessible for approximately 30 minutes.

This account locking scenario explained why the agent server could be started successfully and then fail with the 401 within a short period of time. When the account was not locked, the startAgent.sh script would successfully call ICS using the credentials from the command-line. Then the server would start and use the incorrect credentials from the credentials store for the heartbeat, thus locking the user account which caused the problem to repeat itself.

The Fix

To fix this issue, a WLST script (updateicscredentials.py) has been provided that will update the Connectivity Agent credentials store. The details on running it can be found in the comments at the top of the script:

AgentCredUpdate-05
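Invoking a WLST script generally follows the pattern shown below; the exact arguments that updateicscredentials.py expects are described in its header comments, and the path shown is an assumption based on a typical agent Oracle home layout:

$AGENT_ORACLE_HOME/oracle_common/common/bin/wlst.sh updateicscredentials.py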

When executing this script, it is important to make sure the agent server is running. Once the script is done you should see something like the following:

AgentCredUpdate-06

At this point, stop the agent server and wait 30 minutes to allow the user account to be unlocked before restarting the server. Everything should now be back to normal:

AgentCredUpdate-07

Possible Options For Less Than 30 Minute Waiting Period

Although I have not yet had an opportunity to test the following out, in theory it should work. To avoid the 30 minute lockout period on ICS due to the Connectivity Agent heartbeat:

1. Change the credentials on the Connectivity Agent server.
2. Shutdown the Connectivity Agent server.
3. Access the Oracle CLOUD My Services console and Reset Password / Unlock Account with the password just used for the agent:

AgentCredUpdate-08

4. Verify that the user can login to the ICS console (i.e., that the account is unlocked).
5. Start the Connectivity Agent and allow the server to get to running state.
6. Verify that “all is green” in the ICS console.

Using Oracle Managed File Transfer (MFT) to Push Files to ICS for Processing


Introduction

In a previous article I discussed the use of the Enterprise Scheduler Service (ESS) to poll for files, on a scheduled basis, to read from MFT.  In that article we discussed how to process many files that have been posted to the FTP server.  At the end of that article I mentioned the use of the push pattern for file processing.

This article will cover how to implement that push pattern with Managed File Transfer (MFT) and the Integration Cloud Service (ICS).  We’ll walk through the configuration of MFT, creating the connections in ICS, and developing the integration in ICS.

The following figure is a high-level diagram of this file-based integration using MFT, ICS, and an Oracle SaaS application.

mft2ics

 

Create the Integration Cloud Service Flow

This integration will be a basic integration with an orchestrated flow.  The purpose is to demonstrate how the integration is invoked and the processing of the message as it enters the ICS application.  For this implementation we only need to create two endpoints.  The first is a SOAP connection that MFT will invoke, and the second connection will be to the MFT to write the file to an output directory.

The flow could include other endpoints but for this discussion additional endpoints will not add any benefits to understanding the push model.

Create the Connections

The first thing to do is to create the connections to the endpoints required for the integration.  For this integration we will create two required connections.

 

  1. SOAP connection: This connection will be used by MFT to trigger the integration as soon as the file arrives in the specified directory within MFT (this will be covered in the MFT section of this article).
  2. FTP connection: This connection will be used to write the file to an output directory within the FTP server.  This second connection is only to demonstrate the flow of processing the file and then writing it to an endpoint.  It could have been any endpoint used to invoke another operation; for instance, we could have used the input file to invoke a REST, SOAP, or one of many other endpoints.

Let’s define the SOAP connection.

SOAP_Identifier

Figure 1

Identifier: Provide a name for the connection

Adapter: When selecting the adapter type choose the SOAP Adapter

Connection Role: There are three choices for the connection role: Trigger, Invoke, and Trigger and Invoke.  We will use a role of Trigger, since MFT will be triggering our integration.

SOAPConnectionProperties

Figure 2

Figure 2 shows the properties that define the endpoint.  The WSDL URL may be added by specifying the actual WSDL as shown above, or the WSDL can be consumed by specifying the host:port/uri/?WSDL.

In this connection the WSDL was retrieved from the MFT embedded server.  This can be found at $MW_HOME/mft/integration/wsdl/MFTSOAService.wsdl.

The suppression of the timestamp is specified as true, since the policy being used at MFT does not require the timestamp to be passed.

Security Policy

SOAP_Security

 

Figure 3

For this scenario we will be using the username-password token policy.  The policy specified on this connection needs to match the policy that is specified for the MFT SOAP invocation.

The second connection, as mentioned previously, is only for demonstrating an end-to-end flow and is not essential to the push pattern itself.  It is a connection back to the MFT server.

MFT_FTP_Identifier

Figure 4

Identifier: Provide a unique name for the connection

Adapter: When selecting the adapter type choose the FTP Adapter

Connection Role: For this connection we will specify “Trigger and Invoke”.

Connection Properties

MFT_FTP_Connection_Properties

Figure 5

FTP Server Host Address:  The IP address of the FTP server.

FTP Server Port: The listening port of the FTP Server

SFTP Connection:  Specify “Yes”, since the invocation will be over sFTP

FTP Server Time Zone: The time zone where the FTP server is located.

Security Policy

MFT_FTP_Security

Figure 6

Security Policy:  FTP Server Access Policy

User Name:  The name of the user that has been created in the MFT environment.

Password: The password for the specified user.

Create the Integration

Now that the connections have been created we can begin to create the integration flow.  When the flow is triggered by the MFT SOAP request the file will be passed by reference.  The file contents are not passed, but rather a reference to the file is passed in the SOAP request.  When the integration is triggered the first step is to capture the size of the file.  The file size is used to determine the path to traverse through the flow.  A file size of greater than one megabyte is the determining factor.

integration

 

Figure 7

The selected path is determined by the incoming file size.  When MFT passes the file reference it also passes the size of the file.  We can then use this file size to determine the path to take.  Why do we want to do this?

If the file is of significant size then reading the entire file into memory could cause an out-of-memory condition.  Keep in mind that memory requirements are not just about reading the file but also the XML objects that are created and the supporting objects needed to complete any required transformations.

The ICS product provides a feature to prevent an OOM condition when reading large files.  The top path shown in Figure 7 demonstrates how to handle the processing of large files.  When processing a file of significant size it is best to download the file to ICS (this is an option provided by the FTP adapter when configuring the flow).  After downloading the file to ICS, it is processed by using a “stage” action.  The stage action is able to chunk the large file and read it across multiple threads.  This article will not provide an in-depth discussion of the stage action; to better understand the “stage” action, refer to the Oracle ICS documentation.

The “otherwise” path in the execution flow above is taken when the file size is less than the configured maximum file size.  For the scenario in this blog, I set the maximum size to one megabyte.

The use case being demonstrated involves passing the file by reference.  Therefore, in order to read or download the file we must obtain the reference location from MFT.  The incoming request provides the reference location.  We must provide this reference location and the target filename to the read or download operation.  This is done with the XSLT mapping shown in figure 8.

FileReferenceMapping

Figure 8

The result mapping is shown in Figure 9.

MappingPage

Figure 9

 

The mapping of the fields is provided below.

Headers.SOAPHeaders.MFTHeader.TargetFilename -> DownFileToICS.DownloadRequest.filename.

Substring-before(

substring-after(InboundSOAPRequestDocument.Body.MFTServiceInput.FTPReference.URL,’7522’),

InboundSOAPRequestDocument.Headers.SOAPHeaders.MFTHeader.TargetFilename) -> DownloadFileToICS.DownloadRequest.directory
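For illustration, a rough XSLT fragment corresponding to that mapping might look as follows (element paths are taken from the mapping above; namespace declarations and the enclosing template are omitted, so treat this as a sketch rather than the exact generated map):

<!-- Sketch: pass the target filename through unchanged and derive the download directory
     from the FTP reference URL (the text after port 7522, up to the target filename) -->
<DownloadRequest>
  <filename>
    <xsl:value-of select="/InboundSOAPRequestDocument/Headers/SOAPHeaders/MFTHeader/TargetFilename"/>
  </filename>
  <directory>
    <xsl:value-of select="substring-before(
        substring-after(/InboundSOAPRequestDocument/Body/MFTServiceInput/FTPReference/URL, '7522'),
        /InboundSOAPRequestDocument/Headers/SOAPHeaders/MFTHeader/TargetFilename)"/>
  </directory>
</DownloadRequest>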

As previously stated, this is a basic scenario intended to demonstrate the push process.  The integration flow may be as simple or complex as necessary to satisfy your specific use case.

Configuring MFT

Now that the integration has been completed it is time to implement the MFT transfer and configure the SOAP request for the callout.  We will first configure the MFT Source.

Create the Source

The source specifies the location of the incoming file.  For our scenario the directory we place our file in will be /users/zern/in.  The directory location is your choice but it must be relative to the embedded FTP server and one must have permissions to read from that directory.  Figure 10 shows the configuration for the MFT Source.

MFT_Source

Figure 10

As soon as the file is placed in the directory an “event” is triggered for the MFT target to perform the specified action.

Create the Target

The MFT target specifies the endpoint of the service to invoke.  In figure 11, the URL has been specified to the ICS integration that was implemented above.

MFT_Target_Location

 

Figure 11

The next step to specify is the security policy.  This policy must match what was specified by the connection defined in the ICS platform.  We are specifying the username_token_over_ssl_policy as seen in Figure 12.

MFT_Target_Policy

 

Figure 12

Besides specifying the security policy, we must also specify to ignore the timestamp in the response. Since the policy is the username_token policy, the request must also include the credentials; the credentials are retrieved from the keystore by providing the csf-key value.

Create the Transfer

The last step in this process is to bring the source and target together in the transfer.  It is within the transfer configuration that we specify the delivery preferences.  In this example we set the “Delivery Method” to “Reference” and the “Reference Type” to “sFTP”.

 

MFT_Transfer_Overview

Figure 13

Putting it all together

  1. A “.csv” file is dropped at the source location, /users/zern/in.
  2. MFT invokes the ICS integration via a SOAP request.
  3. The integration is triggered.
  4. The integration determines the size of the incoming file and selects the path of execution
  5. The file is either downloaded to ICS or read into memory.  This is determined by the path of execution.
  6. The file is transformed and then written back to the output directory specified by the FTP write operation.
  7. The integration is completed.

Push versus Polling

There is no right or wrong when choosing either a push or poll pattern.  Each pattern has its benefits.  I’ve listed a couple of points to consider for each pattern.

Push Pattern

  1. The file gets processed as soon as it arrives in the input directory.
  2. You need to create two connections; one SOAP connection and one FTP connection.
  3. Normally used to process only one file.
  4. The files can arrive at any time and there is no need to setup a schedule.

Polling Pattern

  1. You must create a schedule to consume the file(s).  The polling schedule can be at either specific intervals or at a given time.
  2. You only create one connection for the file consumption.
  3. Many files can be placed in the input directory and the scheduler will make sure each file is consumed by the integration flow.
  4. The file processing is delayed by up to the maximum interval of the polling schedule.

Summary

Oracle offers many SaaS cloud applications such as Fusion ERP and several of these SaaS solutions provide file-based interfaces.  These products require the input files to be in a specific format for each interface.  The Integration Cloud Service is an integration gateway that can enrich and/or transform these files and then pass them along directly to an application or an intermediate storage location like UCM where the file is staged as input to SaaS applications like Fusion ERP HCM.

With potentially many source systems interacting with Oracle SaaS applications it is beneficial to provide a set of common patterns to enable successful integrations.  The Integration Cloud Service offers a wide range of features, functionality, and flexibility and is instrumental in assisting with the implementation of these common patterns.

 

Invoking IDCS REST API from PL/SQL


This post shows a way to make REST API calls to IDCS from an Oracle Database using PL/SQL.  The idea is that a PL/SQL application can manage and search for user and group entities directly in IDCS.

In the sample code we’ll see how to obtain an access token from IDCS and make calls to create users, query group membership, and retrieve user profile attributes.  The PL/SQL code uses APEX 5.1 with the packages APEX_WEB_SERVICE to call IDCS and APEX_JSON to parse the JSON response.

Setup

 

  1. Since the Oracle Database is acting as an IDCS client, we need to register it in IDCS using Client Credentials as the grant type and with permission to invoke Administration APIs as Identity Domain Administrator.  The Client ID and Client Secret returned by the registration are used in the sample code to request an access token.

Screenshot 2017-02-08 14.35.43

Screenshot 2017-02-08 14.41.40

  2. Now, we give the Database the appropriate ACLs so it can resolve and call the IDCS URL. Note that DB Cloud instances seem to have existing ACLs for any external host (*).  If needed, execute the following commands to create ACLs for the IDCS host and port:

 

exec dbms_network_acl_admin.create_acl (acl => 'idcs_apex_acl.xml', description => 'IDCS HTTP ACL', principal => 'APEX_05XXXX', is_grant => TRUE, privilege => 'connect', start_date => null, end_date => null);

exec dbms_network_acl_admin.add_privilege (acl => 'idcs_apex_acl.xml', principal => 'APEX_05XXXX', is_grant => true, privilege => 'resolve');

exec dbms_network_acl_admin.assign_acl (acl => 'idcs_apex_acl.xml', host => 'myidcshost.com', lower_port => 8943, upper_port => 8943);

commit;

 

Replace the following values accordingly:

  1. 'APEX_05XXXX': the APEX schema owner (it varies with version)
  2. 'myidcshost.com': the IDCS host for your tenant
  3. 8943: the IDCS SSL port

Verify the ACLs if you see the error “ORA-24247: network access denied by access control list (ACL)” when submitting a request.

 

  3. In a Database Schema of your choice, create the PL/SQL Package using this SQL Script.  Before executing the script, replace all occurrences of idcs_app with your schema name.  Run the script as a SYSDBA user or as a user with the permissions to create procedures and types.

 

  4. Add the appropriate root certificate chain to the Database Wallet (trusted certificates) for SSL communication with IDCS.

 

  5. Test requests to IDCS using curl with SSL to verify proper access; a sample token request is shown below.
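For example, a token request with curl might look like the following (the hostname, port, and client credentials are the placeholder values used in the package below, and --cacert points at the CA chain added to the wallet in step 4):

curl -i --cacert /path/to/idcs_ca_chain.pem \
  -u '8105a4f266c745b09a7bbed42ff151eb:a664583b-2115-4921-bd48-8e4a84b0c7a3' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'grant_type=client_credentials&scope=urn:opc:idm:__myscopes__' \
  https://myidcs.oracle.com:8943/oauth2/v1/token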

 

PL/SQL Code

The sample code is in the form of a PL/SQL Package.  The script to create the package can be downloaded here.  The package specification is as follows:

 

CREATE or REPLACE PACKAGE IDCS_CLIENT as

g_base_url    VARCHAR2(500):= 'https://myidcs.oracle.com:8943';               -- Replace with IDCS Base URL
g_client_id   VARCHAR2(100):= '8105a4f266c745b09a7bbed42ff151eb';             -- Replace with Client ID
g_client_pwd  VARCHAR2(100):= 'a664583b-2115-4921-bd48-8e4a84b0c7a3';         -- Replace with Client Secret
g_wallet_path VARCHAR2(200):= 'file:/home/oracle/wallet';                     -- Replace with DB Wallet location
g_wallet_pwd  VARCHAR2(50) := 'Welcome1';                                     -- Replace with DB Wallet password
g_users_uri   VARCHAR2(200):= '/admin/v1/Users';
g_groups_uri  VARCHAR2(200):= '/admin/v1/Groups';

g_bearer_token  VARCHAR2(32767);      -- Stores Access token

-- Used to return a list of groups on function get_user_membership
TYPE group_list_t
IS TABLE OF VARCHAR2(100);

-- Used to return list of users and their profiles
TYPE user_list_t
IS TABLE OF idcs_user_t;

-- Create the following TYPE outside of the package using SQL*Plus with a SYSDBA account
-- TYPE idcs_user_t is used to store a user's profile
/*
CREATE TYPE idcs_user_t
AS OBJECT (
username      VARCHAR2(100),
displayname   VARCHAR2(100),
firstname     VARCHAR2(50),
lastname      VARCHAR2(50),
email         VARCHAR2(100)
);
*/

Note that the variables g_client_id and g_client_pwd need to have the respective values obtained during the DB client registration above.  The variable g_base_url is the IDCS Base URL for a specific tenant.  Also, since communication with IDCS is over SSL, the database needs a wallet, referenced via g_wallet_path and g_wallet_pwd.

Three types are defined to return multiple groups and users.  The type idcs_user_t is an object type used to store user profile information; this kind of type cannot be created inside the package, so it has to be created in SQL*Plus before creating the package (see the snippet below).  The type user_list_t is a table type that holds user profiles, and group_list_t is a table type that holds group names.
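A minimal sketch of creating that object type, matching the commented definition in the package specification:

-- Run as a privileged user (for example SYSDBA) before compiling the IDCS_CLIENT package
CREATE TYPE idcs_user_t AS OBJECT (
  username      VARCHAR2(100),
  displayname   VARCHAR2(100),
  firstname     VARCHAR2(50),
  lastname      VARCHAR2(50),
  email         VARCHAR2(100)
);
/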

-- Obtain access token from IDCS
PROCEDURE get_authz_token;

-- Creates user in IDCS
PROCEDURE create_user (
username      varchar2,
first_name  varchar2,
last_name   varchar2,
email       varchar2);

-- Assigns group groupname to user username
PROCEDURE grant_group (
username      varchar2,
groupname   varchar2);

-- Returns list of all IDCS groups a user is a member of
FUNCTION get_user_membership (
username      varchar2)
RETURN group_list_t;

-- Returns internal user id from username
FUNCTION get_user_id (
username      varchar2)
RETURN VARCHAR2;

-- Table Function to retrieve username, displayname, firstname, lastname, and email for all users
/* Sample usage for Table Function user_profiles:

    SELECT * from TABLE((idcs_client.user_profiles));

    SELECT email from TABLE((idcs_client.user_profiles)) where username='myuser1';

*/
FUNCTION user_profiles        -- Table function to query user profiles
RETURN user_list_t PIPELINED;

END idcs_client;

The use of the procedures and functions is self-explanatory.  Only the function user_profiles is different, in the sense that it is defined as an Oracle table function that can be used in SELECT statements to retrieve IDCS user profile attributes.  For example, the query:

SELECT lastname from TABLE((idcs_client.user_profiles)) where username='myuser1@demo.com';

would retrieve, in real time, the last name of the user with username myuser1@demo.com directly from IDCS.
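Likewise, a hedged usage sketch of the procedures (the user and group values are illustrative, and the group must already exist in IDCS):

BEGIN
  -- Create a user in IDCS and then add it to an existing group
  idcs_client.create_user('myuser1@demo.com', 'John', 'Doe', 'myuser1@demo.com');
  idcs_client.grant_group('myuser1@demo.com', 'MyIDCSGroup');
END;
/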

The actual code is in the Package Body below

 

CREATE or REPLACE PACKAGE BODY idcs_client AS

-- Gets access token from IDCS
PROCEDURE get_authz_token IS

v_token_request_uri VARCHAR2(50):= '/oauth2/v1/token';
v_creds VARCHAR2(500):= g_client_id||':'||g_client_pwd; -- Client credentials unencoded
v_client_creds VARCHAR2(1000):= replace(replace(replace(utl_encode.text_encode(v_creds,'WE8ISO8859P1', UTL_ENCODE.BASE64),chr(9)),chr(10)),chr(13)); -- BASE64-encodes credentials
l_idcs_response_clob CLOB; -- JSON response from IDCS
l_idcs_url VARCHAR2(500);

BEGIN
-- Build request headers
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/x-www-form-urlencoded; charset=UTF-8';

apex_web_service.g_request_headers(2).name := 'Authorization';
apex_web_service.g_request_headers(2).value := 'Basic '||v_client_creds;

l_idcs_url := g_base_url||v_token_request_uri; -- Request URL

-- Sends a POST SSL request to /oauth2/v1/token with grant_type=client_credentials and appropriate scope
l_idcs_response_clob := apex_web_service.make_rest_request
( p_url => l_idcs_url,
p_http_method => 'POST',
p_wallet_path => g_wallet_path,
p_wallet_pwd => g_wallet_pwd,
p_body => 'grant_type=client_credentials'||'&'||'scope=urn:opc:idm:__myscopes__');
dbms_output.put_line('IDCS Response getting token: '||l_idcs_response_clob);

-- APEX_JSON package used to parse response
apex_json.parse(l_idcs_response_clob); -- Parse JSON response. No error checking for simplicity.
-- Implement verification of response code and error check

g_bearer_token := apex_json.get_varchar2(p_path => 'access_token'); -- Obtain access_token from parsed response and set variable with token value
--dbms_output.put_line('Bearer Token: '||g_bearer_token);

END get_authz_token;

The procedure get_authz_token obtains the access token from IDCS using the credentials obtained during the application registration.  The credentials in v_creds are in the form 'clientID:clientSecret' and are then BASE64 encoded into v_client_creds.  The local variable l_idcs_response_CLOB holds the JSON response from IDCS.  After setting the request headers, the request is sent using apex_web_service.make_rest_request with the URL l_idcs_url.  The code does not check for errors in the response; the entire response can be seen in DBMS Output.  The response is parsed from l_idcs_response_CLOB using apex_json.parse, and the access token is retrieved from the 'access_token' attribute of the response and stored in g_bearer_token.
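For reference, the same token request can be reproduced from the command line with curl; the base URL, client id and client secret below are placeholders, and curl performs the Basic BASE64 encoding through the -u option:

curl -X POST \
  -H "Content-Type: application/x-www-form-urlencoded; charset=UTF-8" \
  -u "<client_id>:<client_secret>" \
  -d "grant_type=client_credentials&scope=urn:opc:idm:__myscopes__" \
  "<IDCS_BASE_URL>/oauth2/v1/token"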

 

-- Creates user in IDCS
PROCEDURE create_user (
     username varchar2,
     first_name varchar2,
     last_name varchar2,
     email varchar2) IS -- work email

l_idcs_url VARCHAR2(1000);

l_authz_header APEX_APPLICATION_GLOBAL.VC_ARR2;
l_idcs_response_clob CLOB;       -- JSON IDCS response
-- Quickly build a JSON request body for the Create User request from parameter values
l_users_body VARCHAR2(1000):='{
          "schemas": [
               "urn:ietf:params:scim:schemas:core:2.0:User"
           ],
             "userName": "'||username||'",
            "name": {
                "familyName": "'||last_name||'",
               "givenName": "'||first_name||'"
            },
          "emails": [
          {
             "value": "'||email||'",
             "type": "work",
             "primary": true
          }
         ]
}';

BEGIN

IF g_bearer_token IS NULL THEN
     idcs_client.get_authz_token; -- Get an access token to be able to make the request
END IF;
IF g_bearer_token IS NOT NULL THEN
-- Build request headers
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/scim+json';
apex_web_service.g_request_headers(2).name := 'Authorization';
apex_web_service.g_request_headers(2).value := 'Bearer ' || g_bearer_token; -- Access Token
l_idcs_url := g_base_url||g_users_uri ; -- IDCS URL

-- Sends a POST SSL request to /admin/v1/Users with the new user in the body
l_idcs_response_clob := apex_web_service.make_rest_request
( p_url => l_idcs_url,
p_http_method => 'POST',
p_wallet_path => g_wallet_path,
p_wallet_pwd => g_wallet_pwd,
p_body => l_users_body);

dbms_output.put_line('IDCS Response creating user: '||l_idcs_response_clob);

apex_json.parse(l_idcs_response_clob); -- Parse JSON response. No ERROR checking for simplicity.
-- Implement verification of response code and error check
--dbms_output.put_line(l_idcs_response_clob);
END IF;
END create_user;

The procedure create_user creates a user in IDCS with the specified username, first name, last name, and work email values.  The variable l_users_body is the request body that creates a user with the provided parameters.  The procedure first requests the access token, which is stored in g_bearer_token.  After building the headers it invokes apex_web_service.make_rest_request and the response is parsed.  The code does not check the response code or errors; the entire response can be seen in DBMS Output.
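The equivalent SCIM call, expressed with curl for reference (the access token, base URL, and user attribute values are placeholders), looks like this:

curl -X POST \
  -H "Content-Type: application/scim+json" \
  -H "Authorization: Bearer <access_token>" \
  -d '{"schemas":["urn:ietf:params:scim:schemas:core:2.0:User"],
       "userName":"myuser1@demo.com",
       "name":{"familyName":"Doe","givenName":"John"},
       "emails":[{"value":"myuser1@demo.com","type":"work","primary":true}]}' \
  "<IDCS_BASE_URL>/admin/v1/Users"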

 

-- Returns list of groups user username is a member of
FUNCTION get_user_membership (
     username varchar2)
RETURN group_list_t IS

l_idcs_url VARCHAR2(1000);
l_idcs_response_clob CLOB; --JSON IDCS Response
-- Request filter to return the displayName of the groups user username is a member of
l_groups_filter VARCHAR2(100):='?attributes=displayName&filter=members+eq+%22'||get_user_id(username)||'%22';
l_group_count PLS_INTEGER; --Number of groups the user is a member of
l_group_names group_list_t:=group_list_t(); -- List of user's groups to return

BEGIN
IF g_bearer_token IS NULL THEN
        idcs_client.get_authz_token; -- Get access token
END IF;
IF g_bearer_token IS NOT NULL THEN
    -- Build request headers
    apex_web_service.g_request_headers(1).name := 'Content-Type';
    apex_web_service.g_request_headers(1).value := 'application/scim+json';
    apex_web_service.g_request_headers(2).name := 'Authorization';
    apex_web_service.g_request_headers(2).value := 'Bearer ' || g_bearer_token;
    l_idcs_url := g_base_url||g_groups_uri||l_groups_filter ;

-- Sends a GET SSL request to /admin/v1/Groups?attributes=displayName&filter=members+eq+%22<user_id>%22
l_idcs_response_clob := apex_web_service.make_rest_request
    ( p_url => l_idcs_url,
     p_http_method => 'GET',
     p_wallet_path => g_wallet_path,
     p_wallet_pwd => g_wallet_pwd);

dbms_output.put_line('IDCS Response getting membership: '||l_idcs_response_clob);
apex_json.parse(l_idcs_response_clob); -- Parse JSON response. No ERROR checking for simplicity.
-- Implement verification of response code and error check
--dbms_output.put_line(l_idcs_response_clob);
l_group_count:=apex_json.get_count(p_path=>'Resources'); -- Number of Resources (groups) used to extract groups below

--List of groups is returned as l_group_names.  This loop populates the table variable.
FOR i in 1..l_group_count LOOP -- Through all returned groups.
   l_group_names.extend;
--Find displayName for current group %d

   l_group_names(l_group_names.last):=apex_json.get_varchar2(p_path=>'Resources[%d].displayName',p0=>i);
  --dbms_output.put_line(l_group_names(i)); -- Print group displayName
END LOOP;

RETURN l_group_names; -- Returns list of the user's groups
END IF;
RETURN null;
END get_user_membership;

The function get_user_membership returns all groups a user belongs to.  The variable l_groups_filter specifies an attributes filter that retrieves only the group displayName, plus a filter that returns only the groups with the specified user id as a value of the members attribute.  The user id is retrieved from the username through another call to IDCS using the function get_user_id.  After building the headers the function invokes apex_web_service.make_rest_request and the response is parsed.  The group count is returned from apex_json.get_count into l_group_count, which is used in a loop to populate the variable l_group_names with the displayName of each of the Resources in the response.  The table variable l_group_names is returned with the results.
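For reference, the same query can be issued directly with curl; <user_id> is the internal id returned by get_user_id, the other values are placeholders, and %22 is the URL-encoded double quote required by the SCIM filter:

curl -H "Authorization: Bearer <access_token>" \
  "<IDCS_BASE_URL>/admin/v1/Groups?attributes=displayName&filter=members+eq+%22<user_id>%22"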

 

-- This is a Table Function; it can be queried as SELECT * from TABLE((idcs_client.user_profiles));

FUNCTION user_profiles
RETURN user_list_t PIPELINED IS

l_idcs_url VARCHAR2(1000);
l_idcs_response_clob CLOB; -- JSON IDCS Response
-- Filter to get displayname, username, active, firstname, lastname and primary email
l_users_filter VARCHAR2(100):='?attributes=displayname,username,active,name.givenName,name.familyName,emails.value,emails.primary';
l_user_count PLS_INTEGER; -- Number of users returned.
l_user_profile idcs_user_t:=idcs_user_t(NULL,NULL,NULL,NULL,NULL); -- Initialize variable that holds user profiles

BEGIN
IF g_bearer_token IS NULL THEN
     idcs_client.get_authz_token; -- Get Access Token
END IF;
IF g_bearer_token IS NOT NULL THEN
    -- Build request headers
    apex_web_service.g_request_headers(1).name := 'Content-Type';
    apex_web_service.g_request_headers(1).value := 'application/scim+json';
    apex_web_service.g_request_headers(2).name := 'Authorization';
    apex_web_service.g_request_headers(2).value := 'Bearer ' || g_bearer_token;
    l_idcs_url := g_base_url||g_users_uri||l_users_filter ;

-- Sends a GET SSL request to /admin/v1/Users?attributes=displayname,username,active,name.givenName,name.familyName,emails.value,emails.primary to retrieve ALL USERS.
-- No paging is done
l_idcs_response_clob := apex_web_service.make_rest_request
( p_url => l_idcs_url,
p_http_method => 'GET',
p_wallet_path => g_wallet_path,
p_wallet_pwd => g_wallet_pwd);
--dbms_output.put_line('IDCS Response getting profiles: '||l_idcs_response_clob);
apex_json.parse(l_idcs_response_clob); -- Parse JSON response. No ERROR checking for simplicity.
-- Implement verification of response code and error check

l_user_count:=apex_json.get_count(p_path=>'Resources'); -- Number of Resources (users) in response

-- LOOP through all returned users and build an idcs_user_t object with the profile attributes for each user
-- No paging implemented
FOR i in 1..l_user_count LOOP
      l_user_profile:=idcs_user_t(apex_json.get_varchar2(p_path=>'Resources[%d].userName',p0=>i),
                                  apex_json.get_varchar2(p_path=>'Resources[%d].displayName',p0=>i),
                                  apex_json.get_varchar2(p_path=>'Resources[%d].name.givenName',p0=>i),
                                  apex_json.get_varchar2(p_path=>'Resources[%d].name.familyName',p0=>i),
                                  apex_json.get_varchar2(p_path=>'Resources[%d].emails[1].value',p0=>i)
);

-- dbms_output.put_line(l_user_profile.username);
PIPE ROW(l_user_profile); -- Pipe out rows to the invoking select statement
END LOOP;

END IF;
RETURN;
END user_profiles;

The table function user_profiles, as mentioned above, is invoked from a select statement to retrieve user profiles.  The variable l_users_filter limits the data that comes back from IDCS.  As declared it only retrieves a list of attributes per user and does not filter users by attribute, so it will retrieve all users.  An example of SCIM filters used when searching users is in this tutorial.  After building the headers the function invokes apex_web_service.make_rest_request and the response is parsed.  The number of users returned by the request is stored in l_user_count by calling apex_json.get_count to get the number of Resources returned.  The table type variable l_user_profile is populated in a loop with the attributes of each returned user.  Finally, the rows are piped out to the select statement that was issued.  Here is a sample of the result of a select statement on user_profiles.

 

Screenshot 2017-02-14 11.37.00

 

Connecting ICS and Apache Kafka via REST Proxy API


Introduction

Apache Kafka (Kafka for short) is a proven and well-known technology for a variety of reasons. First, it is very scalable and can handle hundreds of thousands of messages per second without expensive hardware and with close to zero fine tuning, as you can read here. Another reason is its client API capabilities. Kafka allows connections from different platforms by offering a number of client APIs that make it easy for developers to connect to and transact with Kafka. Being able to easily connect to a technology is a major requirement for open-source projects.

In a nutshell, Kafka client APIs are divided into three categories:

* Native Clients: This is the preferred way to develop client applications that must connect to Kafka. These APIs allow high-performance connectivity and leverage most of the features found in Kafka's clustering protocol. When using this API, developers are responsible for writing code to handle aspects like fault tolerance, offset management, etc. For example, the Oracle Service Bus Transport for Kafka has been built using the native clients, as you can read here.

* Connect API: An SDK that allows the creation of reusable clients, which run on top of a pre-built connector infrastructure that takes care of details such as fault tolerance, execution runtime and offset management. The Oracle GoldenGate adapter has been built on top of this SDK, as you can read here.

* REST Proxy API: For applications that for some reason can use neither the native clients nor the Connect API, there is an option to connect to Kafka using the REST Proxy API. This is an open-source project maintained by Confluent, the company behind Kafka, that allows REST-based calls against Kafka to perform transactions and administrative tasks. You can read more about this project here.

The objective of this blog is to detail how Kafka’s REST Proxy API can be used to allow connectivity from Oracle ICS (Integration Cloud Service). By leveraging the native REST adapter from ICS, it is possible to implement integration scenarios in which messages can be sent to Kafka. This blog is going to show the technical details about the REST Proxy API infrastructure and how to implement a use case on top of it.

Use_Case_Diagram

Figure 1: Use case where a request is made using SOAP and ICS delivers it to Kafka.

The use case is about leveraging ICS transformation capabilities to allow applications limited to the SOAP protocol to be able to send messages to Kafka. Maybe there are some applications out there that have no REST support, and can only interact with SOAP-based endpoints. In this pattern, ICS can be used to adapt and transform the message so it could be properly delivered to Kafka. SOAP is just an example; it could be the case of any other protocol/technology supported by ICS. Plus, any Oracle SaaS application that has built-in connectivity with ICS can also benefit from this pattern.

Getting Started with the REST Proxy API

As mentioned before, the REST Proxy API is an open-source project maintained by Confluent. Its source code can be found on GitHub, here. Please be aware that the REST Proxy API is not part of any Kafka deployment by default. That means that if you download and install a community version of Kafka, the bits for the REST Proxy API will not be there. You need to explicitly build the project and integrate it with your Kafka install. This can be a little tricky since the REST Proxy API project depends on other projects such as common, rest-utils and schema-registry.
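If you decide to build it from source anyway, a rough sketch of the steps (assuming Git, Maven and a JDK are installed; the repository names, branches and build order are assumptions and may vary between releases, so check each project's README) would be:

# Rough sketch only: build the REST Proxy and its dependency projects from source.
for repo in common rest-utils schema-registry kafka-rest; do
  git clone https://github.com/confluentinc/$repo.git
  (cd $repo && mvn install -DskipTests)
done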

Luckily, the Confluent folks provide an open-source version of their product, which has everything pre-integrated, including the REST Proxy API and the other dependencies. This distribution is called Confluent Open Source and can be downloaded here. It is strongly recommended to start with this distribution, so you can be sure you will not face errors that result from bad compilation/building/packaging. Oracle's own distribution of Kafka, called Event Hub Cloud Service, could be used as well.

Once you have a Kafka install that has the REST Proxy API, you are good to go. Everything works out-of-the-box through easy-to-use scripts. The only thing you have to keep in mind is the service dependencies. In a typical Kafka deployment, the brokers depend on the Zookeeper service, which has to be continuously up and running. Zookeeper keeps metadata about the brokers, partitions and topics in a highly available fashion. Zookeeper's default port is 2181.

The services from the REST Proxy API also depend on Zookeeper. To have a REST Proxy API deployment, you need a service called the REST Server. The REST Server depends on Zookeeper. Also, the REST Server depends on another service called Schema Registry – which in turn depends on Zookeeper as well. Figure 2 summarizes this dependency relationship between the services.

Services_Depedencies

Figure 2: Dependency relationship between the REST Proxy API services.

Although it may look like it, none of these services becomes a SPOF (Single Point of Failure) or SPOB (Single Point of Bottleneck) in Kafka's architecture. All of them were designed from scratch to be idempotent and stateless, so you can have multiple copies of each service running behind a load balancer to meet your performance and availability goals. To start a Kafka deployment with the REST Proxy API, execute the following scripts in the order shown in listing 1.

/bin/zookeeper-server-start /etc/kafka/zookeeper.properties &

/bin/kafka-server-start /etc/kafka/server.properties &

/bin/schema-registry-start /etc/schema-registry/schema-registry.properties &

/bin/kafka-rest-start /etc/kafka-rest/kafka-rest.properties &

Listing 1: Starting a Kafka deployment with the REST Proxy API.

As you can see in listing 1, every script references a properties configuration file. These files are used to customize the behavior of a given service. Most properties in these files have been preset to suit a variety of workloads, so unless you are trying to fine-tune a given service, most likely you will not need to change them.

There is an exception, though. In most production environments you will run these services on different boxes for high availability purposes. However, if you choose to run them within the same box, you might need to adjust some ports to avoid conflicts. That is easily accomplished by editing the respective properties file and adjusting the corresponding property. If you are unsure which property to change, consult the configuration properties documentation here.

Setting Up a Public Load Balancer

This section might be considered optional depending on the situation. In order for ICS to connect to the REST Proxy API, it needs to have network access to the endpoints exposed by the REST Server. This happens because ICS runs on the OPC (Oracle Public Cloud) and can only access endpoints that are publicly available on the internet (or endpoints exposed through the connectivity agent). Therefore, you may need to set up a load balancer in front of your REST Servers to allow for this connection. This should be considered a best practice because otherwise you would need to set up firewall rules to allow public internet access to the boxes that hold your REST Servers. Moreover, running without a load balancer would make it difficult to transparently change your infrastructure if you need to scale your REST Servers up or down. This blog shows how to set up OTD (Oracle Traffic Director) in front of the REST Servers, but any other load balancer that supports TCP/HTTP would also suit the need.

In OTD, the first step is creating a server pool that has all the exposed REST Server endpoints. In the setup built for this blog, I had a REST Server running on port 6666. Figure 3 shows an example of a server pool named rest-proxy-pool.

OTD_Config_1

Figure 3: Creating a server pool that references the REST Server services.

The second step is the creation of a route under your virtual server configuration that will forward any request that matches a certain pattern to the server pool created above. In the REST Proxy API, any request that intends to perform a transaction (which could be either to produce or consume messages) goes through a URI pattern that starts with /topics/*. Therefore, create a route that uses this pattern, as shown in figure 4.

OTD_Config_2

Figure 4: Creating a route to forward requests to the server pool.

Finally, you need to make sure that a functional HTTP listener is associated with your virtual server. This HTTP listener will be used by ICS when it sends messages out. In the setup built for this blog, I used an HTTP listener on port 8080 for non-SSL requests. Figure 5 depicts this.

OTD_Config_3

Figure 5: HTTP listener created to allow external communication.

Before moving to the following sections, it would be a good idea to validate the setup built so far, since there are a lot of moving parts that can fail. The best way to validate this is by sending a message to a topic using the REST Proxy API and checking whether that message is received by Kafka's console consumer. Thus, start a new console consumer instance to listen for messages sent to the topic orders, as shown in listing 2.

/bin/kafka-console-consumer --bootstrap-server <BROKER_ADDRESS>:9092 --topic orders

Listing 2: Starting a new console consumer that listens for messages.

Then, send a message out using the REST Proxy API exposed by your load balancer. Remember that the request should pass through the HTTP listener configured on OTD. Listing 3 shows a cURL example that sends a simple message to the topic using the infrastructure built so far.

curl -X POST -H "Content-Type: application/vnd.kafka.json.v1+json" --data '{"records":[{"key":"12345", "value":{"message":"Hello World"}}]}' "http://<OTD_ADDRESS>:8080/topics/orders"

Listing 3: HTTP POST to send a message to the topic using the REST Proxy API.

If everything was set up correctly, you should see the JSON payload in the output of the console consumer started in listing 2. There are some interesting things to note about the example shown in listing 3. Firstly, you may have noticed that the payload sent has a strictly defined structure. It is a JSON payload with only one root element called "records". This element's value is an array with multiple entries of type key/value, which means that you can send multiple records at once in a single request to maximize throughput, avoiding multiple network calls.

Secondly, the “key” field is not mandatory. If you send a record containing only the value, that will work as well. However, it is highly recommended to use a key every time you send a message out. That will give you more control over how the messages will be grouped together within the partitions in Kafka, therefore considerably improving the partition persistence/replication over the cluster.

Thirdly, you may also have noticed the content type header used in the cURL command. Instead of using a simple application/json as most applications would use, we used application/vnd.kafka.json.v1+json. This is a requirement for the REST Proxy API to work. Keep this in mind while developing flows in ICS.
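To illustrate the batching capability mentioned above, the request below (a sketch that reuses the same topic, address placeholder, and media type as listing 3) sends two records in a single call; each entry in the "records" array becomes one Kafka record:

curl -X POST -H "Content-Type: application/vnd.kafka.json.v1+json" \
  --data '{"records":[{"key":"12345","value":{"message":"Hello World"}},
                      {"key":"67890","value":{"message":"Hello Again"}}]}' \
  "http://<OTD_ADDRESS>:8080/topics/orders"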

Message Design for REST Proxy API

Now it is time to start thinking about how we are going to map the SOAP messages sent to ICS into the JSON payload that needs to be sent to the REST Proxy API. This exercise is important because once you start using ICS to build the flow, it will ask for payload samples and message schemas that you may not have at hand. Therefore, this section will focus on generating these artifacts.

Let’s start by designing the SOAP messages. In this use case we are going to have ICS receiving order confirmation requests. Each order confirmation request will contain the details of an order made by a certain customer. Listing 4 shows an example of this SOAP message.

<soapenv:Envelope xmlns:blog="http://cloud.oracle.com/paas/ics/blogs"

   xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">

   <soapenv:Body>

      <blog:confirmOrder>

         <order>

            <orderId>PO000897</orderId>

            <custId>C00803</custId>

            <dateTime>2017-02-09:11:06:35</dateTime>

            <amount>89.90</amount>

         </order>

      </blog:confirmOrder>

   </soapenv:Body>

</soapenv:Envelope>

Listing 4: SOAP message containing the order confirmation request.

In order to build the SOAP message shown in listing 4, it is necessary to have the corresponding message schemas, typically found in a WSDL document. You can download the WSDL used to build this blog here. It will be necessary when we set up the connection in ICS later.

The message that we really want to send to Kafka is in the JSON format. It has essentially all the fields shown in listing 4, except for the "orderId" field. Listing 5 shows the JSON message we need to send.

{
   "custId":"C00803",

   "dateTime":"2017-02-09:11:06:35",

   "amount":89.90

}

Listing 5: JSON message containing the order confirmation request.

The "orderId" field was omitted on purpose. We are going to use this field as the key for the record that will be sent to Kafka. By using this design we can provide a way to track orders by their identifiers. If you recall the JSON payload shown in listing 3, you will see that the JSON payload shown in listing 5 is the portion used in the "value" field. Listing 6 shows the concrete payload that needs to be built so the REST Proxy API can properly process it.

{
   "records":[
      {
         "key":"PO000897",
         "value": {
            "custId":"C00803",
            "dateTime":"2017-02-09:11:06:35",
            "amount":89.90
         }
      }
   ]
}

Listing 6: JSON payload used to process messages using the REST Proxy API.

Keep in mind that although the REST Proxy API receives the payload shown in listing 6, what the topic consumers effectively receive is only the record containing the key and the value set. When the consumer reads the "value" field of the record, it has access to the actual payload containing the order confirmation request. Figure 6 shows the mapping that needs to be implemented in ICS.

Abstract_Source_Target_Mapping

Figure 6: Message mapping to be implemented in ICS.

Developing the Integration Flow

Now that we have gone through the configuration necessary to establish communication with the REST Proxy API, we can start the development of the integration flow in ICS. Let's start with the configuration of the connections.

Create a SOAP-based connection as shown in figure 7. Since this connection will be used for inbound processing, you can skip any security configuration. Go ahead and attach the WSDL that contains the schemas to this newly created connection.

Creating_SOAP_Conn_2

Figure 7: SOAP-based connection used for inbound processing.

Next, create a REST-based connection as shown in figure 8. This is the connection that will be used to send messages out to Kafka. Therefore, make sure to set the "REST API Base URL" field to the correct endpoint, which should point to your load balancer. Also make sure to append the /topics resource after the port.

Creating_REST_Conn_2

Figure 8: REST-based connection used for outbound processing.

With the inbound and outbound connections properly created, go ahead and create a new integration. For this use case we are going to use Basic Map Data as the integration style/pattern, although you could also leverage the outbound connection to the REST Proxy API in orchestration-based integrations.

Creating_Flow_1

Figure 9: Using Basic Map Data as the integration style for the use case.

Name the integration OrderService and provide a description, as shown in figure 10. Once the integration flow is created, go ahead and drag the SOAP connection to the source area of the flow. That will trigger the SOAP endpoint creation wizard. Go through the wizard details until you reach the last page, accepting all default values. Then, click the "Done" button to finish it.

Creating_Flow_2

Figure 10: Setting up the details for the newly created integration.

ICS will create the source mapping according to the information gathered from the wizard, along with the information from the WSDL attached to the connection, as shown in figure 11. At this point, we can drag the REST connection to the target area of the flow. That will trigger the REST endpoint creation wizard.

Creating_Flow_6

Figure 11: Integration flow with the inbound mapping built.

Unlike the SOAP endpoint creation wizard, we will make some changes to the options shown in the REST endpoint creation wizard. The first is setting the Kafka topic name in the "Relative Source URI" field. This is important because ICS will use this information to build the final URI that will be sent to the REST Proxy API. Therefore, make sure to set the appropriate topic name. For this use case, we are using a topic named orders, as shown in figure 12. Also, select the option "Configure Request Payload" before clicking next.

Creating_Flow_7

Figure 12: Setting up details about the REST endpoint behavior.

On the next page, you will need to associate the schema that will be used to parse the request payload. Select "JSON Sample" and upload a JSON sample file that contains a payload like the one shown in listing 6. Make sure to provide a JSON sample that has at least two sample values in the array section. ICS validates whether the samples provided have enough information to generate the internal schemas. If a JSON sample has an array construct, ICS will ask for at least two values within the array to make sure that it is dealing with a list of values instead of a single value. You can grab a copy of a valid JSON sample for this use case here.

Creating_Flow_8

Figure 13: Setting up details about schemas and media types.

In the "Type of Payload" section, make sure to select the "Other Media Type" option to allow the usage of custom media types. Then, set application/vnd.kafka.json.v1+json as the value, as shown in figure 13. Click next and review the options set. If everything looks like what is shown in figure 14, click the "Done" button to finish the wizard.

Creating_Flow_9

Figure 14: Summary page of the REST endpoint creation wizard.

ICS will bring up the request and response mappings and expect you to set them up. Go ahead and create the mappings for both request and response. For the request mapping, simply associate the fields as shown in figure 15. Remember that this field mapping should mimic what was shown before in figure 6, including the usage of the "orderId" field as the record key.

Creating_Flow_11

Figure 15: Request mapping configuration.

The response mapping is much simpler: the only thing you have to do is associate the "orderId" field with the "confirmationId" field. The idea here is to give the user a way to know whether the transaction was 100% successful. Returning the same order identifier value accomplishes this because, if any failure happens during the message transmission, the REST Proxy API will propagate a fault back to the caller, which in turn forces ICS to catch this fault and propagate it back. Figure 16 shows the response mapping.

Creating_Flow_12

Figure 16: Response mapping configuration.

Now set up some tracking fields (for this use case the "orderId" field would be a good choice) and finish the integration flow as shown in figure 17. You are now ready to activate and test the integration to check the end-to-end behavior of the use case.

Creating_Flow_13

Figure 17: Integration flow 100% complete in ICS.

You can download a copy of this use case here. Once the integration is active, you can validate that it is working correctly by starting a console consumer as shown in listing 2. Then, open your favorite SOAP client utility and import the WSDL from the integration. You can easily access the integration's WSDL in the UI by clicking the information icon of the integration, as shown in figure 18.

Creating_Flow_14

Figure 18: Retrieving the integration’s WSDL from the UI.

Once the WSDL is properly imported into your SOAP client utility, send a request payload like the one shown in listing 4 to validate the integration. If everything was set up correctly, you should see the JSON payload sent to the topic in the output of the console consumer started in listing 2.
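If you prefer the command line over a SOAP client utility, a request like listing 4 can also be posted with curl. This is only a sketch: the endpoint URL and SOAPAction value must be taken from the integration's WSDL, the credentials are your ICS account, and order_request.xml is a local file containing the envelope from listing 4.

curl -u <ics_user>:<ics_password> \
  -H "Content-Type: text/xml; charset=UTF-8" \
  -H "SOAPAction: \"<soap_action_from_wsdl>\"" \
  --data @order_request.xml \
  "<integration_endpoint_url_from_wsdl>"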

Conclusion

This blog has shown in detail how to configure ICS to send messages to Kafka. Since ICS has no built-in adapter for Kafka, the REST Proxy API project, which is part of the Kafka ecosystem, was used instead.

GoldenGate Cloud Service (GGCS): Replication from On-Premises to Oracle Public Cloud (OPC)


Introduction

This document will walk you through how to configure Oracle GoldenGate (OGG) replication from an On-Premises Oracle Database to an Oracle Database Cloud Service (DBCS) on Oracle Public Cloud (OPC) via GoldenGate Cloud Service (GGCS).

Installation of Oracle GoldenGate for Oracle Database On-Premises and the provisioning of Oracle GGCS and DBCS are not discussed in this article; it is assumed that the Oracle GoldenGate software has been installed on the On-Premises server and that GGCS and DBCS instances already exist.

The scripts and information provided in this article are for educational purposes only. They are not supported by Oracle Development or Support, and come with no guarantee or warranty of functionality in any environment other than the test system used to prepare this article.

For details on OGG installation and provisioning of DBCS and GGCS, please check the following Oracle Documentation links:

GoldenGate Cloud Service (GGCS)

The GoldenGate Cloud Service (GGCS), is a cloud based real-time data integration and replication service, which provides seamless and easy data movement from various On-Premises relational databases to databases in the cloud with sub-second latency while maintaining data consistency and offering fault tolerance and resiliency.

Figure 1: GoldenGate Cloud Service (GGCS) Architecture Diagram

ggcs_architecture_01

OGG Replication between On-Premises and OPC via GGCS

The high level steps for OGG replication between On-Premises (source) database and DBaaS/DBCS (target) database in the Oracle Public Cloud (OPC) are as follows:

  • Configure and Start GGCS Oracle GoldenGate Manager on the OPC side
  • Configure and Start SSH proxy server process on the On-Premises
  • Configure and Start On-Premises OGG Extract process
  • Configure and Start On-Premises OGG Extract Data Pump process
  • Configure and Start GGCS Replicat process on the OPC side to deliver data into the target DBaaS/DBCS

GGCS Oracle GoldenGate Manager

To start configuring Oracle GoldenGate on the GGCS instance, the manager process must be running. Manager is the controller process that instantiates the other Oracle GoldenGate processes such as Extract, Extract Data Pump, Collector and Replicat processes.

Connect to the GGCS instance through ssh and start the Manager process via the GoldenGate Software Command Interface (GGSCI).

[oracle@ogg-wkshp db_1]$ ssh -i mp_opc_ssh_key opc@129.145.1.180

[opc@bics-gg-ggcs-1 ~]$ sudo su – oracle
[oracle@bics-gg-ggcs-1 ~]$ cd $GGHOME

Note: By default, the "opc" user is the only one allowed to ssh to the GGCS instance. We need to switch to the "oracle" user via the "su" command to manage the GoldenGate processes. The environment variable $GGHOME is pre-defined in the GGCS instance and points to the directory where GoldenGate was installed.

[oracle@bics-gg-ggcs-1 gghome]$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.160517 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_160711.1401_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Jul 12 2016 02:21:38
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2016, Oracle and/or its affiliates. All rights reserved.

GGSCI (bics-gg-ggcs-1) 1> start mgr

Manager started.

GGSCI (bics-gg-ggcs-1) 2> info mgr

Manager is running (IP port bics-gg-ggcs-1.7777, Process ID 25272).

Note: By default, GoldenGate processes do not accept any remote connections. To enable connections from other hosts via the SSH proxy, we need to add an ACCESSRULE to the Manager parameter file (MGR.prm) to allow connectivity through the public IP address of the GGCS instance.

Here’s the MGR.prm file used in this example:

--###############################################################
--## MGR.prm
--## Manager Parameter Template
-- Manager port number
-- PORT <port number>
PORT 7777
-- For allocate dynamicportlist. Here the range is starting from
-- port n1 through n2.
Dynamicportlist 7740-7760
-- Enable secrule for collector
ACCESSRULE, PROG COLLECTOR, IPADDR 129.145.1.180, ALLOW
-- Purge extract trail files
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPHOURS 24
-- Start one or more Extract and Replicat processes automatically
-- after they fail. AUTORESTART provides fault tolerance when
-- something temporary interferes with a process, such as
-- intermittent network outages or programs that interrupt access
-- to transaction logs.
-- AUTORESTART ER *, RETRIES <x>, WAITMINUTES <y>, RESETMINUTES <z>
-- This is to specify a lag threshold that is considered
-- critical, and to force a warning message to the error log.
-- Lagreport parameter specifies the interval at which manager
-- checks for extract / replicat lag.
--LAGREPORTMINUTES <x>
--LAGCRITICALMINUTES <y>
--Reports down processes
--DOWNREPORTMINUTES <n>
--DOWNCRITICAL

Start SSH Proxy Server on the On-Premises

By default, the only access allowed to the GGCS instance is via ssh. To allow communication between the GoldenGate processes On-Premises and those on the GGCS instance, we need to run an SSH proxy server on the On-Premises side.

Start the SSH proxy server process via the following ssh command (all in one line):

[oracle@ogg-wkshp db_1]$ ssh -i mp_opc_ssh_key -v -N -f -D 127.0.0.1:8888 opc@129.145.1.180 > ./dirrpt/socks.log 2>&1

Command Syntax: ssh -i {private_key_file} -v -N -f -D {listening_ip_address:listening_tcp_port_address} {user}@{GGCS_Instance_IP_address} > {output_file} 2>&1

SSH Command Options Explained:

  1. -i = Private key file
  2. -v = Verbose mode
  3. -N = Do not execute a remote command; mainly used for port forwarding
  4. -f = Run the ssh process in the background
  5. -D = Run as local dynamic application-level port forwarding; ssh acts as a SOCKS proxy server on the specified interface and port
  6. listening_ip_address = Host name or IP address where the SOCKS proxy will listen (127.0.0.1 is the loopback address)
  7. listening_tcp_port_address = TCP/IP port number to listen on
  8. 2>&1 = Redirect stdout and stderr to the output file

Verify that the SSH SOCKS proxy server process has started successfully: check the socks proxy output file via the "cat" utility and look for the messages "Local connections to ... forwarded" and "Local forwarding listening on port ...". Make sure it is connected to the GGCS instance and listening on the right IP address and port.

[oracle@ogg-wkshp db_1]$ cat ./dirrpt/socks.log

OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to 129.145.1.180 [129.145.1.180] port 22.
debug1: Connection established.
debug1: identity file keys/mp_opc_ssh_key type 1
debug1: loaded 1 keys
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
debug1: match: OpenSSH_5.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_4.3
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host ‘129.145.1.180’ is known and matches the RSA host key.

debug1: Authentication succeeded (publickey).
debug1: Local connections to 127.0.0.1:8888 forwarded to remote address socks:0
debug1: Local forwarding listening on 127.0.0.1 port 8888.
debug1: channel 0: new [port listener]
debug1: Entering interactive session.
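In addition to reading the log, you can confirm from the On-Premises host that the proxy process is running and listening on the chosen port (8888 in this example). This is just a quick OS-level check using standard tools:

[oracle@ogg-wkshp db_1]$ ps -ef | grep "[s]sh -i mp_opc_ssh_key"
[oracle@ogg-wkshp db_1]$ netstat -an | grep 8888 | grep LISTEN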

Configure On-Premises Oracle GoldenGate

For our test, we shall use the following tables in both the source and target databases:

CREATE TABLE ACCTN
(
ACCOUNT_NO NUMBER (10,0) NOT NULL
, BALANCE NUMBER (8,2) NULL
, PREVIOUS_BAL NUMBER (8,2) NULL
, LAST_CREDIT_AMT NUMBER (8,2) NULL
, LAST_DEBIT_AMT NUMBER (8,2) NULL
, LAST_CREDIT_TS TIMESTAMP NULL
, LAST_DEBIT_TS TIMESTAMP NULL
, ACCOUNT_BRANCH NUMBER (10,0) NULL
, CONSTRAINT PK_ACCTN
PRIMARY KEY
(
ACCOUNT_NO
)
USING INDEX
)
;
CREATE TABLE ACCTS
(
ACCOUNT_NO NUMBER (10,0) NOT NULL
, FIRST_NAME VARCHAR2 (25) NULL
, LAST_NAME VARCHAR2 (25) NULL
, ADDRESS_1 VARCHAR2 (25) NULL
, ADDRESS_2 VARCHAR2 (25) NULL
, CITY VARCHAR2 (20) NULL
, STATE VARCHAR2 (2) NULL
, ZIP_CODE NUMBER (10,0) NULL
, CUSTOMER_SINCE DATE NULL
, COMMENTS VARCHAR2 (30) NULL
, CONSTRAINT PK_ACCTS
PRIMARY KEY
(
ACCOUNT_NO
)
USING INDEX
)
;
CREATE TABLE BRANCH
(
BRANCH_NO NUMBER (10,0) NOT NULL
, OPENING_BALANCE NUMBER (8,2) NULL
, CURRENT_BALANCE NUMBER (8,2) NULL
, CREDITS NUMBER (8,2) NULL
, DEBITS NUMBER (8,2) NULL
, TOTAL_ACCTS NUMBER (10,0) NULL
, ADDRESS_1 VARCHAR2 (25) NULL
, ADDRESS_2 VARCHAR2 (25) NULL
, CITY VARCHAR2 (20) NULL
, STATE VARCHAR2 (2) NULL
, ZIP_CODE NUMBER (10,0) NULL
, CONSTRAINT PK_BRANCH
PRIMARY KEY
(
BRANCH_NO
)
USING INDEX
)
;
CREATE TABLE TELLER
(
TELLER_NO NUMBER (10,0) NOT NULL
, BRANCH_NO NUMBER (10,0) NOT NULL
, OPENING_BALANCE NUMBER (8,2) NULL
, CURRENT_BALANCE NUMBER (8,2) NULL
, CREDITS NUMBER (8,2) NULL
, DEBITS NUMBER (8,2) NULL
, CONSTRAINT PK_TELLER
PRIMARY KEY
(
TELLER_NO
)
USING INDEX
)
;

Start On-Premises Oracle GoldenGate Manager

[oracle@ogg-wkshp db_1]$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.10 21604177 23004694_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Apr 29 2016 01:06:03
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (ogg-wkshp.us.oracle.com) 1> start mgr

Manager started.

GGSCI (ogg-wkshp.us.oracle.com) 2> info mgr

Manager is running (IP port ogg-wkshp.us.oracle.com.7809, Process ID 7526).

Configure and Start Oracle GoldenGate Extract Online Change Capture process 

Before we can configure the Oracle GoldenGate Online Change Capture Extract process, we need to enable supplemental logging on the source database for the schema/tables we need to capture, via the GGSCI utility.

Enable Table Supplemental Logging via GGSCI:

GGSCI (ogg-wkshp.us.oracle.com) 1> dblogin userid tpcadb password tpcadb

Successfully logged into database.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 2> add schematrandata tpcadb

2017-02-22 10:38:01 INFO OGG-01788 SCHEMATRANDATA has been added on schema tpcadb.
2017-02-22 10:38:01 INFO OGG-01976 SCHEMATRANDATA for scheduling columns has been added on schema tpcadb.

Note: The GGSCI "dblogin" command logs the GGSCI session into the database. Your GGSCI session needs to be connected to the database before you can execute the "add schematrandata" command.

Create an Online Change Data Capture Extract Group via Integrated Extract process

For this test, we will name our Online Change Data Capture group process ETPCADB.

-> Register the Extract group with the database via GGSCI:

GGSCI (ogg-wkshp.us.oracle.com) 1> dblogin userid tpcadb password tpcadb

Successfully logged into database.

Note: When creating/adding/managing an Extract group as an Integrated Extract process, your GGSCI session needs to be connected to the database via the “dblogin” command.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 2> register extract etpcadb database

Extract ETPCADB successfully registered with database at SCN 2373172.

-> Create/Add the Extract Group in GoldenGate via GGSCI:

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 3> add extract etpcadb, integrated, tranlog, begin now

EXTRACT added.

Note: To edit/create the Extract Configuration/Parameter file, you need to execute “edit param <group_name>” via the GGSCI utility.

GGSCI (ogg-wkshp.us.oracle.com) 1> edit param etpcadb

Here’s the Online Change Capture Parameter (etpcadb.prm) file used in this example:

EXTRACT ETPCADB
userid tpcadb, password tpcadb
EXTTRAIL ./dirdat/ea
discardfile ./dirrpt/etpcadb.dsc, append
TABLE TPCADB.ACCTN;
TABLE TPCADB.ACCTS;
TABLE TPCADB.BRANCH;
TABLE TPCADB.TELLER;

Add a local extract trail to the Online Change Data Capture  Extract Group via GGSCI

GGSCI (ogg-wkshp.us.oracle.com) 1> add exttrail ./dirdat/ea, extract etpcadb

EXTTRAIL added.

Start the Online Change Data Capture  Extract Group via GGSCI

GGSCI (ogg-wkshp.us.oracle.com) 2> start extract etpcadb

Sending START request to MANAGER …
EXTRACT ETPCADB starting

Check the Status of Online Change Data Capture  Extract Group via GGSCI

GGSCI (ogg-wkshp.us.oracle.com) 4> dblogin userid tpcadb password tpcadb

Successfully logged into database.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 5> info extract etpcadb detail

EXTRACT ETPCADB Last Started 2017-02-22 10:46 Status RUNNING
Checkpoint Lag 00:00:10 (updated 00:00:09 ago)
Process ID 10705
Log Read Checkpoint Oracle Integrated Redo Logs
2017-02-22 10:59:17
SCN 0.2394754 (2394754)
Target Extract Trails:
Trail Name Seqno RBA Max MB Trail Type
./dirdat/ea 0 1450 100 EXTTRAIL
Integrated Extract outbound server first scn: 0.2373172 (2373172)
Integrated Extract outbound server filtering start scn: 0.2373172 (2373172)
Extract Source Begin End
Not Available 2017-02-22 10:44 2017-02-22 10:59
Not Available * Initialized * 2017-02-22 10:44
Not Available * Initialized * 2017-02-22 10:44
Current directory /u01/app/oracle/product/12cOGG/db_1
Report file /u01/app/oracle/product/12cOGG/db_1/dirrpt/ETPCADB.rpt
Parameter file /u01/app/oracle/product/12cOGG/db_1/dirprm/etpcadb.prm
Checkpoint file /u01/app/oracle/product/12cOGG/db_1/dirchk/ETPCADB.cpe
Process file /u01/app/oracle/product/12cOGG/db_1/dirpcs/ETPCADB.pce
Error log /u01/app/oracle/product/12cOGG/db_1/ggserr.log

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 6> info all

Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING ETPCADB 00:00:09 00:00:08

Configure and Start Oracle GoldenGate Extract Data Pump process 

For this test, we will name our GoldenGate Extract Data Pump group process PTPCADB.

Create the Extract Data Pump Group (Process) via GGSCI

The Extract Data Pump group process reads the trail created by the Online Change Data Capture Extract (ETPCADB) process and sends the data to the GoldenGate process running on the GGCS instance via the SSH SOCKS proxy server.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 7> add extract ptpcadb, exttrailsource ./dirdat/ea

EXTRACT added.

Note: To edit/create the Extract Configuration/Parameter file, you need to execute “edit param <group_name>” via the GGSCI utility.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 8> edit param ptpcadb

Here’s the Extract Data Pump Parameter (ptpcadb.prm) file used in this example:

EXTRACT PTPCADB
RMTHOST 129.145.1.180, MGRPORT 7777, SOCKSPROXY 127.0.0.1:8888
discardfile ./dirrpt/ptpcadb.dsc, append
rmttrail ./dirdat/pa
passthru
table TPCADB.ACCTN;
table TPCADB.ACCTS;
table TPCADB.BRANCH;
table TPCADB.TELLER;

Add the remote trail to the Extract Data Pump Group via GGSCI

The remote trail is the output file location on the remote side (GGCS instance) used by the Extract Data Pump to write data, which is then read by the Replicat Delivery process and applied to the target database on the Oracle Database Cloud Service (DBCS) instance.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 9> add rmttrail ./dirdat/pa, extract ptpcadb

RMTTRAIL added.

Start the Extract Data Pump Group via GGSCI

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 10> start extract ptpcadb

Sending START request to MANAGER …
EXTRACT PTPCADB starting

Check the Status of Extract Data Pump Group via GGSCI 

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 11> info extract ptpcadb detail

EXTRACT PTPCADB Last Started 2017-02-22 11:12 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:08 ago)
Process ID 15281
Log Read Checkpoint File ./dirdat/ea000000
First Record RBA 0
Target Extract Trails:
Trail Name Seqno RBA Max MB Trail Type
./dirdat/pa 0 0 100 RMTTRAIL
Extract Source Begin End
./dirdat/ea000000 * Initialized * First Record
./dirdat/ea000000 * Initialized * First Record
Current directory /u01/app/oracle/product/12cOGG/db_1
Report file /u01/app/oracle/product/12cOGG/db_1/dirrpt/PTPCADB.rpt
Parameter file /u01/app/oracle/product/12cOGG/db_1/dirprm/ptpcadb.prm
Checkpoint file /u01/app/oracle/product/12cOGG/db_1/dirchk/PTPCADB.cpe
Process file /u01/app/oracle/product/12cOGG/db_1/dirpcs/PTPCADB.pce
Error log /u01/app/oracle/product/12cOGG/db_1/ggserr.log

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 13> info all

Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING ETPCADB 00:00:10 00:00:06
EXTRACT RUNNING PTPCADB 00:00:00 00:00:00

Configure and Start GGCS Oracle GoldenGate Delivery Process

Connect to the GGCS instance through ssh and launch the GoldenGate Software Command Interface (GGSCI) utility to configure the GoldenGate Delivery process.

[oracle@ogg-wkshp db_1]$ ssh -i mp_opc_ssh_key opc@129.145.1.180

[opc@bics-gg-ggcs-1 ~]$ sudo su – oracle
[oracle@bics-gg-ggcs-1 ~]$ cd $GGHOME

Note: By default, the "opc" user is the only one allowed to ssh to the GGCS instance. We need to switch to the "oracle" user via the "su" command to manage the GoldenGate processes. The environment variable $GGHOME is pre-defined in the GGCS instance and points to the directory where GoldenGate was installed.

[oracle@bics-gg-ggcs-1 gghome]$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.160517 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_160711.1401_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Jul 12 2016 02:21:38
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2016, Oracle and/or its affiliates. All rights reserved.

Configure GGCS Oracle GoldenGate Replicat Online Delivery group via Integrated process

Configure the Replicat Online Delivery group that reads the trail file the Data Pump writes to and delivers the changes into the BICS DBCS database.

Before configuring the delivery group as an Integrated delivery process, make sure that the GGSCI session is connected to the database via the GGSCI “dblogin” command.

GGSCI (bics-gg-ggcs-1) 1> dblogin useridalias ggcsuser_alias

Successfully logged into database BICSPDB1.

Create/add the Replicat Delivery group as an Integrated process; in this example we will name the Replicat Delivery group RTPCADB.

GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 2> add replicat rtpcadb, integrated, exttrail ./dirdat/pa

REPLICAT (Integrated) added.

Note: To edit/create the Replicat Delivery Configuration/Parameter file, you need to execute “edit param <group_name>” via the GGSCI utility.

GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 3> edit param rtpcadb

Here’s the GGCS Replicat Online Delivery Parameter (rtpcadb.prm) file used in this example:

REPLICAT RTPCADB
useridalias ggcsuser_alias
--Integrated parameter
DBOPTIONS INTEGRATEDPARAMS (parallelism 2)
DISCARDFILE ./dirrpt/rtpcadb.dsc, APPEND Megabytes 25
ASSUMETARGETDEFS
MAP TPCADB.ACCTN, TARGET GGCSBICS.ACCTN;
MAP TPCADB.ACCTS, TARGET GGCSBICS.ACCTS;
MAP TPCADB.BRANCH, TARGET GGCSBICS.BRANCH;
MAP TPCADB.TELLER, TARGET GGCSBICS.TELLER;

Start the GGCS Replicat Online Delivery process via GGSCI

GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 3> start replicat rtpcadb

Sending START request to MANAGER …
REPLICAT RTPCADB starting

Check the Status of GGCS Replicat Online Delivery process via GGSCI 

GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 4> info replicat rtpcadb detail

REPLICAT RTPCADB Last Started 2017-02-22 14:23 Status RUNNING
INTEGRATED
Checkpoint Lag 00:00:00 (updated 00:00:06 ago)
Process ID 25601
Log Read Checkpoint File ./dirdat/pa000000
2017-02-22 14:23:38.468569 RBA 0
INTEGRATED Replicat
DBLOGIN Provided, inbound server name is OGG$RTPCADB in ATTACHED state
Current Log BSN value: (no data)
Integrated Replicat low watermark: (no data)
(All source transactions prior to this scn have been applied)
Integrated Replicat high watermark: (no data)
(Some source transactions between this scn and the low watermark may have been applied)
Extract Source Begin End
./dirdat/pa000000 * Initialized * 2017-02-22 14:23
./dirdat/pa000000000 * Initialized * First Record
./dirdat/pa000000000 * Initialized * First Record
Current directory /u02/data/gghome
Report file /u02/data/gghome/dirrpt/RTPCADB.rpt
Parameter file /u02/data/gghome/dirprm/rtpcadb.prm
Checkpoint file /u02/data/gghome/dirchk/RTPCADB.cpr
Process file /u02/data/gghome/dirpcs/RTPCADB.pcr
Error log /u02/data/gghome/ggserr.log

At this point, we have complete OGG replication from the source Oracle database On-Premises to the target Oracle database on the OPC via GGCS.
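Before running transactions, it can be useful to confirm on the GGCS side that the Manager and Replicat processes are still running. A quick sketch, driving GGSCI non-interactively from the shell with the same commands used earlier:

[oracle@bics-gg-ggcs-1 gghome]$ ./ggsci <<EOF
info all
info replicat rtpcadb
EOF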

Run Test Transactions

Now we are ready to run some transactions on the On-Premises source database and have them replicated through GGCS to the target database running on the DBCS instance on the OPC.

In this example, we start with empty tables on both source and target.

Check of Source Tables (On-Premises)

[oracle@ogg-wkshp db_1]$ sqlplus tpcadb/tpcadb <<EOF
select count(*) from ACCTN;
select count(*) from ACCTS;
select count(*) from BRANCH;
select count(*) from TELLER;
EOF

SQL*Plus: Release 12.1.0.2.0 Production on Wed Feb 22 12:56:14 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Wed Feb 22 2017 12:49:42 -08:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL>
COUNT(*)
———-
0
SQL>
COUNT(*)
———-
0
SQL>
COUNT(*)
———-
0
SQL>
COUNT(*)
———-
0
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Check of Target Tables from GGCS Instance

[oracle@bics-gg-ggcs-1 ~]$ sqlplus ggcsbics@target/ggcsbics <<EOF
select count(*) from ACCTN;
select count(*) from ACCTS;
select count(*) from BRANCH;
select count(*) from TELLER;
EOF

SQL*Plus: Release 12.1.0.2.0 Production on Wed Feb 22 16:02:23 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Wed Feb 22 2017 16:01:10 -05:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 – 64bit Production
SQL>
COUNT(*)
———-
0
SQL>
COUNT(*)
———-
0
SQL>
COUNT(*)
———-
0
SQL>
COUNT(*)
———-
0
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 – 64bit Production

Note: When the GGCS instance is provisioned, a default TNS net service name, "target", is created in the tnsnames.ora of the GGCS instance. This net service name contains the connection information for the database that was associated with the GGCS instance when it was provisioned. The tnsnames.ora file is located under the /u01/app/oracle/oci/network/admin directory.

Here’s a sample of the tnsnames.ora file that gets generated after the GGCS instance has been provisioned:

#GGCS generated file
target =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = BICS-DB)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = BICSPDB1.usoracle55293.oraclecloud.internal)
)
)

Run Test Transactions on the Source Tables (On-Premises) via SQLPLUS

Let's start with insert transactions: inserting 2 records into each of the 4 tables, a total of 8 insert operations.

[oracle@ogg-wkshp dirsql]$ sqlplus tpcadb/tpcadb <<EOF
INSERT INTO ACCTN (ACCOUNT_NO, BALANCE, PREVIOUS_BAL, LAST_CREDIT_AMT, LAST_CREDIT_TS, ACCOUNT_BRANCH) VALUES ( 83915, 1000, 0, 1000, TO_TIMESTAMP ('2005-08-18:15:11:37.123456', 'YYYY-MM-DD:HH24:MI:SS.FF'), 82);
INSERT INTO ACCTN (ACCOUNT_NO, BALANCE, PREVIOUS_BAL, LAST_CREDIT_AMT, LAST_CREDIT_TS, ACCOUNT_BRANCH) VALUES ( 83916, 1000, 0, 1000, TO_TIMESTAMP ('2005-08-18:15:11:37.123456', 'YYYY-MM-DD:HH24:MI:SS.FF'), 82);
COMMIT WORK;
INSERT INTO ACCTS (ACCOUNT_NO, FIRST_NAME, LAST_NAME, ADDRESS_1, ADDRESS_2, CITY, STATE, ZIP_CODE, CUSTOMER_SINCE) VALUES ( 83915, 'Margarete', 'Smith', '222 8th Ave', ' ', 'San Diego', 'CA', 97827, to_date ('1992-08-18', 'YYYY-MM-DD'));
INSERT INTO ACCTS (ACCOUNT_NO, FIRST_NAME, LAST_NAME, ADDRESS_1, ADDRESS_2, CITY, STATE, ZIP_CODE, CUSTOMER_SINCE) VALUES ( 83916, 'Margarete', 'Howsler', '1615 Ramona Ave', ' ', 'Fresno', 'CA', 91111, to_date ('1985-08-18', 'YYYY-MM-DD'));
COMMIT WORK;
INSERT INTO TELLER (TELLER_NO, BRANCH_NO, OPENING_BALANCE) VALUES ( 9815, 82, 10000 );
INSERT INTO TELLER (TELLER_NO, BRANCH_NO, OPENING_BALANCE) VALUES ( 9816, 83, 10000 );
COMMIT WORK;
INSERT INTO BRANCH (BRANCH_NO, OPENING_BALANCE, ADDRESS_1, ADDRESS_2, CITY, STATE, ZIP_CODE) VALUES ( 82, 100000, '7 Market St', ' ', 'Los Angeles', 'CA', 90001);
INSERT INTO BRANCH (BRANCH_NO, OPENING_BALANCE, ADDRESS_1, ADDRESS_2, CITY, STATE, ZIP_CODE) VALUES ( 83, 100000, '222 8th Ave', ' ', 'Salinas', 'CA', 95899);
COMMIT WORK;
EOF

SQL*Plus: Release 12.1.0.2.0 Production on Wed Feb 22 18:26:29 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Wed Feb 22 2017 18:25:24 -08:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL>
1 row created.
SQL>
1 row created.
SQL>
Commit complete.
SQL>
1 row created.
SQL>
1 row created.
SQL>
Commit complete.
SQL>
1 row created.
SQL>
1 row created.
SQL>
Commit complete.
SQL>
1 row created.
SQL>
1 row created.
SQL>
Commit complete.
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Next, let's run update transactions: updating 2 records in each table, for a total of 8 update operations across the 4 tables.

[oracle@ogg-wkshp dirsql]$ sqlplus tpcadb/tpcadb <<EOF
UPDATE ACCTN SET BALANCE=25000, PREVIOUS_BAL=1000 WHERE ACCOUNT_NO=83915;
UPDATE ACCTN SET BALANCE=55789, PREVIOUS_BAL=1000 WHERE ACCOUNT_NO=83916;
COMMIT WORK;
UPDATE ACCTS SET FIRST_NAME = 'Margie' WHERE ACCOUNT_NO=83915;
UPDATE ACCTS SET FIRST_NAME = 'Mandela' WHERE ACCOUNT_NO=83916;
COMMIT WORK;
UPDATE TELLER SET OPENING_BALANCE=99900 WHERE TELLER_NO=9815;
UPDATE TELLER SET OPENING_BALANCE=77777 WHERE TELLER_NO=9816;
COMMIT WORK;
UPDATE BRANCH SET TOTAL_ACCTS = 25000 WHERE BRANCH_NO = 82;
UPDATE BRANCH SET TOTAL_ACCTS = 55789 WHERE BRANCH_NO = 83;
COMMIT WORK;
EOF

SQL*Plus: Release 12.1.0.2.0 Production on Wed Feb 22 18:37:13 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Wed Feb 22 2017 18:26:29 -08:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL>
1 row updated.
SQL>
1 row updated.
SQL>
Commit complete.
SQL>
1 row updated.
SQL>
1 row updated.
SQL>
Commit complete.
SQL>
1 row updated.
SQL>
1 row updated.
SQL>
Commit complete.
SQL>
1 row updated.
SQL>
1 row updated.
SQL>
Commit complete.
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Finally, let's run delete transactions: deleting 1 record from each table, for a total of 4 delete operations across the 4 tables.

[oracle@ogg-wkshp dirsql]$ sqlplus tpcadb/tpcadb <<EOF
DELETE FROM ACCTN WHERE ACCOUNT_NO = 83916;
DELETE FROM ACCTS WHERE ACCOUNT_NO = 83916;
DELETE FROM TELLER WHERE TELLER_NO = 9816;
DELETE FROM BRANCH where BRANCH_NO = 83;
COMMIT WORK;
EOF

SQL*Plus: Release 12.1.0.2.0 Production on Wed Feb 22 18:43:34 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Wed Feb 22 2017 18:37:13 -08:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL>
1 row deleted.
SQL>
1 row deleted.
SQL>
1 row deleted.
SQL>
1 row deleted.
SQL>
Commit complete.
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Now, let's just do a simple count via SQL*Plus of the final total number of records in our source database.

[oracle@ogg-wkshp db_1]$ sqlplus tpcadb/tpcadb <<EOF
select count(*) from ACCTN;
select count(*) from ACCTS;
select count(*) from BRANCH;
select count(*) from TELLER;
EOF

SQL*Plus: Release 12.1.0.2.0 Production on Wed Feb 22 21:40:28 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Wed Feb 22 2017 21:18:00 -08:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL>
COUNT(*)
----------
1
SQL>
COUNT(*)
----------
1
SQL>
COUNT(*)
----------
1
SQL>
COUNT(*)
----------
1
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

At this point, we have executed the following operations:

Table Name      Insert  Update  Delete  Total Operations  Final # of Rows/Records
TPCADB.ACCTN         2       2       1                 5                        1
TPCADB.ACCTS         2       2       1                 5                        1
TPCADB.TELLER        2       2       1                 5                        1
TPCADB.BRANCH        2       2       1                 5                        1

A total of 8 inserts, 8 updates, and 4 deletes.

Check Online Change Data Capture Extract process ETPCADB Statistics (On-Premises)

Now, let's check the statistics for our Extract process ETPCADB via the GGSCI "STATS" command; this should capture and reflect the operations we have just executed on the source tables.

GGSCI (ogg-wkshp.us.oracle.com) 1> dblogin userid tpcadb password tpcadb

Successfully logged into database.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 2> stats extract etpcadb, total, table *.*

Sending STATS request to EXTRACT ETPCADB …
Start of Statistics at 2017-02-22 21:26:13.
DDL replication statistics (for all trails):
*** Total statistics since extract started ***
Operations 21.00
Output to ./dirdat/ea:

Extracting from TPCADB.ACCTN to TPCADB.ACCTN:
*** Total statistics since 2017-02-22 18:49:44 ***
Total inserts                                 2.00
Total updates                                 2.00
Total deletes                                 1.00
Total discards                                0.00
Total operations                              5.00

Extracting from TPCADB.ACCTS to TPCADB.ACCTS:
*** Total statistics since 2017-02-22 18:49:44 ***
Total inserts                                 2.00
Total updates                                 2.00
Total deletes                                 1.00
Total discards                                0.00
Total operations                              5.00

Extracting from TPCADB.TELLER to TPCADB.TELLER:
*** Total statistics since 2017-02-22 18:49:44 ***
Total inserts                                 2.00
Total updates                                 2.00
Total deletes                                 1.00
Total discards                                0.00
Total operations                              5.00

Extracting from TPCADB.BRANCH to TPCADB.BRANCH:
*** Total statistics since 2017-02-22 18:49:44 ***
Total inserts                                 2.00
Total updates                                 2.00
Total deletes                                 1.00
Total discards                                0.00
Total operations                              5.00
End of Statistics.

Check Extract Datapump process PTPCADB Statistics (On-Premises)

Now, let's check the statistics for our Extract Datapump process PTPCADB via the same GGSCI "STATS" command; this should also reflect the same number of operations we have just executed on the source tables.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 3> stats extract ptpcadb, total, table *.*

Sending STATS request to EXTRACT PTPCADB …
Start of Statistics at 2017-02-22 21:48:44.
Output to ./dirdat/pa:

Extracting from TPCADB.ACCTN to TPCADB.ACCTN:
*** Total statistics since 2017-02-22 18:49:45 ***
Total inserts                                 2.00
Total updates                                 2.00
Total deletes                                 1.00
Total discards                                0.00
Total operations                              5.00

Extracting from TPCADB.ACCTS to TPCADB.ACCTS:
*** Total statistics since 2017-02-22 18:49:45 ***
Total inserts                                 2.00
Total updates                                 2.00
Total deletes                                 1.00
Total discards                                0.00
Total operations                              5.00

Extracting from TPCADB.TELLER to TPCADB.TELLER:
*** Total statistics since 2017-02-22 18:49:45 ***
Total inserts                                 2.00
Total updates                                 2.00
Total deletes                                 1.00
Total discards                                0.00
Total operations                              5.00

Extracting from TPCADB.BRANCH to TPCADB.BRANCH:
*** Total statistics since 2017-02-22 18:49:45 ***
Total inserts                                 2.00
Total updates                                 2.00
Total deletes                                 1.00
Total discards                                0.00
Total operations                              5.00
End of Statistics.

Check Online Change Delivery Replicat process RTPCADB Statistics (GGCS Instance on the OPC)

Now, let's check the statistics for our Online Change Delivery Replicat process RTPCADB via the same GGSCI "STATS" command we used for our Extract processes. This should reflect the same number of operations we executed on the source tables, captured by the Extract (ETPCADB) process and sent over by the Extract Datapump (PTPCADB) process.

GGSCI (bics-gg-ggcs-1) 1> dblogin useridalias ggcsuser_alias

Successfully logged into database BICSPDB1.

GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 2> stats replicat rtpcadb, total, table *.*

Sending STATS request to REPLICAT RTPCADB …
Start of Statistics at 2017-02-23 01:03:12.
Integrated Replicat Statistics:
Total transactions                             9.00
Redirected                                     0.00
DDL operations                                 0.00
Stored procedures                              0.00
Datatype functionality                         0.00
Event actions                                  0.00
Direct transactions ratio                      0.00%

Replicating from TPCADB.ACCTN to BICSPDB1.GGCSBICS.ACCTN:
*** Total statistics since 2017-02-23 00:59:41 ***
Total inserts                                  2.00
Total updates                                  2.00
Total deletes                                  1.00
Total discards                                 0.00
Total operations                               5.00

Replicating from TPCADB.ACCTS to BICSPDB1.GGCSBICS.ACCTS:
*** Total statistics since 2017-02-23 00:59:41 ***
Total inserts                                  2.00
Total updates                                  2.00
Total deletes                                  1.00
Total discards                                 0.00
Total operations                               5.00

Replicating from TPCADB.TELLER to BICSPDB1.GGCSBICS.TELLER:
*** Total statistics since 2017-02-23 00:59:41 ***
Total inserts                                  2.00
Total updates                                  2.00
Total deletes                                  1.00
Total discards                                 0.00
Total operations                               5.00

Replicating from TPCADB.BRANCH to BICSPDB1.GGCSBICS.BRANCH:
*** Total statistics since 2017-02-23 00:59:41 ***
Total inserts                                  2.00
Total updates                                  2.00
Total deletes                                  1.00
Total discards                                 0.00
Total operations                               5.00
End of Statistics.
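
Before running the final count, it can also be useful to confirm that the Replicat process is still running and has no backlog. A quick optional check, continuing the same GGSCI session (output omitted), uses the standard INFO and LAG commands; INFO should report a RUNNING status, and LAG should eventually report that the Replicat is at EOF with no more records to process:

GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 3> info replicat rtpcadb, detail

GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 4> lag replicat rtpcadb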

Now, for the final step, let's do a simple count via SQL*Plus of the total number of records in our target database and make sure the result matches the final record counts in our source database.

[oracle@bics-gg-ggcs-1 ~]$ sqlplus ggcsbics@target/ggcsbics <<EOF
select count(*) from ACCTN;
select count(*) from ACCTS;
select count(*) from BRANCH;
select count(*) from TELLER;
EOF

SQL*Plus: Release 12.1.0.2.0 Production on Thu Feb 23 01:12:24 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Wed Feb 22 2017 16:02:34 -05:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL>
COUNT(*)
----------
1
SQL>
COUNT(*)
----------
1
SQL>
COUNT(*)
----------
1
SQL>
COUNT(*)
----------
1
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
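
Record counts alone do not prove the data content matches, so as an optional spot check here is a minimal sketch (using the standard user/password@net_service SQL*Plus syntax against the same target net service) that compares a couple of the replicated values with the updates applied on the source:

[oracle@bics-gg-ggcs-1 ~]$ sqlplus ggcsbics/ggcsbics@target <<EOF
select ACCOUNT_NO, BALANCE, PREVIOUS_BAL from ACCTN where ACCOUNT_NO = 83915;
select FIRST_NAME, LAST_NAME from ACCTS where ACCOUNT_NO = 83915;
EOF

The ACCTN row should show a BALANCE of 25000 and a PREVIOUS_BAL of 1000, and the ACCTS row should show the updated first name 'Margie', matching the update transactions run on the source.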

Summary

This article walked through the steps on how to configure Oracle GoldenGate (OGG) replication between a source Oracle database on-premises and a target Oracle database running on the Database Cloud Service (DBCS) in the Oracle Public Cloud (OPC), using the GoldenGate Cloud Service (GGCS).

Additional Resources:

Oracle Database Cloud Service (DBCS) 

Oracle GoldenGate Cloud Service (GGCS)

GGCS User Guide Documentation

GGCS Tutorial Section

Integrating with Taleo Enterprise Edition using Integration Cloud Service (ICS)


Introduction

Oracle Taleo provides talent management functions as Software as a service (SaaS). Taleo often needs to be integrated with other human resource systems. In this post, let’s look at few integration patterns for Taleo and implementing a recommended pattern using Integration Cloud Service (ICS), a cloud-based integration platform (iPaaS).

Main Article

Oracle Taleo is offered in Enterprise and Business editions.  Both are SaaS applications that often need to be integrated with other enterprise systems, on-premise or on the cloud. Here are the integration capabilities of Taleo editions:

  • Taleo Business Edition offers integration via SOAP and REST interfaces.
  • Taleo Enterprise Edition offers integration via SOAP services and Taleo Connect Client (TCC).

Integrating with Taleo Business Edition can be achieved with SOAP or REST adapters in ICS, using a simple "Basic Map Data" pattern. Integrating with Taleo Enterprise Edition, however, deserves a closer look and consideration of alternative patterns. Taleo Enterprise provides three ways to integrate, each with its own merits.

Integration using Taleo Connect Client (TCC) is recommended for bulk integration. We'll also address SOAP integration for the sake of completeness. To jump to a sub-section directly, click one of the links below.


Taleo SOAP web services
Taleo Connect Client (TCC)
Integrating Taleo with EBS using ICS and TCC
Launching TCC client through a SOAP interface


Taleo SOAP web services

Taleo SOAP web services provide synchronous integration: web service calls update the system immediately. However, there are restrictive metered limits on the number of invocations and the number of records per invocation, in order to minimize the impact on the live application. These limits might necessitate several web service invocations to finish a job that would need only one execution with the other alternatives. Figure 1 shows a logical view of such an integration using ICS.

Figure 1

ICS integration could be implemented using “Basic Map Data” for each distinct flow or using “Orchestration” for more complex use cases.


Taleo Connect Client (TCC)

As stated previously, TCC provides the best way to integrate with Taleo Enterprise. TCC has a design editor for authoring export and import definitions and their run configurations, and it can also be run from the command line to execute the import or export jobs, as shown in the sketch below. A link to another post introducing TCC is provided in the References section.
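
For example, a single export job could be run from the command line by passing its run configuration file to the TCC launch script; the script path and configuration file below are the sample values used later in this post and should be adjusted to your own installation:

/home/runuser/tcc/scripts/client.sh /home/runuser/tcc/exportdef/TCC-Candidate-export_cfg.xml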

Figure 3

Figure 3 shows a logical view of a solution using TCC and ICS. In this case, ICS orchestrates the flow by interacting with HCM and Taleo. TCC is launched remotely through a SOAP service. TCC, the SOAP launcher service, and a staging file system are deployed to an IaaS compute node running Linux.


Integrating Taleo with EBS using ICS and TCC

Let's look at a solution to integrate Taleo and the EBS Human Resources module, using ICS as the central point for scheduling and orchestration. This solution is suitable for ongoing scheduled updates involving a few hundred records per run. Figure 4 represents the solution.

Figure 4

TCC is deployed to a host accessible from ICS. The same host runs a Java EE container, such as WebLogic or Tomcat. The launcher web service deployed to the container launches the TCC client upon a request from ICS. The TCC client, depending on the type of job, either writes a file to a staging folder or reads a file from the folder. The staging folder could be local or on a shared file system accessible to ICS via SFTP. Here are the steps performed by the ICS orchestration.

  • Invoke the launcher service to run a TCC export configuration. Wait for completion of the export.
  • Initiate an SFTP connection to retrieve the export file.
  • Loop through the contents of the file. For each row, transform the data and invoke the EBS REST adapter to add the record. Stage the response from EBS locally.
  • Write the staged responses from EBS to a file and transfer it via SFTP to a folder accessible to TCC.
  • Invoke the launcher to run a TCC import configuration. Wait for completion of the import.
  • At this point, bi-directional integration between Taleo and EBS is complete.

This solution demonstrates the capabilities of ICS to seamlessly integrate SaaS applications and on-premise systems. ICS triggers the job and orchestrates export and import activities in a single flow. When the orchestration completes, both Taleo and EBS are updated. Without ICS, the solution would contain a disjointed set of jobs that could be managed by different teams and might require lengthy triage to resolve issues.


Launching TCC client through a SOAP interface

Taleo Connect Client can be run from the command line to execute a configuration that exports or imports data. A cron job or the Enterprise Scheduling Service (ESS) could launch the client. However, enabling the client to be launched through a web service allows a more cohesive flow in the integration tier and eliminates redundant scheduled jobs.

Here is sample Java code to launch a command-line program. This code launches the TCC client, waits for completion, and captures the command output. Note that the code should be tailored to specific needs with suitable error handling, and tested for function and performance.

package com.test.demo;

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class tccClient {

    public boolean runTCCJob(String strJobLocation) {
        Process p = null;
        try {
            System.out.println("Launching Taleo client. Path: " + strJobLocation);

            // Launch the TCC client script with the job configuration file.
            // Merging stderr into stdout lets a single reader capture all output
            // without the risk of blocking on a full error stream.
            ProcessBuilder pb = new ProcessBuilder("/home/runuser/tcc/scripts/client.sh", strJobLocation);
            pb.redirectErrorStream(true);
            p = pb.start();

            BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = out.readLine()) != null) {
                System.out.println(line); // capture the command output
            }

            // Wait for the TCC job to finish and report success on a zero exit code.
            int rc = p.waitFor();
            return rc == 0;
        } catch (Exception e) {
            // Log and notify as appropriate for your environment.
            e.printStackTrace();
            return false;
        } finally {
            if (p != null) {
                p.destroy();
            }
        }
    }
}

Here is a sample launcher service implemented with JAX-WS and SOAP.

package com.oracle.demo;

import com.test.demo.tccClient;
import javax.jws.WebService;
import javax.jws.WebMethod;
import javax.jws.WebParam;

@WebService(serviceName = "tccJobService")
public class tccJobService {

    @WebMethod(operationName = "runTCCJob")
    public String runTCCJob(@WebParam(name = "JobPath") String JobPath) {
        try {
            // Launch the TCC job and report the outcome to the caller.
            boolean success = new tccClient().runTCCJob(JobPath);
            return Boolean.toString(success);
        } catch (Exception ex) {
            ex.printStackTrace();
            return ex.getMessage();
        }
    }
}

Finally, this is a SOAP request that could be sent from an ICS orchestration to launch the TCC client.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:demo="http://demo.oracle.com/">
   <soapenv:Header/>
   <soapenv:Body>
      <demo:runTCCJob>
         <!--Optional:-->
         <JobPath>/home/runuser/tcc/exportdef/TCC-Candidate-export_cfg.xml</JobPath>
      </demo:runTCCJob>
   </soapenv:Body>
</soapenv:Envelope>
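
For reference, a successful call returns a response envelope along these lines; this is a sketch based on the default JAX-WS document/literal mapping for the sample service above, so the exact element names and namespace prefixes may differ in your deployment:

<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
   <S:Body>
      <ns2:runTCCJobResponse xmlns:ns2="http://demo.oracle.com/">
         <return>true</return>
      </ns2:runTCCJobResponse>
   </S:Body>
</S:Envelope>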

Summary

This post addressed alternative patterns to integrate with Taleo Enterprise Edition, along with the pros and cons of each pattern. It explained a demo solution based on the recommended pattern using TCC, and provided code snippets and steps to launch the TCC client via a web service. At the time of this post's publication, ICS does not offer a Taleo-specific adapter. A link to the current list of supported adapters is provided in the References section.

 

References

  • Getting started with Taleo Connect Client (TCC) – ATeam Chronicles
  • Taleo Business Edition REST API guide
  • Latest documentation for Integration Cloud Service
  • Currently available ICS adapters


Identity and Cloud Security A-Team at Oracle Open World

I just wanted to let everyone know that Kiran and I will be presenting with our good friend John Griffith from Regions Bank at Oracle Open World next week. Our session is Oracle Identity Management Production Readiness: Handling the Last Mile in Your Deployment [CON6972] It will take place on Wednesday, Sep 21, 1:30 p.m. […]

Installing Data Sync in Compute for Cloud to Cloud Loading into BICS

For other A-Team articles about BICS and Data Sync, click here Introduction The Data Sync tool provides the ability to extract from both on-premise, and cloud data sources, and to load that data into BI Cloud Service (BICS), and other relational databases.  In some use cases, both the source databases, and the target, may be in ‘the […]

Cloud Security: Seamless Federated SSO for PaaS and Fusion-based SaaS

Introduction Oracle Fusion-based SaaS Cloud environments can be extended in many ways. While customization is the standard activity to setup a SaaS environment for your business needs, chances are that you want to extend your SaaS for more sophisticated use cases. In general this is not a problem and Oracle Cloud offers a great number […]

Using Process Cloud Service REST API Part 1

The Process Cloud Service (PCS) REST API provides an avenue to build UI components for workflow applications based on PCS. The versatility that comes with REST enables modern web application frameworks and just as easily, mobile applications. The API documentation is available here. Notice the endpoints are organized into eight categories. We’ll be focusing on […]

Integrating Commerce Cloud using ICS and WebHooks

Introduction: Oracle Commerce Cloud is a SaaS application and is a part of the comprehensive CX suite of applications. It is the most extensible, cloud-based ecommerce platform offering retailers the flexibility and agility needed to get to market faster and deliver desired user experiences across any device. Oracle’s iPaaS solution is the most comprehensive cloud […]

Recreating an Oracle Middleware Central Inventory in the Oracle Public Cloud

Introduction This post provides a simple solution for recreating an Oracle Middleware software central inventory. One rare use case is when a server is lost and a new server is provisioned. The Middleware home may be on a storage device that can be reattached e.g. /u01. However, the central inventory may have been on a […]

Extracting Data from Oracle Business Intelligence 12c Using the BI Publisher REST API

Introduction This post details a method of extracting data from an Oracle Business Intelligence Enterprise Edition (OBIEE) environment that is integrated with Oracle Business Intelligence Publisher (BIP) 12c. The environment may either be Cloud-Based or On-Premise. The method utilizes the BI Publisher REST API to extract data from a BIP report. It also uses BIP […]

ICS Connectivity Agent Advanced Configuration

Oracle’s Integration Cloud Service (ICS) provides a feature that helps with the integration challenge of cloud to ground integrations with resources behind a firewall. This feature is called the ICS Connectivity Agent (additional details about the Agent can be found under New Agent Simplifies Cloud to On-premises Integration). The design of the Connectivity Agent is […]

Oracle GoldenGate: How to Configure On-Premise to GoldenGate Cloud Services (GGCS) Replication with Corente VPN

Introduction This document will walk you through how to configure Oracle GoldenGate replication between On-Premise to GoldenGate Cloud Service (GGCS) on Oracle Public Cloud (OPC) via Virtual Private Network (VPN) using Corente Services Gateway (CSG). The high level steps for this replication configuration are as follows: Creation of SSH Public/Private Key Files Provisioning of Database […]

IDCS Audit Reports using Visual Analyzer

Introduction This article is to help expand on topics of integration with Oracle’s Cloud Identity Management service called Identity Cloud Service (IDCS).  IDCS delivers core essentials around identity and access management through a multi-tenant Cloud platform.  As part of the IDCS framework, audit events are captured for all significant events, changes, and actions, which are […]

Testing Oracle ATG Commerce with ATG Dust

  Introduction ATG Dust is a Java unit testing framework based on JUnit meant for use with Oracle ATG Commerce.   How ATG Dust works In a non ATG application, when you create a unit test against a class, the test is often executing by instantiating the class directly, and calling methods inside it. Code […]

Using OpenID Connect to delegate authentication to Oracle Identity Cloud Service

In this post, I will describe the process of using the Oracle Identity Cloud Service to provide authentication for a custom web application, using the OpenID Connect protocol. I will focus on the sequence of calls between the application and IDCS in order to focus on building an understanding of how OpenID Connect actually works. […]