Java ArrayList, Callable Statements and PL/SQL Procedures

If you’re developing a Java application that integrates with an Oracle back-end, you’ve probably run into the following requirement:
passing an ArrayList as a parameter to a CallableStatement so the data can be processed and persisted in your Oracle database.

How can you accomplish this in a generic and reusable manner?
1. Create an object type (PL/SQL) – Backend:

CREATE OR REPLACE type ot_emp as object
( emp_id number(10),
emp_cd varchar2(20),
name varchar2(100));

2. Create a collection type based on the object type (PL/SQL) – Backend:

CREATE OR REPLACE type ct_emp as table of ot_emp;

3. Create a PL/SQL procedure to process and persist the data coming from the front-end app:

PROCEDURE p_insert_emp(
  pi_employees    IN  ct_emp,
  po_message_info OUT ot_message_info);

4. Create a Java object for each Oracle object type, implementing the SQLData interface:

public class Employee extends SuperEmployee implements SQLData {

    public String getSQLTypeName() throws SQLException {
        return "HR.OT_EMP";
    }

    // Map Oracle data to Java data (setters assumed on SuperEmployee)
    public void readSQL(SQLInput stream, String typeName) throws SQLException {
        setId(Long.valueOf(stream.readLong()));
        setCode(stream.readString());
        setName(stream.readString());
    }

    // Map Java data to Oracle data
    public void writeSQL(SQLOutput stream) throws SQLException {
        stream.writeLong(getId().longValue());
        stream.writeString(getCode());
        stream.writeString(getName());
    }
}

The readSQL() method is used to map the Oracle data to Java data; getSQLTypeName() and writeSQL() are used to map Java data to Oracle data.

After putting together the framework, you can then pass the ArrayList via the CallableStatement and process the information in your back-end environment.
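The call itself can then be sketched as below. This is a minimal sketch using plain JDBC: the EmployeeDao and insertEmployees names are illustrative, HR.OT_MESSAGE_INFO is assumed to be the schema-qualified type of the OUT parameter, and older Oracle drivers may require the proprietary oracle.sql.ARRAY/ArrayDescriptor classes instead of the standard Connection.createArrayOf().

```java
import java.sql.Array;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Types;
import java.util.List;

public class EmployeeDao {

    // JDBC call spec for the PL/SQL procedure from step 3
    public static final String CALL_SQL = "{ call p_insert_emp(?, ?) }";

    // Bind a list of SQLData-mapped objects (e.g. Employee) to the collection parameter.
    public static void insertEmployees(Connection conn, List<?> employees,
                                       Class<?> mappedClass) throws SQLException {
        // Let the driver map HR.OT_EMP instances to the given SQLData implementation
        conn.getTypeMap().put("HR.OT_EMP", mappedClass);

        // Wrap the list in an SQL array of the collection type from step 2
        Array empArray = conn.createArrayOf("HR.CT_EMP", employees.toArray());

        try (CallableStatement cs = conn.prepareCall(CALL_SQL)) {
            cs.setArray(1, empArray);                                       // pi_employees
            cs.registerOutParameter(2, Types.STRUCT, "HR.OT_MESSAGE_INFO"); // po_message_info
            cs.execute();
        }
    }
}
```

Because the mapped class is passed in, the same helper can be reused for any object-type/collection-type pair.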

Of course this kind of functionality is handled by ORM frameworks as well, such as Hibernate, TopLink, iBATIS, …

Data Integration Services – Consolidate your data (sources)

When talking about data integration services, and depending on the audience you’re talking to, this could mean:

  • Data needs to flow between different applications, instead of persisting the data on different data containers => Data Consolidation, Data messaging
  • Extract, transform and load => data warehousing, data consolidation
  • Services that are offered within an enterprise, or enterprise-wide, to share data => web services, messaging services, Service Bus, …

In other words, data integration services can carry a number of key performance indicators, which are defined by the business metrics and requirements.

Besides the integration aspect, the data aspect is even more crucial, such as defining the Enterprise Information Model so every consumer perceives the data in the same manner.

With regard to this aspect and the retail sector, there’s an interesting blog post on the Oracle Retail Data Model (ORDM).

OWB 11G (11.1.0.6) – Overall Goal, new features, installation guidelines

The overall goal:

Integrate the OWB technical stack with the 11g database. This simplifies the OWB installation by incorporating it as a database option during database installation.


Additional goals include:

  • No SYSDBA credentials are required at OWB installation time
  • Only one OWB schema is required per Oracle Database instance
  • Only a single Control Center Service is required for the database instance, serving the OWBSYS schema
  • The single unified repository enables maintaining a single copy of OWB database objects in OWBSYS (tables, views, PL/SQL packages, and so on)

Features (also available in the OWB 10.2.0.3 patch):

  • Versioning of Type 2 SCDs: Hierarchy versioning supports multiple hierarchies. Hierarchy versioning is not enabled by default for type 2 SCDs (slowly changing dimensions). You must use the Data Object Editor to enable hierarchy versioning.
  • Set-based DELETE in Mapping Code Generation: You can design a mapping with the loading type set to DELETE and code generation set to SET BASED.
  • Merge Optimization for Table Operators: You can enable the Merge Optimization property for table operators. When set to true, this property optimizes the invocation or execution of expressions and transformations in the MERGE statement.
  • DML Error Logging to Handle Error Tables: DML error logging is available for set-based maps.

When Do You Need Stand-alone Installation?
The OWB 11g stand-alone installation is required only if you must:

  • Deploy to an Oracle Database 10g Release 2 target
  • Perform Discoverer deployment

The stand-alone DVD is bundled with the 11g database pack.

Debugging Bpel processes

You can debug BPEL processes using different kinds of approaches, such as looking into audit trails, sniffing the SOAP envelopes with the request and response messages, using JUnit, using the BPEL Console, …

In this post I’ll mention the two approaches I use the most, namely the BPEL Console itself and JUnit.


Visual Debugging Using BPEL Console

Oracle Enterprise Manager 10g BPEL Control reduces the cost and complexity of deploying and managing your business processes. Visually monitor the execution of each BPEL process, drill down into the audit trail and view the details of each conversation, or debug a running flow against its BPEL implementation.

If you’ve deployed a BPEL process, you can test the execution in the BPEL Console: http://server:port/BPELConsole.

In the screen above you can see the deployed BPEL processes on the left-hand side of the screen. To instantiate such a process and create a test instance, you can click on the process name and the screen below will be shown:

In this screen you can test the process by defining your own payload, the data to be processed by the BPEL process. To define the payload you can use an HTML form (the default screen), or you can paste the SOAP envelope, an XML-based message, into the XML source textarea. To actually test the instance you just need to click the ‘Send XML Message’ button. You can also perform stress tests on the BPEL processes to verify whether performance problems may occur at peak moments.

When you’ve clicked the button, the process is instantiated and the following screen is shown:

In the tabbed windows you can have a detailed look at the instantiated process, depending on your requirements:

Visual flow:

The activities that failed, i.e. threw an exception, are shown with a red background. Each activity in the visual flow holds all the information needed for that specific activity. When you double-click an activity, the data will be shown in XML format, because XML is the standard messaging format for web services.

Audit instance:

Debug instance:


Debug BPEL Processes via JUnit

As soon as a BPEL process is deployed, the BPEL process lives on as a web service. The web service can be accessed via its endpoint, or WSDL location.

On the WSDL tab of your BPEL process in the BPEL Console, you can look up the endpoint of the deployed BPEL process (= web service).

In JDeveloper you can define a Web Service Proxy and integrate a JUnit test case for this web service proxy:

package test.proxy;

import javax.xml.rpc.ServiceFactory;

public class BPELTest_ServiceTest extends junit.framework.TestCase {

    private BPELTest_Service myBPELTest_Service;

    public BPELTest_ServiceTest(java.lang.String name) {
        super(name);
    }

    protected void setUp() throws Exception {
        ServiceFactory factory = ServiceFactory.newInstance();
        myBPELTest_Service =
            (test.proxy.BPELTest_Service) factory.loadService(test.proxy.BPELTest_Service.class);
    }

    protected void tearDown() {
        myBPELTest_Service = null;
    }

    public void testBPELTestPortprocess() throws java.lang.Exception {
        test.proxy.BPELTest_PortType port = myBPELTest_Service.getBPELTestPort();
        // Populate the generated request type with your test payload before invoking
        test.proxy.BPELTestProcessRequest payload = new test.proxy.BPELTestProcessRequest();
        BPELTestProcessResponse response = port.process(payload);
        assertEquals("389", response.toString());
    }
}

How to handle logging in BPEL Processes

1.1. Logging in BPEL

Logging can be performed on domain level and system level, and you can use different mechanisms to log events, task details, …

In this post I’ve summarized the basic logging functionality you can use for BPEL processes.


1.1.1. Process Logging Information

Oracle BPEL Process Manager uses the log4j tool to generate log files containing messages that describe startup and shutdown information, errors, warning messages, access information on HTTP requests, and additional information.

The log4j tool enables logging at runtime without modifying the application binary.
Instead, logging behavior is controlled by editing properties in Oracle BPEL Control
and Oracle BPEL Admin Console.

Two levels of logging are supported in Oracle BPEL Process Manager:

  • Domain – manages logging information within specific domains
  • System – manages logging information on a system-wide level

1.1.1.1. Domain-wide Logging

These can be configured through the BPEL Console (http://hostname:port/BPELConsole) > Manage BPEL Domain > Logging, or by editing log4j-config.xml in $BPEL_HOME\integration\orabpel\domains\<domain_name>\config.

The different domains to log about:

  • .collaxa.cube.engine.deployment – deployment-related logging
  • .collaxa.cube.compiler – compilation-related logging
  • .collaxa.cube.messaging – messaging layer (BPEL uses messaging services to scale)
  • .collaxa.cube.security – server-side security (framework)
  • .oracle.bpel.security – inside-validator logging
  • .collaxa.cube.ws – everything related to communication (WSIF layer, SOAP, adapters); shows you at least a longer stack trace if something breaks there
  • .collaxa.cube.xml – XML transformation, storage, hydration
  • .collaxa.cube.services – logging for services such as Notification or Human Workflow
  • .collaxa.cube.engine.delivery – Delivery Service and Manager, responsible for callbacks and first (initiating) delivery
  • .collaxa.cube – cube-related logging (system)
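For example, to get verbose output from the communication layer, the matching category in the domain’s log4j-config.xml can be lowered to debug. A minimal sketch, assuming the category/priority element names used by the BPEL PM log4j files and the default domain as prefix (check both against your own configuration file):

```xml
<!-- log4j-config.xml fragment: verbose logging for the WSIF/SOAP/adapter layer -->
<category name="default.collaxa.cube.ws">
  <priority value="debug"/>
</category>
```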

1.1.1.2. System-wide Logging

System-wide loggers are used for logging information about the infrastructure, AXIS and WSIF bindings. They can be configured through the BPEL Admin Console (http://hostname:port/BPELAdmin) > Logging, or by editing log4j-config.xml in $BPEL_HOME\integration\orabpel\system\config.

The different systems to log about:

  • org.collaxa.thirdparty.apache.wsif – logger for system-wide WSIF
  • org.collaxa.thirdparty.apache.axis.transport – logger to see what AXIS is sending on the wire
  • org.collaxa.thirdparty.apache.axis – general AXIS-related logging
  • collaxa.cube.services – all BPEL PM-wide services
  • collaxa.cube.infrastructure – infrastructure such as DB connectors

1.1.1.3. Log Level

The following logging levels are available and listed here from highest priority to lowest priority. When a logging level is specified, all messages with a lower priority level than the one selected are ignored.

  • Off – Disables logging. This selection is the highest priority.
  • Fatal – Logs critical messages. After logging occurs, the application quits abnormally.
  • Error – Logs application error messages to a log; the application continues to run (for example, an administrator-supplied configuration parameter is incorrect and you default to using a hard-coded value).
  • Warn – Logs warning messages to a log; the application continues to run without problems.
  • Info – Logs messages in a format similar to the verbose mode of many applications.
  • Debug – Logs debugging messages that should not be printed when the application is in a production environment.
  • All – Enables all logging. This selection is the lowest priority.
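BPEL PM itself logs through log4j, but the same priority-filtering behaviour can be illustrated with the JDK’s own java.util.logging API, so the sketch below is self-contained (the level names differ slightly; FINE roughly corresponds to Debug):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class LogLevelDemo {

    // Collects the messages that actually pass the logger's level filter
    public static List<String> published = new ArrayList<>();

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("demo");
        logger.setUseParentHandlers(false);
        logger.addHandler(new Handler() {
            public void publish(LogRecord record) { published.add(record.getMessage()); }
            public void flush() {}
            public void close() {}
        });

        logger.setLevel(Level.INFO);      // comparable to the Info level above
        logger.severe("error message");   // higher priority than INFO -> logged
        logger.info("info message");      // equal priority -> logged
        logger.fine("debug message");     // lower priority -> ignored

        System.out.println(published);    // prints [error message, info message]
    }
}
```

With the logger set to INFO, the FINE (debug) message is filtered out, exactly as described above: everything below the configured priority is ignored.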

1.1.2. Logging with Sensors

You can use sensors to generate application logging activity.

Note that logging with sensors impacts performance because sensor data objects are built even when logging is disabled.

You add sensors to specific activities and then extract data from variables. To do this, you must implement a custom sensor publishing action to do the log4j logging. For example, you can create a sensor on an invoke activity and create a message that is
sent to a JMS queue.

1.1.3. Logging with bpelx:exec in a Java Embedding Activity

You can also log messages by adding custom Java code to a BPEL process using the Java BPEL exec extension bpelx:exec inside a Java Embedding activity in Oracle JDeveloper.

The method addAuditTrailEntry(String):void enables you to add an entry to the audit trail.
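Inside the Java Embedding activity you paste plain Java statements, and the engine makes addAuditTrailEntry(String) available in scope at runtime. Because that method only exists inside the engine, the sketch below illustrates the idea with a purely illustrative local stand-in (the AuditDemo class and the orderId variable are not part of any real API):

```java
import java.util.ArrayList;
import java.util.List;

public class AuditDemo {

    // Collected entries; in a real Java Embedding activity these would go to the audit trail
    public static List<String> auditTrail = new ArrayList<>();

    // Stand-in for the engine-provided addAuditTrailEntry(String) method
    static void addAuditTrailEntry(String message) {
        auditTrail.add(message);
    }

    public static void main(String[] args) {
        // The statements below are what you would paste into the bpelx:exec activity
        String orderId = "12345";
        addAuditTrailEntry("Processing order " + orderId);
    }
}
```

In JDeveloper you would paste only the two statements from main() into the Java Embedding activity.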

JSF/ADF and the browsers’ back button

When you’re developing a JSF or ADF Faces application and you’re giving your customer a first testing experience, you’ll notice the browser’s back button is a very interesting piece of functionality used by a lot of end users.

JSF saves the state of every page loaded in a browser, which means that every time a user clicks the browser’s back button, JSF loads the saved state of the target page.

You will also notice that the application behaves in a very weird and unpredictable way, and you as a developer will need to solve the problem.

Possible solutions:

  1. Integrate the needed JavaScript functionality to display the application in a full-screen window (the back button isn’t displayed anymore)
  2. Define no-caching on your web application using a PhaseListener (JSF APIs)
  3. The user needs to refresh the page he ‘backed to’ => maybe this isn’t an option when I’m talking about developers and customers ;o)
  4. Use ‘enableTokenValidation=false’ within an ADF application
  5. Define the needed state and session parameters in your JSF configuration file, read the following post
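For solution 2, the phase listener typically hooks in before RENDER_RESPONSE and adds HTTP headers that forbid caching of the rendered page. The JSF plumbing (the javax.faces PhaseListener registration) is omitted here; the headers such a listener would set can be sketched as a plain map (the NoCacheHeaders class name is illustrative, and the values are the usual no-cache recipe, not a prescribed API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NoCacheHeaders {

    // Headers a no-cache PhaseListener would add to every rendered response
    public static Map<String, String> headers() {
        Map<String, String> h = new LinkedHashMap<>();
        h.put("Pragma", "no-cache");                                   // HTTP/1.0 proxies
        h.put("Cache-Control", "no-cache, no-store, must-revalidate"); // HTTP/1.1
        h.put("Expires", "0");                                         // already expired
        return h;
    }
}
```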

I still think, when talking to colleagues, browsing the internet/communities, etc., you need to tell the customer that unexpected behaviour can occur in the application when using the back button. By using breadcrumbs, task-oriented applications, separate CRUD pages, etc., the customer won’t be so easily tempted to use the back button.

Grid Enabled SOA

When talking about SOA or Web Services, a drawback that everybody knows is the performance overhead of calling web services. The XML payload can become huge when you’re invoking the web service several times, or when you don’t handle the XML data correctly.

Enabling grid functionality when using SOA or implementing your integration project can improve performance and handle memory allocation more efficiently.

The SOA grid uses Oracle Coherence as an in-memory data grid solution to provide high-speed in-memory access to continually available service-state data, without the need for disk persistence.

What can this GRID do for our SOA projects?

  • State-aware availability of services
  • Primary/backup synchronization via the data model
  • Asynchronous database updates
  • Relocatable BPEL processes (activate/rehydrate a BPEL process where the other service resides)
  • The ESB will process, transform and hold state
  • No need to define a new bus, because all info is already available, such as the data, the source/target, …

For more information regarding Grid-enabled SOA, have a look at OTN.

SOA – What’s it all about and most of all what’s in it for me?

When I talk about SOA, Service Oriented Architecture, most of the time people, business and developers alike, see this as a huge investment in knowledge and technology.

When talking about the ROI of SOA, I often hear that it’s only a solution for huge companies that can invest in these kinds of technologies.

This means that SOA still isn’t very clear to people, and they all tend to have the same question: what’s in it for me and my company, and how much will it cost?

Well, first of all, SOA isn’t the word to use; it’s all about integration. When you’re talking about data integration, business process integration, application integration, … everything has to do with the basic principles of the SOA methodology: loose coupling, re-use, standardization and services.

There’s no such thing as “a SOA architecture”; it’s more a new way of thinking, a methodology to guide you through getting acquainted with this new paradigm.

A quote I found very useful (more information regarding the article, can be found on searchsoa):

Today’s SOA projects are largely about integration. The top benefits
organizations hope to achieve are improved data integration (32%), enable legacy
application integration (32%) and integrated disparate department applications
(23%), followed by cost cutting (21%). Staying competitive (8.4%) and driving
innovation (8%) tracked low on the expected benefits list.

More information regarding integration projects, and how to achieve improvement in these different domains, will be posted regularly on this blog.