A Canonical Data Model, the missing link within a Service Oriented Environment

Chris Judson has given an interesting presentation regarding a Canonical Data Model within a Service Oriented Architecture.

First he gave an example of the different aspects and problems you can face when defining the existing architecture and business flows within an organisation.

One of the things needed to accomplish this is getting IT and business to consolidate and collaborate with each other, to have a clear understanding of today's architecture and the goals defined for the future.

The Canonical Data Model defines a common format to describe business entities across the enterprise-wide organisation, for business as well as IT.

Take aways from this session:

  • The CDM reduces interface maintenance and encapsulates business logic in one central place
  • Put the CDM on the bus: you can plug in new applications to listen to existing events without defining a new format for each new consumer, and there's a common understanding of the data model for business as well as IT
  • Use the 80/20 rule to define a CDM: take all the unique identifiers combined with a superset of the data used by most consumers. In other words, if 80% of the consumers find the data they need within the CDM, the remaining 20% can be served using the enrichment pattern, without enlarging the payload of the CDM
  • Managing change is hard within such a model, because the dependencies between the applications involved are typically high. To manage change, the 80/20 rule applies as well: when 80% of the consumers need new attributes, changes to existing attributes, … the CDM can be changed. The other consumers can get the same functionality, again using the enrichment pattern.
  • For schema versioning the Format Indicator Pattern is mostly used (see the sketch after this list)
  • Use generic XML types in the XSD instead of DB-specific types
  • Use declarative namespaces to manage data domains, so a generic enterprise-wide data definition strategy is in place
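
To make the last three tips concrete, here's a minimal sketch of a canonical entity in Java using JAXB annotations. The CanonicalEmployee class, the namespace and the field names are assumptions for illustration, not something shown in the presentation:

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical canonical Employee entity. The namespace scopes the HR data
// domain; the formatVersion attribute implements the Format Indicator Pattern.
@XmlRootElement(name = "Employee", namespace = "http://example.com/cdm/hr")
@XmlAccessorType(XmlAccessType.FIELD)
public class CanonicalEmployee {

    // Format Indicator Pattern: consumers can branch on the schema version.
    @XmlAttribute
    private String formatVersion = "1.0";

    // Generic XML types (xs:long, xs:string) instead of database-specific
    // types such as NUMBER(10) or VARCHAR2(100).
    private long id;
    private String code;
    private String name;
}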

Chris's presentation was very enlightening, because a lot of these tips & tricks are valuable for any design or implementation that uses XML-typed data and service enablement.

ADF 11g – Take Aways

Monday morning I followed the session of Lucas Jellema, Putting a Smile on ADF Faces. The presentation was great, especially because Lucas created a demo integrating some of the features that Steve Miranda had shown in the keynote session regarding Fusion Apps.

Take aways from this session:

  • A table can fetch data in an asynchronous request, so you can scroll through records and the server will fetch more data as needed. In other words, you no longer need the pre-defined row set that was necessary in previous versions (e.g. showing 10 records at a time)
  • Client-side and server-side support has been greatly improved, so developers can tackle functionality where it's supposed to happen. APIs are available to work on the client side or the server side using listeners. With client-side scripting, for example, you can now use JavaScript and client-side APIs instead of backing beans or managed beans, e.g. property listeners on the client side to trigger events
  • You can embed tables inside the panelCollection component to change the look and feel of a table component, e.g. detach the table to a separate region to get more display functionality. What you can do now is hide columns and filter records at run-time, which gives you the same dynamic look and feel as interactive reporting in APEX
  • Autosuggest functionality retrieves a list of values matching the information the end-user has typed in. The key events can be intercepted by the client-side listener, which calls the server-side listener, which can then push the data to the client side
  • The Data Push mechanism, or Active Data Services, gives you the ability to work on real-time data without having to fetch the data yourself, which is a great enhancement

And of course there's lots more in the ADF 11g release that will enhance user friendliness and give the developer the capabilities for a true Web 2.0 experience. The hierarchy viewer component can already be viewed on the REA (Rich Enterprise Applications) site. Have a look at what ADF has to offer you in a sandbox environment where you can start playing around with the functionality.

ADF EMG at ODTUG, OKUG, google-group …

This group is a place to discuss best practices and methodologies for JDeveloper ADF enterprise development, including an effort by "experts" in ADF to discuss higher-level issues than those discussed on the OTN JDeveloper Forums.

This effort is part of an overall push to get ADF experts, advocates and programmers collaborating at user group events and OOW, to get "ADF out there".

What does the ADF EMG stand for?

Meanwhile, for those who are looking for something of substance, we’ve recently centralised previous efforts of the group from the Oracle Wiki to this group.

If you select the “Pages” link to the right of the Group window, it’ll list useful content including:

During the Middleware and SOA session at the Sundown Sessions, we will further discuss these deliverables and other aspects that have been mentioned in our group.

You can find more information regarding ADF EMG and the ACE program during ODTUG, at: http://www.odtugkaleidoscope.com/acedirector.html

Simon Haslam has submitted an Abstract regarding the ADF EMG take aways for OKUG as well. So if you can’t attend ODTUG, you certainly need to attend OKUG!

Java ArrayList, Callable Statements and PL/SQL Procedures

If you're developing a Java application that integrates with an Oracle back-end, you've probably run into the following technicality:
passing an ArrayList as a parameter to a callable statement, to process and persist the data in your Oracle database.

How can you accomplish this in a generic and reusable manner?
1. Create an object type (PL/SQL) – back-end:

CREATE OR REPLACE TYPE ot_emp AS OBJECT
( emp_id NUMBER(10),
  emp_cd VARCHAR2(20),
  name   VARCHAR2(100)
);

2. Create a collection type based on the object type (PL/SQL) – back-end:

CREATE OR REPLACE TYPE ct_emp AS TABLE OF ot_emp;

3. Create a PL/SQL procedure to process and persist the data coming from the front-end app:

PROCEDURE p_insert_emp(
  pi_employees    IN  ct_emp,
  po_message_info OUT ot_message_info);

4. Create a Java object for each Oracle object type, implementing the SQLData API:

import java.sql.*;

public class Employee extends SuperEmployee implements SQLData {

    // Maps the Oracle object attributes onto this Java object, reading them
    // in the order they are declared in OT_EMP (this assumes SuperEmployee
    // exposes the matching setters).
    public void readSQL(SQLInput stream, String typeName) throws SQLException {
        setId(Long.valueOf(stream.readLong()));
        setCode(stream.readString());
        setName(stream.readString());
    }

    public String getSQLTypeName() {
        return "HR.OT_EMP";
    }

    // Maps the Java attributes back onto the Oracle object type.
    public void writeSQL(SQLOutput stream) throws SQLException {
        stream.writeLong(getId().longValue());
        stream.writeString(getCode());
        stream.writeString(getName());
    }
}

The readSQL() method is used to map the Oracle data to Java data; getSQLTypeName() and writeSQL() are used to map the Java data to Oracle data.

After putting this framework together, you can pass the ArrayList via a CallableStatement and process the information in your back-end environment, as in the sketch below.
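
A minimal sketch of that call, assuming the Oracle JDBC driver (oracle.sql.ARRAY and oracle.sql.ArrayDescriptor) and the HR types defined above; the EmployeeDao class and insertEmployees method are hypothetical names:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Types;
import java.util.ArrayList;

import oracle.sql.ARRAY;
import oracle.sql.ArrayDescriptor;

public class EmployeeDao {

    // Hypothetical helper: passes an ArrayList of Employee objects to p_insert_emp.
    public static void insertEmployees(Connection conn, ArrayList<Employee> employees)
            throws SQLException {
        // Describe the PL/SQL collection type the procedure expects.
        ArrayDescriptor descriptor = ArrayDescriptor.createDescriptor("HR.CT_EMP", conn);

        // The driver calls writeSQL() on each Employee to build the Oracle objects.
        ARRAY employeeArray = new ARRAY(descriptor, conn, employees.toArray());

        CallableStatement stmt = conn.prepareCall("{ call p_insert_emp(?, ?) }");
        stmt.setArray(1, employeeArray);
        // The ot_message_info result comes back as a STRUCT out parameter.
        stmt.registerOutParameter(2, Types.STRUCT, "HR.OT_MESSAGE_INFO");
        stmt.execute();
        stmt.close();
    }
}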

Of course this kind of functionality is also handled by ORM frameworks such as Hibernate, TopLink, iBATIS, …

Data Integration Services – Consolidate your data (sources)

When talking about data integration services, and depending on the audience you're talking to, this could mean:

  • Data needs to flow between different applications instead of being persisted in separate data stores => data consolidation, data messaging
  • Extract, transform and load => data warehousing, data consolidation
  • Services that are offered within an enterprise, or enterprise-wide, to share data => web services, messaging services, Service Bus, …

In other words, data integration services come with a number of key performance indicators, which are defined by the business metrics and requirements.

Besides the integration aspect, the data aspect is even more crucial, such as defining the Enterprise Information Model so every consumer perceives the data in the same manner.

Regarding this aspect in the retail sector, there's an interesting blog post about the Oracle Retail Data Model (ORDM).

OWB 11G (11.1.0.6) – Overall Goal, new features, installation guidelines

The overall goal:

Integrate the OWB technical stack with the 11g database. This simplifies the OWB installation by incorporating it as a database option during database installation.


Additional goals include:

  • No SYSDBA credentials are required at OWB installation time
  • Only one OWB schema is required per Oracle Database instance
  • Only a single Control Center Service is required for the database instance, serving the OWBSYS schema
  • The single unified repository enables maintaining a single copy of the OWB database objects in OWBSYS (tables, views, PL/SQL packages, and so on)

Features (also available in the OWB 10.2.0.3 patch):

  • Versioning of Type 2 SCDs: Hierarchy versioning supports multiple hierarchies. Hierarchy versioning is not enabled by default for type 2 SCDs (slowly changing dimensions). You must use the Data Object Editor to enable hierarchy versioning.
  • Set-based DELETE in Mapping Code Generation: You can design a mapping with the loading type set to DELETE and code generation set to SET BASED.
  • Merge Optimization for Table Operators: You can enable the Merge Optimization property for table operators. When set to true, this property optimizes the invocation or execution of expressions and transformations in the MERGE statement.
  • DML Error Logging to Handle Error Tables: DML error logging is available for set-based maps.

When Do You Need Stand-alone Installation?
The OWB 11g stand-alone installation is required only if you must:

  • Deploy to an Oracle Database 10g Release 2 target
  • Perform Discoverer deployment

The stand-alone DVD is bundled with the 11g database pack.

Debugging BPEL processes

You can debug BPEL processes using different kinds of approaches, such as looking into audit trails, sniffing the SOAP envelopes with the request and response messages, using JUnit, using the BPEL Console, …

In this post I'll mention the two approaches I use the most, namely the BPEL Console itself and JUnit.


Visual Debugging Using BPEL Console

Oracle Enterprise Manager 10g BPEL Control reduces the cost and complexity of deploying and managing your business processes. Visually monitor the execution of each BPEL process, drill down into the audit trail and view the details of each conversation, or debug a running flow against its BPEL implementation.

If you've deployed a BPEL process, you can test the execution in the BPEL Console: http://server:port/BPELConsole.

In the screen above you can see the deployed BPEL processes on the left-hand side of the screen. To instantiate such a process and create a test instance, you can click on the process name and the screen below will be shown:

In this screen you can test the process by defining your own payload, the data to be processed by the BPEL process. To define the payload you can use an HTML form (the default screen), or you can paste the SOAP envelope, an XML-based message, into the XML source textarea. To actually test the instance you just need to click the 'Send XML Message' button. You can also perform stress tests on the BPEL processes to verify whether performance problems might occur at peak moments.

When you’ve clicked on the button, the process is instantiated and the following screen is shown:

In the tabbed windows you can have a detailed look at the instantiated process, depending on your requirements:

Visual flow:

The activities that failed, i.e. threw an exception, are shown with a red background. Each activity in the visual flow holds all the information needed for that specific activity. When you double-click an activity, the data is shown in XML format, because XML is the standard messaging format for web services.

Audit instance:

Debug instance:


Debug BPEL Processes via JUnit

As soon as a BPEL process is deployed, it lives on as a web service. The web service can be accessed via its endpoint, or WSDL location.

On the WSDL tab of your BPEL process in the BPEL Console, you can look up the endpoint of the deployed BPEL process, i.e. the web service.

In JDeveloper you can define a Web Service Proxy and integrate a JUnit test case for this web service proxy:

package test.proxy;

import javax.xml.rpc.ServiceFactory;

public class BPELTest_ServiceTest extends junit.framework.TestCase {

    private BPELTest_Service myBPELTest_Service;

    public BPELTest_ServiceTest(java.lang.String name) {
        super(name);
    }

    // Load the generated web service proxy before each test.
    protected void setUp() throws Exception {
        ServiceFactory factory = ServiceFactory.newInstance();
        myBPELTest_Service =
            (test.proxy.BPELTest_Service) factory.loadService(test.proxy.BPELTest_Service.class);
    }

    protected void tearDown() {
        myBPELTest_Service = null;
    }

    // Call the deployed BPEL process through its port and assert on the response.
    public void testBPELTestPortprocess() throws java.lang.Exception {
        test.proxy.BPELTest_PortType port = myBPELTest_Service.getBPELTestPort();
        test.proxy.BPELTestProcessRequest payload = null;
        BPELTestProcessResponse response = port.process(payload);
        assertEquals("389", response.toString());
    }
}