The marriage of TopLink and Coherence

When we talk about Object-Relational Mapping (ORM) frameworks, we are talking about mapping relational data onto the objects of an object-oriented programming language.

An ORM framework offers features such as support for JPA (the Java Persistence API); database access mechanisms such as JNDI, JTA and JDBC; and support for distributed processing, including two-phase commit.

So on the one hand you need to be able to map your relational data to objects, and on the other hand you need a clear view of your persistence layer:
how are you going to manage the different transactions in your application, how are you going to deal with distributed transactions, and so on.
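The object-relational mapping described above is typically expressed with JPA annotations. A minimal sketch, assuming a hypothetical PERSONS table (the table and column names are my own illustration, not from the original post):

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Maps rows of the relational PERSONS table onto plain Java objects.
@Entity
@Table(name = "PERSONS")
public class Person {

    @Id
    @Column(name = "PERSONID")
    private Long personId;

    @Column(name = "FIRSTNAME")
    private String firstName;

    public Long getPersonId() { return personId; }
    public void setPersonId(Long personId) { this.personId = personId; }
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
}
```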

In Oracle TopLink 11g, TopLink Grid marries the ORM framework with Coherence, which offers you the power to control your transaction-based application.

In other words, you can hand persistence management over to Coherence instead of giving your architects and developers the burden of solving this puzzle themselves.

When using TopLink Grid you can choose to have Coherence manage the persisting of new and modified entities. This integrated solution introduces a layer between JPA and the data store, where the grid can be leveraged to scale beyond database-bound operations.

In other words, your application doesn't need to wait for the database transaction to return an answer before it can proceed. Using asynchronous processing in your application is a huge improvement for the end user's experience, and it leaves the responsibility with the JPA layer, where it needs to reside.
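For context, the JPA side of such a setup is configured in persistence.xml. A minimal sketch using the EclipseLink provider (the unit and data-source names are hypothetical; the Grid/Coherence wiring itself is layered on top of this through TopLink Grid's own cache configuration):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <persistence-unit name="personsPU" transaction-type="JTA">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <jta-data-source>jdbc/PersonsDS</jta-data-source>
  </persistence-unit>
</persistence>
```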

For more information regarding TopLink Grid, read the following article.

This is a whole new way of thinking about persistence and data-centric web applications, one that will greatly influence both software architects and the end user's experience.

Will we still have the so-called 'slow Java web apps' versus 'data-centric web apps' debate, which can lead to huge discussions between 'client-server developers' and 'web developers'?

Migrate from Hibernate to Oracle TopLink (EclipseLink)

Douglas Clark, Director of Product Management, recently participated in a discussion with Oracle ACEs regarding the key differentiators of Oracle TopLink versus other ORM frameworks.

This is a very interesting discussion for key decision makers who are considering ORM solutions for their existing or new Java EE applications.

The key differentiators according to Doug Clark:

  1. Performance and scalability: Our out-of-the-box caching architecture allows us to minimize object creation and share instances. The caching offers out-of-the-box support for single-node and clustered deployments. We have been involved in many internal and external benchmarking efforts that maintain our confidence that we have the best performing and scaling ORM solution available.
  2. Support for leading relational databases: We continue to support all leading relational databases, with extensions specific to each. We are also the best ORM solution for the Oracle database, and we continue to enhance this support in 11gR1 and EclipseLink.
  3. A comprehensive persistence solution: While we offer industry-leading object-relational support, we have also leveraged our core mapping functionality to deliver object-XML (JAXB), Service Data Objects (SDO), as well as non-relational (EIS via JCA) and database web services. Depending on your requirements you can use one or more of these persistence services, all based on the same core persistence engine.
  4. Donated to the open source community: The full functionality of Oracle TopLink is now available in the open source EclipseLink project. OracleAS/SOA customers will continue to leverage the functionality of TopLink, now developed in open source. Those looking for an open source solution can choose EclipseLink and gain the benefits of our long commercial usage and our ongoing development efforts.
  5. JPA support: As co-lead of the JPA 1.0 specification, Oracle and the TopLink/EclipseLink team have been focused on delivering a JPA-compliant solution with supporting integration with JDeveloper, ADF, Spring, and the Eclipse IDE (Dali project). We delivered the JPA 1.0 reference implementation and with EclipseLink will now deliver the JPA 2.0 reference implementation. We are focused on standards-based development while still offering many advanced capabilities. While Hibernate may have the current lead in developer mind-share, we are focused on continuing to deliver our world-class functionality to the entire Java community.
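The caching mentioned in point 1 can be tuned per entity in EclipseLink. A hedged sketch using the org.eclipse.persistence.annotations.Cache annotation (the entity and the values shown are illustrative, not from the discussion):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

import org.eclipse.persistence.annotations.Cache;
import org.eclipse.persistence.annotations.CacheType;

// A shared, soft-reference cache: instances are reused across the
// persistence unit instead of being rebuilt per EntityManager.
@Entity
@Cache(type = CacheType.SOFT, size = 1000, shared = true)
public class Customer {

    @Id
    private Long id;
}
```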

Tips & Tricks when working with Oracle BPEL

Today I was planning to quickly demo the features of the DB Adapter in BPEL with a small test case where master-detail data needs to be inserted… but I immediately ran into strange behaviour.

In this blog post I want to give the audience some valuable tips & tricks for common 'problems' faced when designing and deploying BPEL processes.

  • Problem: SOA Suite crashes when instantiating a simple Oracle BPEL process with the following exception: 'java.lang.Exception: Failed to create "java:comp/env/ejb/local/CubeEngineLocalBean" bean; exception reported is: "javax.naming.NameNotFoundException: java:comp/env/ejb/local/CubeEngineLocalBean not found in DeliveryBean"'.
  • Solution: In most cases this exception occurs when you're designing a BPEL process with a newer version of JDeveloper than the Oracle BPEL version you're deploying to. In my case I was designing with JDeveloper 10.1.3.3 and deploying to Oracle SOA Suite 10.1.3.1. The problem was solved when I worked with an earlier version of JDeveloper, namely 10.1.3.2.
  • Problem: 'ORABPEL-11627: Mapping Not Found Exception' when using the DB Adapter in a BPEL process. I'm trying to insert master-detail data using a synchronous BPEL process, and the TopLink mappings were all set. When I instantiate the BPEL process I always run into the ORABPEL-11627 exception.
  • Solution: In my case this exception was thrown because I tried to assign my person element to a personCollection variable. When I removed the assign activity and added a transform activity to map the one-element XSD file to the collection-element XSD file, the exception wasn't thrown anymore. Note: when you work with the DB Adapter, the payload of the operation is always a collection element, so you need to map your request or input variable accordingly by using a transform activity.
  • Problem: TopLink warnings are shown when a DB Adapter partner link has been defined for inserting master-detail data. One of the warnings shown: 'Method accessors have not been selected.'
  • Solution: This is due to a JDeveloper bug in generating the TopLink mapping files. You will need to change the TopLink mapping file for those objects that use a collection attribute for master-detail data. Go to your TopLink mapping file; it will show up in the Structure window. Go to the object that holds collection elements, and the mapping for the collection element will show up in the main screen. You will find that 'Use Method Accessing' is selected, yet there are no get or set methods defined. Just deselect 'Use Method Accessing', save and rebuild. Note: after doing this you need to go through the DB Adapter wizard again so the changes are reflected properly in all the needed XML and WSDL files. (Thanks to user Hongdo on the BPEL forum on OTN for sharing this with the community!)
  • Problem: Using DB sequences when trying to insert master-detail data with one insert operation defined through a DB Adapter partner link.
  • Solution: Define the master and detail tables properly in the DB Adapter wizard; make sure to define the master table correctly so the relationships are properly read and defined by TopLink. In the generated TopLink mapping file you need to specify that you want to use 'native sequencing', and you need to make sure the 'preallocation size' matches the increment of your DB sequence. By default the preallocation size is set to 50, which probably needs to be changed to 1. The next step is to define the sequencing options for both master and detail data on the different POJOs. There you specify the database sequence to use for each primary-key field you want to populate.
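To make the last sequencing point concrete: if TopLink's preallocation size is set to 1, the database sequence must also increment by 1, otherwise the allocated keys drift apart. A sketch in Oracle SQL (the sequence name matches the person_seq used later in this blog):

```sql
-- Increment must match TopLink's preallocation size (here: 1).
CREATE SEQUENCE person_seq START WITH 1 INCREMENT BY 1;
```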

I'll keep you posted when I encounter more issues in defining, deploying and running BPEL processes.

Create a demo using EJB 3.0, TopLink and ADF as the UI layer and BPEL and ESB as the back-end layer (through web service invocation) – Episode 1

The case I worked out for my demo application is the following: create a new person via the UI (using JSF, with EJB 3.0 as the data layer) and initiate a BPEL process for the creation of that person. The BPEL process checks, using Business Rules, whether all business requirements were met for the person that needs to be created.
A human task was added to make sure the Personal Manager has approved the new person, and finally an ESB was added to actually create the person.
The ESB transforms the person object, an XML file, to the specific format I need to be able to insert the person into my DB.

During the creation of my demo project I faced some design problems, which I will explain in the following sections.

The first part of the case was simple: create a UI using EJB 3.0 and ADF Faces, with JDeveloper as my IDE.
The UI consists of a ListPersons.jspx page that lists all existing persons from my persons DB, with a link to the CreatePerson.jspx page. In the CreatePerson.jspx page I will create a new person and initiate the BPEL process from there.

But it wasn't as simple as I thought… To be able to initiate my BPEL process using the EJB objects, I needed to prefetch the sequence value that uniquely identifies the person object, e.g. Person.personid. This is needed because the BPEL process needs all XML tags to be filled in. If, for example, the personid or firstname isn't filled in, you will get the following exception: 'unexpected null value for literal data'.

To be able to prefetch the id in my EJB, I thought I could 'eagerly fetch' it using an annotation in EJB 3.0, but there's no such annotation available :(

What to do next… fetch the sequence value myself and populate Person.personID with it.

I've added a new method to my session bean that gets the sequence value from my DB sequence; this method is invoked from my custom method createPersonObject(), which constructs a valid person object.

The custom method to fetch the sequence value uses the createNativeQuery method on the EntityManager:

em.createNativeQuery("select person_seq.nextval from dual");
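Put together, the session-bean helper could look like the sketch below. The bean, method and setter names are my own assumptions; createNativeQuery and getSingleResult are standard EntityManager/Query calls, and on Oracle the NUMBER result typically comes back as a BigDecimal:

```java
import java.math.BigDecimal;

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class PersonSessionBean {

    @PersistenceContext
    private EntityManager em;

    // Prefetch the next primary-key value so the BPEL payload
    // has no empty personid tag.
    public long nextPersonId() {
        BigDecimal next = (BigDecimal) em
            .createNativeQuery("select person_seq.nextval from dual")
            .getSingleResult();
        return next.longValue();
    }

    // Constructs a person object that is valid for the BPEL process.
    public Person createPersonObject(String firstName) {
        Person p = new Person();
        p.setPersonId(nextPersonId());
        p.setFirstName(firstName);
        // ... populate the remaining mandatory fields ...
        return p;
    }
}
```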

The UI now works correctly, using the createPersonObject() method in the binding layer to go to the CreatePerson.jspx page.

Now I need to link my existing BPEL process to this UI… coming up soon…

Deploying ADF Application to IAS 10.1.3.1

As you could read in my previous post, 'Deploying ADF Application to Standalone OC4J', I faced some problems when trying to deploy the application to a standalone OC4J.

But the goal is to deploy this application to the production environment, which is an Application Server 10.1.3.1.

When trying to deploy to this environment, other problems arose…

When we used the same deployment plan as for the standalone OC4J, which worked splendidly there, deployment to the IAS 10.1.3.1 environment failed.

We still encountered the 'NoClassDefFoundError: oracle/jbo/JboException', which arises when you haven't installed the 'ADF Runtime Installer' libraries in your environment. The weird thing was that we had run the ADF Runtime Installer on our IAS environment, in the same way as we did for our standalone OC4J, and it only worked for OC4J.

After investigating the problem further, we found that the libraries that should be copied by the ADF Runtime Installer weren't available in our IAS environment. The ADF Runtime Installer copies all the needed libraries to your IAS/OC4J so the container can run ADF applications. These libraries are copied to the BC4J folder of your environment, but this hadn't happened for IAS.

What we did was manually copy the libraries (jar files) from the ADF Runtime Installer to our IAS environment. This means you have to copy the contents of the following folders, jlib, lib and redist, to your IAS environment under IAS_HOME\BC4J.

After you've done this you're able to deploy the application to your IAS environment, using the deployment plan you've set up for both the standalone OC4J and IAS.

Because we're working with TopLink, we also have to copy the xdb.jar file from our toplink-workbench\bin folder to the TopLink folder of IAS, the same step we did for our standalone OC4J, as mentioned in the previous post.

Deploying ADF Application to Standalone OC4J

For the deployment of an ADF application to my standalone OC4J I faced some problems which aren't clearly explained or solved on OTN.
You will find many people facing the same problems when deploying applications to OC4J, ranging from JboException to Log4j exceptions, etc.

I will try to address some of these problems in this blog, using my own project deployed to a standalone OC4J.

The following errors/problems came up during deployment:

  • java.lang.NoClassDefFoundError: oracle/jbo/JboException
  • java.lang.NoClassDefFoundError: org/apache/log4j/Category
  • Problems with shared libraries and user-libraries when deploying from JDeveloper-IDE
  • Memory problems during deployment

How were these problems addressed, and how did I package the application?

In my J2EE application the following technologies are used: TopLink, EJB 3.0, OCS and finally ADF Faces as the front end. Additional libraries we're using: the log4j libraries.

This J2EE application uses the MVC paradigm, which means I'm working with three important layers: Model, View and Controller. In my case the Model is written in TopLink, the DataControl, which is the glue between the Model and the View, is based on EJB 3.0 (session beans), and finally we use ADF Faces for the Controller.

How did I package this application? By creating a deployment profile for each project that's used in the application:

  • A jar file for the TopLink model and the business-logic model
  • An ejb-jar file for the EJB project(s)
  • A war file containing all logic of the view layer (jspx pages, images, page definitions, backing beans, the web.xml file, the jazn-data file, the orion-application.xml file, etc.)
  • An ear file packaging all the different deployment profiles together, using the Application Assembly tool

As I mentioned before, I'm using log4j in the application, and I experienced a lot of problems during deployment because our application uses a newer version of log4j than the one used by default by OC4J. How can you solve this problem?

  • Add the version you're using in your project (log4j-1.2.13.jar) and the commons-logging jars from JDeveloper to the EAR file, and point to these two jar files in the MANIFEST.MF file of the project that uses log4j.
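The pointer in MANIFEST.MF is the standard Class-Path attribute, a space-separated list of URLs relative to the module inside the EAR. Assuming both jars sit at the root of the EAR next to the module, the entry could look like:

```
Class-Path: log4j-1.2.13.jar commons-logging.jar
```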

Secondly, I mentioned that TopLink is used for the Model layer, for which we needed to perform a manual configuration as well:

  • Copy xdb.jar from the TopLink Workbench folder to \toplink\jlib\xdb.jar in the standalone OC4J installation directory

To address the JboException you need to install the ADF Runtime libraries on your standalone container. You can do this from the JDeveloper IDE: first create an Application Server Connection to your standalone OC4J, then go to the 'Tools' menu, choose 'ADF Runtime Installer' and choose to deploy to 'Standalone OC4J'.

Make sure your OC4J isn't running when you perform this task, because otherwise the libraries can't be upgraded, as they're in use by the container.

Last but not least, the 'OutOfMemoryError: PermGen space' can be addressed by giving your standalone OC4J or IAS more memory. For OC4J you can add the following setting to the oc4j.cmd file, which can be found in the bin folder of your OC4J home:

set OC4J_JVM_ARGS=-XX:PermSize=128m -XX:MaxPermSize=256m

If you need an in-depth explanation of memory management, you can view the Memory Management topic on this blog.

Have fun!