Apex: User Interface Defaults

Last week I was formatting a report column to a specific number format. I asked myself whether it was possible to use a substitution variable, like you can for dates with “PICK_DATE_FORMAT_MASK”. I learned that this isn’t the case and that I had to search for other solutions.

Jornica pointed me to “User Interface Defaults”. This option can be found under “Shared Components -> User Interface” and lets you set default values for the tables and columns in your APEX application.

For your table you can set a default “Form Region Title” and a “Report Region Title”. I was surprised when I saw the options for “Column Defaults”:

- Display Alignment
- Format Mask
- Help Text
- Display Sequence
- …

All very nice options that will save me a lot of time when developing. As other users pointed out to me, this option is not very well known yet very useful!

Oracle Webcenter technical workshop (part 2)

As promised, here is the continuation of the technical workshop we followed.
In this post I will talk about how we created a small application in WebCenter using some of its built-in features. For most of the exercise we used ADF, but we also included some of WebCenter’s out-of-the-box portlets, such as the OmniPortlet and the Rich Text Editor portlet; these will be discussed later.

I will go into detail on the actual WebCenter development, but not on the issues that aren’t really WebCenter-related (this is not a step-by-step guide).


The first thing we had to do, of course, was open JDeveloper, since this is the (free) tool used to create WebCenter portlets and portals. The version we need for using WebCenter is jdev (or later).

Then we had to create a new application. If you want to create a portlet application, you have to select “WebCenter Application (Portlet, Content Repository, JSF)” as the application template.
This will not only create a new application for you, it will already contain some structure to build your pages and portlets in. The structure contains three ‘directories’: Model, ViewController and Portlets. As already mentioned, WebCenter uses the MVC-based ADF framework. The next thing was creating a portlet. We had to right-click the Portlets ‘directory’ in the Application Navigator and then choose the option under Web Tier -> Portlets called ‘Standards-based Java Portlet (JSR-168)’; the other option is the Oracle PDK-Java Portlet (based on the Oracle Portal APIs).

The next thing we had to do in the creation wizard was select the web application version. We had the choice between J2EE 1.3 and J2EE 1.4.

We also had to enter some names, titles, keywords, … and choose which portlet modes we wanted to use (the possibilities are help, about, config, preview, print, view, edit and edit_defaults). For this exercise we only chose view, edit and edit_defaults. For each of these three modes a separate JSP will be built.

In this wizard you also have to create any parameters you want to use in your portlet.

Once the wizard finished, our first portlet was created. But before deploying it, we had to create a connection to an application server and a deployment profile.

This version of JDeveloper contains a preconfigured OC4J for WebCenter, so we used this container to deploy our application on; we didn’t have to install the WebCenter suite to test our portlets. You first have to start this container, which can be done by clicking the green light at the top right of the menu in JDeveloper.

Hint: when creating a connection you should use 22667 as the RMI port; this is the default RMI port for the WebCenter container.

Once your portlet is deployed to your OC4J, you can find the WSDL URL at http://localhost:6688//portlets/wsrp2?WSDL (the context root of your portlet application goes between the two slashes).
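The WSDL URL follows the standard WSRP producer pattern, with the portlet application’s context root between the port and /portlets. A tiny illustrative helper (the helper name and the sample context root are hypothetical, just to make the pattern explicit):

```javascript
// Illustrates the WSRP producer WSDL URL pattern used by the
// preconfigured WebCenter OC4J:
//   http://<host>:<port>/<context-root>/portlets/wsrp2?WSDL
// (helper and sample values are hypothetical, for illustration only)
function wsrpWsdlUrl(host, port, contextRoot) {
  return 'http://' + host + ':' + port + '/' + contextRoot + '/portlets/wsrp2?WSDL';
}
```

For example, wsrpWsdlUrl('localhost', 6688, 'MyPortletApp') yields http://localhost:6688/MyPortletApp/portlets/wsrp2?WSDL.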
We have now created a portlet, but we don’t have a page to publish it on. Therefore we had to right-click the ViewController ‘directory’ in the Application Navigator and choose New. Once again you have to choose a technology; in this case we chose JSF JSP (Web Tier -> JSP).
A wizard opens and we chose jspx (the XML version of a JSP page). In the wizard we selected the needed libraries (ADF Faces Components, ADF Faces HTML, Customizable Components, JSF Core, JSF HTML); since we were going to use parts of all these libraries, we selected them all.

So now we have a portlet and a page, but how do we include this portlet in the page? Well, we have to register the portlet in the Oracle WebCenter framework. Go to the Application Navigator, right-click the ViewController and click New. Select WSRP Producer Registration (Web Tier -> Portlets). In the next wizard you have to enter the WSDL URL (see above). Once you have selected all the other attributes, click Finish and your portlet is registered.

Now you just have to drag and drop the portlet into the jspx file. To do so, open your jspx file, select your producer in the Component Palette and drag and drop your portlet into the page.

Conclusion: I believe there is a good future for this product. It has lots of options and lots of potential, but you have to get used to all its possibilities and to where you have to select what… I believe you can really create some nice working portal pages after a few days. If you want to learn more about ADF or WebCenter there is only one option: start using it! I really got interested in it; maybe you will too…

Deploying ADF Application to IAS

As you could read in my previous post, ‘Deploying ADF Application to Standalone OC4J’, I faced some problems when trying to deploy the application to a standalone OC4J.

But the goal is to deploy this application to the production environment, which is an Oracle Application Server (IAS).

When trying to deploy to this environment, other problems arose …

When we used the same deployment plan as for the standalone OC4J, which had worked splendidly, the deployment to the IAS environment failed.

We still encountered the ‘NoClassDefFoundError: oracle/jbo/JboException’, which arises when you haven’t installed the ‘ADF Runtime Installer’ libraries in your environment. The weird thing was that we had run the ADF Runtime Installer on our IAS environment, in the same way as we did for our standalone OC4J, and it had only worked for the OC4J.

After investigating the problem further, we found out that the libraries that should be copied by the ADF Runtime Installer weren’t available on our IAS environment. The ADF Runtime Installer copies all the libraries needed to run ADF applications over to the BC4J folder of your IAS/OC4J; on our IAS, however, this hadn’t happened.

What we did was manually copy the libraries (jar files) from the ADF Runtime Installer to our IAS environment. This means you have to copy the contents of the following folders: jlib, lib and redist, to the IAS_HOME\BC4J folder of your IAS environment.

After you’ve done this, you’re able to deploy the application to your IAS environment using the deployment plan you’ve set up for both the standalone OC4J and IAS.

Because we’re working with TopLink, we also have to copy the xdb.jar file from our toplink-workbench\bin folder to the TopLink folder of IAS, the same step we did for our standalone OC4J, as mentioned in the previous post.

Deploying ADF Application to Standalone OC4J

For the deployment of an ADF application to my standalone OC4J I faced some problems which aren’t clearly explained or solved on OTN.
You will find many people facing the same problems when deploying applications to OC4J, ranging from JboExceptions to Log4j exceptions, etc.

I will try to address some of these problems in this blog using my own project deployed to a standalone OC4J.

The following errors/problems came up during deployment:

  • java.lang.NoClassDefFoundError: oracle/jbo/JboException
  • java.lang.NoClassDefFoundError: org/apache/log4j/Category
  • Problems with shared libraries and user-libraries when deploying from JDeveloper-IDE
  • Memory problems during deployment

How were these problems addressed, and how did I package the application?

In my J2EE application the following technologies are being used: TopLink, EJB 3.0, OCS and finally ADF Faces as the frontend. As an additional library we’re using log4j.

This J2EE application uses the MVC paradigm, which means I’m working with three important layers: Model, View and Controller. In my case the Model is written in TopLink, the DataControl (the glue between Model and View) is based on EJB 3.0 session beans, and in the end we use ADF Faces for our Controller.

How did I package this application? By creating deployment profiles for each project in the application:

  • A jar file for the TopLink model and business-logic model
  • An ejb-jar file for the EJB-project(s)
  • A war file containing all logic of the view layer (jspx, images, page definitions, backing beans, web.xml file, jazn-data file, orion-application.xml file, etc.)
  • An ear-file packaging all the different deployment profiles together using the Application Assembly tool

As I mentioned before, I’m using log4j in the application, and I experienced a lot of problems during deployment because a newer version of log4j is used in our application than the one used by default by OC4J. You can solve this problem as follows:

  • Add the version you’re using (log4j-1.2.13.jar) and the commons-logging jars from JDeveloper to the EAR file, and point towards these two jar files in the MANIFEST.MF file (Class-Path entry) of the project that uses log4j

Secondly, I mentioned that TopLink is used for the Model layer, which needed some manual configuration as well:

  • Copy xdb.jar from the TopLink workbench folder to the \toplink\jlib directory of the standalone OC4J installation

To address the JboException you need to run the ADF Runtime Installer against your standalone container. You can do this from the JDeveloper IDE: first create an Application Server Connection to your standalone OC4J, then go to the ‘Tools’ menu, choose ‘ADF Runtime Installer’ and choose to deploy to ‘Standalone OC4J’.

Make sure your OC4J isn’t running when you perform this task, because otherwise the libraries can’t be upgraded while they’re in use by the container.

Last but not least, the ‘OutOfMemoryError: PermGen space’ can be addressed by adding memory to your standalone OC4J or IAS. For OC4J you can add the following attribute to the oc4j.cmd file, which can be found in the bin folder of your OC4J home:

OC4J_JVM_ARGS=-XX:PermSize=128m -XX:MaxPermSize=256m

If you need an in-depth explanation about memory management, you can view the Memory Management topic on this blog.

Have fun!

Oracle Webcenter technical workshop (part 1)

On the 17th of April we were invited to join a technical workshop about Oracle WebCenter for partners at Oracle De Meern, near Utrecht (the Netherlands).

We wanted to know what WebCenter could really do for us, what its capabilities are, and of course what the next generation of WebCenter would look like.

Well, we came back full of enthusiasm and with the will to explore and look deeper into the WebCenter technology.


What did we see during that day?
First of all we got an introduction to the different layers on which the WebCenter framework is built.
We also saw that WebCenter uses several well-known standards like WSRP, JSR 168, JSR 170, Web 2.0, …

They explained that the WebCenter framework uses Oracle Metadata Services (MDS). This is an XML-based repository that stores all kinds of application metadata. In the version we use now it is file-based, but in future versions the user will have the choice between file-based and database-based storage.
WebCenter is the first tool in the Fusion stack that uses MDS, but in the next releases it will also be used by other tools.

In WebCenter it is also possible to use existing portlets/pages/…; for this you have to use the ‘Federated Portal Adapter’.
The other way around is also possible, by using the ‘JSF Portlet Bridge’. This makes it possible to publish any portlet created in WebCenter to a portal that supports JSR 168 (e.g. Oracle Portal).

The main subject of the day was the JSF part. Oracle uses its ADF framework for creating these kinds of applications.
Not being a real Java expert, I really enjoyed working with this tool and the speed with which you can create small applications.
WebCenter already contains a few built-in portlet applications, like the Rich Text Editor and the OmniPortlet, which we already knew from Oracle Portal.

We also saw what the future of WebCenter might look like.
First of all, the UI of the portlets will look much flashier, with more use of AJAX, DHTML, …
It will also be possible to change pages on the fly, and there will be more drag-and-drop functionality.

Last but not least, we also discussed the positioning of the product, particularly against Oracle Portal. This was a tricky one, but I now have an idea of how to position it: if you want to use open standards and you don’t mind creating applications (almost) from scratch (apart from the built-in applications like the Rich Text Editor, OmniPortlet, …), this is a very fast and good development tool!

In one of the next posts we will handle the practical part of this day, to give you a glimpse of how WebCenter works.

To be continued…

ODI – Day 4 – Organizing Projects & Models – KMs, Interfaces, DataStores

Before I start defining interfaces, I first need to define my target datasource, which is an Oracle 10g schema.

I will reorganize my models to be able to identify the source and target data models used for my migration path.

I create a new Model Folder to hold the different data-models which will be used for my project.

Next I drag and drop the Data Model I’ve already created (see previous posts) into my Model Folder and rename the model to ‘SRC_Model_Excel’ by double-clicking it.

Now I need to define the Physical and Logical Schema for my target-datastore which is my Oracle 10g schema.

Navigate to the Topology Manager, go to Technologies and choose ‘Insert Data Server’ on the Oracle technology icon.

Define the connection settings for your Oracle connection, such as username/password and the JDBC configuration settings (JDBC driver = oracle.jdbc.driver.OracleDriver, URL = jdbc:oracle:thin:@::, i.e. host, port and SID). Test that the connection settings are defined correctly via the Test button and the local agent (depending on the configuration of your ODI). Click OK.

Update the default settings for the Physical Schema in the following way:

  • Define the schema to be your personal, user-defined schema; the work schema is the work repository schema we’ve defined earlier. I’m very glad ‘Cenisis’ on the ODI OTN forum helped me figure out the difference between these two schemas. If you want more information, take a look at the following thread: http://forums.oracle.com/forums/thread.jspa?forumID=374&threadID=491550
  • Make sure to grant select privileges on the work repository schema to your user schema, because otherwise the execution of the interface will fail!
  • Define the context of your physical schema: use the ‘Global’ context, define a new Logical Schema ‘Demo_DB’ and click OK.

Next we will define our data model for the target-datastore in our Models-tab in the Designer-window of ODI.

Choose ‘Insert Model’, define it as the target datastore, choose ‘Oracle’ as ‘Technology’ and select the Logical Schema that was defined earlier. Define the ‘Global’ context in the ‘Reverse’ tab and choose which tables you want to reverse-engineer in the ‘Selective Reverse’ tab.

The different tables you’ve chosen to reverse-engineer will be shown in the model as datastores.

Now it’s time to define a new interface to transform our Excel data (source) into our Oracle data (target); these are the different tasks we need to perform:
- Create a New Interface.
- Define the Target Datastore.
- Define the source datastores, filters and joins on these sources.
- Define the mapping between source and target data.
- Define the interface flow.
- Define the flow control.
- Execute the interface for testing.

Go back to the first tab of your Designer-window, the ‘Projects’-tab and right-click on the ‘Interface’-icon in the Folder and choose ‘Insert Interface’.

Give your interface a meaningful name and drag and drop the source and target datastores into your interface. In our example you need to drag the PERSONEN datastore (from SRC-DemoExcel) into the Sources window of your interface and drag and drop the USERINFO datastore (from TRG-DemoExcel) into the target datastore window.

Make sure to map the fields correctly by clicking the attribute in the target datastore and dragging the corresponding attribute from the source datastore into the Implementation tab of the mapping screen.

For the USERID, in my case the primary key, I defined a sequence to fill in this key, because my Excel file doesn’t provide any IDs. To be able to use a sequence in the interface, you need to specify the sequence name followed by .NEXTVAL in the implementation field, since a number needs to be inserted into this column.
Make sure to check the ‘Active Mapping’ checkbox and set the “Execute on” radio button to the target area.

When you try to save the interface at this point, you will get errors explaining that you need to define Knowledge Modules.

I need to import or create Knowledge Modules to be used in my interface; when you check out the ‘Knowledge Module’ guides in the ODI Documentation Library you will notice that many KMs are already available.

We will import the needed LKM, IKM and CKM to use in our interface, where we are migrating data from Excel to Oracle. Depending on the scenario or use case you’re working out, you may need other KMs, as explained in the guides (ODI Documentation Library).

Go to the Knowledge Modules node in Designer, right-click the Loading (LKM) node and choose Import.

In the Import Knowledge Modules screen you have to browse to the impexp folder, which is available in the ODI installation directory.

Choose to import the ‘LKM SQL to Oracle’ Knowledge Module, and do the same for the Integration KM (IKM), choosing ‘IKM SQL Incremental Update’.

The Loading Knowledge Module will be used to load the data from our Excel file into our staging area, and the Integration Knowledge Module will be used to integrate this Excel data into our Oracle DB.

Open up the interface you’ve defined earlier, open the ‘Flow’ tab and click on the SSO_0 (the source datastore). Choose the LKM you’ve imported earlier from the LKM selection dropdown in the screen below.

Perform the same task for the IKM on the target datastore: choose the IKM you’ve imported earlier (IKM SQL Incremental Update).

The last Knowledge Module we need to define is the Check Knowledge Module (CKM), which defines how data will be checked and which constraints and rules must be satisfied before the data is integrated.

Go to the Check Knowledge Modules and choose to import ‘CKM Oracle’ from the import wizard.

The CKM Oracle will be used for our checking algorithm. Make sure to reopen the interface, because otherwise the dropdown lists won’t show the newly imported KMs.

In the Controls-tab of your interface you need to select the CKM to be used.

Save your settings and choose to ‘Execute’ your interface … In my case I got the following error: ‘Flow Control not possible if no Key is declared in your Target Datastore’. After a little snooping around I figured out the problem: there was no primary key defined for my target datastore.

If you need to define constraints for your datastores, go to the Model and drill down into the specific datastore. Right-click the Constraints node, choose to define a new constraint, choose the type (‘PK’/‘Unique’/‘Alternate’) and select the column in the Columns tab.

Navigate back to your interface, choose the second tab, ‘Diagram’, and select your target datastore; in the screen below, pick the primary key you’ve just defined in the ‘Update Key’ dropdown.

Choose to execute the interface again, and choose Yes to accept the default settings.

To follow the result of your execution, open the Operator by clicking the Operator icon in your menu bar.
The Operator window that opens (it can be refreshed using the refresh button if it is already open) details the tasks of the execution process.

As you can see in the Operator window, no errors are shown and the execution status is ‘Done’. You can check the database to verify that the Excel data has successfully been loaded into the Oracle database.
Up to the next challenge !

ApEx AJAX Text Filter

A while ago, a customer asked me to develop a report page with an option to search on a title. For every character the user typed, the results should immediately be adjusted. I did some research and ended up with an AJAX text filter.

I couldn’t find a step by step tutorial about this so I made my own.

1. Things to do before we start

Create an Apex application
Create a new page
Create a report region

2. Making the text filter

2.1. Create the necessary fields
Create a text field on your ApEx page, name it P1_TEST1 and place it above your report. This field will serve as our search field.

Create an application item F138_SEARCH_NAME. This item will hold our search string.

2.2. Add the JavaScript function to your page

The JavaScript code can be found here.

The ‘searchTitle’ string indicates which application process must be called to search for matching results.

Edit your text field P1_TEST1 and put the following code under HTML Form Element Attributes: onkeyup=”f_TestOnDemand()”

2.3. Create an empty html region

Create an HTML region, give it the title ‘Result Region’ and place it under your search field. The region source is a div tag with “test2” as its id: <div id=”test2″></div>

JavaScript will replace the content of the div with the matching results.
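Since the script itself is only linked above, here is a minimal sketch of what such a function typically looked like in APEX at the time, using the built-in htmldb_Get helper. The process name, item names and div id match the ones introduced above, but treat this as an illustration rather than the original code:

```javascript
// Sketch of an on-demand AJAX call (APEX 2.x/3.x style) triggered
// from the onkeyup of P1_TEST1 — an illustration, not the original script.
function f_TestOnDemand() {
  // Prepare a call to the on-demand application process 'searchTitle';
  // 'pFlowId' is the hidden page item holding the application id.
  var get = new htmldb_Get(null, html_GetElement('pFlowId').value,
                           'APPLICATION_PROCESS=searchTitle', 0);
  // Pass the current search string to the application item.
  get.add('F138_SEARCH_NAME', html_GetElement('P1_TEST1').value);
  // The application process prints the matching rows as HTML ...
  var gReturn = get.get();
  // ... which we drop into the result region's div.
  html_GetElement('test2').innerHTML = gReturn;
  get = null;
}
```

In recent APEX releases, apex.server.process is the supported way to call an on-demand process instead of htmldb_Get.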

2.4. Create the application process

Create an “On Demand” application process named “searchTitle” in Shared Components. Later on, the JavaScript will refer to this process.

The code for the process can be found here.

You will see that, in the query, the search condition refers to the application item.

3. Result

I made a working example available here.

Oracle Data Integrator – Administrator Version – Day 3

Today I installed ODI on my own environment; the previous posts were about the development environment on my client machine, on which I don’t have administrator privileges. On my own environment I’ve installed the administrator version of ODI, so I can create my own master and work repository to work with for my migration path.

I’ve followed the ‘ODI Installation Guide’ for these different steps and added some custom comments and actions; you can still use the Installation Guide as a reference.

First I installed the administrator version of ODI, so I’m able to define my own repositories; the second step I performed was the definition of the schemas for my master and work repository.

I hear you thinking … ‘what is a master and work repository, why do I need this?’ … well, the definition from the ‘ODI Installation Guide’ states the following:

  • Master Repository: Data structure containing information on the topology of the company’s IT resources, on security and on version management of projects and data models. Mostly only one Master repository is needed.
  • Work Repository: Data structure containing information on data models, projects, and their use. Several work repositories can be designated with several master repositories if necessary. However, a work repository can be linked with only one master repository for version management purposes.

In the previous step I defined the two schemas used for the master and work repository (creating the DB schemas); now it’s time to actually create these two repositories.

Let’s create the master repository (creation of tables and automatic import of definitions) for the schema we’ve defined, ‘snmp’:

  • Go to Oracle Data Integrator in the Start Menu and select ‘Master Repository Creation’ from the ‘Repository Management’ menu

  • Define the settings of your database and your master repository schema in the ‘Master Repository Creation Wizard’ as shown below (don’t forget to specify a meaningful Id, not the default 0)

When you click OK, the different components (tables, indexes, schemas, …) will be created and imported into the master repository; you can follow this creation in the log window.
After everything is successfully created, a pop-up window is shown informing you of the successful creation of the Master Repository.

Afterwards we will connect to our new master repository via the ‘Topology Manager’:

We’ve now successfully created our master repository and connected to it via the Topology Manager.

The next step is to create the work repository. Go to the fifth tab shown below in the Topology Manager to show the existing repositories, right-click on ‘Work Repositories’ and choose ‘Insert Work Repository’.

Define the JDBC driver, username and password for the schema you’ve created for the work repository.

You can test your connection to make sure all the settings are defined correctly, and then click OK.
The next screen is shown to define the specific settings for the work repository: choose a unique ID and define a name for the work repository. Click OK.

To connect and work with our newly created work repository, perform the following:

  • Choose the Designer-submenu in the Oracle Data Integrator menu of your Start Menu
  • Choose to create a ‘New Data Integrator Connection’ by clicking the first icon to the right of the ‘Login name’ dropdown box
  • Enter the different settings needed to connect to your work repository. In the Database Connection section you need to specify the connection to the master repository. In the last part you need to specify the name of the work repository, which can be chosen from a list; the name you’ve defined for the work repository will be shown in this screen.

  • Click OK
  • Choose these new settings to Login to the Oracle Data Integrator

Now I can perform the same steps as discussed in the previous blogs, working against my custom master and work repository.

When you’ve got administrator privileges, this is the standard way to go:

  • create the master and work repository schemas on the database
  • create the master and work repository through ODI, as described in this post
  • create the different models, interfaces, … that work with these custom repositories.

I’ve performed the same steps on my own ‘administrator’ environment as mentioned in the previous posts, so I can work both in my administrator environment and in the development environment.

In the previous post I created the Microsoft ODBC DataSource and reverse-engineered my Excel file into a data model in the Designer; now it’s time to start creating interfaces.

Oops … I’ve stumbled upon a list of ‘logical steps to perform when working with ODI’, and the logical step to perform after the definition of the model isn’t defining interfaces … let’s have a closer look:

Managing an Oracle Data Integrator project generally involves the following steps:

  1. Creating and reverse-engineering models. (Check!)
  2. Creating a project.
  3. Using markers (optional).
  4. Creating and organizing folders.
  5. Importing KMs.
  6. Creating and modifying reusable objects: variables, sequences, interfaces, procedures, user functions.
  7. Unit testing interfaces and procedures (back to step 6).
  8. Building packages from elements created in step 6.
  9. Integration testing the packages.
  10. Generating scenarios.
  11. Scheduling scenarios.
  12. Managing the scenarios in production.
  13. Maintenance, bug fixing and further modifications (back to step 6).

So the next step to perform, after we’ve created our data model, is to create a new project; a project is nothing more than a ‘container’ holding a group of objects created in Oracle Data Integrator.

  • Go to the Designer and choose the Projects tab; this is the default tab shown when you open Designer. For our newly created work repository no projects have been created yet, so we can start building from scratch.
  • Choose the first icon ‘Insert Project’
  • Define a name for the new project

After we’ve defined the project, the different objects that can be created in a project are shown, such as variables, knowledge modules, sequences, …

The next step to perform is the creation of ‘folders’ to organize our interfaces, procedures and packages, but in my newly created project a ‘First Folder’ has already been created.
Right-click on this folder, choose ‘Edit’ and rename the folder to your own choice.

The next logical step would be to import the Knowledge Modules of interest for our project, but in my case the default Knowledge Modules were already imported, so no actions need to be performed here.

Finally I’m ready to create interfaces to link our source data to our target data. The definition given in the User Guide is the following: ‘An interface consists of a set of rules that define the loading of a datastore or a temporary target structure from one or more source datastores.’

I’ll keep you posted !