Oracle Data Integrator – Administrator Version – Day 3

Today I installed ODI in my own environment; the previous posts were about the development environment on my client's machine, on which I don't have administrator privileges. In my own environment I've installed the administrator version of ODI so I can create my own master and work repository to work with for my migration path.

I've followed the 'ODI Installation Guide' for these different steps and added some custom comments and actions; you can still use the Installation Guide as a reference.

First I installed the administrator version of ODI, so I'm able to define my own repositories. The second step I performed was the definition of the schemas for my master and work repository.

I hear you thinking … ‘what is a master and work repository, why do I need this?’ … well, the definition from the ‘ODI Installation Guide’ states the following:

  • Master Repository: Data structure containing information on the topology of the company’s IT resources, on security and on version management of projects and data models. Mostly only one Master repository is needed.
  • Work Repository: Data structure containing information on data models, projects, and their use. Several work repositories can be designated with several master repositories if necessary. However, a work repository can be linked with only one master repository for version management purposes.

In the previous step I defined the two schemas used for the master and work repository (creating the database schemas); now it's time to actually create these two repositories.

Let's create the master repository – creating the tables and automatically importing the definitions – for the schema we've defined, 'snmp':

  • Go to your Oracle Data Integrator in the Start Menu and select 'Master Repository Creation' from the 'Repository Management' menu

  • Define the settings of your database and your master repository schema in the 'Master Repository Creation Wizard' as shown below (don't forget to specify a meaningful ID, not the default 0)

When you click OK the different components – tables, indexes, schemas, … – will be created and imported into the master repository, and you can follow this creation in the log window.
After everything is successfully created, a pop-up window informs you of the successful creation of the master repository.

Afterwards we will connect to our new master repository via the ‘Topology Manager’:

We've now successfully created our master repository and connected to it via our Topology Manager.

The next step is to create the work repository. Go to the fifth tab shown below in the Topology Manager to show the existing repositories, right-click on the Work Repository node and choose 'Insert Work Repository'.

Define the JDBC driver, username and password for the schema you've created for the work repository.

You can test your connection to be sure all the settings were defined correctly, then click OK.
The next screen lets you define the specific settings for the work repository: choose a unique ID and define a name for the work repository. Click OK.

To connect and work with our newly created work repository, perform the following:

  • Choose the Designer-submenu in the Oracle Data Integrator menu of your Start Menu
  • Choose to create a 'New Data Integrator Connection' by clicking the first icon to the right of the drop-down box for 'Login name'
  • Enter the different settings needed to connect to your work repository. In the Database Connection section you need to specify the connection to the master repository. In the last part you need to specify the name of the work repository, which can be chosen from a list; the name you defined for the work repository will be shown in this screen.

  • Click OK
  • Choose these new settings to log in to Oracle Data Integrator

Now I can perform the same steps as discussed in the previous blogs, working against my custom-defined master and work repository.

When you've got administrator privileges, this is the standard way to go:

  • create the master and work repository schemas on the database
  • create the master and work repository through ODI as described in this post
  • create the different models, interfaces, … that work with these custom-defined repositories.

I've performed the same steps in my own 'administrator' environment as described in the previous posts, so I can work both in my administrator environment and in the development environment.

In the previous post I've created the Microsoft ODBC data source and reverse-engineered my Excel file into a data model in the Designer; now it's time to start creating interfaces.

Oops … I've stumbled upon a list of 'logical steps to perform when working with ODI', and the logical step after defining the model isn't defining interfaces … let's have a closer look:

Managing an Oracle Data Integrator project generally involves the following steps:

  • Creating and reverse-engineering models. (Check!)
  • Creating a project.
  • Using markers (optional).
  • Creating and organizing folders.
  • Importing KMs.
  • Creating and modifying reusable objects: Variables, Sequences, Interfaces, Procedures, User functions.
  • Unit testing interfaces and procedures (back to step 6).
  • Building packages from elements created in step 6.
  • Integration testing the packages.
  • Generating scenarios.
  • Scheduling scenarios.
  • Managing the scenarios in production.
  • Maintenance, bug fixing and further modifications (back to step 6).

So the next step to perform – after we've created our data model – is to create a new project. A project is nothing more than a 'container' holding a group of objects created in Oracle Data Integrator.

  • Go to the Designer and choose the Projects tab; this is the default tab shown when you open the Designer. For our newly created work repository no projects exist yet, so we can start building from scratch.
  • Choose the first icon ‘Insert Project’
  • Define a name for the new project

After we've defined the project, the different objects that can be created in a project are shown, such as variables, knowledge modules, sequences, …

The next step is to create 'folders' to organize our interfaces, procedures and packages, but in my newly created project a 'First Folder' has already been created.
Right-click on this folder, choose 'Edit' and rename the folder as you like.

The next logical step would be to import Knowledge Modules of interest to our project, but in my case the default Knowledge Modules were already imported, so no actions need to be performed here.

Finally I'm ready to create interfaces to link our source data to our target data. The definition given in the User Guide is the following: 'An interface consists of a set of rules that define the loading of a datastore or a temporary target structure from one or more source datastores.'

I’ll keep you posted !

Java & Memory in an iAS environment

Everyone who has worked with Java applications has probably heard of the OutOfMemory error. As the name says, it indicates that the Java application has reached the limit of available memory and tries to go beyond that limit, with the OutOfMemory error as a logical consequence. But why does this error occur? Your 8-CPU, 20 GB server has enough resources to serve 50 of these applications simultaneously, and still the error pops up now and then, crashing your entire application.

Before you add another set of memory modules, let’s take a quick look at the JVM internal memory management.

Automatic Memory Management
Java uses automatic instead of explicit memory management. This means that the developer is only responsible for the allocation of memory (Object x = new Object();), and the JVM – more specifically, the garbage collector – automatically cleans up unused objects.

So it is impossible to have memory leaks in my application? Well, not quite. It is not as obvious as in programming languages with explicit memory management, but you still have to free your allocated resources properly (e.g. DB connections, streams, …).
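To make that concrete, here is a minimal sketch of freeing a resource properly: the stream is closed in a finally block, so it is released even when reading fails (the class and file names are just for illustration):

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public class ResourceDemo {
    // Read the first line of a file, making sure the stream is always
    // closed -- even when an exception is thrown while reading.
    static String readFirstLine(File file) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(file));
        try {
            return reader.readLine();
        } finally {
            reader.close(); // without this, the file descriptor leaks
        }
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("demo", ".txt");
        FileWriter w = new FileWriter(tmp);
        try {
            w.write("hello\n");
        } finally {
            w.close();
        }
        System.out.println(readFirstLine(tmp)); // prints "hello"
        tmp.delete();
    }
}
```

Forgetting the finally block does not leak Java heap directly, but the native resources behind the object (descriptors, sockets, cursors) pile up until the process hits its limits.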

The Heap
The memory allocated by the JVM is called the Heap. The heap itself is divided into 3 areas also called ‘generations’.

  • young generation
  • old generation
  • permanent generation

Most objects are initially allocated in the young generation. The old generation contains objects that have survived some number of young generation collections, as well as some large objects that may be allocated directly in the old generation.
The permanent generation holds objects that the JVM finds convenient to have the garbage collector manage, such as objects describing classes and methods, as well as the classes and methods themselves.

The young generation consists of an area called Eden plus two smaller survivor spaces.
Most objects are initially allocated in Eden. (As mentioned, a few large objects may be allocated directly in the old generation.)
The survivor spaces hold objects that have survived at least one young generation collection and have thus been given additional chances to die before being considered “old enough” to be promoted to the old generation. At any given time, one of the survivor spaces holds such objects,
while the other is empty and remains unused until the next collection.

Now that we know that the young generation contains short-lived objects and the permanent generation contains (very) long-lived objects,
it is easier to pinpoint OutOfMemory errors.

iAS & memory
When OutOfMemory problems occur in your newly deployed and bug-free application :-) , you can use the Oracle Enterprise Manager (OEM) web interface
to get a first impression of the application's memory usage.

In the system components view (The view where all the OC4J instances are listed) you can see the Memory usage in MB per OC4J.
The view also shows the total amount of memory in the server as a pie chart at the top of the screen. The pie chart is divided into 3 parts.

  • Application Server Memory Usage
  • Free Memory
  • Other Memory Usage

Be very careful when interpreting the pie chart. Unix and Linux keep free memory in a buffer cache; when an application needs extra memory,
the OS can free part of that buffer and give it to the application.
Because of this buffer, the pie chart shows almost all of the system memory as allocated under 'Other Memory Usage'.

On the other hand, there are the log files where OutOfMemory errors are logged.

  • application.log located in the %ORACLE_HOME%/j2ee/%OC4J%/application-deployments/%APPLICATION% directory.
  • Standard output log named OC4J~{OC4J instance}~{island}~{jvm process}.log (e.g. OC4J~OC4J_DEMOS~default_island~1.log) located in the %ORACLE_HOME%/opmn/logs directory

Be sure to check the logging just before the OutOfMemory error for more details.

Solving the memory problem
The simplest way to solve OutOfMemory errors is to increase the size of the heap. To do this, you can add command line parameters to the java command.

java -Xms256m -Xmx512m

Where -Xms defines the minimum and -Xmx the maximum amount of memory allocated by the JVM.
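You can verify the effect of these flags from inside the JVM with the standard Runtime API; a small sketch:

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory() is the ceiling set by -Xmx; totalMemory() is what
        // the JVM has currently reserved; freeMemory() is the unused
        // part of that reservation.
        System.out.println("max   = " + rt.maxMemory() / mb + " MB");
        System.out.println("total = " + rt.totalMemory() / mb + " MB");
        System.out.println("free  = " + rt.freeMemory() / mb + " MB");
    }
}
```

Run it once with and once without -Xmx512m and compare the reported maximum.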

In the application server you can alter this in the OC4J instance -> server properties -> command line options -> java options field.
Note that the properties affect all OC4J instances in the cluster for that specific OC4J.

Sometimes this does not resolve the problem. In some cases an OutOfMemory error occurs while there is still lots of memory free for the JVM.
Check standard output log files for the following message.

Permanent generation is full…
increase MaxPermSize (current capacity is set to: 67108864 bytes)

As we have seen before, the permanent generation contains long-lived objects. By default, the JVM allocates 64MB to the permanent generation.
For most applications this will suffice. The size of the permanent generation is configured separately from the heap size. To do this, use the command line:

java -XX:PermSize=64m -XX:MaxPermSize=256m

Where -XX:PermSize=xxx defines the minimum and -XX:MaxPermSize=xxx the maximum amount of memory allocated for the permanent generation.
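If you want to see the individual memory pools – including the permanent generation on the JVMs of this era – from inside the JVM, the standard java.lang.management API exposes them; a small sketch (note that on newer JVMs the permanent generation has been replaced by Metaspace, so the pool names differ):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PoolInfo {
    public static void main(String[] args) {
        // One MemoryPoolMXBean per pool: Eden, the survivor spaces, the
        // old generation and (on older JVMs) the permanent generation.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage usage = pool.getUsage();
            if (usage == null) {
                continue; // pool is not valid at the moment
            }
            String max = usage.getMax() < 0 ? "unbounded"
                                            : usage.getMax() + " bytes";
            System.out.println(pool.getName() + ": "
                + usage.getUsed() + " bytes used, max " + max);
        }
    }
}
```

Watching the pool that keeps filling up tells you whether to raise -Xmx or -XX:MaxPermSize.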

Always make sure that the -server option is added to your java options. This setting preconfigures your JVM to run as optimally as possible in a server environment.

If these settings do not solve the problem, you should check your application for memory leaks. You can use a profiler to detect these leaks … but that will be explained in a future post.

Memory Management in the Java HotSpot Virtual Machine (pdf)
Oracle® Application Server 10g Performance Guide

Export ADF Table To Excel

Recently a colleague of mine came up with a blog about exporting data shown in an af:table-component to Excel using the Apache POI API.

Some time ago I accomplished the same behaviour without using any additional API or framework, because I wanted to stay 'independent' and not add another framework on top of ADF. So I did some research on OTN and finally figured out we could accomplish the same behaviour by using the response of our HttpServlet to stream the data as Excel to the browser.

How did I accomplish this?

I’ve created a Backing Bean for the jspx-page to be able to add my own business logic in a java class. In the jspx-document I’ve added the actionListener-attribute to my component and this attribute is bound to a method in my backing bean.

In my backing bean I’m delegating the action to my java class that holds the generic business logic. This method accepts the DCIteratorBinding as parameter to be able to browse the table-data for my Excel-file.

public void exportToExcel(ActionEvent actionEvent) throws IOException {
    ExportToExcel.exportHtmlTableToExcel(
        (DCIteratorBinding)bindings.get("findAllIter"));
}

The Java Class that actually performs the transformation to Excel uses the HttpServletResponse to output the data as Excel.

import java.io.IOException;
import java.io.PrintWriter;
import javax.faces.context.FacesContext;
import javax.servlet.http.HttpServletResponse;
import oracle.adf.model.binding.DCIteratorBinding;
import oracle.jbo.Row;
import oracle.jbo.RowSetIterator;
import org.joda.time.DateTime;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;

public static void exportHtmlTableToExcel(DCIteratorBinding tableContent) throws IOException {

    // Set the filename; the date pattern was lost from the original
    // post, so the format used here is an assumption
    DateTime dt = new DateTime();
    DateTimeFormatter fmt = DateTimeFormat.forPattern("yyyyMMdd-HHmmss");
    String filename = dt.toString(fmt) + ".csv";

    // Set up the output: stream the response as an Excel attachment
    // (the content type was truncated in the original post;
    // "application/vnd.ms-excel" is a common choice for this trick)
    String contentType = "application/vnd.ms-excel";
    FacesContext fc = FacesContext.getCurrentInstance();
    HttpServletResponse response = (HttpServletResponse)fc.getExternalContext().getResponse();
    response.setContentType(contentType);
    response.setHeader("Content-disposition", "attachment; filename=" + filename);

    PrintWriter out = response.getWriter();

    RowSetIterator rsi = tableContent.getRowSetIterator();

    // Header row: the attribute names
    String[] attNames = rsi.getRowAtRangeIndex(0).getAttributeNames();
    for (int i = 0; i < attNames.length; i++) {
        out.print(attNames[i] + ";");
    }
    out.println();

    // Data rows: one line per row, values separated by semicolons
    for (int i = 0; i < rsi.getRowCount(); i++) {
        Row currentRow = rsi.getRowAtRangeIndex(i);
        Object[] attValues = currentRow.getAttributeValues();
        for (int j = 0; j < attValues.length; j++) {
            out.print(attValues[j] + ";");
        }
        out.println();
    }

    out.close();
    fc.responseComplete();
}

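One caveat with this approach: values are written verbatim, so a cell that itself contains a semicolon breaks the column layout. A small, self-contained helper along these lines could quote such values (CsvUtil and escape are hypothetical names, not part of ADF):

```java
public class CsvUtil {
    // Quote a value for a semicolon-separated file: wrap it in double
    // quotes when it contains the separator, a quote or a newline, and
    // double any embedded quotes (hypothetical helper, not part of ADF).
    public static String escape(Object value) {
        String s = (value == null) ? "" : value.toString();
        if (s.indexOf(';') >= 0 || s.indexOf('"') >= 0 || s.indexOf('\n') >= 0) {
            return "\"" + s.replace("\"", "\"\"") + "\"";
        }
        return s;
    }
}
```

In the export loop you would then write out.print(CsvUtil.escape(attValues[j]) + ";") instead of printing the raw value.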

Deployment to WebCenter Pre-configured OC4J

Ever wondered how to deploy your first portletized ADF application to the 'WebCenter preconfigured OC4J'? It's not documented in the Developer Guide and you can't find it on Technet … so what should we do …

Read further …

When you've created your ADF application and portletized it for deployment to your 'WebCenter preconfigured OC4J', you need to define the connection to your container (the WebCenter pre-configured OC4J).

This isn't as easy as I expected, because you can't find the ports listed in the Enterprise Manager Console of the WebCenter pre-configured container. These settings aren't documented in the Developer Guide or on Technet either, so after a lot of searching and trying things out we finally figured it out.

There are 2 ways to get the connection-settings used by the ‘Webcenter pre-configured OC4J’:

  • Check the log window when shutting down the container in JDeveloper; the following information is shown:

C:\jdev10132\jdev\extensions\oracle.adfp.seededoc4j. -shutdown -port 22667 -password welcomeShutdown OC4J instance…Executing:
C:\jdev10132\jdk\jre/..\bin\java -jar “C:\jdev10132\j2ee\home\admin.jar”
ormi:// oc4jadmin welcome -shutdown

  • Check out the rmi.xml file which can be found in the following directory



Oracle Data Integrator – Part 2 – Working with Excel Data

What have I learned today …

Today I took a closer look at using my Excel file as a source model in ODI, for which I used the ODI User Guide and the help documentation in ODI itself.

What do I need to do to get ODI to talk to my Excel file?

  • Create a new ODBC data source for my Excel files using the Microsoft ODBC Data Source Administrator (run from the Control Panel)
  • Make sure you've defined a name for the data in the Excel spreadsheet. You do this by selecting the data in each worksheet (all the rows and columns with data) and inserting a name for that selection via the Insert menu.

For the configuration of ODI I used more than just the User Guide. To define my ODBC data source in ODI I followed the topic 'Create a Microsoft Excel Data Server' (from the help documentation of ODI). This topic takes a more detailed approach than the topic 'Creating a Data Server' explained in the User Guide.

The steps I’ve performed:

  • Connect to the Topology Manager (first open up the Designer from your Oracle Data Integrator menu)

  • Select Topology > Physical Architecture > Technologies > Microsoft Excel in the tree

  • Right click and select Insert Data Server
  • Fill in the necessary fields in the Definition and JDBC tab, such as the name and the JDBC Driver (sun.jdbc.odbc.JdbcOdbcDriver) and JDBC URL (jdbc:odbc: => the name you defined for your ODBC data source)

  • You can test whether the connection is successful by clicking the Test button in your Data Server properties screen.
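The JDBC URL from the step above is simply the jdbc:odbc: prefix followed by the name of the ODBC data source. As a tiny sketch (the class name and the 'ExcelODI' data source name are made up for illustration):

```java
public class ExcelJdbcUrl {
    // Build the JDBC-ODBC URL used in the Data Server definition;
    // the argument is whatever name you gave your ODBC data source.
    public static String urlFor(String odbcDataSourceName) {
        return "jdbc:odbc:" + odbcDataSourceName;
    }

    public static void main(String[] args) {
        System.out.println(urlFor("ExcelODI")); // prints "jdbc:odbc:ExcelODI"
    }
}
```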

The next step we need to perform is to create the physical schema for our Excel file to be able to map this ‘source’ schema to the ‘target’ schema.

  • Right-click on your newly created Data Server and choose to insert a new physical schema:
  • Leave the defaults as-is for the physical schema and choose the right context in the Context tab, which means you need to specify the logical schema you want to use. Specify a name and click Apply, then OK.

  • Let’s reverse engineer the Excel file into a new model. Go to the designer and choose to insert a new model. In the technology-dropdown choose ‘Microsoft Excel’ and choose your Logical Schema.
  • In the reverse-tab make sure to select the context you want to define the model in.
  • In the 'Selective Reverse' tab select all the checkboxes, and the data you've defined names for in the spreadsheet will be listed. Click OK to reverse-engineer the defined data into your model.
  • In the model tree you can see the different tables that were created inside the model, depending on the data you've defined in the Excel sheet.

  • In the different DataStores that were created (depending on the data you've defined in the spreadsheet) you can check the columns that were defined in your spreadsheet.

The next step is to start creating Interfaces to define the transformation that’s needed from our source-model to our target-model.

Stay Tuned

Oracle Data Integrator

For my project, one of the tasks I need to accomplish is to work out the migration path from an Oracle database and Excel files to a new Oracle database.
I need to design a new application in ADF with a process-driven data model to replace the existing 'application', which is a set of Excel files with a lot of pivot tables, redundancy, …
The first application, before the Excel files, was written in Oracle Forms, but isn't currently used anymore.

One of the challenges I'm faced with now is the need to migrate the old Oracle schema and the Excel files to my new Oracle schema. The data model has changed a lot compared to the previous one, because it is set up in a process-driven manner.

The past weeks we've analysed the existing data and the different strategies we can follow for this migration path. The migration has to be done only once, for all the data that's stored in the old schema and in the Excel files, and the best way to do it is … well, that's the question.

I’ve been reading a lot on technet, on different blogs, in whitepapers, … about the different possibilities there are, the ones I came up with:

  • external tables
  • sqlloader
  • Oracle Migration Workbench (SQLDeveloper plug-in)
  • Oracle Data Integrator
  • Open Source ETL Tools

After taking a look at the strengths and weaknesses of each technology, I've chosen to have a closer look at ODI (Oracle Data Integrator).

Some of the features I’m really interested in and which are very useful for me are the following:

  • Transformation between source and target
      My source system – the Excel files – has a different data structure than my target database. To map the source data to the target data I need a mapping tool such as ODI that does this mapping in a declarative way.
      E.g. in the Excel file the address of a person is stored in one cell; this needs to be mapped to different fields in my target data source, …

  • Ability to check the data-flows (via Data Flow Control) between the source- and target-destination and have error-handling and logging for each needed step during the process
    • Data flow control consists of checking the source data of an interface compared to the consistency level required on the target. Data is checked before the data is integrated in the target.
      Detection of an error can give rise to either non-integration of all the data (even correct), or integration of only the correct data. In any case, it is useful to isolate and/or recycle the erroneous data (after any necessary correction).

  • Use existing Knowledge Modules, which hold the necessary tasks to extract and load data according to the pre-defined steps in the module

The next important question is … how to get started?

Firstly I've installed ODI after reading the Installation Guide, and then I've taken a tour of the 'Getting Started' guide and the demo material included in the ODI install.

After reading through the ODI User's Guide I've noted down the following important decision points:

  • I need a 'one-shot interface' for my migration path, because I only need to perform this migration once, before going to production
  • I need an LKM (Loading Knowledge Module) to extract data from the source database and my Excel files
  • Interfaces designed in Oracle Data Integrator contain several phases, including data loading, data check, data integration, etc. For each of these phases, you will define:
      – The functional rules (mappings, constraints, etc.) for this phase
      – The Knowledge Module to be used for this phase.

Secondly I've read the threads on OTN to get an idea of the community that's using this tool and the people I can ask questions to ;o)

Thirdly, it's time to get started …

I will keep you posted on the progress and the ODI experience … so stay tuned!

Apex: integration of new Google APIs

Google has released two new features in its AJAX Search API:

  • Blog Bar
  • News Bar

Both bars allow you to incorporate dynamic content into the layout of your site.

I have tried to implement both features in ApEx, and both attempts were successful. In this blog you can read a step-by-step guide explaining how to integrate the Blog Bar into your ApEx application.

The Blog Bar is offered in horizontal and vertical styles. You can let your users see specific blog search results, or you can restrict it to blogs you select yourself.

The first thing you need to do is register with Google.
Then you can go to the Google Blog Bar API page and run the wizard to get the right example code and key. Save the example code and the key so you can use them later.

Now we can implement it in your ApEx page:

Paste your CSS and JavaScript code into the HTML header of your ApEx page, but leave the following code out of it:

<div id="blogBar-bar">
<span style="margin:10px;padding:4px;">Loading...</span>
</div>

Now create an empty HTML region and paste the code you left out of your header into it:

Run your page:

The following example shows the horizontal style (only one line):

A working example can be found here.

The following example shows the vertical style:

A working example can be found here.

More info about customizing the Google Blog Bar API can be found here.