Migrate Existing ODI from DEV to TEST to Production

In the previous posts on this blog you’ve read about my experiences with Oracle Data Integrator, and now an important new milestone has been reached … exporting and importing the ODI interfaces, datastores, etc. from the DEV environment to the TEST environment.

I asked on OTN what the best way to accomplish this was, and several different views were given on how to solve it.

You could use the import/export functionality of ODI itself, e.g. if you only want to move a certain interface or datastore, you can export that specific object.
If you want to migrate the whole DEV environment to TEST, you can use the export/import functionality of the Oracle Database, because my master and work repository are stored in an Oracle 10g database.

I needed to move my development environment onto the test environment to be able to run the ODI functionality against real-time test data and test the performance more accurately. Another important reason was that my development environment, my notebook, had already crashed a couple of times, so I needed to back up all the important data.

What did I do to accomplish this import/export:
- I exported the master and work repository used by ODI (the snpw and snpm schemas)
- I copied over the ODI-folder from my program-files folder, just to make sure

After backing up everything else on my laptop, I needed to format it and take a closer look at the problems I was facing with my hard drive.

Alea iacta est – the die is cast.

A new laptop, a new environment … time to install all the needed software and get back up and running with my ODI environment.

First I tried to import the existing Master Repository, but the wizard didn’t really guide me much during this process, so another approach was needed.

In the end, the successful steps to export/import from environment A to environment B, such as DEV to TEST, are the following:

  • Export existing master- and work-repository in a dump-file using Oracle db export feature
  • Import the master- and work-repository into your new schema
  • Create a new connection to the imported master-repository when you connect to the Topology Manager in ODI
  • Create a new connection to the imported work-repository when you connect to the Designer module in ODI

Now your work is done … if you check out the Models and Projects tabs in your Designer view, you will notice that all interfaces, datastores, models, etc. have been imported successfully.
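For reference, the database-level part of this migration is sketched below. This is only a rough outline under my own assumptions: the snpm/snpw schema names come from the backup step mentioned above, the classic exp/imp utilities are just one way to move the dump, and the passwords and connect strings are placeholders.

```sql
-- Rough sketch of the database-level migration (all names and passwords are placeholders).
-- 1. On the DEV database, export both repository schemas with the classic exp utility:
--      exp system/<pwd>@DEV file=odi_repos.dmp owner=(snpm,snpw)
-- 2. On the TEST database, create empty snpm/snpw schemas, then import the dump:
--      imp system/<pwd>@TEST file=odi_repos.dmp fromuser=(snpm,snpw) touser=(snpm,snpw)
-- 3. Sanity check after the import: the ODI repository tables are all prefixed SNP_.
SELECT owner, COUNT(*) AS snp_table_count
FROM   all_tables
WHERE  owner IN ('SNPM', 'SNPW')
AND    table_name LIKE 'SNP%'
GROUP  BY owner;
```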

ODI – Day 4 – Organize Projects & Models – KM’s, Interfaces, DataStores

Before I start defining interfaces I first need to define my target datasource which is an Oracle 10g schema.

I will re-organize my models to be able to identify the source- and target-datamodels used for my migration-path.


I create a new Model Folder to hold the different data-models which will be used for my project.

Next I drag-and-drop the Data Model I’ve already created – see previous posts – inside my Model Folder and rename the Model to ‘SRC_Model_Excel’ by double-clicking.

Now I need to define the Physical and Logical Schema for my target-datastore which is my Oracle 10g schema.

Navigate to the Topology Manager, go to Technologies, right-click the Oracle technology and choose ‘Insert Data Server’.


Define the connection settings for your Oracle connection, such as username/password and the JDBC configuration settings (JDBC driver = oracle.jdbc.driver.OracleDriver, URL = jdbc:oracle:thin:@<host>:<port>:<SID>). Test that the connection settings are defined correctly via the Test button and the local agent (depending on the configuration of your ODI). Click OK.

Update the default settings for the Physical Schema in the following way:

  • Define the (data) schema to be your own user-defined schema, and the work schema to be the work repository schema we defined earlier. I’m very glad ‘Cenisis’ on the ODI OTN forum helped me figure out the difference between these two schemas. If you want more information, take a look at the following thread: http://forums.oracle.com/forums/thread.jspa?forumID=374&threadID=491550
  • Make sure to grant select privileges on the work repository schema to your user schema, because otherwise the execution of the interface will fail (see the sketch after this list)!
  • Define the Context of your physical schema, use the ‘Global’ context and define a new Logical Schema ‘Demo_DB’ and click OK.
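As an illustration of that grant: Oracle has no single schema-wide GRANT SELECT, so in the sketch below (the schema names SNPW and DEMO are placeholders for the work schema and my own schema, and the exact privileges you need depend on how your staging area is set up) you either use the blunt system privilege or generate per-table grants.

```sql
-- Placeholder names: SNPW = work/staging schema, DEMO = my own schema.
-- Quick and dirty for a one-off DEV/TEST setup: the system privilege ...
GRANT SELECT ANY TABLE TO demo;

-- ... or generate object-level grants for the tables that already exist in SNPW
-- (and run the generated statements afterwards):
SELECT 'GRANT SELECT ON snpw.' || table_name || ' TO demo;' AS grant_stmt
FROM   all_tables
WHERE  owner = 'SNPW';
```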

Next we will define our data model for the target-datastore in our Models-tab in the Designer-window of ODI.

Choose ‘Insert Model’ and define this as being the target-datastore and choose ‘Oracle’ as ‘Technology’ and the Logical Schema that was defined earlier. Define the ‘Global’ context in the ‘Reverse’-tab and choose which tables you want to reverse engineer in the ‘Selective Reverse’-tab.

The different tables you’ve chosen to reverse-engineer will be shown in the model as datastores.

Now it’s time to define a new interface to transform our Excel data (source) to our oracle data (target), the different tasks we need to perform:
- Create a New Interface.
- Define the Target Datastore.
- Define the source datastores, filters and joins on these sources.
- Define the mapping between source and target data.
- Define the interface flow.
- Define the flow control.
- Execute the interface for testing.

Go back to the first tab of your Designer-window, the ‘Projects’-tab and right-click on the ‘Interface’-icon in the Folder and choose ‘Insert Interface’.

Give your interface a meaningful name and drag-and-drop the source and target datastores into your interface. In our example you need to drag the PERSONEN datastore (from SRC-DemoExcel) to the Sources window in your interface and drag-and-drop the USERINFO datastore (from TRG-DemoExcel) to the target datastore window.

Make sure to map the fields correctly by clicking on the attribute in the target datastore and dragging the corresponding attribute from the source datastore onto the implementation tab in the mapping screen.

In my case I defined a sequence to fill in the USERID, the primary key, because my Excel file doesn’t provide any IDs. To be able to use a sequence in the interface you need to specify the sequence name in the implementation field, followed by NEXTVAL of course, because a number needs to be inserted into this column.
Make sure to check the ‘Active Mapping’ checkbox and set the ‘Execute on’ radio button to the target area.
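To make this concrete, here is a minimal sketch of such a sequence and the expression that goes into the USERID mapping. The sequence name USERINFO_SEQ is one I made up for illustration; it lives in the Oracle target schema, which is also why the mapping is set to execute on the target.

```sql
-- Created once in the target Oracle schema (the name USERINFO_SEQ is hypothetical).
CREATE SEQUENCE userinfo_seq START WITH 1 INCREMENT BY 1 NOCACHE;

-- The expression typed into the implementation field of the USERID mapping is then simply:
--   USERINFO_SEQ.NEXTVAL
```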

When you try to save the interface at this point you will get errors explaining that you need to define Knowledge Modules.

I need to import or create Knowledge Modules to be used in my interface, and when you check out the ‘Knowledge Module’ guides in the ODI Documentation Library you will notice that many KMs are already available.

We will import the needed LKM, IKM and CKM to use them in our interface, where we are migrating data from Excel to Oracle. Depending on the scenario or use case you’re working on, you may need other KMs, which is explained in the guides (ODI Documentation Library).

Go to the Knowledge Modules node inside Designer, right-click the Loading (LKM) node and choose Import.

In the Import Knowledge Modules-screen you have to browse to the impexp folder which is available in the odi-installation directory.

Choose to import the ‘LKM SQL to Oracle’-Knowledge Module and do the same for the Integration KM (IKM) and choose ‘IKM SQL Incremental Update’.

The Loading Knowledge Module will be used to load the data from our Excel files into our staging area, and the Integration Knowledge Module will be used to integrate this Excel data into our Oracle DB.
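To give a rough idea of what these two KMs do behind the scenes (this is a deliberately simplified sketch with made-up table and column names, not the actual code the KMs generate): the LKM copies the source rows into a C$_ work table in the staging schema, and the incremental-update IKM then updates existing target rows and inserts new ones.

```sql
-- Simplified illustration only; the names SNPW, C$_USERINFO, NAAM and ADRES are hypothetical.

-- LKM step: stage the Excel rows into a C$_ work table in the Oracle work schema.
INSERT INTO snpw.c$_userinfo (naam, adres)
SELECT naam, adres FROM personen;   -- PERSONEN is the reverse-engineered Excel named range

-- IKM step (incremental update, loosely equivalent to a merge into the target):
MERGE INTO userinfo t
USING snpw.c$_userinfo s
ON (t.naam = s.naam)
WHEN MATCHED THEN
  UPDATE SET t.adres = s.adres
WHEN NOT MATCHED THEN
  INSERT (userid, naam, adres)
  VALUES (userinfo_seq.NEXTVAL, s.naam, s.adres);
```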

Open up the interface you’ve defined earlier and open up the ‘Flow’-tab and click on the SSO_0 or the source-datastore. Choose the LKM you’ve defined earlier from the LKM Selection Dropdown in the screen below.

Perform the same tasks for the IKM in the Target datastore, choose the IKM you’ve defined earlier (IKM SQL Incremental Update).

The last Knowledge Module we need to define is the Check Knowledge Module (CKM), which will define how data will be checked and the constraints and rules that must be satisfied before integrating the data.

Go to the Check Knowledge Modules and choose to import ‘CKM Oracle’ from the import wizard.

The CKM Oracle will be used for our checking algorithm. Make sure to open up the interface again, because otherwise the dropdown lists don’t show the newly imported KMs.

In the Controls-tab of your interface you need to select the CKM to be used.

Save your settings and choose to ‘Execute’ your interface … in my case I got the following error: ‘Flow Control not possible if no Key is declared in your Target Datastore’. After a little bit of snooping around I figured out the problem: there was no primary key defined for my target datastore.

If you need to define constraints for your datastores, go to the model and drill down into the specific datastore. Right-click on the Constraints node and choose to define a new constraint, choose the type (‘PK’/‘Unique’/‘Alternate’) and select the column in the Columns tab.
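Note that a key defined this way only lives in the ODI model. If you prefer to declare the primary key on the Oracle table itself and let the reverse-engineering pick it up, a statement along these lines would do it (the table and column come from this example, the constraint name is made up):

```sql
-- Run in the target Oracle schema; the constraint name USERINFO_PK is hypothetical.
ALTER TABLE userinfo ADD CONSTRAINT userinfo_pk PRIMARY KEY (userid);
```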


Navigate back to your interface, choose the second tab, ‘Diagram’, and select your target datastore; in the panel below, pick the primary key you defined earlier in the ‘Update Key’ dropdown.

Choose to Execute the interface again, choose Yes for the default settings to execute the interface.

To follow the result of your execution in the Operator, click on the operator-icon in your menu bar.
The Operator window that subsequently opens (and can be refreshed using the button if the window is already open) details the execution process tasks.

As you can see in the Operator window, no errors are shown and the execution’s status is ‘Done’. You can check the database to verify that the Excel data has successfully been uploaded to the Oracle database.
Up to the next challenge !

Oracle Data Integrator – Administrator Version – Day 3

Today I installed ODI on my own environment; the previous posts were about the development environment on my client machine, on which I don’t have administrator privileges. On my own environment I’ve installed the administrator version of ODI, so I can create my own master and work repository to work with for my migration path.

I’ve followed the ‘ODI Installation Guide’ for these different steps and added some custom comments and actions; you can still use the Installation Guide as a reference.

First I installed the administrator version of ODI, so I’m able to define my own repositories; the second step I performed was the definition of the database schemas for my master and work repository.

I hear you thinking … ‘what is a master and work repository, why do I need this?’ … well, the definition from the ‘ODI Installation Guide’ states the following:

  • Master Repository: Data structure containing information on the topology of the company’s IT resources, on security and on version management of projects and data models. Mostly only one Master repository is needed.
  • Work Repository: Data structure containing information on data models, projects, and their use. Several work repositories can be designated with several master repositories if necessary. However, a work repository can be linked with only one master repository for version management purposes.

In the previous step I defined the two schemas used for the master and work repository (creating the DB schemas); now it’s time to actually create these two repositories.
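For completeness, that schema-creation step boils down to something like the sketch below. The snpm/snpw names follow the convention used earlier on this blog; the passwords and tablespaces are placeholders, and the exact privileges required are listed in the ODI Installation Guide.

```sql
-- Run as a DBA. snpm = master repository schema, snpw = work repository schema.
-- Passwords and tablespaces are placeholders.
CREATE USER snpm IDENTIFIED BY snpm_pwd
  DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp QUOTA UNLIMITED ON users;
CREATE USER snpw IDENTIFIED BY snpw_pwd
  DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp QUOTA UNLIMITED ON users;
GRANT CONNECT, RESOURCE TO snpm;
GRANT CONNECT, RESOURCE TO snpw;
```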

Let’s create the master repository – creation of tables and automatic import of definitions – for the schema we’ve defined, ‘snpm’:

  • Go to Oracle Data Integrator in the Start Menu and select ‘Master Repository Creation’ from the ‘Repository Management’ submenu

  • Define the settings of your database and your master repository schema in the ‘Master Repository Creation Wizard’ as shown below (don’t forget to specify a meaningful ID, not the default 0)

When you click OK the different components – tables, indexes, schemas, … – will be created and imported into the master repository, and you can follow this creation in the log window.
After everything is successfully created, a pop-up window will inform you about the successful creation of the Master Repository.


Afterwards we will connect to our new master repository via the ‘Topology Manager’:


We’ve now successfully created our master repository and connected to it via our Topology Manager.

The next step is to create the work repository. Go to the fifth tab shown below in the Topology Manager to see the existing repositories, right-click on the Work Repositories node and choose ‘Insert Work Repository’.


Define the JDBC driver, username and password for the schema you’ve created for the work repository.


You can test your connection to be sure all the settings were defined correctly, and then click OK.
The next screen is shown to define the specific settings for the work repository: choose a unique ID and define a name for the work repository. Click OK.


To connect and work with our newly created work repository, perform the following:

  • Choose the Designer-submenu in the Oracle Data Integrator menu of your Start Menu
  • Choose to create a ‘New Data Integrator Connection’ by clicking the first icon to the right of the ‘Login name’ dropdown box
  • Enter the different settings needed to connect to your work repository. In the ‘Database Connection’ section you need to specify the connection to the master repository. In the last part you need to specify the name of the work repository, which can be chosen from a list; the name you defined for the work repository will be shown in this screen.

  • Click OK
  • Choose these new settings to Login to the Oracle Data Integrator

Now I can perform the same steps as I’ve discussed in the previous posts to work with my custom-defined master and work repository.

When you’ve got administrator privileges, this is the standard way to go:

  • create the master and work repository schemas in the database
  • create the master and work repository through ODI as described in this post
  • create the different models, interfaces, … to work with these custom-defined repositories.

I’ve done the same steps for my own ‘administrator’ environment as mentioned in the previous posts so I can work on my administrator environment and in the development environment.

In the previous post I created the Microsoft ODBC data source and reverse-engineered my Excel file into a data model in the Designer; now it’s time to start creating interfaces.

Oops … I’ve stumbled upon a list of ‘logical steps to perform when working with ODI’, and the logical step to perform after the definition of the model isn’t defining interfaces … let’s have a closer look:

Managing an Oracle Data Integrator project generally involves the following steps:

  • Creating and reverse-engineering models. (Check!)
  • Creating a project.
  • Using markers (optional).
  • Creating and organizing folders.
  • Importing KMs.
  • Creating and modifying reusable objects: Variables, Sequences, Interfaces, Procedures, User functions.
  • Unit testing interfaces and procedures (back to step 6).
  • Building packages from elements created in step 6.
  • Integration testing the packages.
  • Generating scenarios.
  • Scheduling scenarios.
  • Managing the scenarios in production.
  • Maintenance, bug fixing and further modifications (back to step 6).

So the next step to perform – after we’ve created our data model – is to create a new project; a project is nothing more than a ‘container’ holding a group of objects created in Oracle Data Integrator.

  • Go to the Designer and choose the Projects tab; this is the default tab shown when you open Designer. For our newly created work repository no projects have been created yet, so we can start building from scratch.
  • Choose the first icon ‘Insert Project’
  • Define a name for the new project

After we’ve defined the project, the different objects that can be created in a project are shown, such as variables, knowledge modules, sequences, …

The next step to perform is the creation of ‘folders’ to organize our interfaces, procedures and packages, but in my newly created project a ‘First Folder’ has already been created.
Right-click on this folder, choose ‘Edit’ and rename the folder as you like.

The next logical step would be to import Knowledge Modules of interest to our project, but in my case the default Knowledge Modules were already imported, so no actions need to be performed here.

Finally I’m ready to create interfaces to be able to link our source-data to our target-data, the definition given in the UserGuide is the following: ‘An interface consists of a set of rules that define the loading of a datastore or a temporary target structure from one or more source datastores.’.

I’ll keep you posted !

Oracle Data Integrator – Part 2 – Working with Excel Data

What have I learned today …

Today I took a closer look at using my Excel file as a source model for ODI, for which I’ve used the ODI User Guide and the help documentation in ODI itself.


What do I need to do to get ODI to talk to my Excel file?

  • Create a new ODBC datasource for my Excel files using the Microsoft ODBC Data Source Administrator (run from the Configuration Panel)
  • Make sure you’ve defined a name for the data in the Excel spreadsheet. You can accomplish this by selecting the data in each worksheet (all the rows and columns with data) and defining a name for that selection via the Insert menu. The named range can then be queried like a table, as sketched below.
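As a quick sanity check (the range name PERSONEN is simply the one used later in this series), any ODBC/JDBC client pointed at the Excel data source should be able to run a query like this against the named range:

```sql
-- PERSONEN is the named range defined in the workbook; it behaves like a table
-- when queried through the Microsoft Excel ODBC driver.
SELECT * FROM PERSONEN;
```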

For the configuration of ODI I’ve used several kinds of guidance and help, not only the User Guide. To define my ODBC data source in ODI I’ve followed the topic ‘Create a Microsoft Excel Data Server’ (in the help documentation of ODI). This topic takes a more detailed approach than the topic ‘Creating a Data Server’ explained in the User Guide.

The steps I’ve performed:

  • Connect to the Topology Manager (first open up the Designer from your Oracle Data Integrator menu)

  • Select Topology > Physical Architecture > Technologies > Microsoft Excel in the tree

  • Right click and select Insert Data Server
  • Fill in the necessary fields in the Definition and JDBC tabs, such as the name, the JDBC driver (sun.jdbc.odbc.JdbcOdbcDriver) and the JDBC URL (jdbc:odbc:<dsn>, where <dsn> is the name you defined for your ODBC data source)

  • You can test whether the connection is successful by clicking the Test button in your Data Server properties screen.

The next step we need to perform is to create the physical schema for our Excel file to be able to map this ‘source’ schema to the ‘target’ schema.

  • Right-click on your newly created Data Server and choose to insert a new physical schema:
  • Leave the defaults as is for the physical schema and choose the right context in the Context tab, which means you need to specify the logical schema you want to use. Specify a name and click Apply and OK.

  • Let’s reverse-engineer the Excel file into a new model. Go to the Designer and choose to insert a new model. In the technology dropdown choose ‘Microsoft Excel’ and choose your logical schema.
  • In the reverse-tab make sure to select the context you want to define the model in.
  • In the ‘Selective Reverse’ tab check all checkboxes, and the data you’ve defined names for in the spreadsheet will be listed. Click OK to reverse-engineer the defined data into your model.
  • In the model tree you can see the different tables that were created inside the model, depending on the data you’ve defined in the Excel sheet.

  • In the different datastores that were created (depending on the data you’ve defined in the spreadsheet) you can check the columns that were defined in your spreadsheet.

The next step is to start creating Interfaces to define the transformation that’s needed from our source-model to our target-model.

Stay Tuned

Oracle Data Integrator

For my project, one of the tasks I need to accomplish is to work out the migration path from an Oracle DB and Excel files to a new Oracle DB.
I need to design a new application in ADF with a process-driven data model to replace the existing ‘application’, which is a set of Excel files with a lot of pivot tables, redundancy, …
The first application, before the Excel files, was written in Oracle Forms, but isn’t used anymore.

One of the challenges I’m faced with now is the need to migrate the old Oracle schema and the Excel files to my new Oracle schema. The data model has changed a lot compared to the previous one, because it is now set up in a process-driven manner.

The past weeks we’ve analysed the existing data and the different strategies we can follow for this migration path. The migration has to be done only once, for all the data that’s stored in the old schema and in the Excel files, and the best way to do it is … well, that’s the question.

I’ve been reading a lot on technet, on different blogs, in whitepapers, … about the different possibilities there are; the ones I came up with:

  • external tables
  • sqlloader
  • Oracle Migration Workbench (SQLDeveloper plug-in)
  • Oracle Data Integrator
  • Open Source ETL Tools

After taking a look at the strengths and weaknesses of every technology/possibility, I’ve chosen to have a closer look at ODI (Oracle Data Integrator).

Some of the features I’m really interested in and which are very useful for me are the following:

  • Transformation between source and target
      My source system, the Excel files, has a different data structure than my target database. To be able to map the source data to the target data I will need a mapping tool such as ODI to do this mapping in a declarative way.
      E.g. in the Excel file the address of a person is stored in one cell, and this needs to be mapped to separate fields in my target data source (see the sketch after this list).

  • Ability to check the data-flows (via Data Flow Control) between the source- and target-destination and have error-handling and logging for each needed step during the process
    • Data flow control consists of checking the source data of an interface compared to the consistency level required on the target. Data is checked before the data is integrated in the target.
      Detection of an error can give rise to either non-integration of all the data (even correct), or integration of only the correct data. In any case, it is useful to isolate and/or recycle the erroneous data (after any necessary correction).

  • Use existing Knowledge Modules which hold the necessary tasks to extract and load data according to the pre-defined steps in this Module
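For the address example above, the ‘declarative’ part is just a SQL expression on the source column in the interface mapping. A rough sketch, with made-up column names (ADRES holding e.g. ‘Main Street 12’, split into STRAAT and HUISNR) and executed on the Oracle side:

```sql
-- Hypothetical columns: PERSONEN.ADRES contains the full address in one cell.
-- INSTR(..., ' ', -1) finds the last space, so everything before it is the street
-- and everything after it is the house number; these two expressions are what
-- would go into the ODI mapping implementation fields.
SELECT TRIM(SUBSTR(adres, 1, INSTR(adres, ' ', -1) - 1)) AS straat,
       TRIM(SUBSTR(adres, INSTR(adres, ' ', -1) + 1))    AS huisnr
FROM   personen;
```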
The next important question is … how to get started?

Firstly I installed ODI after reading the installation guide, and then I took a tour of the ‘getting started guide’ and the demo material that’s included in the ODI install.

After reading through the ODI User’s Guide I noted down the following important decision points:

  • I need a ‘One Shot Interface’ for my migration path because I only need to perform this migration once, before going to production
  • I need an LKM (Loading Knowledge Module) to extract data from the source database and my Excel files
  • When designing interfaces in Oracle Data Integrator, these interfaces contain several phases, including data loading, data check, data integration, etc. For each of these phases, you will define:
    - The functional rules (mappings, constraints, etc.) for this phase
    - The Knowledge Module to be used for this phase.

Secondly I read the threads on OTN to get an idea of the community that’s using this tool and the people I can ask questions to ;o)

Thirdly it’s time to get started …

I will keep you posted on the progress and the ODI experience … so stay tuned!