Java & Memory in an iAS environment

Everyone who has worked with Java applications has probably heard of the OutOfMemoryError. As the name says, it indicates that the Java application has reached the limit of available memory and is trying to go beyond that limit, with the OutOfMemoryError as a logical consequence. But why does this error occur? Your 8-CPU, 20GB server has enough resources to serve 50 of these applications simultaneously, and still the error pops up now and then, crashing your entire application.

Before you add another set of memory modules, let's take a quick look at the JVM's internal memory management.

Automatic Memory Management
Java uses automatic instead of explicit memory management. This means that the developer is only responsible for the allocation of memory (Object x = new Object();), while the JVM, or more specifically the garbage collector, automatically cleans up unused objects.

So is it impossible to have memory leaks in my application? Well, not quite. Leaks are not as obvious as in programming languages with explicit memory management, but you still have to release your allocated resources properly (e.g. DB connections, streams, …).
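
To give a minimal sketch of what that means (hypothetical connection details, plain JDBC): the garbage collector will reclaim the Java object, but it will not close the database session behind it, so the cleanup has to happen explicitly:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ResourceCleanup {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection details, purely for illustration
        Connection con = DriverManager.getConnection(
            "jdbc:oracle:thin:@localhost:1521:ORCL", "scott", "tiger");
        try {
            // ... work with the connection ...
        } finally {
            // Without this, the underlying database session stays open until the
            // server times it out; the garbage collector will not do it for you.
            con.close();
        }
    }
}

Forgetting the finally block is exactly the kind of 'leak' that survives automatic memory management.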

The Heap
The memory allocated by the JVM is called the heap. The heap itself is divided into three areas, also called 'generations':

  • young generation
  • old generation
  • permanent generation

Most objects are initially allocated in the young generation. The old generation contains objects that have survived some number of young generation collections, as well as some large objects that may be allocated directly in the old generation.
The permanent generation holds objects that the JVM finds convenient to have the garbage collector manage, such as objects describing classes and methods, as well as the classes and methods themselves.

The young generation consists of an area called Eden plus two smaller survivor spaces.
Most objects are initially allocated in Eden. (As mentioned, a few large objects may be allocated directly in the old generation.)
The survivor spaces hold objects that have survived at least one young generation collection and have thus been given additional chances to die before being considered “old enough” to be promoted to the old generation. At any given time, one of the survivor spaces holds such objects,
while the other is empty and remains unused until the next collection.

Now that we know that the young generation contains short-lived objects and the permanent generation contains (very) long-lived objects,
it is easier to pinpoint OutOfMemoryError problems.
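
If you want to see the generations in action, you can enable GC logging with the standard HotSpot flags (assuming a Sun HotSpot JVM, which is what iAS uses):

java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps

Each log line then shows whether a collection was a minor (young generation) or full collection, and how full each generation was before and after it.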

iAS & memory
When OutOfMemory problems occur in your newly deployed and bug-free application :-), you can use the Oracle Enterprise Manager (OEM) web interface
to get a first impression of the memory usage of the application.

In the system components view (the view where all the OC4J instances are listed) you can see the memory usage in MB per OC4J.
The view also shows the total amount of memory in the server as a pie chart at the top of the screen. The pie chart is divided into three parts:

  • Application Server Memory Usage
  • Free Memory
  • Other Memory Usage

Be very careful when interpreting the pie chart. Unix and Linux keep otherwise free memory allocated as buffer cache. When an application needs extra memory,
the OS can release part of that buffer cache and hand it to the application.
Because of this buffer cache, the pie chart will typically show that almost all of the system memory is allocated under 'Other Memory Usage'.

On the other hand, there are the log files where OutOfMemory errors are logged.

  • application.log located in the %ORACLE_HOME%/j2ee/%OC4J%/application-deployments/%APPLICATION% directory.
  • Standard output log named OC4J~{OC4J instance}~{island}~{jvm process}.log (e.g. OC4J~OC4J_DEMOS~default_island~1.log) located in the %ORACLE_HOME%/opmn/logs directory

Be sure to check the log entries just before the OutOfMemoryError for more details.

Solving the memory problem
The simplest way to solve OutOfMemory errors is to increase the size of the heap. To do this, you can add command line parameters to the java command.

java -Xms256m -Xmx512m

Where -Xms defines the minimum and -Xmx the maximum amount of memory allocated by the JVM.

In the application server you can change this via OC4J instance -> Server Properties -> Command Line Options -> Java Options field.
Note that these options affect that specific OC4J instance on every node in the cluster.

Sometimes this does not resolve the problem: in some cases an OutOfMemory error occurs while there is still plenty of memory free for the JVM.
Check the standard output log files for the following message.

Permanent generation is full…
increase MaxPermSize (current capacity is set to: 67108864 bytes)

As we have seen before, the permanent generation contains long-lived objects. By default, the JVM allocates 64MB to the permanent generation.
For most applications this will suffice. The size of the permanent generation must be set separately from the heap size. To do this, use the command line:

java -XX:PermSize=64m -XX:MaxPermSize=256m

Where -XX:PermSize=xxx defines the minimum and -XX:MaxPermSize=xxx the maximum amount of memory allocated for the permanent generation.

Always make sure that the -server option is added to your Java options. This option selects the HotSpot server VM, which configures your JVM to run as optimally as possible in a server environment.

If these settings do not work, you should check your application for memory leaks. You can use a profiler to detect these leaks… but that will be explained in a future post.

References
Memory Management in the Java HotSpot Virtual Machine (pdf)
Oracle® Application Server 10g Performance Guide

Export ADF Table To Excel

Recently a colleague of mine wrote a blog post about exporting data shown in an af:table component to Excel using the Apache POI API.

Some time ago I accomplished the same behaviour without using any additional API or framework, because I wanted to stay 'independent' and not use another framework on top of ADF. So I did some research on OTN and finally figured out we could accomplish the same behaviour by using the HttpServletResponse to stream the data as Excel to the browser.

How did I accomplish this?


I've created a backing bean for the jspx page to be able to add my own business logic in a Java class. In the jspx document I've added the actionListener attribute to my component, and this attribute is bound to a method in my backing bean.
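
As a sketch of that binding (the bean name backing_myPage is hypothetical; it depends on your faces-config.xml), the component in the jspx could look like this:

<af:commandButton text="Export to Excel"
                  actionListener="#{backing_myPage.exportToExcel}"/>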

In my backing bean I'm delegating the action to my Java class that holds the generic business logic. This method accepts the DCIteratorBinding as a parameter to be able to browse the table data for my Excel file.

public void exportToExcel(ActionEvent actionEvent) throws IOException {
    ExportToExcel.exportHtmlTableToExcel((DCIteratorBinding)bindings.get("findAllIter"));
}
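
A side note: the bindings variable in this snippet is the page's binding container. Assuming the usual ADF 10.1.3 setup, it could be obtained as follows:

// Assumption: the usual way to reach the current page's binding container
// (oracle.adf.model.BindingContext, oracle.adf.model.binding.DCBindingContainer)
DCBindingContainer bindings =
    (DCBindingContainer)BindingContext.getCurrent().getCurrentBindingsEntry();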

The Java Class that actually performs the transformation to Excel uses the HttpServletResponse to output the data as Excel.

import java.io.IOException;
import java.io.PrintWriter;

import javax.faces.context.FacesContext;
import javax.servlet.http.HttpServletResponse;

import oracle.adf.model.binding.DCIteratorBinding;
import oracle.jbo.Row;
import oracle.jbo.RowSetIterator;

import org.joda.time.DateTime;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;

public static void exportHtmlTableToExcel(DCIteratorBinding tableContent) throws IOException {

    // Set the filename based on the current timestamp (Joda-Time)
    DateTime dt = new DateTime();
    DateTimeFormatter fmt = DateTimeFormat.forPattern("yyyy-MM-dd_HHmmss");
    String filename = dt.toString(fmt) + ".csv";

    // Set up the output: serve the response as an Excel attachment
    String contentType = "application/vnd.ms-excel";
    FacesContext fc = FacesContext.getCurrentInstance();
    HttpServletResponse response = (HttpServletResponse)fc.getExternalContext().getResponse();
    response.setHeader("Content-disposition", "attachment; filename=" + filename);
    response.setContentType(contentType);
    PrintWriter out = response.getWriter();

    RowSetIterator rsi = tableContent.getRowSetIterator();

    // Header row: the attribute names, separated by semicolons
    String[] attNames = rsi.getRowAtRangeIndex(0).getAttributeNames();
    for (int i = 0; i < attNames.length; i++) {
        out.print(attNames[i] + ";");
    }
    out.println();

    // Data rows: one line per row in the iterator
    for (int i = 0; i < rsi.getRowCount(); i++) {
        Row currentRow = rsi.getRowAtRangeIndex(i);
        Object[] attValues = currentRow.getAttributeValues();
        for (int j = 0; j < attValues.length; j++) {
            out.print(attValues[j] + ";");
        }
        out.println();
    }
    out.close();

    // Tell JSF the response is complete, so the framework skips normal page rendering
    fc.responseComplete();
}

Deployment to WebCenter Pre-configured OC4J

Ever wondered how to deploy your first portletized ADF application to the 'WebCenter preconfigured OC4J'? It's not documented in the Developer Guide and you can't find it on technet … so what should we do …

Read further …

When you've created your ADF application and you've portletized it to be deployed to the 'WebCenter preconfigured OC4J', you need to define the connection to your container (the WebCenter pre-configured one).

This isn't as easy as I expected, because the ports aren't listed in the Enterprise Manager Console of the WebCenter pre-configured container. These settings aren't documented in the Developer Guide or on technet either, so after a lot of searching and trying things out we finally figured it out.

There are two ways to get the connection settings used by the 'WebCenter pre-configured OC4J':

  • Check out the log window when shutting down the container in JDeveloper; the following information is shown:

C:\jdev10132\jdev\extensions\oracle.adfp.seededoc4j.10.1.3.2.0 -shutdown -port 22667 -password welcome
Shutdown OC4J instance…
Executing: C:\jdev10132\jdk\jre/..\bin\java -jar "C:\jdev10132\j2ee\home\admin.jar" ormi://127.0.0.1:22667 oc4jadmin welcome -shutdown

  • Check out the rmi.xml file, which can be found in the following location:

/jdev/extensions/oracle.adfp.seededoc4j.10.1.3.2.0/j2ee/home/config/rmi.xml
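
In that file, the RMI port is defined as an attribute on the root element. As a rough sketch of what to look for (the other attributes are omitted here, and the port value will differ per install; this one matches the 22667 seen in the shutdown log above):

<rmi-server port="22667">
   ...
</rmi-server>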

Oracle Data Integrator – Part 2 – Working with Excel Data

What have I learned today …

Today I took a closer look at declaring and using my Excel file as a source model in ODI, for which I've used the ODI User Guide and the help documentation in ODI itself.


What do I need to do to get ODI to talk to my Excel file?

  • Create a new ODBC data source for my Excel files using the Microsoft ODBC Data Source Administrator (run from the Control Panel)
  • Make sure you've defined a name for the data in the Excel spreadsheet: mark the data in a worksheet (select all the rows and columns with data) and insert a name for that selection via the Insert menu. Do this for every worksheet you want to expose.

For the configuration of ODI I've used other guidance than just the User Guide. To define my ODBC data source in ODI I've followed the topic 'Create a Microsoft Excel Data Server' (from the help documentation of ODI). This topic takes a more detailed approach than the topic 'Creating a Data Server' explained in the User Guide.

The steps I’ve performed:

  • Connect to the Topology Manager (you can open it from your Oracle Data Integrator menu)

  • Select Topology > Physical Architecture > Technologies > Microsoft Excel in the tree

  • Right click and select Insert Data Server
  • Fill in the necessary fields in the Definition and JDBC tabs, such as the name, the JDBC driver (sun.jdbc.odbc.JdbcOdbcDriver) and the JDBC URL (jdbc:odbc:<name>, where <name> is the name you defined for your ODBC data source)

  • You can test whether the connection is successful by clicking the Test button in your Data Server properties screen.
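
If you want a quick sanity check of the ODBC data source outside ODI, a minimal sketch could look like this (the DSN name ExcelSource and the named range PERSONS are hypothetical; adjust them to your setup):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExcelOdbcCheck {
    public static void main(String[] args) throws Exception {
        // Same JDBC-ODBC bridge driver that ODI uses for Excel sources
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        Connection con = DriverManager.getConnection("jdbc:odbc:ExcelSource");
        Statement stmt = con.createStatement();
        // A named range in the workbook can be queried like a table
        ResultSet rs = stmt.executeQuery("SELECT * FROM PERSONS");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        rs.close();
        stmt.close();
        con.close();
    }
}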

The next step we need to perform is to create the physical schema for our Excel file to be able to map this ‘source’ schema to the ‘target’ schema.

  • Right-click on your newly created Data Server and choose to insert a new physical schema:
  • Leave the defaults as-is for the physical schema and choose the right context in the Context tab, which means you need to specify the logical schema you want to use. Specify a name and click Apply and OK.

  • Let’s reverse engineer the Excel file into a new model. Go to the designer and choose to insert a new model. In the technology-dropdown choose ‘Microsoft Excel’ and choose your Logical Schema.
  • In the reverse-tab make sure to select the context you want to define the model in.
  • In the 'Selective Reverse' tab check all checkboxes, and the data you've defined names for in the spreadsheet will be listed. Click OK to reverse-engineer the defined data into your model.
  • In the model tree you can see the different tables that were created inside the model, depending on the data you've defined in the Excel sheet.

  • In the different DataStores that were created (depending on the data you've defined in the spreadsheet) you can check the columns that were defined in your spreadsheet.

The next step is to start creating Interfaces to define the transformation that’s needed from our source-model to our target-model.

Stay Tuned

Oracle Data Integrator

For my project, one of the tasks I need to accomplish is to work out the migration path from an Oracle DB and Excel files to a new Oracle DB.
I need to design a new application in ADF with a process-driven data model to replace the existing 'application', which consists of Excel files with a lot of pivot tables, redundancy, …
The first application, before the Excel files, was written in Oracle Forms, but it isn't used anymore.

One of the challenges I'm faced with now is the need to migrate the old Oracle schema and the Excel files to my new Oracle schema. The data model has changed a lot compared to the previous one, because it is set up in a process-driven manner.

The past weeks we've analysed the existing data and the different strategies we can follow for this migration path. The migration has to be done only once, for all the data that's stored in the old schema and in the Excel files, and the best way to do it … well, that's exactly the question.

I’ve been reading a lot on technet, on different blogs, in whitepapers, … about the different possibilities there are, the ones I came up with:

  • external tables
  • SQL*Loader
  • Oracle Migration Workbench (SQL Developer plug-in)
  • Oracle Data Integrator
  • open-source ETL tools

After taking a look at the strengths and weaknesses of every technology/possibility, I've chosen to have a closer look at ODI (Oracle Data Integrator).

Some of the features I’m really interested in and which are very useful for me are the following:

  • Transformation between source and target
      My source system, the Excel files, has a different data structure than my target database. To be able to map the source data to the target data I will need a mapping tool such as ODI to do this mapping in a declarative way.
      E.g. in the Excel file the address of a person is stored in one cell; this needs to be mapped to different fields in my target data source (see the sketch after this list).

  • Ability to check the data flows (via Data Flow Control) between the source and target destinations, and have error handling and logging for each step in the process
    • Data flow control consists of checking the source data of an interface against the consistency level required on the target. Data is checked before it is integrated in the target.
      Detection of an error can give rise either to non-integration of all the data (even the correct data), or to integration of only the correct data. In any case, it is useful to isolate and/or recycle the erroneous data (after any necessary correction).

  • Use existing Knowledge Modules, which hold the necessary tasks to extract and load data according to the pre-defined steps in such a module
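
To make the address example above concrete, assume the source cell holds 'street, city' in a single column called ADDRESS (all names here are hypothetical). The mapping expressions typed into the target columns could then be as simple as:

    STREET: TRIM(SUBSTR(SRC.ADDRESS, 1, INSTR(SRC.ADDRESS, ',') - 1))
    CITY:   TRIM(SUBSTR(SRC.ADDRESS, INSTR(SRC.ADDRESS, ',') + 1))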
The following important question is … how to get started?

    Firstly I’ve installed ODI after reading the installation guide and then I’ve taken a tour of the ‘getting started guide’ and the demo material that’s included in the ODI install.

    After reading through the ODI User’s guide I’ve noted down the following important decision points:

    • I need a ‘One Shot Interface’ for my migration path because I only need to perform this migration one time, before going to production
    • I need an LKM (Loading Knowledge Module) to extract data from the source database and my Excel files
    • When designing Interfaces in Oracle Data Integrator, these Interfaces contain several phases, including data loading, data check, data integration, etc. For each of these phases, you will define:
      - The functional rules (mappings, constraints, etc.) for this phase
      - The Knowledge Module to be used for this phase.

    Secondly I've read the threads on OTN to get an idea of the community that's using this tool and the people I can ask questions to ;o)

    Thirdly it’s time to get started …

    I will keep you posted on the progress and the ODI-experience … so stay tuned !

Apex: integration of new Google APIs

Google has released two new features on its AJAX Search API:

  • Blog Bar
  • News Bar

Both bars allow you to incorporate dynamic content into the layout of your site.

I have implemented both features in ApEx successfully. In this blog you can read a step-by-step guide explaining how to integrate the Blog Bar into your ApEx application.

The Blog Bar is offered in horizontal and vertical styles. You can let your users see specific Blog Search results, or you can select only blogs specified by yourself.

The first thing you need to do is to register with Google.
Then you can go to the Google Blog Bar API page and run the wizard to get the right example code and key. Save the example code and the key so you can use them later.

Now we can implement it into your ApEx page:

Paste your CSS and JavaScript code into the HTML header of your ApEx page, but leave the next code out of it:

<div id="blogBar-bar">
<span style="margin:10px;padding:4px;">Loading…</span>
</div>

Now create an empty HTML region and paste the code you left out of your header into it.

Run your page:

The following example shows a horizontal style (only 1 line):

A working example can be found here.

The following example shows a vertical style:

A working example can be found here.

More info about customizing the Google Blog Bar API can be found here.