New in Java 8 : Consumers and Predicates : a simple introduction

The java.util.function package is new in Java 8 and contains interfaces that are used for lambda expressions and method references. In this blog, I give a brief introduction to 2 interfaces of this package :

  • Consumer
  • Predicate

For the examples in this blog, we have a list of invoices with name, amount and category :

public class Invoice {
   private String name,amount,category;
   public Invoice(String name,String amount,String category) {
     super();
     this.name=name;
     this.amount=amount;
     this.category=category;
   }
   public String getName() {
     return this.name;
   }
   public String getAmount() {
     return this.amount;
   }
   public String getCategory() {
     return this.category;
   }
}

To generate a list of invoices, we’ll use the following method:

public static List<Invoice> generateInvoices()  {
   List<Invoice> list = new ArrayList<Invoice>();
   list.add(new Invoice("Oracle","1000","SOFTWARE"));
   list.add(new Invoice("Microsoft","3000","HARDWARE"));
   list.add(new Invoice("Apple","5000","SOFTWARE"));
   return list;
}

Consumer

A Consumer is an interface that ‘consumes’ an object. It takes an argument and does something with it. It does not return a result.

The Consumer interface has 2 methods :

  • void accept(T t) : contains the code that is executed on t
  • default Consumer<T> andThen(Consumer<? super T> after) : This method returns a consumer that is executed after the previous one and enables you to execute a chain of consumers.

For this demo, we are using the forEach method, new in Java 8, which every Collection inherits from the Iterable interface :

Collection.forEach(Consumer<? super T> action)

This method executes the consumer ‘action’ on every item of the collection.

First we create 2 methods that each return a Consumer object. The first will print the name of the invoice, the second prints the amount.

Finally we use these 2 methods in a forEach call on our List.


public static Consumer<Invoice> printName() {
    return new Consumer<Invoice>() {
         public void accept(Invoice invoice) {
           System.out.println(invoice.getName());
         }
    };
}

public static Consumer<Invoice> printAmount() {
    return new Consumer<Invoice>() {
         public void accept(Invoice invoice) {
           System.out.println(invoice.getAmount());
         }
    };
}

generateInvoices().forEach(printName().andThen(printAmount()));

As you can see in the last line, first printName() is executed, and then printAmount(). This line will print the following :
Oracle
1000
Microsoft
3000
Apple
5000

When an error occurs in the forEach method, an exception is thrown, and further processing of the List stops.
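For completeness, the same chain can also be written with lambda expressions instead of anonymous classes. A minimal sketch, reusing the Invoice class and the generateInvoices() method from above :

Consumer<Invoice> printName = invoice -> System.out.println(invoice.getName());
Consumer<Invoice> printAmount = invoice -> System.out.println(invoice.getAmount());

// andThen() chains the 2 consumers : for every invoice, first the name is printed, then the amount.
generateInvoices().forEach(printName.andThen(printAmount));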

Predicate

A Predicate is a functional interface that can be the target of a lambda expression. It has one abstract method :

boolean test(T t)

Predicates are used in stream operations. Stream operations can be executed on Collections in order to execute complex data processing queries. But in this blog we’ll keep it simple: we just want to select all the invoices with category ‘HARDWARE’ and put them in a new List.

Using a predicate in combination with the new Streams API will simplify and shorten our code and make it more readable.

First we define our predicate, and then we’ll use it on our List of invoices. The stream will be filtered using our predicate, and the items that fulfill the predicate are collected in a new List.

public static Predicate<Invoice> isHardware() {
     return i -> i.getCategory().equals("HARDWARE");
}

List<Invoice> listHardware = generateInvoices().stream().filter(isHardware()).collect(Collectors.<Invoice>toList());

Our new list will now contain 1 invoice, the one from Microsoft which has ‘HARDWARE’ as category.
As you can see, a Predicate wraps a function that we can pass to other classes. Actually it is just a reference to a function, aka a ‘function reference’.
With Streams, you can also sort and map data before collecting, but that’s for another blog.
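The Predicate interface also has the default methods and(), or() and negate(), so predicates can be combined before they are passed to filter(). A small sketch, reusing isHardware() from above (the isMicrosoft predicate is just for illustration) :

// all invoices that are NOT hardware
List<Invoice> listOther = generateInvoices().stream()
     .filter(isHardware().negate())
     .collect(Collectors.toList());

// hardware invoices from Microsoft only : 2 predicates combined with and()
Predicate<Invoice> isMicrosoft = i -> i.getName().equals("Microsoft");
List<Invoice> listMicrosoftHardware = generateInvoices().stream()
     .filter(isHardware().and(isMicrosoft))
     .collect(Collectors.toList());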

So that’s it for now. I hope this blog has shown that, by using Consumers and Predicates, our code will become shorter, cleaner and more readable.

 

New in Java 8 : Default and static methods in interfaces

Default methods (aka defender methods) in interfaces are new in Java 8. They enable you to define a default implementation of a method in the interface itself.

If an interface is implemented by several classes, it’s hard to add methods afterwards, as it will break the code and require all implementing classes to define the new method as well. Adding a method to the interface, and defining a default implementation for it, solves this problem.

Here’s a code example :


public interface Draw {

   public void drawCircle();

   default public void drawRectangle() {
      System.out.println("draw a rectangle");
   }

}

Implementing classes that have not defined the drawRectangle() method will print “draw a rectangle” when drawRectangle() is executed on them.
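For example, a minimal implementing class that only defines drawCircle() (a sketch) :

public class Circle implements Draw {
   public void drawCircle() {
      System.out.println("draw a circle");
   }
}

// drawRectangle() is not defined in Circle, so the default implementation
// from the Draw interface is used and "draw a rectangle" is printed :
new Circle().drawRectangle();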

Interfaces that extend this interface can do one of three things (see the sketch after this list) :

  • define nothing, in which case the method will be inherited
  • declare the default method again with no implementation, which will make it abstract
  • redefine the default method so it gets overridden
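As a sketch, these three options could look like this (the interface names are just for illustration) :

// option 1 : define nothing, drawRectangle() is simply inherited
public interface Draw2D extends Draw {
}

// option 2 : declare the method again without a body, it becomes abstract again
// and implementing classes must provide it themselves
public interface DrawAbstract extends Draw {
   void drawRectangle();
}

// option 3 : redefine the default method, overriding the inherited implementation
public interface DrawFancy extends Draw {
   default void drawRectangle() {
      System.out.println("draw a fancy rectangle");
   }
}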

These default methods were added to Java in order to be able to implement the new Streams API: the Collection interface had to be updated with the stream() and parallelStream() methods. Without default methods, every class that implements the Collection interface would have had to be updated as well.

Static methods

Also new in Java 8 is the use of static methods in an interface.

So now, drawRectangle() could also be defined as a static method, but that would give the impression that it is a utility or helper method and not part of the essential core interface. So in that case, it’s better to go for the default method.
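For illustration, a static method on the Draw interface could look like this (a sketch; the isValidSize method is hypothetical and not part of the example above) :

public interface Draw {

   public void drawCircle();

   default public void drawRectangle() {
      System.out.println("draw a rectangle");
   }

   // static interface method : called on the interface itself, not on an instance
   static boolean isValidSize(int width, int height) {
      return width > 0 && height > 0;
   }
}

// usage : Draw.isValidSize(10, 20);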

You could argue that an abstract class would have done the job as well. But as Java has single inheritance, that choice would narrow down our design possibilities. And as the poster above your bed shouts every day : ‘Favor composition over inheritance!’, right? So we want to avoid inheritance anyway.

So what will happen if you try to implement 2 interfaces with the same default methods ? Well, you will get the following compile time error :

Duplicate default methods named [methodname] with the parameters () and () are inherited from the types [interface1] and [interface2]

To resolve this error, override the method in your class and explicitly choose the implementation of one of the interfaces :

interface Draw{
   default void circle() {
     System.out.println("draw circle");
   }
}
interface Print{
   default void circle() {
     System.out.println("print circle");
   }
}

class MyClass implements Draw, Print {
   @Override
   public void circle() {
     Draw.super.circle();
   }
}

That’s it, a quick overview of this new feature in Java 8.

Introduction to Websockets and JSON-P API in JEE7

Websockets (JSR 356) and the JSON-Processing API (JSR 353) are both introduced in the JEE7 specification. Together with JavaScript and HTML5, they enable web applications to deliver a richer user experience.

Websockets allow bidirectional, full-duplex communication over TCP between your server and different kinds of clients (browsers, JavaFX…). It’s basically a push technology where, for example, events or data originating from the server or a client can be pushed to all the other connected clients.

In our demo, JSON strings are sent between client and server, so that’s where the JSON Processing API comes in. It’s a portable API that allows you to parse, generate, transform and query JSON by using the streaming or model API. But you could also send XML or any other proprietary format.

Server-side components

  1. A Java class annotated with
    @ServerEndpoint(value="/endpoint", decoders=EncodeDecode.class, encoders=EncodeDecode.class)
    with the following method annotations :
    @OnOpen : when a connection is opened
    @OnMessage : when a message comes in
    @OnClose : when a connection is closed
  2. A Java class that encodes/decodes the messages from/to JSON and a Java object. (That's where the JSON-P API comes in.)

Client-side component

An HTML file that contains JavaScript to communicate with our server endpoint. Communication is done through a WebSocket object, declared as follows :

connection = new WebSocket('ws://localhost:8080/mywebsocket/endpoint');

This will trigger the @OnOpen method of our server-side endpoint.

connection.onmessage : fired when a message comes in

connection.send : will trigger the @OnMessage annotated method of our endpoint

connection.close : will trigger the @OnClose annotated method of our endpoint

Demo

The demo is a screen that sends messages to all the connected clients, including itself. When a client opens a connection on the server, its session is added to a list of active sessions. When a client sends a message to the server, it is distributed to all the sessions in the list. When the client closes its browser tab or window, its session is removed from the list. The data that we send can be any complex JSON or XML model. To keep it simple, we just send a simple string.

This application needs to be deployed on a JEE7 compliant server. So at this moment (May 2014) it will only run on Glassfish 4.0 or WildFly 8.

The war file can be found here. After deployment, open the URL (for Glassfish) http://localhost:8080/mywebsocket/socket.html.

 The Code

Java endpoint

package be.iadvise.mywebsocket;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import javax.websocket.EncodeException;
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint(value="/endpoint", decoders=EncodeDecode.class, encoders=EncodeDecode.class)
public class MyEndPoint {

   // contains list of active sessions
   private static List<Session> sessions = Collections.synchronizedList(new ArrayList<Session>());

   @OnOpen
   public void onOpen(Session s) {
      sessions.add(s);
      System.out.println("Open session : no of sessions = " + sessions.size());
   }

   @OnMessage
   public void onMessage(MyMessage msg, Session s) throws IOException, EncodeException {
      for (Session session : sessions) { // loop over active sessions and send the message.
         session.getBasicRemote().sendObject(msg);
      }
   }

   @OnClose
   public void onClose(Session s) {
      sessions.remove(s); // remove session from the active session list.
   }
}

Java Decode/Encode message

package be.iadvise.mywebsocket;

import java.io.Reader;
import java.io.StringReader;
import java.io.StringWriter;

import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonReader;
import javax.json.stream.JsonGenerator;
import javax.websocket.DecodeException;
import javax.websocket.Decoder;
import javax.websocket.Encoder;
import javax.websocket.EndpointConfig;

/**
 * This class will encode/decode the messages from/to the client.
 * Decoder : from client to server -> converts the JSON to MyMessage object
 * Encoder : from server to client -> converts MyMessage object to JSON
 *
 * We are using JSON, but you can use XML or any other format.
 */
public class EncodeDecode implements Decoder.Text<MyMessage>, Encoder.Text<MyMessage> {

   @Override
   public MyMessage decode(String txt) throws DecodeException {
      Reader reader = new StringReader(txt);
      JsonReader jsonReader = Json.createReader(reader);
      JsonObject object = jsonReader.readObject();
      String text = object.getJsonString("text").getString();
      return new MyMessage(text);
   }

   // Check if decode is possible. If not, return false
   @Override
   public boolean willDecode(String s) {
      System.out.println("Will decode asked for " + s);
      return true;
   }

   @Override
   public void init(EndpointConfig config) {
      System.out.println("init called on chatdecoder");
   }

   @Override
   public void destroy() {
      System.out.println("destroy called on chatdecoder");
   }

   @Override
   public String encode(MyMessage object) {
      System.out.println("I have to encode " + object);
      StringWriter sw = new StringWriter();
      JsonGenerator generator = Json.createGenerator(sw);
      generator.writeStartObject();
      generator.write("text", object.getText());
      generator.writeEnd();
      generator.flush();
      String answer = sw.toString();
      System.out.println("I encoded an object: " + answer);
      return answer;
   }
}

Java message


package be.iadvise.mywebsocket;

public class MyMessage {

   private String text;

   public MyMessage(String text) {
      super();
      this.text = text;
   }

   public String getText() {
      return text;
   }

   public void setText(String text) {
      this.text = text;
   }

   @Override
   public String toString() {
      return "MyMessage [text=" + text + "]";
   }
}

The html file


<html>
<head>
<script language="javascript">
   var connection;
   var me;

   function openSocket() {
      connection = new WebSocket('ws://localhost:8080/mywebsocket/endpoint');
      connection.onmessage = function(evt) {
         var x = JSON.parse(evt.data);
         mytext = x.text;
         var chld = document.createElement("p");
         chld.innerHTML = mytext;
         var messages = document.getElementById("messages");
         messages.appendChild(chld);
      }
   }

   function talk() {
      var txt = document.getElementById("msg").value;
      var message = {
         'text': txt
      };
      connection.send(JSON.stringify(message));
   }

   function closeSocket() {
      alert('closing socket')
      connection.onclose = function () {}; // disable onclose handler first
      connection.close();
   }
</script>

<script type="text/javascript">
   if (window.addEventListener) { // all browsers except IE before version 9
      window.addEventListener ("beforeunload", closeSocket, false);
   }
   else {
      if (window.attachEvent) { // IE before version 9
         window.attachEvent ("onbeforeunload", closeSocket);
      }
   }
</script>
</head>
<body onLoad="openSocket();">
<p>
SimpleWebSocket
</p>
<!-- <table id="chatbox" style="display:none"> -->
<table id="chatbox">
   <tr><th width="400">messages</th></tr>
   <tr>
      <td width="400" id="messages">
      </td>
   </tr>
   <tr>
      <td>
         <input type="text" id="msg"/>
         <input type="submit" value="send" onclick="talk(); return false;"/>
      </td>
   </tr>
</table>
</body>
</html>

 Conclusion

Websockets are a huge improvement for building rich applications. This is the first time that push technology is actually built into the JEE framework. Before that, we had to use polling or other techniques in order to get the same results. In this blog, I showed that you don’t need much code to start off. Once you get this working, you can gradually go further and build more complex sockets.

 

wpg_docload.download_file : mime type not recognized by client

For a project we are currently working on, we needed to generate a Word 2010 document and send it to the client. The document was generated by a great PL/SQL document generation tool called Doxxy, and was sent to the client using the wpg_docload package. This is a standard Oracle PL/SQL package that can be used to download files, BLOBs and BFILEs.

Before the download, we set the Content-type in the http header as follows :

owa_util.mime_header('application/vnd.openxmlformats-officedocument.wordprocessingml.document',FALSE);

When sending the document to the client, we got the following popup in our browser :

[screenshot: browser popup]

So it looked like our browser didn’t recognize that this was a Word 2010 document.

Looking at the response header, using Firebug, we got the following result :

[screenshot: the response header in Firebug]

Somehow the content type for Word 2010 was overwritten to text/html; charset=utf-8.

So, time for the good old trial and error approach, which, after a while, paid off.

Before setting the response header to : owa_util.mime_header(‘….’,FALSE); we need to issue the following commands :

htp.flush();
htp.init();

Now the code looks like this  :

-- first clear the header
 htp.flush;
 htp.init;
 -- set up HTTP header
 owa_util.mime_header('application/vnd.openxmlformats-officedocument.wordprocessingml.document', FALSE);
 -- set the size so the browser knows how much to download
 htp.p('Content-length: ' || DBMS_LOB.getlength(v_blob));
 -- the filename will be used by the browser if the users does a save as
 htp.p('Content-Disposition:attachment; filename="'||nvl(v_filename,'export')||v_ext||'"');
 -- Set COOKIE (for javascript download plugin)
 htp.p('Set-Cookie: fileDownload=true; path=/');
 -- close the headers
 owa_util.http_header_close;
 -- download the BLOB
 wpg_docload.download_file(v_blob);

After adding these 2 lines, we got the correct mime type :

[screenshot: the response header with the correct mime type]

Many thanks to Willem Albert and Bjorn Fraeys for delivering the content for this blog !

Caching in a JEE : don’t write it yourself, use LoadingCache from Google Guava libraries.

Caching data is something you use in almost every JEE project. Most of the time it’s pretty simple : put your data in a .properties file and use a PropertyManager to fetch the data.

But that’s not very flexible and manageable. Updating the values means updating your property file, repackaging the ear file and redeploying, and only developers can update the data.

Putting the data in JNDI entries, and using JNDI lookups, may solve the problem of redeploying, but if you have a few hundred properties, it’s still not very manageable.

Most of the time, JNDI entries are entered via some application server console which, in a production environment, is not accessible to the users who need to manage this data.

So let’s put the data that needs to be cached in a database, or make it accessible via a web service. That would be ideal: you can write your own application on top of it and have the data managed by your users.

But that means that you have to write your own, thread safe, caching algorithms.

No big deal if the data only changes once every 10 years, but refreshing it on a time or size basis makes the whole thing a bit more complicated. And that’s where the great LoadingCache class from the Google Guava library comes in.

What are the Guava libraries? Well, here’s how they describe it : ‘The Guava project contains several of Google’s core libraries that we rely on in our Java-based projects: collections, caching, primitives support, concurrency libraries, common annotations, string processing, I/O, and so forth.’

Now for caching, the Guava LoadingCache class caches data in a key-object map, and lets you define a cache refreshing mechanism, all done in a thread safe manner.

So let’s show a small example and explain how it works. Suppose your cache contains a list of products that are on sale for 1 day. Depending on the number of sold products, the price will increase during that day. This means that the cache should be updated every few seconds to update the price, and after 1 day the whole cache should be refreshed with new products. Suppose that price setting and product selection are done in the database, updated by some back-end application, and we need the new data, cached, in our front-end application.

All this can be done with this simple class :

import java.util.concurrent.TimeUnit;
import javax.ejb.EJB;
import javax.ejb.Singleton;
import be.iadvise.dao.DatabaseDAO;
import be.iadvise.entities.Product;
import com.google.common.base.Optional;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.util.concurrent.MoreExecutors;

@Singleton
public class ProductCache {

   @EJB
   DatabaseDAO databaseDAO;

   private static final Integer REFRESH_PRODUCT_AFTER_5_SECONDS = 5;
   private static final Integer EXPIRE_PRODUCT_AFTER_1_DAY = 1;

   private final LoadingCache<String, Optional<Product>> cache;

   public ProductCache() {
      cache = CacheBuilder.newBuilder()
           .expireAfterWrite(EXPIRE_PRODUCT_AFTER_1_DAY, TimeUnit.DAYS)
           .refreshAfterWrite(REFRESH_PRODUCT_AFTER_5_SECONDS, TimeUnit.SECONDS)
           .build(new CacheLoader<String, Optional<Product>>() {
                 @Override
                 public Optional<Product> load(String productId) throws Exception {
                     return loadCache(productId);
                 }
           });
   }

   public Optional<Product> getEntry(String productId) {
      return cache.getUnchecked(productId);
   }

   private Optional<Product> loadCache(String productId) {
      Product product = databaseDAO.getProduct(productId);
      return Optional.fromNullable(product);
   }
}

Explanation

  1. In the constructor, we build the cache using the CacheLoader, defining the refresh mechanism. In our example we define 2 rules :
    – expireAfterWrite : after this period, the object will be evicted from the cache, and replaced the next time it is requested.
    – refreshAfterWrite : after this period, the object will be refreshed using the loadCache method. (with our new price)
  2. getEntry(String productId) method : will return the object with given key. So in this example, the cache is not loaded all at once, but only when the object is needed.
  3. loadCache(String productId) : will load the product and add it to the cache, or replace it if it’s already there and needs to be refreshed.

That’s all there is to it !
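For illustration, a client bean could use the cache like this (a sketch; the getName() method on Product is an assumption, since the Product entity is not shown here) :

@EJB
ProductCache productCache;

public String getProductName(String productId) {
   // the first call for this id loads the product via loadCache(),
   // subsequent calls are served from the cache
   Optional<Product> product = productCache.getEntry(productId);
   if (product.isPresent()) {
      return product.get().getName();
   }
   return "unknown product";
}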

A few other remarks on the code

  1. There are other mechanisms, like expireAfterAccess (which times only from the last access) or maximumSize (which lets the cache hold only a certain number of objects), …
  2. This code is implemented as a session bean. To make it a singleton, I’m using the EJB @Singleton annotation, because I only want 1 cache in my application.
  3. My DAO is also injected using the @EJB annotation
  4. The LoadingCache does not allow null values in the map (it throws an exception), so I’m using the Guava ‘Optional’ class here. This is basically a wrapper for my object, used to check if there is a value for my product id or not. So if someone uses a wrong productId, my cache will indicate that there is no product for this id, and I don’t have to go to the database every time it is requested.

To conclude:

Programming a caching mechanism in a JEE environment is not as trivial as it may seem. Testing it in a multithreaded environment is even more difficult. The caching classes of Guava give you a ready-to-use solution. They are programmed, tested and used by Google, so I think we can say in all honesty: this is proven technology.

A remark on deploying on Weblogic 12c:

Weblogic also uses the Guava libraries, but an older version. This causes the following error on deployment :

java.lang.NoSuchMethodError: com.google.common.util.concurrent.MoreExecutors.sameThreadExecutor()Lcom/google/common/util/concurrent/ListeningExecutorService;

Adding the following to your weblogic-application.xml will solve the problem (it forces Weblogic to use the Guava libraries deployed with your application) :

<wls:prefer-application-packages>
   <wls:package-name>
      com.google.common.*
   </wls:package-name>
</wls:prefer-application-packages>

The Guava libraries are released under the Apache license; more info and downloads can be found at :

https://code.google.com/p/guava-libraries/

Have fun !

5 neat little features of the 12C database to remember

In this post, I’d like to introduce 5 of the many new features Oracle 12C brings to us, database developers. Of course this blog would be too long to explain them all in detail, so I will stick to a small introduction.

  1. Generating a primary key without triggers, using nextval or identity
    In 12C, you are now able to use sequence.nextval or the new keyword ‘identity’ as default values.
    The ‘identity’ keyword makes Oracle generate the value automatically (behind the scenes, a system-generated sequence is used). So now you don’t need to create triggers anymore when generating PKs with a sequence.
    And problems with sequences that are not in sync, when moving/copying tables to another schema/database, can be avoided by using the ‘identity’ keyword.
    Example PK row declaration :
    id_pers         number default person_seq.nextval primary key;
    id_pers         number generated as identity;
  2. Accessible by keyword : define which code can call your function/procedure.
    One of the major problems of PL/SQL is that, when developing a lot of packages/procedures/functions, in the end there is no telling who is called by whom. This problem can now be solved by ‘white listing’. This means that, on creation, you are telling the package/function/procedure/type by whom it is accessible, or may be used.
    The accessible by clause takes packages/functions/procedures/triggers as accessors.
    Example white listing :
    - create procedure get_sales_data accessible by (my_sales_proc) …
    - create procedure get_sales_data accessible by (my_after_update_trigger) …
    - create package my_package accessible by (my_other_package) …
    When the object is not accessible, the following error will be thrown during compilation, or at runtime in case of an anonymous PL/SQL block :
    PLS-00904: insufficient privilege to access object MY_PACKAGE.MY_PROCEDURE
  3. Temporal Validity of a row
    Sometimes rows in a table are valid or not depending on a timeframe. For instance, a subscription for a magazine may only be valid for a year. Adding this validity to a row goes as follows :

     create table subscriptions
     ( person_id           number,
       subscription_id     number,
       person_name         varchar2(500),
       subscr_start_date   date,
       subscr_end_date     date,
       period for valid(subscr_start_date , subscr_end_date)
     )
    

    Now with the following query we can select the ‘valid’ subscriptions :

     select * from subscriptions
     as of period for valid sysdate;
    
  4. New PL/SQL Package UTL_CALL_STACK
    The UTL_CALL_STACK package provides subprograms to return the current call stack for a PL/SQL program. This could already be done by DBMS_UTILITY.FORMAT_CALL_STACK, but this new package returns this information in a more structured way, and includes the depth of the call (calling level) and the names of the subprograms.
    This makes the information more usable in code. Related to this subject, 2 new directives are added in 12c, next to the already existing $$PLSQL_LINE and $$PLSQL_UNIT.
    – $$PLSQL_OWNER
    – $$PLSQL_TYPE

       dbms_output.put_line('Owner of this package is ' || $$PLSQL_OWNER);
    

    Will print : Owner of this package is SCOTT

  5. An Invoker’s Rights Function Can Be Result Cached
    Caching results of a PL/SQL function already exists in 11g. Basically what happens is that, for a certain function, you define that the result for given parameters should be cached in memory.
    So the first time the function getPerson(123) is executed, the data is fetched from the database; the second time the function is called with parameter ‘123’, the result is fetched from the cache in memory, resulting in better performance.
    Whenever a DML statement is executed on the table(s) used in that function, the cache is automatically cleared, causing the next call to return the new data. (Since 11G rel. 2, Oracle manages these dependencies itself.)
    So in our case, Oracle caches the results of the function getPerson() for every key it is called with.
    Up to Oracle Database 11g Release 2 (11.2), only definer’s rights PL/SQL functions could be result cached. Now in 12c, invoker’s rights functions can be result cached as well: the identity of the invoker is implicitly added to the cache key.

As already mentioned, the possibilities of these new features go way beyond what I describe here. But hopefully it’s a start to a few experiments on your side !

More info can be found at http://docs.oracle.com/cd/E16655_01/server.121/e17906/chapter1.htm

Using ADF Logging in a non-ADF project

In a previous post (Starting with ADF 11G Logging), I explained how ADF logging is simple to set up, and how it enables you to set the logging levels at runtime, without having to restart any server. When I showed this to a colleague of mine, he immediately popped the question : “Can’t we use this for all of our Java applications, even the ones that don’t use ADF?”. Well, the answer is yes, and it turns out to be very easy. Just add the correct jar to your project and you’re done.

This blog will demonstrate how to get this working. I use Eclipse Juno to create a small web project, containing only a servlet that does the logging. In fact I will use the same servlet I used in the previous post.

So I opened Eclipse and started with File -> New -> Dynamic Web Project. Give it a name, set ‘Dynamic web module version’ to 2.5, check the ‘Add project to an ear’ checkbox and click Finish.

[screenshot: the Dynamic Web Project wizard]

Now Eclipse has created a web and ear module for me.

[screenshot: the web and EAR projects in Eclipse]

Now right click the web project (ADFLogging), select New -> Servlet, give it a name, e.g. TestServlet, and click Finish.

Remove the generated code in the servlet, copy the code from the servlet ‘ExecuteLogger’ of my previous post (here) and paste it in our new servlet.

PS: When you copy the code from my previous blog, don’t forget to pass our current servlet class name, TestServlet.class, to ADFLogger.createADFLogger.
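As a reference, a minimal version of such a servlet could look like the sketch below. This is not the exact code from the earlier post; it only shows the ADFLogger calls (the package oracle.adf.share.logging is assumed to be the one delivered in adf-share-base.jar) :

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import oracle.adf.share.logging.ADFLogger;

public class TestServlet extends HttpServlet {

   private static final ADFLogger logger = ADFLogger.createADFLogger(TestServlet.class);

   protected void doGet(HttpServletRequest request, HttpServletResponse response)
         throws ServletException, IOException {
      // log at different levels; which ones end up in the log depends on the
      // level that is configured at runtime in the ODL configuration
      logger.info("TestServlet called");
      logger.warning("This is a warning message");
      response.getWriter().println("Logging done");
   }
}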

We will get compile errors on HttpServletRequest, etc… and on the ADFLogger class, because they are not defined in the classpath of the project. So we’ll add them in order to get our servlet compiled. I get the 2 jars from a JDeveloper installation I did on my machine. We’ll only add these jars in order to get the servlet compiled in Eclipse. We will NOT deploy them, as they are already available on our Weblogic server.

To add the jars, right click on the web project and go to Properties. In the Properties, click on ‘Java Build Path’.

[screenshot: the Java Build Path properties]

Click on ‘Add External JARs…’, and go to the directory where you installed your JDeveloper, which in my case is : C:\Oracle\Middleware.

In that directory, get the following jars from these sub-directories :

\oracle_common\modules\javax.servlet_1.0.0.0_2-5.jar : contains the servlet classes like HttpServletRequest/Response,etc…

\oracle_common\modules\oracle.adf.share.ca_11.1.1\adf-share-base.jar : contains the ADFLogger classes.

Now we see the following jars added :

[screenshot: the added JARs in the build path]

Click OK and return to the servlet. In the servlet, use CTRL-SHIFT-O to import the necessary classes from the jars we just added.

Now all compile errors should be gone.

Generate the ear file as follows : File -> Export -> EAR file

Select the ear project and enter the destination of the ear file.

When you examine the ear, you will notice that the folder \WEB-INF\lib is empty.

As the servlet and ADFLogger jars are already available on Weblogic, there is no need to deploy them with our application.

Now deploy the ear to Weblogic and test the servlet with the following URL :

http://localhost:7101/ADFLogging/TestServlet

It will generate the following output :

[screenshot: the servlet output]

To check the logging done by this servlet: as I used the integrated Weblogic of JDeveloper, I will look for my logs using JDeveloper, but in a production environment these logs can be viewed using the Enterprise Manager of Weblogic. For details, see my previous blog.

In the Oracle Diagnostic Logging configuration, I see my servlet after the deployment. No message level is defined, so it will take “Warning”, as this one is defined as the default by the Root Logger.

[screenshot: the Oracle Diagnostic Logging configuration]

After the execution, I see the following log lines in the log analyzer.

[screenshot: the log lines in the log analyzer]

So that’s it. The bottom line is: add the ADFLogger jar to your non-ADF project, and you are ready to go !