New in Java 8: Default and static methods in interfaces

Default methods (also known as defender methods) in interfaces are new in Java 8. They enable you to define a default implementation of a method in the interface itself.

If an interface is implemented by several classes, it's hard to add methods afterwards, as doing so breaks the code and requires all implementing classes to define the new method as well. Adding a method to the interface and defining a default implementation for it resolves this problem.

Here's a code example:


public interface Draw {

   public void drawCircle();

   default public void drawRectangle() {
      System.out.println("draw a rectangle");
   }
}

Implementing classes that have not defined the drawRectangle() method will print "draw a rectangle" when drawRectangle() is called on them.
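For instance, a class that only implements drawCircle() still inherits the default drawRectangle(). A minimal sketch (the Circle class is made up for illustration and not part of the original example):

public class Circle implements Draw {

   @Override
   public void drawCircle() {
      System.out.println("draw a circle");
   }

   public static void main(String[] args) {
      Circle circle = new Circle();
      circle.drawCircle();     // prints "draw a circle"
      circle.drawRectangle();  // inherited default, prints "draw a rectangle"
   }
}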

Interfaces that extend this interface can:

  • define nothing, in which case the default method is simply inherited
  • declare the default method again without an implementation, which makes it abstract again
  • redefine the default method, so it gets overridden

The last two options are shown in the sketch below.
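This is only a sketch; the interface names AbstractDraw and FancyDraw are invented here purely for illustration:

public interface AbstractDraw extends Draw {
   // re-declaring the method without a body makes it abstract again,
   // so implementing classes must provide drawRectangle() themselves
   void drawRectangle();
}

public interface FancyDraw extends Draw {
   // overriding the default with another default implementation
   @Override
   default void drawRectangle() {
      System.out.println("draw a fancy rectangle");
   }
}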

These default methods were added to Java to make the new Streams API possible: the Collection interface had to be extended with the stream() and parallelStream() methods. Without default methods, every class that implements the Collection interface would have had to be updated as well.

Static methods

Also new in Java 8 is the use of static methods in an interface.

So drawRectangle() could also be defined as a static method, but that would give the impression that it is a utility or helper method rather than part of the essential core interface. In that case, it's better to go for the default method.
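For comparison, the static variant would look something like the sketch below (again just an illustration, not code from the original post). Note that static interface methods are called on the interface itself, not on instances:

public interface Draw {

   void drawCircle();

   static void drawRectangle() {
      System.out.println("draw a rectangle");
   }
}

// callers use the interface name, e.g. Draw.drawRectangle();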

You could argue that an abstract class would have done the job as well. But as Java has single inheritance, that choice would narrow down the design possibilities. And as the poster above your bed shouts every day: 'Favor composition over inheritance!', right? So we want to avoid inheritance anyway.

So what happens if you try to implement 2 interfaces with the same default method? Well, you will get the following compile time error:

Duplicate default methods named [methodname] with the parameters () and () are inherited from the types [interface1] and [interface2]

To avoid this error, choose the implementation of one of the interfaces:

interface Draw{
   default void circle() {
     System.out.println("draw circle");
   }
}
interface Print{
   default void circle() {
     System.out.println("print circle");
   }
}

class MyClass implements Draw, Print {
   @Override
   public void circle() {
     Draw.super.circle();
   }
}

That’s it, a quick overview of this new feature in Java 8.

Caching in JEE: don't write it yourself, use LoadingCache from the Google Guava libraries.

Caching data is something you need in almost every JEE project. Most of the time it's pretty simple: put your data in a .properties file and use a PropertyManager to fetch it.

But that's not very flexible or manageable. Updating the values means updating your property file, repackaging the EAR file and redeploying, and only developers can update the data.

Putting the data in JNDI entries and using JNDI lookups may solve the redeployment problem, but if you have a few hundred properties, it's still not very manageable.

Most of the time, JNDI entries are entered via some application server console which, in a production environment, is not accessible to the users who need to manage this data.

So let's put the data that needs to be cached in a database, or make it accessible via a web service. That would be ideal: you can write your own application on top of it and have the data managed by your users.

But that means you have to write your own, thread-safe, caching algorithms.

No big deal if the data only changes once every 10 years, but refreshing it on a time or size basis makes the whole thing a bit more complicated. And that's where the great LoadingCache class from the Google Guava library comes in.

What are the Guava libraries? Well, here's how they describe it: 'The Guava project contains several of Google's core libraries that we rely on in our Java-based projects: collections, caching, primitives support, concurrency libraries, common annotations, string processing, I/O, and so forth.'

Now for caching: the Guava LoadingCache class caches data in a key-object map and lets you define a cache refreshing mechanism, all in a thread-safe manner.

So let's show a small example and explain how it works. Suppose your cache contains a list of products that are on sale for one day. Depending on the number of products sold, the price will increase during that day. This means the cache should be updated every few seconds to refresh the price, and after one day the whole cache should be refreshed with new products. Suppose the price setting and product selection are done in the database, updated by some back-end application, and we need the new data in our front-end application, where we want to cache it.

All this can be done with this simple class:

import java.util.concurrent.TimeUnit;
import javax.ejb.EJB;
import javax.ejb.Singleton;
import be.iadvise.dao.DatabaseDAO;
import be.iadvise.entities.Product;
import com.google.common.base.Optional;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.util.concurrent.MoreExecutors;

@Singleton
public class ProductCache {

@EJB
 DatabaseDAO databaseDAO;
 private static final Integer REFRESH_PRODUCT_AFTER_5_SECONDS = 5;
 private static final Integer EXPIRE_PRODUCT_AFTER_1_DAY = 1;
 private final LoadingCache<String, Optional<Product>> cache;

 public ProductCache() {
      cache = CacheBuilder.newBuilder()
           .expireAfterWrite(EXPIRE_PRODUCT_AFTER_1_DAY, TimeUnit.DAYS)
           .refreshAfterWrite(REFRESH_PRODUCT_AFTER_5_SECONDS, TimeUnit.SECONDS)
           .build( new CacheLoader<String, Optional<Product>>() {
                 @Override
                 public Optional<Product> load( String productId ) throws Exception {
                     return loadCache(productId);
                 }
           }
     );
 }

 public Optional<Product> getEntry( String productId ) {
      return cache.getUnchecked( productId );
 }

 private Optional<Product> loadCache(String productId) {
      Product product = databaseDAO.getProduct(productId);
      return Optional.fromNullable(product);
 }
}

Explanation

  1. In the constructor, we build the cache using a CacheLoader, defining the refresh mechanism. In our example we define 2 rules:
    – expireAfterWrite: after this period, the object will be evicted from the cache and replaced the next time it is requested.
    – refreshAfterWrite: after this period, the object will be refreshed using the loadCache method (with our new price).
  2. getEntry(String productId): returns the object with the given key. So in this example, the cache is not loaded all at once, but only when an object is needed (see the usage sketch below).
  3. loadCache(String productId): loads the product and adds it to the cache, or replaces it if it's already there and needs to be refreshed.
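To give an idea of how a client could consume this cache, here is a minimal, hypothetical sketch; the OrderService bean and its messages are invented for illustration and are not part of the original example:

import javax.ejb.EJB;
import javax.ejb.Stateless;
import be.iadvise.entities.Product;
import com.google.common.base.Optional;

@Stateless
public class OrderService {

   @EJB
   ProductCache productCache;

   public String describeProduct(String productId) {
      // getEntry() never returns null: it returns an Optional wrapper
      Optional<Product> product = productCache.getEntry(productId);
      if (product.isPresent()) {
         return "Found product: " + product.get();
      }
      // unknown id: the absence is cached too, so we don't hit the database every time
      return "No product found for id " + productId;
   }
}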

That’s all there is to it !

A few other remarks on the code

  1. There are other mechanisms, like expireAfterAccess, which expires entries based on the last access rather than the last write, or maximumSize, which lets the cache hold only a certain number of objects (a small sketch follows after this list).
  2. This code is implemented as a session bean. To make it a singleton, I'm using the EJB 3 annotation @Singleton, because I only want one cache in my application.
  3. My DAO is also injected, using the @EJB annotation.
  4. The LoadingCache does not accept null values in the map (it throws an exception), so I'm using the Guava 'Optional' class here. This is basically a wrapper for my object, used to check whether there is a value for my product id or not. So if someone uses a wrong productId, my cache will indicate that there is no product for this id, and I don't have to go to the database every time it is requested.
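Regarding the other mechanisms mentioned in remark 1, the builder in the constructor could for example look like this (the values are arbitrary and only meant as a sketch):

cache = CacheBuilder.newBuilder()
     .maximumSize(1000)                        // keep at most 1000 products in the cache
     .expireAfterAccess(10, TimeUnit.MINUTES)  // evict entries that have not been accessed for 10 minutes
     .build( new CacheLoader<String, Optional<Product>>() {
           @Override
           public Optional<Product> load( String productId ) throws Exception {
               return loadCache(productId);
           }
     });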

To conclude:

Programming a caching mechanism in a JEE environment is not as trivial as it may seem. Testing it in a multithreaded environment is even more difficult. The caching classes of Guava give you a ready-to-use solution. It's programmed, tested and used by Google, so I think we can say in all honesty: this is proven technology.

A remark on deploying on WebLogic 12c:

WebLogic also uses the Guava libraries, but an older version. This causes the following error on deployment:

java.lang.NoSuchMethodError: com.google.common.util.concurrent.MoreExecutors.sameThreadExecutor()Lcom/google/common/util/concurrent/ListeningExecutorService;

Adding the following to your weblogic-application.xml will solve the problem (it forces WebLogic to use the Guava libraries deployed with your application):

<wls:prefer-application-packages>
   <wls:package-name>
      com.google.common.*
   </wls:package-name>
</wls:prefer-application-packages>

The Guava libraries are released under the Apache license; more info and downloads can be found at:

https://code.google.com/p/guava-libraries/

Have fun !

Another 5 neat 12c features for Oracle developers

In this post I will put 5 other new 12c features in the spotlight (in addition to the features of a previous post), features that really make 12c an improvement over previous versions of the Oracle database.

To get to this list, I went through all the major new features and picked the top 5 that would make my life easier as a developer when working on an Oracle database, excluding the features from the previous post (otherwise I certainly would have added the sequence modification, feature 1).

  1. Top-N queries -> I really like this feature and I'm still wondering why it took Oracle so long to add it. It is something I could have used a lot in the past; instead I had to write far too complicated, far less readable queries to achieve the same result. How does it work? Well, it's very easy, it's readable and it can be used in a wide variety of cases. Some examples:
    Only get the first 3 rows:
    select * from X order by id
    fetch first 3 rows only;

    Skip the first 3 rows and get the next 3 rows:

    select * from X order by id
    offset 3 rows fetch next 3 rows only;

    Get the first 50% of the records:

    select * from X order by id
    fetch first 50 percent rows only;

    Get the first 3 rows, together with any further rows that have the same deptno as the last of those 3 rows (ties):

    select * from emp order by deptno
    fetch first 3 rows with ties;

    If you want the last rows instead, reverse the ORDER BY; the row limiting clause itself only knows 'first' and 'next'…

  2. In the 12c database, a VARCHAR2 in SQL can now hold up to 32767 characters instead of the previous maximum of 4000 (the same goes for RAW and NVARCHAR2).
    We have all been waiting a long time for this one; before, we had to use the CLOB datatype.
    But beware, this is not an out-of-the-box feature; you will have to execute the lines below before it is enabled:
    shutdown immediate
    startup upgrade
    alter system set max_string_size=EXTENDED scope=both;
    @<ORACLE_HOME>/rdbms/admin/utl32k.sql
    shutdown immediate
    startup

    More info can be found on: http://docs.oracle.com/cd/E16655_01/server.121/e17615/refrn10321.htm

  3. The invisible column is a feature I was initially wondering what I could use it for.
    Well, it can be handy when you are adding a column to a table but don't want any existing code to be impacted by it.
    Another case where it can be useful is audit columns. Columns such as creation_dt, update_dt, user_creation and user_update are only of added value when you actually want to audit a certain column.
    Packages with inserts, updates or references to this table will not be impacted by the creation of such a column.
    On the other hand, there is also the risk that you forget the column is there, because you have to ask for it explicitly (a DESCRIBE or SELECT * will not show it). You can create invisible columns like this:

    ALTER TABLE <table_name>
    ADD <column_name> <datatype> INVISIBLE;

    If you want to make the column visible again, use this:

    ALTER TABLE <table_name>
    MODIFY <column_name> VISIBLE;

    In summary, it can be handy, but don't forget about the column or it will pollute your table.

  4. The WITH clause inline PL/SQL feature is also something that I think is very welcome.
    It makes it possible to declare a procedure or function inside your SELECT statement, instead of having to create it in a package or as a standalone function. Oracle also says this will perform better than calling a schema procedure/function (I still have to test this).
    A little example:

    WITH
      FUNCTION fnc$_add_one(p_num IN NUMBER) RETURN NUMBER IS
      BEGIN
        RETURN p_num + 1;
      END;
    SELECT fnc$_add_one(1)
    FROM dual;
  5. Most of the time I use the ANSI syntax for a left outer join, but the Oracle way of writing outer joins is still often used by many Oracle developers.
    There was one thing, however, that you could do in ANSI but not in the Oracle syntax: you couldn't put multiple tables on the left of an outer join, until 12c…
    In 11g and before, coding something like this:
    select *
    from a,b,c
    where a.id = b.id
    and a.id = c.id(+)
    and b.id = c.id2(+);

    This resulted in -> ORA-01417: a table may be outer joined to at most one other table

    In 12c this will work. The ANSI solution below obviously still works too, both on 12c and on 11g:

    select *
    from a
    JOIN b ON (a.id = b.id)
    LEFT OUTER JOIN c ON (a.id = c.id AND b.id = c.id2);

Together with the previous post, this makes 10 reasons why you should start using the Oracle 12c database :-)

Checkboxes in editable reports in APEX

We have all been there, we need to create an editable report and one of the columns contains a checkbox. So how should you handle this?

If you are using one of the recent APEX versions, the easiest way is a tabular form. Just edit the column attributes of your checkbox column and at 'Display As' select "Simple Checkbox". At the list of values definition, type "Y,N", where Y is the value the column will get when the checkbox is checked.

Tabular form checkbox

But what if you have multiple editable reports with this requirement on one page? Then it starts to get interesting, since you can no longer use tabular forms.

With multiple editable reports, we will build our own editable report using the APEX_ITEM API. You can read more about the APEX_ITEM API here.

We first create a report, and in our query we add our "active" column. We create two items there using the APEX_ITEM API: a checkbox and a hidden item. The parameter p_idx is the number that APEX uses to identify the items and write them into an APEX collection when the page is submitted; it has to be unique on the page. We set the value of both items to the id of the row. Why we need these will become clear later on.

SELECT APEX_ITEM.HIDDEN(p_idx => 1, p_value => id)
       || APEX_ITEM.CHECKBOX(p_idx => 2, p_value => id, p_attributes => DECODE(active, 'Y', 'checked="checked"', NULL)) active
FROM MYTABLE

Next we go to the report attributes, edit our active column and set Display As to Standard Report Column. This will allow APEX to render it properly.

Column attributes for APEX_ITEM API

Before we proceed, let me explain how checkboxes work. In HTML, a checkbox that is not checked has no value: it is considered NULL. This is something you will have noticed when you create a checkbox page item in a form in APEX. So if we loop over the APEX collection containing the checkboxes, we will only loop over the checkboxes that have a value. This is no issue when you only need it to delete rows, but let me show you what happens if you try to use it to update rows. Let's assume we have two columns: one contains our ID, and one contains our checkbox with value Y.

APEX_ITEM.HIDDEN(p_idx => 1, p_value => id)   APEX_ITEM.CHECKBOX(p_idx => 2, p_value => 'Y')
1                                             Checked
2                                             Not checked
3                                             Checked
4                                             Not checked

Assume we then loop over our first collection and do an update statement in our table:

FOR i IN 1..APEX_APPLICATION.G_F01.COUNT LOOP
   UPDATE MYTABLE SET active = NVL(APEX_APPLICATION.G_F02(i), 'N')
   WHERE id = APEX_APPLICATION.G_F01(i);
END LOOP;

Looks correct, doesn't it? Well, it isn't. When our process goes over the first row, it will update correctly. When it tries to update the 2nd row, it will wrongly update it to 'Y'. And the 3rd row will give an error, because our 2nd APEX collection only contains two rows: it does not contain the rows that are not checked.

So now that I explained the problem let’s have a look at the solution.

DECLARE
   l_yesno VARCHAR2(1);
   TYPE t_checkboxes IS TABLE OF VARCHAR2(1);
   l_checkboxes t_checkboxes := t_checkboxes();
BEGIN
   FOR i IN 1..APEX_APPLICATION.G_F01.COUNT LOOP
      FOR j IN 1..APEX_APPLICATION.G_F02.COUNT LOOP
         IF APEX_APPLICATION.G_F01(i) = APEX_APPLICATION.G_F02(j) THEN
            l_yesno := 'Y';
         END IF;
      END LOOP;
      l_yesno := NVL(l_yesno, 'N');
      l_checkboxes.EXTEND;
      l_checkboxes(i) := l_yesno;
      l_yesno := 'N';
   END LOOP;

   FORALL i IN INDICES OF APEX_APPLICATION.G_F01
      UPDATE MYTABLE
      SET ACTIVE = l_checkboxes(i)
      WHERE id = APEX_APPLICATION.G_F01(i);
END;

We start by looping over the APEX collection containing our IDs; inside that same loop, we loop over the APEX collection with our checkboxes. Both contain our ID as value. If the values match, the checkbox containing that ID has the value 'Y'. We store this in a PL/SQL collection that we created for this purpose.

Lastly, we do an update on our table to set the new values. Notice how we did only one update statement using FORALL; by doing so, we limited the context switching between PL/SQL and SQL to a single bulk operation and boosted performance.

I now hope everyone has a better idea of how to deal with checkboxes rather easily, using only PL/SQL and the APEX APIs.

Using ADF Logging in a non-ADF project

In a previous post (Starting with ADF 11G Logging), I explained how ADF logging is simple to set up and how it enables you to change the logging levels at runtime, without having to restart any server. When I showed this to a colleague of mine, he immediately popped the question: "Can't we use this for all of our Java applications, even the ones that don't use ADF?". Well, the answer is yes, and it turns out to be very easy: just add the correct JAR to your project and you're done.

This blog will demonstrate how to get this working. I use Eclipse Juno to create a small web project, containing only a servlet that does the logging. In fact, I will use the same servlet I used in the previous post.

So I opened Eclipse and started with File -> New -> Dynamic Web Project. Give it a name, set 'Dynamic web module version' to 2.5, tick the 'Add project to an EAR' checkbox and click Finish.


Now Eclipse has created a web and an EAR module for me.


Now right-click the web project (ADFLogging), select New -> Servlet, give it a name, e.g. TestServlet, and click Finish.

Remove the generated code in the servlet, copy the code from the servlet 'ExecuteLogger' from my previous post (here) and paste it into our new servlet.

PS: when you copy the code from my previous blog, don't forget to pass our current servlet class, TestServlet.class, to ADFLogger.createADFLogger.
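For readers who don't have the previous post at hand, the servlet boils down to something like the sketch below. This is my own reconstruction, not the exact code from that post, and the log messages are just examples:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import oracle.adf.share.logging.ADFLogger;

public class TestServlet extends HttpServlet {

   // create the logger for this servlet class
   private static final ADFLogger logger = ADFLogger.createADFLogger(TestServlet.class);

   @Override
   protected void doGet(HttpServletRequest request, HttpServletResponse response)
         throws ServletException, IOException {

      // log at different levels; which ones show up depends on the configured level
      logger.info("TestServlet called");
      logger.warning("a warning message");
      logger.severe("a severe message");

      PrintWriter out = response.getWriter();
      out.println("Logging done, check the log analyzer.");
   }
}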

We will get compile errors on HttpServletRequest etc. and on the ADFLogger class, because they are not on the classpath of the project. So we'll add them in order to get our servlet compiled. I take the two JARs from a JDeveloper installation on my machine. We'll only add these JARs to get the servlet compiled in Eclipse; we will NOT deploy them, as they are already available on our WebLogic server.

To add the JARs, right-click the web project and go to Properties. In the Properties, click on 'Java Build Path'.


Click on 'Add External JARs…' and go to the directory where you installed your JDeveloper, which in my case is C:\Oracle\Middleware.

In that directory, get the following JARs from the sub-directories:

\oracle_common\modules\javax.servlet_1.0.0.0_2-5.jar : contains the servlet classes like HttpServletRequest/Response,etc…

\oracle_common\modules\oracle.adf.share.ca_11.1.1\adf-share-base.jar : contains the ADFLogger classes.

Now we see the two JARs added to the build path.


Click OK and return to the servlet. In the servlet, use CTRL-SHIFT-O to import the necessary classes from the JARs we just added.

Now all compile errors should be gone.

Generate the EAR file as follows: File -> Export -> EAR file.

Select the EAR project and enter the destination of the EAR file.

When you examine the EAR, you will notice that the folder \WEB-INF\lib is empty.

As the servlet and ADFLogger JARs are already available on WebLogic, there is no need to deploy them with our application.

Now deploy the EAR to WebLogic and test the servlet with the following URL:

http://localhost:7101/ADFLogging/TestServlet

It will generate the same output as the servlet from my previous post.


To check the logging done by this servlet: as I used the integrated WebLogic of JDeveloper, I will look for my logs using JDeveloper, but in a production environment these logs can be viewed using the Enterprise Manager of WebLogic. For details, see my previous blog.

In the Oracle Diagnostic Logging configuration, I see my servlet after the deployment. No message level is defined, so it will use 'Warning', as that is the default defined by the Root Logger.


After the execution, I see the log lines from the servlet in the log analyzer.


So that's it. The bottom line: add the ADFLogger JAR to your non-ADF project, and you are ready to go!

Red Gate Deployment Suite for Oracle, a valuable component in your APEX eco-system

When working with APEX, you also need a number of tools to improve the efficiency of your work. It isn't enough to just have the APEX IDE at your disposal. Besides the APEX IDE for application development, you may use a bunch of other tools. This is what I mean by the APEX eco-system.

For PL/SQL development and the creation of objects such as tables, views, synonyms, sequences, … you may use SQL Developer. The DDL scripts with the code for creating those objects may be generated by SQL Data Modeler. This tool is used to design the data model: it offers a graphical representation of the relationships between tables and the possibility to generate the database objects automatically on our database. Both tools are part of the Oracle database development tools, just like APEX.

Once the objects are modeled and generated on the database, you can start with the development of the application in APEX. After some time, when a first version of the application is ready, it can be deployed to a test environment. Without the aid of an external tool, it's necessary to make a script for each package, trigger, sequence, synonym, table, view, … in order to create them on the test environment. And when you start with change requests and enhancements, it becomes even more complex!

After the end users have tested the application and given their feedback, you will normally have to change your packages, tables, … add new functionalities or change existing ones. Without the aid of an external tool, you'll have to script each change you make in order to deploy the changes to test as well. This is very time consuming and the risk of making mistakes is very high.

In the past, in my personal experience, it took me many hours to script each column definition, table, relationship, package, trigger or view that changed during the development cycle. It was very annoying, especially when you forgot one… It's very time-consuming to find differences between the development and test environment.
But since I got in touch with Red Gate, all those problems are gone!

Red Gate's Deployment Suite for Oracle contains a tool that allows you to deploy schema objects and compare schemas (Schema Compare for Oracle). But that's not all. Unlike other tools such as SQL Developer, which only offers limited schema comparison, the Red Gate Deployment Suite also allows you to deploy data from one schema to another (Data Compare for Oracle). And last but not least, since the 12th of March, Red Gate has extended its product portfolio: they also provide a tool for source control (version control) of your database code, Source Control for Oracle!

A small overview…

Schema Compare for Oracle allows you to make a full installation script of your database objects. This is very useful because you don't have to worry about whether all modifications are scripted, nor about the possible dependencies between the different objects. You are sure that everything in your development environment is scripted properly and, when you choose to deploy everything to the test environment, that it is executed in the right sequence. Everything that has to do with the deployment of the schema objects is handled by the Red Gate Schema Compare tool, which is included in the Deployment Suite.

While Schema Compare for Oracle is used for objects, Data Compare for Oracle takes care of your data. This is very useful when you have a test environment with a lot of reference data entered by the business. Don't you recognize the situation where your end users have entered data while testing the application in a test environment, assuming that this data would also be available in the production environment? This tool allows you to compare data between different schemas and to deploy the changes from one to the other.

Very recently, Red Gate added an extra tool to its portfolio: Source Control for Oracle. It all started with a live lab at KScope12, where every attendee could contribute to the first prototypes. Read more about it here. Less than a year later, a first version is available for download. Since it is completely new, there is not that much experience with it yet.

Since iAdvise is an official partner of Red Gate, we've had the opportunity to test the tool before its official release. After a few months of testing, we can tell you that this is the first mature tool that really helps developers manage their database code!

The whole idea is to give developers a tool that pushes schema objects from the database to an SVN repository and pulls those objects from the SVN repository back to another database. If you have ever tried doing this manually, you know that it can take up a lot of your valuable time. Source Control for Oracle takes the manual labor out of the picture and puts the files on SVN for you. While doing this, Source Control for Oracle will notice if an object already exists on SVN and will ask the user what to do with it. This decreases the chances of anyone overwriting your work. You can also add a comment when you push something to SVN, so you can always read back and see what was changed, why it was changed, when it was changed, and by whom. In this way, Source Control for Oracle provides a complete version control system for database code.

One of the nice features of the tool is that, when you have defined all your schemas in an SVN repository, Source Control for Oracle will automatically compare the schemas in the database with the definitions in SVN when you start the tool.

As you can read, the Deployment Suite for Oracle isn't just another database tool; it's a lot more than that! At iAdvise, we're convinced of the added value and we believe this suite will help a lot of people. If you aren't convinced yet, take a minute to look at the Red Gate website and check out the videos and testimonials of other users; I'm sure you'll change your mind!

Interested to see & learn more about iAdvise?
Follow us on twitter!

@iadvise_live

http://www.iadvise.eu

SQL Developer Dbms Output Pane gone…

I have already managed to remove the Dbms Output pane from my SQL Developer a couple of times, probably by hitting a keyboard combination by accident.
I am not sure how I did it and I can't really reproduce it, but every now and then I was bugged by this problem.

The problem is that I can't get this pane back by following the 'normal' procedure (going to Menu -> View -> Dbms Output).
Previously I did not find a proper solution for this and I just had to reinstall SQL Developer all over again.
Until now: I finally found a really easy way to get this pane back.

First of all, exit all your SQL Developer sessions.
Then go to the directory where SQL Developer keeps its user settings (the numbering of the directories may differ from one version to another, but it will look similar to this):
cd /home/vallafr/.sqldeveloper/system3.1.07.42/o.ide.11.1.1.4.37.59.48

There, remove the windowinglayout.xml file:
rm windowinglayout.xml

This file contains your 'custom' SQL Developer layout. By removing it, the layout will be reset to the original settings.

Then restart SQL Developer and go to Menu -> View -> Dbms Output.
And tadaaa, there it is again :)