Caching in JEE: don’t write it yourself, use LoadingCache from the Google Guava libraries.

Caching data is something you use in almost every JEE project. Most of the time it’s pretty simple: put your data in a .properties file and use a PropertyManager to fetch the data.

But that’s not very flexible or manageable. Updating the values means editing your property file, repackaging the EAR file, and redeploying, and only developers can update the data.

Putting the data in JNDI entries and using JNDI lookups may solve the problem of redeploying, but if you have a few hundred properties, it’s still not very manageable.

Most of the time, JNDI entries are entered via some application server console which, in a production environment, is not accessible to the users who need to manage this data.

So let’s put the data that needs to be cached in a database, or make it accessible via a web service. That would be ideal: you can write your own application on top of it and have the data managed by your users.

But that means you have to write your own thread-safe caching algorithms.

No big deal if the data only changes once every 10 years, but refreshing it on a time or size basis makes the whole thing a bit more complicated. And that’s where the great LoadingCache class from the Google Guava library comes in.

What are the Guava libraries? Well, here’s how they describe themselves: ‘The Guava project contains several of Google’s core libraries that we rely on in our Java-based projects: collections, caching, primitives support, concurrency libraries, common annotations, string processing, I/O, and so forth.’

Now for caching: the Guava LoadingCache class caches data in a key-object map and lets you define a cache refreshing mechanism, all done in a thread-safe manner.

So let’s show a small example and explain how it works. Suppose your cache contains a list of products that are on sale for one day. Depending on the number of products sold, the price will increase during that day. This means the cache should be updated every few seconds to update the price, and after one day the whole cache should be refreshed with new products. Suppose that price setting and product selection are done in the database, updated by some back-end application, and we need the new data in our front-end application, where we want to cache it.

All this can be done with this simple class:

import java.util.concurrent.TimeUnit;
import javax.ejb.EJB;
import javax.ejb.Singleton;
import be.iadvise.dao.DatabaseDAO;
import be.iadvise.entities.Product;
import com.google.common.base.Optional;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

@Singleton
public class ProductCache {

    @EJB
    DatabaseDAO databaseDAO;

    private static final Integer REFRESH_PRODUCT_AFTER_5_SECONDS = 5;
    private static final Integer EXPIRE_PRODUCT_AFTER_1_DAY = 1;

    private final LoadingCache<String, Optional<Product>> cache;

    public ProductCache() {
        cache = CacheBuilder.newBuilder()
            .expireAfterWrite(EXPIRE_PRODUCT_AFTER_1_DAY, TimeUnit.DAYS)
            .refreshAfterWrite(REFRESH_PRODUCT_AFTER_5_SECONDS, TimeUnit.SECONDS)
            .build(new CacheLoader<String, Optional<Product>>() {
                @Override
                public Optional<Product> load(String productId) throws Exception {
                    // called on a cache miss, or when an entry needs refreshing
                    return loadCache(productId);
                }
            });
    }

    public Optional<Product> getEntry(String productId) {
        return cache.getUnchecked(productId);
    }

    private Optional<Product> loadCache(String productId) {
        // wrap in Optional: the cache does not accept null values
        Product product = databaseDAO.getProduct(productId);
        return Optional.fromNullable(product);
    }
}

Explanation

  1. In the constructor, we build the cache using a CacheLoader, defining the refresh mechanism. In our example we define two rules:
    – expireAfterWrite: after this period, the object is evicted from the cache and replaced the next time it is requested.
    – refreshAfterWrite: after this period, the object is refreshed using the loadCache method (giving us our new price).
  2. getEntry(String productId): returns the object with the given key. So in this example, the cache is not loaded all at once, but only when an object is needed.
  3. loadCache(String productId): loads the product and adds it to the cache, or replaces it if it is already there and needs to be refreshed.
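For completeness, here is a minimal sketch of how another bean might use this cache. The ShopService bean is hypothetical, not part of the original example:

import javax.ejb.EJB;
import javax.ejb.Stateless;
import be.iadvise.entities.Product;
import com.google.common.base.Optional;

@Stateless
public class ShopService {

    @EJB
    ProductCache productCache;

    public String describeProduct(String productId) {
        Optional<Product> product = productCache.getEntry(productId);
        // Optional.absent() means no product exists for this id
        return product.isPresent() ? product.get().toString() : "Unknown product";
    }
}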

That’s all there is to it!

A few other remarks on the code

  1. There are other mechanisms, like expireAfterAccess, which counts the timer from the last read instead of the last write, or maximumSize, which lets the cache hold only a certain number of objects (a sketch follows below).
  2. This code is implemented as a session bean. To make it a singleton, I’m using the EJB @Singleton annotation, because I only want one cache in my application.
  3. My DAO is also injected, using the @EJB annotation.
  4. The LoadingCache does not allow null values in the map (the loader throws an exception if you return one), so I’m using the Guava ‘Optional’ class here. This is basically a wrapper for my object, used to check whether there is a value for my product id or not. So if someone uses a wrong productId, my cache will indicate that there is no product for this id, and I don’t have to go to the database every time it is requested.
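As a minimal sketch (assuming the same Product entity and loadCache method as above), those alternative rules would look like this:

LoadingCache<String, Optional<Product>> cache = CacheBuilder.newBuilder()
     .expireAfterAccess(10, TimeUnit.MINUTES) // evict 10 minutes after the last read
     .maximumSize(1000)                       // keep at most 1000 entries, evicting the least recently used
     .build(new CacheLoader<String, Optional<Product>>() {
          @Override
          public Optional<Product> load(String productId) throws Exception {
               return loadCache(productId);
          }
     });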

To conclude:

Programming a caching mechanism in a JEE environment is not as trivial as it may seem. Testing it in a multithreaded environment is even more difficult. The caching classes of Guava give you a ready-to-use solution. It’s programmed, tested and used by Google, so I think we can say in all honesty: this is proven technology.

A remark on deploying on Weblogic 12c:

WebLogic also uses the Guava libraries, but an older version. This causes the following error on deployment:

java.lang.NoSuchMethodError: com.google.common.util.concurrent.MoreExecutors.sameThreadExecutor()Lcom/google/common/util/concurrent/ListeningExecutorService;

Adding the following to your weblogic-application.xml will solve the problem (it forces WebLogic to use your deployed Guava libraries):

<wls:prefer-application-packages>
  <wls:package-name>
    com.google.common.*
  </wls:package-name>
</wls:prefer-application-packages>

The Guava libraries are released under the Apache license; more info and downloads can be found at:

https://code.google.com/p/guava-libraries/

Have fun !

AJAX in APEX

AJAX is becoming important in the world of web applications. APEX provides a very easy way to create an AJAX process by using dynamic actions. Using PL/SQL Actions in Dynamic Actions to communicate with the database without submitting the page will suffice in most cases, but the downside is that the code is not very re-usable, and when you want to write a plug-in you simply don’t have access to Dynamic Actions. In this blog you will learn how to code your own AJAX process.

An AJAX process in APEX consists of three parts:

  • The JavaScript code that calls the AJAX PL/SQL Process
  • The PL/SQL Process that might or might not return a value
  • The JavaScript code that catches the return value and possibly does something with it

In APEX there are three ways to create an AJAX process from JavaScript:

  • The htmldb_get() method: undocumented, but this used to be the only method available (without installing external libraries)
  • jQuery.ajax(): since jQuery was added to APEX, it has been quite common to use this method. It’s well documented on the jQuery homepage, but the downside is that you need to write more code
  • apex.server: this new APEX API was added recently (I believe in APEX 4.2). It is actually a wrapper around jQuery.ajax(), so it supports the same functionality with some additional APEX-specific features. It is thoroughly documented in the APEX documentation, which is why I prefer this method, and I will explain how you too can use it

The first thing we do is create a test application. In our case we have a table called “JOBS” that looks like this:

[Image: the JOBS table]

In my jobs table I just inserted one job with a salary of 2800 of an unknown currency.

In our APEX application we have an item of type select list where the user can select a job, after which the minimum salary will be filled in.

Our page looks like this:

[Image: the page with the job select list and salary item]

Next we write our JavaScript code. This includes our change event and the apex.server.process call. Double-click your page name to go to the page definition, and scroll down to “Execute when page loads”.

[Image: the JavaScript call]

  • AJAX_GET_MIN_SALARY is the name of our future AJAX process.
  • X01 is the variable we pass, in this case the value of our P17_JOB_ID item.
  • Finally we declare that our expected return type is plain text. If we don’t do this, the function expects a JSON string by default. Furthermore, we declare in this function what we do with the return data. The return data is delivered asynchronously, meaning we get it from our AJAX Callback process as soon as that process is ready. (A sketch of the full call follows below.)
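The screenshot is not reproduced here, but a call matching the description above could look like the following sketch (the change-event wrapper and the P17_MIN_SALARY target item are my assumptions):

$("#P17_JOB_ID").change(function() {
  apex.server.process(
    "AJAX_GET_MIN_SALARY",         // the AJAX Callback process we create below
    { x01: $v("P17_JOB_ID") },     // arrives server-side as apex_application.g_x01
    { dataType: "text",            // we expect plain text instead of JSON
      success: function(pData) {   // called asynchronously with the response
        $s("P17_MIN_SALARY", pData);
      }
    }
  );
});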

Now we can create our AJAX_GET_MIN_SALARY Ajax Callback process. Just right-click on Ajax Callbacks, click “Create” and select PL/SQL. Here we can put our PL/SQL code:

[Image: the AJAX Callback process]

There are two things here that are worth mentioning:

  • TO_CHAR(apex_application.g_x01): this is how we catch the variable that is passed in from our JavaScript code. We use TO_CHAR to make explicit that it is character data.
  • HTP.prn(v_min_salary): here we return the minimum salary back to our page (a sketch follows below).
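Based on these two points, the process body could look something like this sketch (the exact query against the JOBS table is an assumption):

DECLARE
  v_min_salary jobs.min_salary%TYPE;
BEGIN
  SELECT min_salary
    INTO v_min_salary
    FROM jobs
   WHERE job_id = TO_CHAR(apex_application.g_x01);
  HTP.prn(v_min_salary);
END;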

There, all done! Let’s test out our application, shall we? Before you do anything, it’s best to open the developer toolbar in the browser. In Chrome you can do this by pressing Ctrl+Shift+J. It’s good practice to reload the page and check whether any JavaScript errors pop up in the console. If our JavaScript code shows no errors in the console, go to the ‘Network’ tab and select a job in the application.

[Image: selecting a job in the application]

You will now see wwv_flow.show appear. Click it. There are two tabs here that are vital for investigating this function when debugging, if needed. The first is the Headers tab: it shows what data is sent to our AJAX Callback function.

[Image: the Headers tab in the developer toolbar]

The second tab that’s important is the Response tab. It tells us what data is sent back from the PL/SQL process. If you remember our PL/SQL process, you will notice that we did not include an exception handler for when no data is found. Select “null” as job and you will get an error. If you then check the response of the AJAX call, you will see it gives our ORA error.

[Image: the ORA error in the Response tab]
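One way to guard against this, sketched as an extension of the hypothetical process body shown earlier, is a NO_DATA_FOUND handler:

DECLARE
  v_min_salary jobs.min_salary%TYPE;
BEGIN
  SELECT min_salary
    INTO v_min_salary
    FROM jobs
   WHERE job_id = TO_CHAR(apex_application.g_x01);
  HTP.prn(v_min_salary);
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    HTP.prn(0); -- return a sensible default instead of letting the ORA error bubble up
END;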

If you managed to read this far, then you have gained some insight into how you can create your own AJAX function using the new APEX JavaScript API, how it works, and how you can debug it should not everything go as planned.

Migrate your MS Access data to an Oracle database using the ETL Tool Talend

APEX is promoted as the perfect replacement for MS Access applications. One thing you should consider, though, is how you migrate your data to the Oracle database. In APEX there is a handy tool called the Data Workshop that can be used for this. You first export your data from the MS Access database as Excel files, and then follow the data upload wizard to import the data into identical tables. Since you are not always working with a 1-1 relationship, you will most likely have to write some PL/SQL to get all the data into the right tables.

[Image: the APEX Data Workshop]

The downside is that you will need to repeat this process when you go into production. This is not a big problem if you only have one table to migrate, but if you have multiple tables and/or your users also want fresh data during tests and training, you will spend a lot of time exporting and importing Excel files.

A recent APEX project for a client required a large data migration from MS Access databases to the Oracle database. Because we would require fresh data at several points in the development process, we decided to use the open source ETL tool Talend. We were impressed by how intuitive the tool is; it only took a few days before we were familiar with it. Once you get the hang of it, you can write (or should I say draw) table migrations in no time. We needed to migrate from an MS Access database, but the tool supports a wide range of databases and documents to import your data from. In total we migrated around 30-40 tables to our Oracle database.

Let’s have a closer look at one of our migration jobs.

[Image: a Talend job migrating MS Access data to Oracle]

At the left we see our MS Access database. Each tAccessInput component gets data from one table. After that we join the tables in our tMap_1 component. The reason we don’t just write our joins in one component is that this way we can really see how many rows every table returns.

At the bottom we have some Oracle database input connections. They join the persons from our MS Access database with the persons in our Oracle database, based on the national registration number. After that we write our data to our Oracle database. You may notice that we have two lines going to Excel files. This is our error logging; we use it to log the rows that did not find a match. In the first Excel file, for example, we write the persons that did not find a match in our Oracle database.

This is just one example; in total about 20 jobs were built. During development we also had to deal with certain calculations or data conversions. For most things there was a component ready to use, and if there wasn’t, you could always write a Java expression in the tMap items.

I hope I convinced you of the benefits of using Talend as a migration tool for APEX projects, because we will certainly use this tool again!

Connecting to Salesforce and Mailchimp using Talend

A lot of companies use Salesforce to manage their customers and contacts. In addition, Mailchimp can be used for sending out mailings to these connections. Mailchimp also captures information about what people did with these mails, which can be useful information for your CRM. A while ago, I was asked to make a list of everyone who had opened their mails in Mailchimp. Let me show you how easy it is to do something like that with Talend.

In Talend:

  • we can get a list of email addresses from Mailchimp of receivers that opened a mail
  • we can ask Salesforce for the email addresses and names of all our connections
  • and we can use a mapping component to join these lists.

Talend has a standard interface with Salesforce. And Mailchimp offers lots of RESTful web services, which we can make use of in our Talend job.

1. Connecting to Salesforce

Right click “Salesforce” under the Metadata and choose “Create Salesforce Connection”.

[Image: the “Create Salesforce Connection” menu]

After choosing a name for our connection, all we need to fill in is the username and password for our Salesforce connection. The rest is already filled in for us.

[Image: the Salesforce connection properties]

To enable the “Finish” button, we first need to check our properties, using the “Check login” button.

Under Metadata, we can now browse through all our Salesforce data.

[Image: browsing Salesforce data under Metadata]

Now you’re probably wondering how to use this data in your ETL flow. Well... that’s even easier!

Simply drag one of the tables (with the blue icons) into your job and choose the “tSalesforceInput” component from its 3 suggestions.

[Image: dragging a table into the job and choosing tSalesforceInput]

After specifying the necessary mappings you should get something like this:

[Image: the job using Salesforce Contact and Account data]

We’ve used Contact and Account data of Salesforce for this.

In the next part, let’s check out how we generated the list of email addresses.

2. Connecting to Mailchimp

Accessing your Mailchimp data is a bit harder. We need two components from the Talend palette:

The ‘tRest’ component, because we need to use a RESTful web service to request our data from Mailchimp, and the ‘tExtractJSONFields’ component to interpret the data we receive back.

After dragging the tRest component into your job, choose ‘POST’ as the ‘method’ and fill in the URL corresponding to the report you wish to receive.

[Image: the tRest component settings]

If you want to receive your report in XML format instead of JSON, just add “.xml” to the end of the URL.

Here we needed the Mailchimp report that gives us information on opened emails.

If you are interested in other kinds of reports, you can find the list here:

http://apidocs.mailchimp.com/api/2.0/#lists-methods

Every request needs certain parameters. We can specify them in the HTTP body field, like this:

“{\”apikey\”: \”your api key will be here\”,\”cid\”: \”put a campaign id here\”}”

The API key will always be needed as the first parameter. You can find it in Mailchimp under ‘Account Settings’ – ‘Extras’.

[Image: finding the API key under Account Settings – Extras]

The second component we need is called ‘tExtractJSONFields’. After dragging it into our job, we link our first component to it.

[Image: tRest linked to tExtractJSONFields]

We can use ‘Edit schema’ to define the data we want to extract.

[Image: the Edit schema dialog]

Finally, all we need to do is specify the location of the data we are interested in, for example the ‘email’ field inside the ‘member’ field.

[Image: specifying the JSON field locations]

Now that we’re able to access our data from Mailchimp, let’s take a look at how we used it to generate the list of email addresses.

First we asked Mailchimp for all our campaigns; then we used the ‘tFlowToIterate’ component so we could ask Mailchimp for the email addresses once for every campaign in the list:

[Image: the job iterating over campaigns with tFlowToIterate]

Finally, all we had to do was put these two jobs together and press ‘run’.

So... I hope you’ll enjoy it as much as I did!

Accessing SSL encrypted websites using UTL_HTTP and ORAPKI command line utility

Introduction
In an earlier post, I explained the purpose and usage of the Oracle Wallet Manager. This explanation assumed that the user has the ability to use the graphical user interface to execute the OWM program included with each DBMS installation.

However, sometimes there is no graphical user interface available on the server, and the user is limited to SSH access.
Additionally, the disadvantage of using a user interface tool is that it is not scriptable and re-runnable on other servers/environments.

For just that reason, Oracle also provides a command line utility to perform the same tasks, called ORAPKI.
This post will show you how to perform the same tasks as we did in the previous post, using only the command line.

Step 1: creating a wallet:
The base command to create a new, empty wallet is:

orapki wallet create -wallet <wallet name or path>

The name of the wallet will be used as a folder within your home folder by default. If you prefer to use a specific folder, the full path to the folder can be used as wallet name as well. Make sure the Oracle user has permission to write to this folder though.

When any command on a wallet is executed, a prompt will be given to enter the wallet password. If the command is to be used in a script, it is better to include the password right away in the command. This can be done by appending -pwd <password> to any command.

orapki wallet create -wallet testwallet -pwd test1234

Step 2: display contents
To show the contents of any wallet, use the display command.

You will see that a number of trusted certificates are included in your wallet after it has been created, just like when it was created through the GUI.

[oracle@myorcl12c ~]$ orapki wallet display -wallet testwallet -pwd test1234

Oracle PKI Tool : Version 12.1.0.1
Copyright (c) 2004, 2012, Oracle and/or its affiliates. All rights reserved.
Requested Certificates:
User Certificates:
Trusted Certificates:
Subject:        OU=Class 2 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        OU=Class 3 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE Corporation,C=US
Subject:        OU=Class 1 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US

Step 3: import certificates
Now we will import previously downloaded certificates into our wallet (refer to the previous post for details on how to obtain such files).

First, I have stored 3 certificate files on the server:
/home/oracle/Certificates/BuiltinObjectToken:EquifaxSecureCA
/home/oracle/Certificates/GoogleInternetAuthority
/home/oracle/Certificates/*.google.be

The following commands then do the import:

[oracle@myorcl12c ~]$ orapki wallet add -wallet testwallet -trusted_cert -cert /home/oracle/Certificates/BuiltinObjectToken:EquifaxSecureCA -pwd test1234
[oracle@myorcl12c ~]$ orapki wallet add -wallet testwallet -trusted_cert -cert /home/oracle/Certificates/GoogleInternetAuthority -pwd test1234
[oracle@myorcl12c ~]$ orapki wallet add -wallet testwallet -trusted_cert -cert /home/oracle/Certificates/*.google.be -pwd test1234
[oracle@myorcl12c ~]$ orapki wallet display -wallet testwallet -pwd test1234

Oracle PKI Tool : Version 12.1.0.1
Copyright (c) 2004, 2012, Oracle and/or its affiliates. All rights reserved.
Requested Certificates:
User Certificates:
Trusted Certificates:
Subject:        CN=Google Internet Authority,O=Google Inc,C=US
Subject:        OU=Class 3 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE Corporation,C=US
Subject:        CN=*.google.be,O=Google Inc,L=Mountain View,ST=California,C=US
Subject:        OU=Equifax Secure Certificate Authority,O=Equifax,C=US
Subject:        OU=Class 2 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        OU=Class 1 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US

When adding the wallet reference to your PL/SQL code, the folder /home/oracle/testwallet should be used for the example above.

E.g.:

[oracle@myorcl12c ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Tue Aug 27 14:11:43 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
2    lo_req  UTL_HTTP.req;
3    lo_resp UTL_HTTP.resp;
4  BEGIN
5    UTL_HTTP.SET_WALLET ('file:/home/oracle/testwallet/','test1234');
6    lo_req := UTL_HTTP.begin_request('https://www.google.com');
7    lo_resp := UTL_HTTP.get_response(lo_req);
8    dbms_output.put_line(lo_resp.status_code);
9    UTL_HTTP.end_response(lo_resp);
10  END;
11  /

200
PL/SQL procedure successfully completed.
SQL>

Step 4: clear wallet
As a final step, use the following command to clear the wallet:

orapki wallet remove -wallet testwallet -trusted_cert_all -pwd test1234

Another 5 neat 12c features for Oracle developers

In this post I will put 5 other new 12c features in the spotlight (in addition to the features of a previous post) that really make 12c an improvement over previous versions of the Oracle database.

To get this result, I listed all the major new features and picked my top 5 features that would make my life easier as a developer when doing development on an Oracle database, excluding the features from the previous post (otherwise I would certainly have added the sequence modification, feature 1).

  1. Top-N queries -> I really like this feature and I’m still wondering why it took Oracle so long to create it. It is something I could have used a lot in the past; instead I had to write far too complicated, much less readable queries to achieve the same. How does it work? Well, it’s very easy, it’s readable, and it can be used in a wide variety of cases. Some examples:
    Only get the first 3 rows:
    select * from X order by id
    fetch first 3 rows only;

    Skip the first 3 rows and get the next 3 rows:

    select * from X order by id
    offset 3 rows fetch next 3 rows only;

    Get the first 50% of records

    select * from X order by id
    fetch first 50 percent rows only;

    Get the first 3 rows together with the records equal to these department id’s

    select * from emp order by deptno
    fetch first 3 rows with ties;

    Note that there is no ‘fetch last’ clause; if you want the last rows instead, simply reverse the sort order in the order by clause.

  2. In the 12c database, a VARCHAR2 in SQL can now hold up to 32767 characters instead of the old maximum of 4000 (the same goes for RAW and NVARCHAR2).
    We have all been waiting a long time for this one; before, we had to fall back on the CLOB datatype.
    But beware: this is not an out-of-the-box feature, you will have to execute the lines below before it is enabled:
    shutdown immediate
    startup upgrade
    alter system set max_string_size=EXTENDED scope=both;
    @<ORACLE_HOME>/rdbms/admin/utl32k.sql
    shutdown immediate
    startup

    More info can be found on: http://docs.oracle.com/cd/E16655_01/server.121/e17615/refrn10321.htm
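    Once enabled, an extended column can be declared directly. A small illustrative example (table and column names are mine):

    create table t_notes (
      id    number,
      note  varchar2(32767)  -- would raise ORA-00910 on a database with MAX_STRING_SIZE=STANDARD
    );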

  3. The invisible column is a feature I was initially wondering about: what would I use it for?
    Well, it can be handy when you are adding a column to a table, but you don’t want any existing code to be impacted by it.
    Another case where it can be useful is audit columns. Columns such as creation_dt, update_dt, user_creation and user_update are only of added value when you actually want to audit a certain column.
    Packages with inserts, updates or references to this table will not be impacted by the creation of this column.
    On the other hand, there is also a risk that you forget this column is there, because you have to explicitly ask for it (a describe or select * will not show it). You can create invisible columns like this:

    ALTER TABLE <table_name>
    ADD <column_name> <datatype> INVISIBLE;

    If you want to make the column visible again, use this:

    ALTER TABLE <table_name>
    MODIFY <column_name> VISIBLE;

    In summary it could be handy, but don’t forget this column or it will pollute your table.

  4.  The WITH clause inline PL/SQL feature is also something very welcome.
    It makes it possible to create a procedure or function inside your select statement, instead of having to create it in a package or as a standalone function. Oracle also says that this will perform better than calling a schema procedure/function (I still have to test this).
    A little example (note that the RETURN clause is mandatory):

    WITH
    FUNCTION fnc$_add_one(p_num IN NUMBER) RETURN NUMBER IS
    BEGIN
      RETURN p_num + 1;
    END;
    SELECT fnc$_add_one(1)
    FROM dual;
  5.  Most of the time I use the ANSI syntax for a left outer join, but the Oracle way of writing left outer joins is still often used by many Oracle developers.
    But there was one thing that you could do in ANSI that you couldn’t do the Oracle way: you couldn’t write multiple tables on the left of an outer join, until 12c…
    In 11g and before, coding something like this:
    select *
    from a,b,c
    where a.id = b.id
    and a.id = c.id(+)
    and b.id = c.id2(+);

    This resulted in -> ORA-01417: a table may be outer joined to at most one other table

    In 12c this will work. The ANSI solution obviously still works too, on both 12c and 11g:

    select *
    from a
    JOIN b ON (a.id = b.id)
    LEFT OUTER JOIN c ON (a.id = c.id AND b.id = c.id2);

Together with the previous post this makes 10 reasons why you should start to use the Oracle 12c database :-)

5 neat little features of the 12C database to remember

In this post, I’d like to introduce 5 of the many new features Oracle 12C brings to us, database developers.
Of course this blog would be too long if I explained them all in detail, so I will stick to a small introduction.

  1. Generating a primary key without triggers, using nextval or identity
    In 12C, you are now able to use sequence.nextval or the new keyword ‘identity’ as a default value.
    With the ‘identity’ keyword, Oracle generates the key values for you from a system-managed sequence. So you no longer need to create triggers when generating PKs with a sequence.
    And problems with sequences that are out of sync when moving/copying tables to another schema/database can be avoided by using the ‘identity’ keyword.
    Example PK row declarations:
    id_pers         number default person_seq.nextval primary key;
    id_pers         number generated as identity;
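    As a complete statement, a sketch of such a table (names are illustrative) could be:

    create table persons (
      id_pers  number generated as identity primary key,
      name     varchar2(100)
    );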
  2. Accessible keyword: define which code can call your function/procedure.
    One of the major problems of PL/SQL is that, when developing a lot of packages/procedures/functions, in the end there is no telling who is called by whom. This problem can now be addressed by ‘white listing’. This means that, on creation, you tell the package/function/procedure/type by whom it is accessible, or may be used.
    The accessible by clause takes packages/functions/procedures/triggers as accessor clause.
    Example white listing:
    – create procedure get_sales_data accessible by (my_sales_proc) …
    – create procedure get_sales_data accessible by (my_after_update_trigger) …
    – create package my_package accessible by (my_other_package) …
    When the object is not accessible, the following error is thrown during compilation, or at runtime in case of an anonymous PL/SQL block:
    PLS-00904: insufficient privilege to access object MY_PACKAGE.MY_PROCEDURE
  3. Temporal Validity of a row
    Sometimes rows in a table are valid or not, depending on a timeframe. For instance, a subscription to a magazine may only be valid for a year. Adding this validity to a row goes as follows:

     create table subscriptions
     ( person_id          number,
       subscription_id    number,
       person_name        varchar2(500),
       subscr_start_date  date,
       subscr_end_date    date,
       period for valid(subscr_start_date, subscr_end_date)
     );

    Now with the following query we can select the ‘valid’ subscriptions:

     select * from subscriptions
     as of period for valid sysdate;
  4. New PL/SQL package UTL_CALL_STACK
    The UTL_CALL_STACK package provides subprograms that return the current call stack of a PL/SQL program. This could already be done with DBMS_UTILITY.FORMAT_CALL_STACK, but the new package returns this information in a more structured way, and includes the depth of the call (calling level) and the names of the subprograms.
    This makes the information more usable in code. Related to this subject, 2 new directives were added in 12c (next to $$PLSQL_LINE and $$PLSQL_UNIT, which already existed):
    – $$PLSQL_OWNER
    – $$PLSQL_TYPE

       dbms_output.put_line('Owner of this package is ' || $$PLSQL_OWNER);

    Will print: Owner of this package is SCOTT
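    As a quick sketch of the package itself (my own minimal example, not from the original post), this anonymous block walks and prints the current call stack:

       declare
         v_depth pls_integer := utl_call_stack.dynamic_depth;  -- number of frames on the stack
       begin
         for i in 1 .. v_depth loop
           dbms_output.put_line(
             'depth ' || i || ': ' ||
             utl_call_stack.concatenate_subprogram(utl_call_stack.subprogram(i)) ||
             ' (line ' || utl_call_stack.unit_line(i) || ')');
         end loop;
       end;
       /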

  5. An Invoker’s Rights Function Can Be Result Cached
    Caching the results of a PL/SQL function already exists in 11g. Basically what happens is that, for a certain function, you define that the result for given parameters should be cached in memory.
    So the first time the function getPerson(123) is executed, the data is fetched from the database; the second time the function is called with parameter ‘123’, the result is fetched from the cache in memory, resulting in better performance.
    Whenever a DML statement is executed on the table(s) used in that function, the cache is automatically cleared, causing the next call to return the new data. (Since 11g Release 2, Oracle manages these dependencies itself.)
    So in our case, Oracle caches the results of the function getPerson() for every key it is called with.
    Through Oracle Database 11g Release 2 (11.2), only definer’s rights PL/SQL functions could be result cached. Now in 12c, invoker’s rights functions can be result cached too: the identity of the invoker is implicitly added to the cache key.
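    A minimal sketch of such a function (table and column names are mine, not from the original post):

       create or replace function get_person_name(p_id number)
         return varchar2
         authid current_user   -- invoker's rights: allowed with result_cache as of 12c
         result_cache
       is
         v_name persons.name%type;
       begin
         select name into v_name from persons where id_pers = p_id;
         return v_name;
       end;
       /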

As already mentioned, the possibilities of these new features go way beyond what I describe here. But hopefully it’s a start for a few experiments on your side!

More info can be found at http://docs.oracle.com/cd/E16655_01/server.121/e17906/chapter1.htm