Java 8 Lambda : the very basics

Functional interface

When talking about Java 8, lambda expressions are usually the first thing that comes up. I found them a hard concept to grasp, as they slightly change the way you think about coding. In this blog, I would like to show you some very simple examples and explain what is what in the lambda world.

In versions prior to Java 8, we already had SAM interfaces: interfaces with a Single Abstract Method. In Java 8, this concept has been formalised under the name “functional interfaces”. These interfaces can be implemented with lambda expressions. Here is the definition of the interface we will be using in the examples.

@FunctionalInterface
interface MyCalculator {
	public double calculate(double val1, double val2);
}

The @FunctionalInterface annotation makes the compiler generate an error when the interface is not a functional interface, i.e. an interface with exactly one abstract method. The annotation is not required, but it improves the readability of your code.

Using the lambda

When the functional interface is created, you can declare a variable to be assigned with a lambda-expression.

MyCalculator multiply = (value1, value2) -> value1 * value2;
MyCalculator sum = (value1, value2) -> value1 + value2;
System.out.println("multiply = " + multiply.calculate(5,6));
System.out.println("sum = " + sum.calculate(5,6));

Generates following output:
multiply = 30.0
sum = 11.0

So if you are wondering what is what in a lambda expression :
Before the arrow = the parameters of the abstract method
After the arrow = the code to be executed by the abstract method of the functional interface

If we would do this the ‘old-school’ way, our code would look like this :

MyCalculator multiply = new MyCalculator() {
	public double calculate(double val1, double val2) {
		return val1 * val2;
	}
};
MyCalculator sum = new MyCalculator() {
	public double calculate(double val1, double val2) {
		return val1 + val2;
	}
};
System.out.println("multiply = "+multiply.calculate(5,6));
System.out.println("sum = "+sum.calculate(5,6));

The advantage of lambda is pretty clear : less code, better readability.

If you want to define more than 1 line of code, write it like this :

MyCalculator calc = (value1, value2) -> {
	System.out.println("Before calculating");
	double result = value1 * value2 * 3;
	System.out.println("After calculating");
	return result;
};
System.out.println(calc.calculate(3, 4));

Generates the following output:
Before calculating
After calculating
36.0

Use of non-local variables in lambda expressions

In versions prior to Java 8, the following code :

Calendar cal = Calendar.getInstance();
MyCalculator sum = new MyCalculator() {
	public double calculate(double val1, double val2) {
		cal.setFirstDayOfWeek(Calendar.SUNDAY);
		return val1 + val2;
	}
};

would generate an error :
“Cannot refer to the non-final local variable cal defined in an enclosing Scope”
on statement
cal.setFirstDayOfWeek(Calendar.SUNDAY);

because the variable “cal” was not defined as final.

In Java 8, local variables used in lambda expressions and anonymous inner class methods no longer have to be declared final: it is enough that they are “effectively final”, i.e. never reassigned after initialisation.
So in Java 8, this code will actually compile. Even better: we can now write the previous code as a lambda expression :

Calendar cal = Calendar.getInstance();
MyCalculator sum2 = (value1, value2) -> {
	cal.setFirstDayOfWeek(Calendar.SUNDAY);
	return value1 + value2;
};

Streams API and lambdas

New in Java 8 is the Stream API (java.util.stream). According to Oracle’s documentation : “A stream is not a data structure that stores elements; instead, it transports elements from a source (data structure, array, generator function, or an I/O channel), through a pipeline of computational operations.”

This is where lambdas make their entrance: a pipeline is a sequence of operations, most of which take a lambda expression, that can process or interrogate every element in the stream. In short : by using streams and lambdas, we can execute a whole bunch of operations on a Collection in a single statement.

Suppose we want to print out every even number from a list of Integers.

List<Integer> list = Arrays.asList(1,2,3,4,5,6,7,8,9,10);
List<Integer> evenList = list.stream().filter(value -> value%2==0).collect(Collectors.toList());
evenList.stream().forEach(value -> System.out.println(value));

On the second line, the filter method keeps only the values matching the predicate, which is passed in as a lambda expression; the collect method then gathers the result into a new list.
On the third line, we use a stream to print out every value of our new list, containing the even values.

If you had to write the above code in Java 7, you would need more code, which means more possible flaws and less readability.
But there is more :

Map<Person.Sex, List<Person>> byGender = roster.stream().collect(Collectors.groupingBy(Person::getGender));

In one line of code we turn a list of Person objects (roster) into a map with the gender as key and the list of matching persons as value.
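
The roster collection and the Person class themselves are not shown in the snippet above; a minimal sketch of what they are assumed to look like (only the parts the groupingBy call needs) could be:

public class Person {

	public enum Sex { MALE, FEMALE }

	private final String name;
	private final Sex gender;

	public Person(String name, Sex gender) {
		this.name = name;
		this.gender = gender;
	}

	// Used above as the method reference Person::getGender, which is
	// shorthand for the lambda (Person p) -> p.getGender().
	public Sex getGender() {
		return gender;
	}
}

List<Person> roster = Arrays.asList(
	new Person("An", Person.Sex.FEMALE),
	new Person("Bert", Person.Sex.MALE));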

Read the Javadoc of java.util.stream and you'll discover a whole new way of writing code, including parallel processing of your streams. Also check out the Collectors class, where you'll find methods to convert your collections into other sorts of collections, without any for loops.
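
As a small illustration of both (just a sketch, reusing the numbers from the example above): a parallel stream combined with the joining collector turns the even values into a single String, without a single loop. It assumes the usual imports (java.util.Arrays, java.util.List, java.util.stream.Collectors).

List<Integer> numbers = Arrays.asList(1,2,3,4,5,6,7,8,9,10);
String evens = numbers.parallelStream()          // process the elements in parallel
		.filter(value -> value % 2 == 0)         // keep only the even values
		.map(String::valueOf)                    // convert each Integer to a String
		.collect(Collectors.joining(", "));      // join them into one String
System.out.println(evens);                       // prints: 2, 4, 6, 8, 10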

The better you understand lambdas and streams, the more you'll understand why they are such a hot topic in the Java world, and the more your code will improve!

Functional programming : a brief introduction to Scala

In this blogpost I'll try to explain on a very basic level what functional programming (FP) is about. Look at it as an introduction to the amazing world of FP. FP has been around for quite a few decades, but it was mostly used in the academic world and in specialized industries. Since the arrival of Scala (http://www.scala-lang.org), FP has become more and more mainstream.

So what is functional programming?

(from Wikipedia http://en.wikipedia.org/wiki/Functional_programming) In computer science, functional programming is a programming paradigm, a style of building the structure and elements of computer programs, that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions. In functional code, the output value of a function depends only on the arguments that are input to the function, so calling a function f twice with the same value for an argument x will produce the same result f(x) each time. Eliminating side effects, i.e. changes in state that do not depend on the function inputs, can make it much easier to understand and predict the behavior of a program, which is one of the key motivations for the development of functional programming.

Installing Scala

I'm using the Scala Read-Evaluate-Print-Loop (REPL) to show you the examples. It's basically a Scala command-line interpreter. Just get the latest version of Scala and install it. To get started with the REPL, open a command prompt or terminal session, go to the bin directory of your Scala installation and enter the command scala (or ./scala). You're ready to go!

[Screenshot: the Scala command-line REPL]

Scala

Scala is a programming language that mixes the paradigms of FP and OO. Scala uses its own compiler (scalac) to compile your code to byte code. Probably the biggest advantage of Scala is that it runs on your standard Java infrastructure. It is even capable of calling your Java code and vice versa, although you need to take special care when integrating both languages.

It's functional

As the name already states, functional programming allows the developer to write code on a functional level. In other words, you don't have to translate your functional solution to a lower, technical level. If you want the outcome of "all even numbers between 1 and 10, each multiplied by 2", you write in Scala:

(1 to 10).toList.filter(_ % 2 == 0).map(_ * 2)

While in java it looks more like this

List<Integer> input = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
List<Integer> results = new ArrayList<>();
for (int val : input) {
  if (val % 2 == 0) {
    results.add(val * 2);
  }
}
return results;

Looking at the Scala code you will notice a few things.

  • The code is expressive and readable. No clutter, no unnecessary boilerplate code.
  • There is no assignment of variables (stateless).
  • The functions (_ % 2 == 0) and (_ * 2) are passed as arguments to methods.
    • The underscore (_) is a placeholder; in this case it represents each element of the list.

Expressive

Scala, like all functional languages, is an expressive language. But what does that mean? First of all, it makes your code much more readable: you almost program what you say. A friend of mine, who works in the travel business, understood a Scala code snippet while the Java counterpart was complete gibberish to her. Is this important? Yes, if you know that code is read ten times more often than it is written.
Secondly, expressions return values, which can lead to less code.

Boilerplate code be gone!

The guys from Scala did their best to make the code as clean as possible. Every character that is not strictly needed to run your code can be left out: the parentheses ( ) for zero-argument method calls, semicolons ';', dots '.', and so on. The Scala compiler is quite intelligent and does all the hard work of turning this dense code into full-blown byte code.

Values and variables

var myName = "Ief"
myName: String = Ief

As you can see, it is not mandatory to declare a type. Still, Scala is statically typed: the compiler infers that myName is a String, so in most cases you don't have to write the type yourself. Confused?

val myName = "Ief"

is the same as

val myName : String = "Ief"

Once the type of a variable or value is known, you don't need to repeat it. val declares a value: a final variable named 'myName' of the type String with only an accessor method (getter).

var myName = "Ief"

declares a variable named 'myName' of the type String with both an accessor method (getter) and a mutator method (setter).

Classes

A class Person with a variable name and a value surname would look like this in Scala

 class Person(var name : String, val surname : String) 

While the java counterpart would look like this

public class Person {
  private String name;
  private final String surname;

  public Person(final String name, final String surname) {
    this.name = name;
    this.surname = surname;
  }

  public String getName() {
    return name;
  }

  public void setName(final String name) {
    this.name = name;
  }

  public String getSurname() {
    return surname;
  }
}

You can instantiate the class using the following code

val me = new Person("Ief", "Cuynen")
me: Person = Person@4d826d77

me.name
res7: String = Ief

me.name = "Jef"
me.name: String = Jef

me.surname
res8: String = Cuynen

me.surname = "Peeters"
res9: error: reassignment to val

You probably noticed that you can skip the ( ) for zero-argument methods. Scala is packed with syntactic sugar. A few more examples:

Regular expressions can be created just by calling the r method on a String

val regularExpression = "[0-9][a-zA-Z]".r

A Range of numbers from 1 to 10

1 to 10
res11: scala.collection.immutable.Range.Inclusive = Range(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

Functions

Functions are defined using the def keyword. Every function returns the value of its last expression, even without an explicit return statement. It is even possible to return another function as the return value.

def addOne(x: Int): Int = x + 1
addOne: (x: Int)Int

addOne(2)
res12: Int = 3

In Scala it is also possible to return multiple values in one go by using tuples. Tuples are a way to group values of different types into a single container without creating an explicit class.

def printStuff(x : Int): (Int, String)  = (x, "Hello number " + x)
printStuff: (x: Int)(Int, String)

printStuff(1)
res13: (Int, String) = (1,Hello number 1)

Functions can be used as arguments for other functions. This way it is possible to add behaviour.

def isEven(x : Int): Boolean =  x % 2 == 0
def isUneven(x : Int): Boolean = x % 2 == 1

(1 to 10).filter(isEven(_))
res20: scala.collection.immutable.IndexedSeq[Int] = Vector(2, 4, 6, 8, 10)

(1 to 10).filter(isUneven(_))
res21: scala.collection.immutable.IndexedSeq[Int] = Vector(1, 3, 5, 7, 9)

Using functions as arguments greatly reduces the need for subclassing. Normally a subclass is used to 'bind' different behaviour to the same interface. By passing functions as arguments, the behaviour is injected into the interface.

Functions can even be combined into new functions; functions that take other functions as arguments or return functions are called higher-order functions.
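
A minimal sketch (compose and addOneThenDouble are names made up for this example; Scala's own Function1 also has built-in compose and andThen methods):

def compose(f: Int => Int, g: Int => Int): Int => Int = x => f(g(x))
compose: (f: Int => Int, g: Int => Int)Int => Int

val addOneThenDouble = compose(_ * 2, _ + 1)
addOneThenDouble: Int => Int = <function1>

addOneThenDouble(3)
res24: Int = 8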

Option

Using null values in FP and Scala is considered very bad practice. When you think about it, null values are pure evil in any programming language: they cause unwanted side effects and you always need to check for them to prevent NullPointerExceptions. In Scala you can use Option instead, which is a decent way to solve the null evilness.

scala> def convertToInt(x: String): Option[Int] = {
     |    try {
     |        Some(Integer.parseInt(x))
     |     } catch {
     |        case e: Exception => None
     |    }
     | }
convertToInt: (x: String)Option[Int]

This function parses a String to an Int. If the String can be parsed successfully, the Int value is returned wrapped in a Some; if an exception is thrown, None is returned.

scala> convertToInt("1")
res22: Option[Int] = Some(1)
scala> convertToInt("abc")
res23: Option[Int] = None

To get the actual value, the getOrElse function can be called on the Option.

scala> val x = convertToInt("1").getOrElse(0)
x: Int = 1
scala> val x = convertToInt("abc").getOrElse(0)
x: Int = 0

Another way to get the value of the Option is to use a matcher.

scala> convertToInt("1") match {
     |     case Some(x) => println(x)
     |     case None => println("That String was not really an Int was it?")
     | }
1

scala> convertToInt("abc") match {
     |     case Some(x) => println(x)
     |     case None => println("That String was not really an Int was it?")
     | }
That String was not really an Int was it?

Pattern matching

You already saw above what you can do with a matcher. Look at it as the Java switch on steroids: it can match just about anything.

scala> def matchThis(x : Any) = x match {
     |     case 1 => println(1)
     |     case s : String => println("Match String " + s)
     |     case _ => println("whatever")
     | }
matchThis: (x: Any)Unit

scala> matchThis(1)
1

scala> matchThis("abc")
Match String abc

scala> matchThis(true)
whatever

Caveats

One of the caveats is that the Java compiler does not compile Scala and the Scala compiler does not compile Java, so two-way dependencies between Java and Scala code will probably lead to a big headache.

5 Minute JavaScript #19: Polyfills

The past few weeks were dedicated to the useful array methods. However, these methods were introduced in ECMAScript 5 and are therefore not available in older browsers such as IE8, and sometimes we simply have to support those legacy browsers.

In JavaScript, we use polyfills to provide behaviour that is not implemented in the browser. Polyfills can add almost every kind of missing behaviour to our application, and a lot of websites provide ready-made polyfills.

Basically what happens is that we first do a feature detection. Is the feature available in the browser? For example, does Array.prototype.map exist? Yes? That’s great! It means the feature has been implemented and is probably faster than every polyfill that we could come up with. Is it not available? Too bad, but we can create the Array.prototype.map method ourselves.
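
A simplified sketch of that feature-detection pattern (the real MDN polyfill does a lot more argument checking, so treat this purely as an illustration):

if (!Array.prototype.map) {
  // Only define our own map when the browser doesn't provide one.
  Array.prototype.map = function (callback, thisArg) {
    var result = [];
    for (var i = 0; i < this.length; i++) {
      if (i in this) { // skip holes in sparse arrays
        result[i] = callback.call(thisArg, this[i], i, this);
      }
    }
    return result;
  };
}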

The Mozilla Developer Network has some of the most robust polyfills, which are also fast in use. It also has excellent documentation about the array methods. Here's the page for the map functionality.

Using a polyfill doesn’t change the way the method is being called. If your browser does not support the map-method and you use a polyfill, you will still use it like this: [1, 2, 3].map(…); This is useful, because you don’t have to change syntax for different browsers.

Another solution is to use a framework that has implemented these functionalities. One of my favourites is underscore. This framework adds more useful methods than just the array methods to your function toolbelt.


JEE : Using EJB and Context annotations in a JAX-RS Provider class

A lot of new annotations have been introduced since the JEE6 spec. Before, we had EJBs and servlets to cover most of the server-side objects in a JEE application. In JEE6, CDI and JAX-RS were added, along with a few other JSRs that are implemented using annotations. This results in a long list of annotations where – in my opinion – it is not always clear which one to use, nor how they work together.

In this blog, I would like to show you how EJB and CDI work or don’t work together, in combination with a JAX-RS Resource.

JAX-RS is used for writing REST-services. It allows you to create these services, simply by annotating a Java class.

This is our REST service class, aka “resource class”. It exposes a REST service on the /test URL and returns a String. Here, for test purposes, we always throw a RuntimeException.


@Path("/test")
public class TestResource {

	@GET
	public String getCustomer() {
		if (1 == 1) throw new RuntimeException("Whoops, something goes wrong");

		return "Customer ABC";
	}
}

Because we don’t want to send a stack trace to the client, but a meaningful response in the form of an Error class, we use a provider class and annotate it with @Provider (javax.ws.rs.ext.Provider). This annotation is part of the JAX-RS spec.

Annotating a class with @Provider – according to the API-documentation – “marks an implementation of an extension interface that should be discoverable by JAX-RS runtime during a provider scanning phase.”

Our provider class will catch any exception thrown by the resource class, and send a proper response object to the client instead.

The Error class, which will be marshalled to an XML object and returned to the client :

@XmlRootElement
public class Error {

	private String desc;

	public Error() {
		super();
	}

	public Error(String desc) {
		super();
		this.desc = desc;
	}

	public String getDesc() {
		return desc;
	}

	public void setDesc(String desc) {
		this.desc = desc;
	}
}

The provider class that will catch any exception and return the error class looks like :


@Provider
public class GenericExceptionMapper implements ExceptionMapper<Exception> {

	@Context HttpServletRequest httpRequest;

	public Response toResponse(Exception ex) {
		return Response.status(500)
			.entity(new Error("Error during call with method : " + httpRequest.getMethod()))
			.build();
	}
}

As you can see, we use the @Context annotation to access the current HTTP request. When an exception is thrown by the resource class, it is automatically handled by the toResponse method of our provider class, which returns a Response containing the Error object. The JAX-RS framework then sends it to the client in XML format.

Now suppose we have an EJB, ErrorEJB, that we want to use to log the error to a database before sending the response. We add it to the provider class as follows :


@Stateless
@Provider
public class GenericExceptionMapper implements ExceptionMapper<Exception> {

	@Context HttpServletRequest httpRequest;

	@EJB ErrorEJB errorEJB;

	public Response toResponse(Exception ex) {
		// log the error to the database
		errorEJB.logError(ex);
		// return the response
		return Response.status(500)
			.entity(new Error("Error during call with method : " + httpRequest.getMethod()))
			.build();
	}
}

We made the GenericExceptionMapper a stateless bean using @Stateless, in order to be able to inject the ErrorEJB into the provider.
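
The ErrorEJB itself is not shown in this post; a minimal sketch of what such a logging bean could look like, here using JPA (ErrorLogEntry is a hypothetical entity, purely for illustration):

@Stateless
public class ErrorEJB {

	@PersistenceContext
	private EntityManager em;

	public void logError(Exception ex) {
		// Persist a simple record of the exception; ErrorLogEntry is a made-up JPA entity.
		em.persist(new ErrorLogEntry(ex.getClass().getName(), ex.getMessage()));
	}
}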

But now we get a NullPointerException on httpRequest.getMethod(), as the @Context injection no longer works.

This is because, before our update, we only used the @Provider annotation, so this class was a CDI bean. With CDI, the container looks up injected instances in a “scope”, which is basically a map that exists for a specific period of time :

  • @RequestScoped : per request
  • @SessionScoped : per HTTP session
  • @ApplicationScoped : per application

Into that environment we can inject the HttpServletRequest using the @Context annotation.

By adding the @Stateless annotation, we turned our CDI bean into an EJB.

With EJBs, the container also looks into a map, but checks whether the bean is of type @Stateful, @Stateless or @Singleton. So it isn't aware of any HTTP context.

In order to solve this problem, we will inject our ErrorEJB as a CDI bean instead of as an EJB.

This can be done as follows :

  • replace @Stateless with @RequestScoped
  • replace @EJB with @Inject
  • add an empty beans.xml file to WEB-INF

Which gives us :


@RequestScoped
@Provider
public class GenericExceptionMapper implements ExceptionMapper<Exception> {

	@Context HttpServletRequest httpRequest;

	@Inject ErrorEJB errorEJB;

	public Response toResponse(Exception ex) {
		errorEJB.logError(ex);
		return Response.status(500)
			.entity(new Error("Error during call with method : " + httpRequest.getMethod()))
			.build();
	}
}

Now for JEE7, there are even more annotations available, so make sure to check the docs before using them.

5 Minute JavaScript #18: reduce

The past weeks we dived into the wonderful world of array methods. We already discussed forEach, filter, some and every, and map. Today we'll take a look at the reduce method. While extremely useful, its concept can be hard to grasp.

Also known as fold (in this case foldLeft) in other functional programming languages, reduce can be used to combine all elements of an array into one single return value. That return value can be anything. It could be an array (in which case reduce acts as a filter/map), but it can also be an object, string, number, boolean… Everything is possible.

A simple example is using the reduce method to calculate a sum. Here's how we would write it the classic, imperative way.

var numbers = [1, 2, 3];
var sum = 0;
for (var i = 0; i < numbers.length; i++) {
	sum += numbers[i];
}

We can rewrite this code as follows using the reduce method:

var sum = numbers.reduce(function (prev, cur) { return prev + cur; });

The reduce callback function takes two important parameters (prev and cur in this case). First of all we have the previous value: the value that was returned by the previous invocation of the callback. The cur parameter is the current item in the list.

This sum example helps you get your head around the functionality, but it's not that useful in real applications. The real strength of reduce lies in the fact that you can press a lot of elements together into one single return value. A more realistic use case might be this:

var user = { id: 'UUID', version: 0 };
var users = [/* list of users */];
var idVersionMap = users.reduce(function (map, user){ 
    map[user.id] = user.version; 
    return map; 
}, {});

In our blogpost about the map method we used map to create a new list of objects (with id and version). Here, we use the reduce method to create a single object that acts as a map with key = id and value = version. The empty {} that we pass as the second argument to reduce is the initial value of the accumulator.

Introduction to Oracle Database 12c

With the launch of DB 12c in 2013, Oracle introduced a new architectural concept called “multitenant databases”, where you have one super database (the container database, or CDB) and one or more sub databases (the pluggable databases, or PDBs).

Before running the installer on an Oracle Linux 6 environment, you can install a package through yum to meet all the system prerequisites:

yum install oracle-rdbms-server-12cR1-preinstall

The software installer and DBCA are similar to 11g, except for this screen where you can pre-configure your CDB and PDBs:

[Screenshot db12c01: DBCA screen to pre-configure the CDB and PDBs]

This DBCA execution will not only create a CDB and 1 PDB, but a “seed pluggable database” as well.  You can use this seed database as a template to create other pluggable databases.

[Screenshot db12c02]
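
For example, once the CDB is running you can clone an extra PDB from the seed with a single statement (the names are illustrative, and this assumes Oracle Managed Files or a suitable file_name_convert setting for the datafiles):

SQL> create pluggable database apexdev_pdb2 admin user pdb_admin identified by secret;

Pluggable database created.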

By default, after running the DBCA, all CDBs and PDBs are up and running.

Now, we will reboot the host machine and try to start all of our components.

Is there a difference when starting the listener and the CDB?

No!  You can start the listener and your CDB in exactly the same way as you did with your pre-12c database.

How can you connect to the CDB?

Very simple: just the same as in the past with pre-12c databases.

$ sqlplus system@apexdev

SQL*Plus: Release 12.1.0.1.0 Production on Thu Mar 19 22:53:55 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter password:

Last Successful login time: Mon Mar 16 2015 22:20:50 +01:00

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production

With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>

Are the PDBs opened by default, when starting the CDB?

No.  This can be verified by this query:

SQL> select open_mode from v$pdbs where name='APEXDEV_PDB1';

OPEN_MODE
———-
MOUNTED

The PDB is mounted; to open it, just run this command:

SQL> alter pluggable database apexdev_pdb1 open read write;

Pluggable database altered.

SQL> select open_mode from v$pdbs where name='APEXDEV_PDB1';

OPEN_MODE
———-
READ WRITE

Note: this must be done as “SYSDBA”.

How can we connect to the PDB?

There are 2 methods:

First method: connect to the CDB and then switch to the PDB by setting the container:

$ sqlplus system@apexdev

SQL> show con_name

CON_NAME
——————————
CDB$ROOT
SQL> alter session set container=apexdev_pdb1;

Session altered.

SQL> show con_name

CON_NAME
——————————
APEXDEV_PDB1

Second method: Modify your tnsnames.ora file by adding an entry for the PDB, based on the CDB entry.
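
For example (the host and port are assumptions for this environment; by default the service name of a PDB is equal to the PDB name):

APEXDEV_PDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ol6db1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = apexdev_pdb1)
    )
  )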

Now, you can connect as usual to the PDB:

[oracle@ol6db1 oracle]$ sqlplus system@apexdev_pdb1

SQL*Plus: Release 12.1.0.1.0 Production on Tue Mar 24 20:37:38 2015

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Enter password:
Last Successful login time: Thu Mar 19 2015 18:59:51 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show con_name

CON_NAME
——————————
APEXDEV_PDB1

As you can see, it is all quite easy.  One of the main benefits of this architecture is that you can handle every PDB as a separate database that can be upgraded or plugged/unplugged independently from other databases.

And of course your database will be ready for the cloud!

5 Minute JavaScript #17: some and every

In the previous blogpost we discussed the map method on arrays in JavaScript. We still have some useful methods to go. Next in line are the some and every methods. These methods are similar and can be very useful when validating data.

var isEven = function (n) { return n % 2 === 0 };
var areAllEven = [2, 4, 6].every(isEven);
var someEven = [1, 2, 3].some(isEven);

The every method checks whether the callback function returns true for every element in the list. If there is a single item in the array for which it returns false, the every method returns false as well. The some method is satisfied when there is at least one element in the array for which the callback returns true.

var stockItem = { hasBeenShipped: true };
var selection = [/* list of stock items selected in a list */];
var hasNotBeenShipped = function (si) { return !si.hasBeenShipped };
var hasBeenShipped = function (si) { return si.hasBeenShipped };

$('#sendStockItems').attr('disabled', !selection.every(hasNotBeenShipped));
$('#sendStockItems').attr('disabled', selection.some(hasBeenShipped));

The example here will disable the button when the selection in the list contains one or more items that have already been shipped; both lines express exactly the same condition, once with every and once with some. This code is very readable and concise. You can almost read exactly what it does: set the attribute disabled when the selection has some items that hasBeenShipped.