Learn by Errors: Java + OSGi

Recently I worked on getting Apache Hive to work inside an OSGi environment. While it did not prove to be a proverbial piece of cake (software, right?.. Why am I not surprised? :)), it led me through an assortment of Java and OSGi errors. Here I am listing some of them that bit me hard (no pun intended), so I thought of making a blog post out of them, if only for my own satisfaction.

java.lang.VerifyError

I got this nastiness during the initialization of one of the OSGi service components. The culprit was not immediately identifiable since the offending bundle was in ACTIVE state. On the surface everything looked fine, except for the fact that the Hive server, which was supposed to start during the initialization of the service component present in the bundle, was not up and running. A quick 'ls' in the OSGi console revealed that the service component was in the 'unsatisfied' state. Finally, a 'comp' on the component revealed the root cause: the VerifyError.

A VerifyError can occur if a runtime dependency of a class is different from the dependency that was present at compilation time, for example when method signatures have changed between the two versions. This is nicely explained in the accepted answer at [1]. As it turned out, slightly different versions of the same package had been exported by two bundles, causing the Hive bundle to pick up a different version than the one in its compilation environment. Proper OSGi versioning turned out to be the solution.
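
As a rough sketch of what that versioning looks like (the package name and version range here are illustrative, not from the actual Hive setup), the exporter declares a version and the consumer imports a compatible range, so the framework cannot wire it to an incompatible export.

In the exporting bundle's MANIFEST.MF:

Export-Package: org.example.parser;version="1.2.0"

In the consuming bundle's MANIFEST.MF:

Import-Package: org.example.parser;version="[1.2,2.0)"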

java.lang.IncompatibleClassChangeError

This error also cropped up under a similar circumstance, where two versions of a package were present in the system. As [2] clearly explains, the reason in my case was an interface having been changed to an abstract class between the conflicting package versions. Again, versioning helped save the day.
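
A minimal sketch of how this kind of mismatch arises (the names are made up): code compiled against the interface version calls the method via invokeinterface, and when the JVM finds a class where it expected an interface at runtime, it throws the error.

// Version 1.0.0 of the library, present at compile time:
public interface TaskRunner {
    void run();
}

// Version 2.0.0 of the library, the one picked up at runtime:
public abstract class TaskRunner {
    public abstract void run();
}

// Caller compiled against 1.0.0:
// TaskRunner r = ...; r.run();  -> java.lang.IncompatibleClassChangeError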

java.lang.LinkageError: loader constraint violation in xxxx – blah …

Now this seems to be a famous error, especially in OSGi environments. The main root cause is two classes loaded by different ClassLoaders coming into contact in a certain way. For example, say a method of class A accepts an object of class B as a parameter, and both class A and class B are loaded by ClassLoader-A. If, at method invocation time, an object of class B that has been loaded by ClassLoader-B is passed as an argument to an object of class A loaded by ClassLoader-A, the result is a big fat LinkageError with a very verbose error message.

The graph-based class loading structure in OSGi makes it especially conducive to this kind of error. In my case the culprit was a package that had been duplicated in two different bundles, with a particular class in that package loaded by the separate ClassLoaders of each bundle, and the two coming into contact via a third bundle present in the system during a method call. So this was a case of not following the "import what you export" best practice [3] in OSGi. Doing so reduces the exposure of duplicated packages across bundles and helps maintain a consistent class space for a given package, and that turned out to be the resolution in this case.
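
As a sketch, "import what you export" simply means that a bundle exporting a package also imports it, letting the framework substitute a single exporter for everyone (the package name is illustrative):

In the exporting bundle's MANIFEST.MF:

Export-Package: org.example.common;version="1.0.0"
Import-Package: org.example.common;version="[1.0,1.1)"

If another bundle ends up being chosen as the exporter of org.example.common, this bundle gets wired to that single copy instead of its own, so only one class space exists for the package.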

Package uses conflict: Import-Package: yyy; version="x.x.x"

I had my fair share of this inconvenience thrown in my face every so often during the exercise. There are two excellent posts [4], [5] on exactly this issue at SpringSource which helped a lot. However, let me summarize my learning on this issue. Simply put, if a bundle is exposed to two versions of the same package, one through a direct import and the other via a uses constraint, this error comes up. The following scenario illustrates the situation.

Bundle A imports org.foo version 1.0.0 directly. It also imports package org.bar from bundle B. As it turns out, the org.bar package itself uses the org.foo package, albeit a different version (2.0.0) from the one imported by bundle A. Now bundle A is directly wired to version 1.0.0 of org.foo while also being exposed to version 2.0.0 of org.foo through its import of org.bar, which uses version 2.0.0 of org.foo. Since a bundle cannot be wired to two different versions of the same package, a uses conflict comes up with the offending import org.bar flagged as the root cause (e.g.: Package uses conflict: Import-Package: org.bar; version="0.0.0"). The solution is to change the org.foo package import versions in either bundle A or bundle B so that both point to the same package version. Another excellent blog by Neil Bartlett on this can be found at [6].
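
To make the picture concrete, the two manifests would look roughly like this (the version ranges are illustrative):

Bundle A's MANIFEST.MF:

Import-Package: org.foo;version="[1.0,2.0)", org.bar

Bundle B's MANIFEST.MF:

Export-Package: org.bar;version="1.0.0";uses:="org.foo"
Import-Package: org.foo;version="[2.0,3.0)"

Bundle A can only be wired to org.foo 1.x, but its import of org.bar drags in a uses constraint on org.foo 2.x, hence the conflict; aligning the two org.foo ranges resolves it.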

java.lang.UnsatisfiedLinkError

One of my friends at work came across this while trying to incorporate another third-party library into our OSGi environment. The JavaDoc says this gets "Thrown if the Java Virtual Machine cannot find an appropriate native-language definition of a method declared native". The offending library was a Linux .so (shared object) file which was not visible to the bundle ClassLoader at runtime. We were able to get it working by directly including the library resource in the bundle ClassLoader. An earlier attempt at setting this resource on the TCCL (Thread Context ClassLoader) failed, which led us to the realization that the TCCL is typically not the bundle class loader. A good read on the TCCL under the Equinox OSGi environment can be found at [7].
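
As a side note, the standard OSGi mechanism for making a native library visible to the bundle ClassLoader is the Bundle-NativeCode manifest header; a sketch with an illustrative library path and platform values (the actual entries depend on the library and target platforms):

Bundle-NativeCode: lib/libcustom.so; osname=Linux; processor=x86-64

With this in place, a System.loadLibrary("custom") call made from within the bundle resolves the library through the bundle's own class loader.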

 

[1] http://stackoverflow.com/questions/100107/reasons-of-getting-a-java-lang-verifyerror
[2] http://stackoverflow.com/questions/1980452/what-causes-java-lang-incompatibleclasschangeerror
[3] http://blog.osgi.org/2007/04/importance-of-exporting-nd-importing.html
[4] http://blog.springsource.org/2008/10/20/understanding-the-osgi-uses-directive/
[5] http://blog.springsource.org/2008/11/22/diagnosing-osgi-uses-conflicts/
[6] http://njbartlett.name/2011/02/09/uses-constraints.html
[7] http://wiki.eclipse.org/ContextClassLoader_Enhancements


My previous couple of posts covered basic usage and security aspects of Apache Thrift. You can also use Thrift's Servlet-based transport to expose a Thrift service via a Servlet. This can be useful if you are in an OSGi environment, since you can expose the service to the outside world using the OSGi HttpService. Unfortunately, it's not possible to use this Servlet implementation directly in a web application (which would have been very useful), for reasons I will describe in the latter part of this post. For this implementation you need to extend 'TServlet' alongside your service implementation. I will be using the same Thrift service (arithmetic.thrift), the respective implementation, and the generated code from my earlier blog posts for this example as well.
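
For context, here is a minimal sketch of what that service definition could look like (the authoritative version is in the earlier posts; this just matches the two operations used by the client below):

// arithmetic.thrift
service ArithmeticService {
    i64 add(1:i32 num1, 2:i32 num2),
    i64 multiply(1:i32 num1, 2:i32 num2)
}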

Thrift Servlet

import org.apache.thrift.TProcessor;
import org.apache.thrift.protocol.TProtocolFactory;
import org.apache.thrift.server.TServlet;

public class ArithmeticServiceServlet extends TServlet {

    // Separate protocol factories for incoming and outgoing messages
    public ArithmeticServiceServlet(TProcessor processor, TProtocolFactory inProtocolFactory,
                           TProtocolFactory outProtocolFactory) {
        super(processor, inProtocolFactory, outProtocolFactory);
    }

    // A single protocol factory used for both directions
    public ArithmeticServiceServlet(TProcessor processor, TProtocolFactory protocolFactory) {
        super(processor, protocolFactory);
    }

}

No implementation of doGet or doPost is necessary by default, since the mapping of your service implementation class to the respective doGet and doPost methods is done inside TServlet.

Registering the Servlet

This entails getting the OSGi HttpService and registering the Servlet with it. This code snippet assumes you have already obtained an HttpService reference using your preferred method (e.g., using Declarative Services).

    public void register() throws Exception {
        // The processor wraps the service implementation generated from arithmetic.thrift
        ArithmeticService.Processor processor = new ArithmeticService.Processor(
                new ArithmeticServiceImpl());
        TBinaryProtocol.Factory inProtFactory = new TBinaryProtocol.Factory(true, true);
        TBinaryProtocol.Factory outProtFactory = new TBinaryProtocol.Factory(true, true);

        // Register the servlet under the /arithmeticService alias with no init parameters
        httpServiceInstance.registerServlet("/arithmeticService", new ArithmeticServiceServlet(
                processor, inProtFactory, outProtFactory), new Hashtable(),
                httpServiceInstance.createDefaultHttpContext());
    }

The Servlet is registered under the "/arithmeticService" alias.

Consuming the Service

Now let's write a client to consume the service. Here the THttpClient class from Thrift is used.

public class ServletClient {

    public void invoke() throws Exception {
        // Point the HTTP transport at the servlet alias registered above
        TTransport client = new THttpClient("http://localhost/arithmeticService");
        TProtocol protocol = new TBinaryProtocol(client);
        ArithmeticService.Client serviceClient = new ArithmeticService.Client(protocol);
        client.open();

        long addResult = serviceClient.add(100, 200);
        System.out.println("Add result: " + addResult);
        long multiplyResult = serviceClient.multiply(20, 40);
        System.out.println("Multiply result: " + multiplyResult);

        client.close();
    }

    public static void main(String[] args) throws Exception {
        ServletClient client = new ServletClient();
        client.invoke();
    }

}

Problem with Web Apps

Now it would have been great if we could use this Servlet in one of our web applications. But as you can see from our 'ArithmeticServiceServlet' implementation, it does not have a default no-argument constructor, which is a deal breaker for using this Servlet in a web application: the web container needs a no-argument constructor in order to initialize the Servlet. So, for now, no web apps. :(.


Recently I ran into an interesting little problem with the "resolution:=optional" OSGi directive. Basically, what "resolution:=optional" says is that even if a package imported with this directive is not present in the system, the bundle will still resolve at runtime. This is logical in the case of compile-time dependencies of the bundle not being needed at runtime. Or is it?..

I had to ponder a little while before putting down that last statement, until I got how it can be so. Usually a bundle needs its compile-time dependencies at runtime as well in order to function correctly. But what if we have an execution path inside our bundle that never gets called at runtime? If that execution path contains imports, then those imported dependencies will not be needed at runtime.

Typically the Maven Bundle Plugin is used to incorporate bundle creation at build time. In this case the general practice I used to follow was to explicitly import the obvious dependencies and use a "resolution:=optional" directive with the "*" wildcard as a catch-all mechanism for all other imports. Even if we include some imports which are not needed at runtime, we can get away with the optional directive since the bundle will resolve without them, right?
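
In POM terms that practice looks roughly like the following sketch (the explicit package name is illustrative):

<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <instructions>
            <!-- explicit imports for the obvious dependencies,
                 then a catch-all marking everything else optional -->
            <Import-Package>
                org.example.obvious.dependency,
                *;resolution:=optional
            </Import-Package>
        </instructions>
    </configuration>
</plugin>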

This is all good and rosy until p2 comes into the picture. Apparently p2 also installs the bundles that export optionally imported packages, if it can find them during provisioning. So you end up installing bundles that may not be required at runtime alongside the feature. Though generally not disastrous, this is a concern if you want to keep your feature installation lean.

In my case the issue was an unwanted bundle getting mysteriously installed with my feature even though its packages were not explicitly imported in any of the bundle's sources. As it turned out, I had included a dependency required for tests without specifying that it should be in the test scope. So naturally, with "*;resolution:=optional" in place, the Bundle Plugin dutifully did its job by placing an import in the manifest, which led to this particular bundle being installed at provisioning time.

The solution was trivial: place the dependency in the test scope, and the Bundle Plugin no longer generates the import, since test-scoped dependencies are not imported in the manifest at all. Alternatively, I could have explicitly excluded the offending packages from the import, or set the requires.greedy instruction to false in the feature's p2.inf to stop p2 from greedily installing optional dependencies [1], though these solutions are hacky and do not deal directly with the root cause in my case. But it's always good to have alternatives, isn't it? :).
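
For reference, the p2.inf route would look something like the following sketch, going by [1] and the p2 metadata documentation (the bundle name is illustrative, and the exact property syntax is worth verifying against your p2 version):

# Mark the requirement on the offending bundle as non-greedy
# so p2 does not pull it in during provisioning.
requires.1.namespace = org.eclipse.equinox.p2.iu
requires.1.name = org.example.offending.bundle
requires.1.range = [1.0.0,2.0.0)
requires.1.greedy = false
requires.1.optional = true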

[1] http://ekkescorner.wordpress.com/2010/04/26/p2-is-my-friend-again/


I use the Maven Bundle Plugin a lot, since our projects are OSGi based. The basic idea I had of this plugin was that it makes an OSGi bundle out of a normal jar (basically by augmenting the manifest) if we declare the exported and private package information correctly in its configuration section in the POM. The normal procedure I followed was to copy the configuration from an existing POM and tweak it according to the current project's requirements. That was about the extent of my knowledge until recently, when I bumped into a strange issue one day and was forced to read more on the subject. Fortunately the solution was not far away, thanks to the comprehensive documentation and FAQ section found at the plugin's site [1].

So why am I making a blog out of this, if all you want to know is well documented somewhere else?

Well, mostly to document some points that do matter, in concise point form, as a future self-reference, rather than going through the lengthy documentation every time my memory fails me. (Not a rare occurrence, I would say.. :)).

So here goes.. My points with bullets.. :).

  • <Export-Package> will copy the listed packages into the bundle and export them in the MANIFEST, while <Private-Package> will copy the packages into the bundle without exporting them.
  • If you want a class to be included in the bundle, make sure the package it lives in is present in one of these instructions; otherwise it will not get included in the bundle.
  • If <Export-Package> and <Private-Package> overlap, <Export-Package> takes precedence.
  • Using a negation "!" in the <Export-Package> declaration will result in the package being neither included in the bundle nor exported.
  • If a negation following a non-negated package declaration overlaps with it, the negated package pattern will not take effect. Always put negations first in the <Export-Package> instruction to avoid this kind of issue.
  • The plugin generates the Import-Package bundle manifest header based on the Java imports present in the classes of the bundle, so generally it's not needed to specify it explicitly.
  • If a referenced package does not need to be available at runtime for the bundle to perform its intended task (e.g., a package used only by a test-scoped class that is only needed during test runs), use a negation inside <Import-Package> so that it is not imported.
  • If you want to import a package but still allow the bundle to resolve when that package is absent at runtime, use the resolution:=optional directive after the package name as follows.

org.external.package;resolution:=optional

   The bundle would resolve even if the package is not present at runtime.

  • Use <DynamicImport-Package> to import a dynamically loaded class (e.g., one loaded using Class.forName()), given that its package is exported by the originating bundle. Generally using the "*" wildcard will do, since it enables loading any dynamically loaded class from other bundles, given that the package it lives in has been exported by those bundles. (See the configuration sketch after this list.)
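
Putting several of these points together, a typical <instructions> section in the plugin configuration might look like this sketch (the package names are illustrative):

<instructions>
    <!-- negations first, then the packages to export -->
    <Export-Package>!org.example.internal.*, org.example.api.*</Export-Package>
    <!-- included in the bundle but not exported -->
    <Private-Package>org.example.internal.*</Private-Package>
    <!-- drop test-only packages, mark one import optional, import the rest as referenced -->
    <Import-Package>!org.example.testonly.*, org.external.package;resolution:=optional, *</Import-Package>
    <!-- allow dynamically loaded classes (Class.forName) from exporting bundles -->
    <DynamicImport-Package>*</DynamicImport-Package>
</instructions>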

So that's about it for my notes. If you want a good tutorial, have a look at [2] as well. By the way, in case you are wondering about the issue I had, the second point sums it up pretty nicely. :).

[1] http://felix.apache.org/site/apache-felix-maven-bundle-plugin-bnd.html

[2] http://wso2.org/library/tutorials/develop-osgi-bundles-using-maven-bundle-plugin


An earlier post dealt with how a file can be copied to a product during a feature installation as a p2 provisioning action. This post expands on that. Although the natives touchpoint has a copy instruction, it only works with files. However, it also has an unzip instruction which can be used to unzip a zip file to a given target at feature installation time. So we can work around the limitation of the copy instruction by including a zip file of the folder to be copied inside the feature and unzipping it on installation.

Following is a sample p2.inf configuration to unzip a folder called configFolder into the configurations directory of the product.

instructions.configure = \
org.eclipse.equinox.p2.touchpoint.natives.unzip(\
source:${installFolder}/configFolder.zip,\
target:${installFolder}/configurations);
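
For this to work the zip file itself has to ship with the feature, so it would also be declared as a root file in the feature's build.properties (root files are covered in more detail in the next post; this entry is a sketch assuming configFolder.zip sits at the feature root):

custom=true
root=file:configFolder.zip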


This post will describe how to install additional files, such as configuration files, licenses etc., to a product along with an Equinox feature installation. The files of interest are described below.

1. p2.inf

This file can be used to declare provisioning actions; more information about the p2.inf format can be found at [1]. Various installation actions can be performed using touchpoints. A touchpoint presents an interface between p2 and a particular runtime configuration. Currently there are two touchpoints containing instructions to be performed: org.eclipse.equinox.p2.touchpoint.natives (OS-level instructions) and org.eclipse.equinox.p2.touchpoint.eclipse (p2-level instructions). Instructions fall into one of the p2 engine phases: "install", "uninstall", "configure" and "unconfigure". Many of these instructions accept arguments. Comprehensive documentation about touchpoints can be found at [2].

For this discussion we use the natives touchpoint's copy instruction. There is a particular syntax that has to be followed when specifying an instruction.

Citing from documentation..

As an example, an "install" instruction for a bundle might consist of the following statement:

installBundle(bundle:${artifact});

* installBundle is the action name
* bundle is the parameter name
* ${artifact} is the parameter value. The value ${artifact} signifies the use of a pre-defined variable named "artifact".

Now let's specify that we need to copy the conf.xml file to the configuration folder upon feature installation.

instructions.configure = \
org.eclipse.equinox.p2.touchpoint.natives.copy(\
source:${installFolder}/conf.xml,\
target:${installFolder}/configuration/conf.xml,\
overwrite:true);

Here we have specified that this instruction should be carried out in the "configure" phase, using the instructions.configure directive. ${installFolder} is one of the predefined variables available in all phases and denotes the root folder of the current profile.

2. build.properties

The role of the build.properties file is to map development-time structures in a bundle's project onto the structures described in the bundle's manifest and needed at runtime; its documentation can be found at [3]. In this case we configure it to copy our configuration file into the generated feature during the feature build. The important thing to note is that build.properties is used at build time, while p2.inf is used at feature installation time, to perform similar jobs.

We use the feature-specific property 'root' to list the files and folders to be copied to the root of the generated artifacts. Documentation about root files can be found at [4].

Here are the entries in the build.properties used in this example.

custom=true
root=file:conf.xml

From the documentation..

'custom=true' indicates that the build script is hand-crafted as opposed to automatically generated; therefore no other value is consulted.

The "file:" prefix is used to denote a relative path, while the "absolute:" prefix is used to denote an absolute path. Relative paths are taken relative to the containing feature.

With these configurations in place we are ready to build the feature and provision it into the respective profile.
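
So, as a sketch, the feature project in this example would end up looking roughly like this (the feature name is illustrative):

com.example.feature/
    feature.xml          <- the feature definition
    build.properties     <- custom=true, root=file:conf.xml
    p2.inf               <- the configure-phase copy instruction
    conf.xml             <- the configuration file to be installed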

[1] http://wiki.eclipse.org/Equinox/p2/Customizing_Metadata

[2] http://help.eclipse.org/helios/index.jsp?topic=/org.eclipse.platform.doc.isv/guide/p2_actions_touchpoints.html

[3] http://help.eclipse.org/helios/topic/org.eclipse.pde.doc.user/reference/pde_feature_generating_build.htm

[4] http://help.eclipse.org/helios/index.jsp?topic=/org.eclipse.pde.doc.user/tasks/pde_rootfiles.htm
