OSGi and Jetty Integration


If an OSGi framework based solution needs a servlet container, the OSGified Jetty is a good choice. It can be used as the underlying servlet transport provider within an OSGi framework. This post focuses on the major points to consider when integrating Jetty within an OSGi environment.


The main advantage of Jetty is that it is OSGi framework friendly. It provides a set of bundles for this purpose. Most of the details regarding these bundles, including the minimum set needed to embed Jetty in your OSGi environment, are available in the documentation.

By default, the Jetty run-time expects the Jetty configuration files to be available in a predefined location, known as the Jetty home. This can be set as a Java system property, as below.
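For example, a minimal sketch (set here programmatically, though passing -Djetty.home=/opt/custom/jetty on the launch command works equally; the path is illustrative):

```java
public class JettyHomeExample {
    public static void main(String[] args) {
        // Equivalent to -Djetty.home=/opt/custom/jetty on the command line;
        // must happen before the Jetty OSGi bundles are activated.
        System.setProperty("jetty.home", "/opt/custom/jetty");
        System.out.println(System.getProperty("jetty.home"));
    }
}
```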


The /opt/custom/jetty directory will contain the Jetty related configuration files under an “etc” directory.

The default configuration file of Jetty is known as “jetty.xml”. The other files contain specific configuration, but they can also be merged into a single jetty.xml file and used.

By default, a Jetty server instance will start on port 8080. If this value needs to be changed, it can be done by setting a system property, as below.
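For example, a sketch assuming the jetty.port property (the property name follows the jetty-osgi boot bundle's conventions of that era; verify it against your Jetty version):

```java
public class JettyPortExample {
    public static void main(String[] args) {
        // Equivalent to -Djetty.port=9090 on the command line;
        // the property name is an assumption, not from the original post.
        System.setProperty("jetty.port", "9090");
        System.out.println(System.getProperty("jetty.port"));
    }
}
```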


But in most cases the requirement is to control how and when a Jetty instance starts in your OSGi environment. To do that, we can avoid the default Jetty instance initialization: if you don't set the jetty.home system property, a default Jetty instance will not be started. Then, using an OSGi bundle plus OSGi service registration, you can boot up a Jetty instance as below.

public class JettyBundleActivator implements BundleActivator {

    private Server server;

    public void start(BundleContext bundleContext) throws Exception {

        String jettyHome = "/opt/jetty";
        server = new Server();
        //server configuration goes here
        String serverName = "jetty-server";
        Dictionary<String, String> serverProps = new Hashtable<>();
        serverProps.put(OSGiServerConstants.MANAGED_JETTY_SERVER_NAME, serverName);
        serverProps.put(OSGiServerConstants.JETTY_PORT, "9763");
        //the jetty.xml to be used when configuring this managed instance
        serverProps.put(OSGiServerConstants.MANAGED_JETTY_XML_CONFIG_URLS,
                        "file:" + jettyHome + File.separator + "jetty.xml");

        //register as an OSGi Service for Jetty to find
        bundleContext.registerService(Server.class.getName(), server, serverProps);
    }

    public void stop(BundleContext bundleContext) throws Exception {
        server.stop();
    }
}

The above registers a Jetty instance which will be identified by the Jetty boot bundle (org.eclipse.jetty.osgi:jetty-osgi-boot) and then started. This way, you can take control of the Jetty instance boot-up. Any number of Jetty instances can be started using the same approach.

The following are the supported features of Jetty in an OSGi environment.

  1. Deploying Bundles as Webapps
  2. Deploying Bundles as Jetty ContextHandlers
  3. Deploying Services as Webapps
  4. Deploying Services as ContextHandlers

Exposing OSGi HttpService

With Jetty integration, the OSGi HttpService can also be exposed for use within the OSGi environment, for purposes such as servlet registration. An additional bundle, which includes the HttpService implementation, is needed in order to expose the HttpService. The Equinox HTTP service servlet (HttpServiceServlet) can be used as a HttpService implementation and registered with Jetty for exposing the HttpService. The Equinox bundle that can be used for this is org.eclipse.equinox:org.eclipse.equinox.http.servlet.

The code segment below shows the approach for registering the HttpServiceServlet with a running Jetty server instance. It can be used within a BundleActivator or a declarative service component.

//exposing the OSGi HttpService by registering the HttpServiceServlet with Jetty.
ServletHolder holder = new ServletHolder(new HttpServiceServlet());
ServletContextHandler httpContext = new ServletContextHandler();

httpContext.addServlet(holder, "/*");

//target the jetty server instance registered earlier, by its name
Dictionary<String, String> servletProps = new Hashtable<>();
servletProps.put(OSGiServerConstants.MANAGED_JETTY_SERVER_NAME, serverName);

bundleContext.registerService(ContextHandler.class.getName(), httpContext, servletProps);

The above registers the HttpServiceServlet with the Jetty run-time, which exposes the HttpService in the OSGi run-time.

Using OSGi HttpService

By referencing the HttpService, users can register new servlets with it. The Jetty run-time will discover those servlets and route requests to them via the HttpServiceServlet.

An example of servlet registration with the exposed HttpService is given below.

@Component(
        name = "HttpServiceComponent",
        description = "This service component is responsible for retrieving the HttpService " +
                      "OSGi service and registering servlets",
        immediate = true)
public class HttpServiceComponent {

    private static final Logger logger = LoggerFactory.getLogger(HttpServiceComponent.class);

    @Reference(
            name = "http.service",
            referenceInterface = HttpService.class,
            cardinality = ReferenceCardinality.MANDATORY_UNARY,
            policy = ReferencePolicy.STATIC,
            bind = "setHttpService",
            unbind = "unsetHttpService")
    private HttpService httpService;

    @Activate
    protected void start() {
        SampleServlet servlet = new SampleServlet();
        String context = "/sample";
        try {
            logger.info("Registering a sample servlet : {}", context);
            httpService.registerServlet(context, servlet, null, null);
        } catch (ServletException | NamespaceException e) {
            logger.error("Error while registering servlet", e);
        }
    }

    protected void setHttpService(HttpService httpService) {
        this.httpService = httpService;
    }

    protected void unsetHttpService(HttpService httpService) {
        this.httpService = null;
    }
}

Carbon 5.0 Clustering Framework – My Notes

The clustering module provides the clustering feature for the carbon kernel, adding support for High Availability, Scalability and Failover. The overall architecture of the clustering implementation in a single node is given below.

Below is a description of each component of the clustering framework.

ClusterConfiguration – Holds the static information of the cluster. This will be built and populated using the cluster.xml.

ClusterContext – Holds the run-time information of the cluster, such as members and membership listeners.

ClusteringAgent – Manages the cluster node in a cluster. It basically handles starting, joining and shutting down the node with respect to the cluster. It also provides the functionality to send cluster messages to the whole cluster, or to a set of cluster members. Any new clustering implementation that needs to be plugged into carbon should implement this and be registered as an OSGi service, with a service level property (Agent) to uniquely identify it at run-time.

ClusterService – The cluster service provided by the cluster framework. It provides a set of functions, such as sending a cluster message to the cluster or to a set of members, retrieving the current members in the cluster, etc.

MembershipScheme – A representation of a membership scheme, such as the “multicast based” or “well-known address (WKA) based” schemes. This is directly related to the membership discovery mechanism.

The clustering framework supports the Multicast and WKA (Well-Known Address) membership schemes by default.

  1. Multicast – membership is automatically discovered using multicasting
  2. WKA – Well-Known Address based membership scheme. Membership is discovered with the help of one or more nodes running at a Well-Known Address. New members joining a cluster will first connect to a well-known node, register with the well-known node and get the membership list from it. When new members join, one of the well-known nodes will notify the others in the group. When a member leaves the cluster or is deemed to have left the cluster, it will be detected by the Group Membership Service (GMS) using a TCP ping mechanism.

Clustering Agent

The main part of a clustering implementation is the clustering agent. By default, carbon ships a “Hazelcast” based clustering agent implementation. If a new implementation needs to be plugged into carbon, it should implement this interface and be registered as an OSGi service with a service level property (Agent) to uniquely identify it at run-time.

Below is a step-by-step guide to plugging in a new clustering agent implementation (for example, a ZooKeeper coordination framework based clustering agent).

1. Implement the clustering agent interface.

public interface ClusteringAgent {

    /** Initialize the agent, which will initialize this node and join the cluster */
    void init(ClusterContext clusterContext) throws ClusterInitializationException;

    /** Shutdown the agent, which will remove this node from the cluster */
    void shutdown();

    /** Send a message to all members in the cluster */
    void sendMessage(ClusterMessage msg) throws MessageFailedException;

    /** Send a message to a set of specific members in the cluster */
    void sendMessage(ClusterMessage msg, List<ClusterMember> members) throws MessageFailedException;
}

2. Register this as an OSGi service so that the cluster framework can discover it. The service registration should also provide a service level parameter, “Agent”, with a meaningful value; the cluster framework compares that parameter with the value in cluster.xml to correctly identify the cluster agent implementation at run-time. Service registration can now be done via an annotation based approach: in carbon, the Apache Felix SCR Annotations plugin is used for this purpose. The above agent class implementation will look like the below after it has been annotated with the required annotations.

@Component(
        name = "ZookeeperClusteringAgentServiceComponent",
        description = "The ClusteringAgent class which is based on Zookeeper",
        immediate = true)
@Property(name = "Agent", value = "zookeeper")
public class ZookeeperClusteringAgent implements ClusteringAgent {

    public void init(ClusterContext clusterContext) throws ClusterInitializationException {
        // Add the logic that should be executed for agent initialization
    }

    public void shutdown() {
        // This will be called when the cluster agent bundle/component deactivates
    }

    public void sendMessage(ClusterMessage msg) throws MessageFailedException {
        // The logic of sending a message to all members in the cluster
    }

    public void sendMessage(ClusterMessage msg, List<ClusterMember> members)
            throws MessageFailedException {
        // The logic of sending a message to a specific set of members in the cluster
    }
}

Cluster Configuration

Clustering can be configured for a carbon server via the cluster.xml file, located at $CARBON_HOME/repository/conf/cluster.xml. The following shows the default cluster configuration. To enable clustering for a node, simply change the “Enable” element to “true”.

<Cluster xsi:schemaLocation="http://wso2.com/schema/clustering/config cluster.xsd">

    <!--Indicates whether clustering should be enabled or disabled for this node-->
    <Enable>false</Enable>

    <!-- The agent implementation used with cluster framework-->
    <Agent>hazelcast</Agent>

    <!--
     The clustering domain/group. Nodes in the same group will belong to the same multicast
     domain. There will not be interference between nodes in different groups.
    -->
    <Domain>wso2.carbon.domain</Domain>

    <!-- Properties specific to this member -->
    <LocalMember>
        <Properties>
            <Property name="backendServerUrl">https://${hostName}:${httpsPort}/services/</Property>
            <Property name="mgtConsoleUrl">https://${hostName}:${httpsPort}/</Property>
            <Property name="subDomain">worker</Property>
        </Properties>
    </LocalMember>

    <!--
        The membership scheme used in this setup. The only values supported at the moment are
        "Multicast" and "WKA"
        1. Multicast - membership is automatically discovered using multicasting
        2. WKA - Well-Known Address based membership. Membership is discovered with the help
            of one or more nodes running at a Well-Known Address. New members joining a
            cluster will first connect to a well-known node, register with the well-known node
            and get the membership list from it. When new members join, one of the well-known
            nodes will notify the others in the group. When a member leaves the cluster or
            is deemed to have left the cluster, it will be detected by the Group Membership
            Service (GMS) using a TCP ping mechanism.
    -->
    <MembershipScheme>Multicast</MembershipScheme>

    <!-- multicast membership scheme related properties-->

    <!-- wka membership scheme related properties-->
</Cluster>

Cluster Service

The Cluster API, exposed as an OSGi service to the carbon platform, has the following methods. Users can consume this service using the OSGi service reference/discovery mechanism.

public interface Cluster {

    /**
     * Send the given cluster message to the whole cluster
     *
     * @param clusterMessage the cluster message to be sent
     * @throws MessageFailedException on error
     */
    void sendMessage(ClusterMessage clusterMessage) throws MessageFailedException;

    /**
     * Send the given cluster message to a set of members in the cluster
     *
     * @param clusterMessage the cluster message to be sent
     * @param members        the list of members to send the cluster message
     * @throws MessageFailedException on error
     */
    void sendMessage(ClusterMessage clusterMessage, List<ClusterMember> members)
            throws MessageFailedException;

    /**
     * Return the list of currently available members in the cluster
     *
     * @return the member list
     */
    List<ClusterMember> getMembers();
}
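A consumer of this service could look roughly like the following sketch, assuming it runs inside a bundle with a BundleContext in scope (the lookup style is illustrative, not from the original post):

```java
// Look up the Cluster OSGi service and query the current membership
ServiceReference<Cluster> reference = bundleContext.getServiceReference(Cluster.class);
if (reference != null) {
    Cluster cluster = bundleContext.getService(reference);
    List<ClusterMember> members = cluster.getMembers();
    System.out.println("Cluster currently has " + members.size() + " member(s)");
    bundleContext.ungetService(reference);
}
```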

Carbon 5.0 Deployment Framework – My Notes

Carbon 5 will be the next generation WSO2 Carbon Platform, with a completely new architecture written from scratch. The Carbon Kernel is the base of this platform and of all WSO2 products.

The Carbon Deployment Framework is a core module of the kernel for managing the deployment of artifacts in a carbon server.


The picture above illustrates the high level view of this framework, which consists of the following parts.

  1. Scheduler : responsible for scheduling the deployment task periodically.
  2. Repository Scanner : scans the deployer directories for artifact updates.
  3. DeployerServiceListener : an OSGi service component which listens for deployer registrations/unregistrations from other components (custom deployers) and adds/removes them to/from the DeploymentEngine at run-time.

The deployment engine operates in scheduled mode by default: a scheduler task runs periodically and calls the repository scanner, which scans the available deployer directories to find any new artifacts to be deployed, artifacts to be updated, and artifacts to be undeployed. It then calls the relevant deployer of those artifacts to carry out the deployment/undeployment process.
The deployment scan interval is 15 seconds by default. It can be configured in carbon.xml by changing the following property : DeploymentConfig -> UpdateInterval.
The default repository location for the deployment engine is $CARBON_HOME/repository/deployment/server.
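For reference, the corresponding carbon.xml fragment would look something like this (the exact element nesting is an assumption based on the DeploymentConfig -> UpdateInterval property named above):

```xml
<DeploymentConfig>
    <!-- deployment repository scan interval, in seconds -->
    <UpdateInterval>15</UpdateInterval>
</DeploymentConfig>
```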

How to write a new Deployer and plug it into the Deployment Framework

The Deployer SPI provides a way to implement your own deployer, which will be used for deploying a particular type of artifact in carbon. A deployer processes a particular artifact type and deploys it to a run-time configuration. A developer who wants to write a deployer that processes an artifact in carbon and adds it to a runtime configuration should implement this interface.

public interface Deployer {

    /**
     * Initialize the Deployer.
     * This will contain all the code that needs to run when the deployer is initialized.
     */
    void init();

    /**
     * Process a deployable artifact and add it to the relevant runtime configuration.
     *
     * @param artifact the Artifact object to deploy
     * @return a key to uniquely identify an artifact within a runtime
     * @throws CarbonDeploymentException when an error occurs while doing the deployment
     */
    Object deploy(Artifact artifact) throws CarbonDeploymentException;

    /**
     * Remove a given artifact from the relevant runtime configuration.
     *
     * @param key the key of the deployed artifact, used for undeploying it from the relevant runtime
     * @throws CarbonDeploymentException when an error occurs while running the undeployment
     */
    void undeploy(Object key) throws CarbonDeploymentException;

    /**
     * Update an already deployed artifact and its relevant runtime configuration.
     *
     * @param artifact the Artifact object to deploy
     * @return a key to uniquely identify an artifact within a runtime
     * @throws CarbonDeploymentException when an error occurs while doing the deployment
     */
    Object update(Artifact artifact) throws CarbonDeploymentException;

    /**
     * Returns the deploy directory location associated with the deployer.
     * It can be relative to CARBON_HOME or an absolute path.
     *      Eg : webapps, dataservices, sequences  or
     *           /dev/wso2/deployment/repository/  or
     *           file:/dev/wso2/deployment/repository/
     *
     * @return deployer directory location
     */
    URL getLocation();

    /**
     * Returns the type of artifact that the deployer is capable of deploying.
     *      Eg : webapp, dataservice
     *
     * @return ArtifactType object which contains info about the artifact type
     */
    ArtifactType getArtifactType();
}


The above is the interface to implement. As a real example of how to write your own deployer and register it with the Carbon Deployment Engine, let's look at a simple deployer that processes XML files. It will process XML files found under an “xmls” directory in the default carbon server repository, which is $CARBON_HOME/repository/deployment/server.

1. Create a simple maven project with the required dependencies.


2. Implement the Deployer interface, for example as below.

public class XMLDeployer implements Deployer {

    private static final Log log = LogFactory.getLog(XMLDeployer.class);
    private ArtifactType artifactType;
    private URL repository;

    public void init() {
        log.info("Initializing the XMLDeployer");
        artifactType = new ArtifactType("xml");
        try {
            repository = new URL("file:xmls");
        } catch (MalformedURLException e) {
            log.error("Error while setting the deployer repository location", e);
        }
    }

    public String deploy(Artifact artifact) throws CarbonDeploymentException {
        log.info("Deploying : " + artifact.getName() + " in : " + artifact.getPath());
        return artifact.getName();
    }

    public void undeploy(Object key) throws CarbonDeploymentException {
        log.info("Undeploying : " + key);
    }

    public Object update(Artifact artifact) throws CarbonDeploymentException {
        log.info("Updating : " + artifact.getName() + " in : " + artifact.getPath());
        return artifact.getName();
    }

    public URL getLocation() {
        return repository;
    }

    public ArtifactType getArtifactType() {
        return artifactType;
    }
}

3. Write an OSGi declarative service component to register the above Deployer instance.

@Component(
        name = "XMLDeployerServiceComponent",
        immediate = true)
public class XMLDeployerServiceComponent {

    private Log log = LogFactory.getLog(XMLDeployerServiceComponent.class);
    private XMLDeployer xmlDeployer = new XMLDeployer();
    private ServiceRegistration registration;

    @Activate
    public void start(ComponentContext ctxt) {
        registration = ctxt.getBundleContext().
                registerService(Deployer.class.getName(), xmlDeployer, null);
    }

    @Deactivate
    public void stop(ComponentContext ctxt) {
        registration.unregister();
    }
}

4. Once the above are in your project, add the maven SCR plugin and bundle plugin sections to the pom, to generate the component level metadata for the SCR annotations and the bundle info. An example section of the pom file will look like below.

                <Bundle-Vendor>Sample Inc</Bundle-Vendor>
                    org.sample.xml.deployer.*; version="${project.version}",

5. Build your deployer component, which will generate an OSGi bundle. This bundle can then be installed into a running carbon server instance using OSGi bundle install commands. By default, carbon uses the Equinox OSGi runtime, so once you start the carbon server you will see the Equinox command console as below.

wso2@work:/media/data/wso2/carbon-kernel/distribution/target/wso2carbon-kernel-5.0.0-SNAPSHOT$ s
JAVA_HOME environment variable is set to /home/wso2/bin/java/jdk1.7.0_40
CARBON_HOME environment variable is set to /media/data/wso2/carbon-kernel/distribution/target/wso2carbon-kernel-5.0.0-SNAPSHOT

6. Then, using the install command, you can install the bundle as below.

osgi> install file:/media/data/wso2/samples/sample-deployer/target/org.sample.xml.deployer-1.0.0-SNAPSHOT.jar
Bundle id is 36
RegisteredServices   null
ServicesInUse        null
LoaderProxy          org.sample.xml; bundle-version="1.0.0.SNAPSHOT"
Fragments            null
ClassLoader          null
Version              1.0.0.SNAPSHOT
LastModified         1390824613943
Headers               Bnd-LastModified = 1390813852361
 Build-Jdk = 1.7.0_40
 Built-By = wso2
 Bundle-Activator = org.sample.xml.deployer.internal.XMLDeployerBundleActivator
 Bundle-ManifestVersion = 2
 Bundle-Name = org.sample.xml.deployer
 Bundle-SymbolicName = org.sample.xml
 Bundle-Vendor = WSO2 Inc
 Bundle-Version = 1.0.0.SNAPSHOT
 Created-By = Apache Maven Bundle Plugin
 Export-Package = org.sample.xml.deployer;uses:="org.wso2.carbon.deployment.spi,org.apache.commons.logging,org.wso2.carbon.deployment,org.wso2.carbon.deployment.exception";version="1.0.0.SNAPSHOT"
 Import-Package = org.apache.commons.logging;version="[1.1,2)",org.osgi.framework;version="[1.7,2)",org.wso2.carbon.deployment;version="[5.0,6)",org.wso2.carbon.deployment.exception;version="[5.0,6)",org.wso2.carbon.deployment.spi;version="[5.0,6)"
 Manifest-Version = 1
 Service-Component = OSGI-INF/org.sample.xml.deployer.XMLDeployer.xml
 Tool = Bnd-1.43.0

7. Once you install and start the bundle successfully, you will see the log message of the XMLDeployer instance being initialized, as below. If this log appears, the deployer has been registered with the deployment framework successfully.

osgi> start 36
[2014-01-27 17:40:39,874]  INFO {org.sample.xml.deployer.XMLDeployer} -  Initializing the XMLDeployer

8. This deployer can be tested by creating a directory named “xmls” under $CARBON_HOME/repository/deployment/server/ and adding a new XML file in that location. Once the deployment engine scans this directory and finds the new artifact, it will call the deploy method of the deployer instance, which will print the following log at the console. Similarly, the update and undeployment paths of the deployer can be tested by updating or removing the sample XML file.

[2014-01-27 17:56:15,205]  INFO {org.sample.xml.deployer.XMLDeployer} -  Deploying : sample.xml in : /media/wso2/carbon-kernel/distribution/target/wso2carbon-kernel-5.0.0-SNAPSHOT/repository/deployment/server/xmls/sample.xml

[Book] Enterprise Integration with WSO2 ESB

Packt Publishing recently published a book on “Enterprise Integration with WSO2 ESB”, written by Prabath Siriwardana. I was a technical reviewer for this book.


Let me give a high level overview of this book. As a reviewer, I had the opportunity to work on some of the fundamentals of Enterprise Integration Patterns with WSO2 ESB. The patterns supported by WSO2 ESB are explained in its documentation.

This book takes some important patterns that are widely used at the enterprise level and explains them using well defined examples. Some of the patterns included in this book are:

  • Content-Based Router
  • Dynamic Router
  • Splitter
  • Scatter and Gather
  • Publish and Subscribe
  • Service Chaining
  • Content Enricher
  • Detour

The high level overview of the book includes the following

  • How to implement commonly used Enterprise Integration Patterns with WSO2 ESB
  • How to integrate WSO2 ESB with some well known third party Message Brokers
  • Business messaging and transformation, such as FIX and HL7, with WSO2 ESB
  • Integration with SAP using WSO2 ESB
  • Cloud Connectors (Connect to Twitter) from WSO2 ESB

This book is now available for purchase from Packt Publishing.


How WSO2 Carbon Works

I recently wrote an article on understanding the WSO2 Carbon architecture and how things work underneath. The article explains how the server functions at run-time, with illustrations. Here is a summary of the contents covered in the article.


WSO2 Carbon is the base platform for all WSO2 products. By leveraging OSGi technology, the WSO2 Carbon architecture is designed to be highly extensible, dynamic, and flexible. Over the years, the carbon platform has been used to build a diverse set of products at WSO2 and has helped implement many solutions, gaining maturity over time; new improvements and features are added with each major release. The new family of products being released now is based on Carbon version 4.0.0 and above. The 4.0.0 architecture underwent a major change compared to the 3.2.0 based releases. The most significant change is bringing Tomcat into the OSGi environment: in the 3.2.0 based releases, Tomcat lived outside the OSGi environment, and a bridging mechanism called the Servlet Bridge was used to connect the outside world with the OSGi environment.

This article covers the following in detail.


Introducing WSO2 Carbon 5 (C5)

Carbon 5 will be the next generation of WSO2 Carbon Platform.

Over the years, the existing carbon platform has been used to build a diverse set of products at WSO2 and has helped implement many solutions. The platform has gained maturity over time, and this maturity and the experience acquired along the way helped identify the areas of the carbon platform that could be improved or implemented in a better way.

The Carbon Kernel is the base framework for all WSO2 products. The core concepts of the kernel, such as deployment, clustering, the configuration and context model, multi-tenancy, etc., are inspired by and developed using the kernel architecture of Apache Axis2. Apache Axis2 is a web services engine, but it is also a server framework, with a run-time and configuration model that can run independently. This was adopted in the carbon kernel, which made the carbon kernel tightly coupled with Apache Axis2.

Another point is that over time, like any other software system, the carbon kernel has gained weight. There are more modules to manage now in the carbon kernel alone, which increases the possibility of frequent patching and maintenance releases.

What’s new in C5 Carbon Kernel?

The Carbon Kernel for C5 will be a general purpose OSGi run-time, containing the minimal features required for developing servers on top of it. It will be re-architected from scratch with the latest technologies and patterns to overcome the existing architectural limitations, focusing on the areas that can be improved and removing tight dependencies on projects like Apache Axis2.

The Milestone plan for Carbon 5 Kernel

  • Migrating to Equinox Kepler
  • Centralized Logging-backend
  • Moving the codebase from SVN to GIT
  • Basic C5 runtime
  • User API design and implementation
  • Carbon Deployment Engine
  • Carbon Clustering API and Implementation
  • Pluggable Runtime Framework
  • Configuration and Context Model
  • Repository API and implementation
  • Improved Patching model
  • Plugging Tomcat to C5
  • RESTFul admin services framework
  • Jaggery based UI framework
  • Per tenant security manager and Thread monitoring
  • Composite application model for C5
  • Plugging Axis2 run-time to C5
  • Improved Feature Manager implementation

The first milestone was recently released with the deliverables mentioned. It can be downloaded from here.


Embedded Tomcat : Tips, Tricks and Hacks


The embedded Tomcat API comes in handy, as it requires only a few lines of code to start a Tomcat instance within your application. This makes it easy for users to get the useful features of Apache Tomcat in their applications.

In most cases, users expect embedded Tomcat to behave the same way as a standalone Tomcat instance. For example, if they want to change the global server configuration, they expect the usage and support of server.xml in embedded Tomcat as well.

But when using embedded Tomcat, this is not supported out of the box. You have to apply some hacks and change the way you embed Tomcat into your application to make these features available.

Some of the useful features that are NOT supported OOTB in embedded tomcat are :

  1. Server descriptor (server.xml) support
  2. Global deployment descriptor (web.xml) support
  3. Webapp specific context descriptor (context.xml) support

In this post I will explain how to overcome the above limitations in embedded Tomcat using some hacks and tricks.

The Tomcat class is what is used to embed Tomcat in our applications, but it is better to extend this class and adapt it our own way. This gives the freedom to override some of the important methods that need custom changes. The following are the tips used in extending the Tomcat class to address the above limitations.

Tip 1 – Server descriptor (server.xml) support

The server.xml file is read, parsed and applied by a class called Catalina in a standard Tomcat instance. It creates a Digester instance, which in turn parses server.xml as a stream. So in our extended Tomcat class, we can create an inner class which extends Catalina and overrides only the method that creates the Digester instance.

    private static class MyExtendedCatalina extends Catalina {
        @Override
        public Digester createStartDigester() {
            return super.createStartDigester();
        }
    }

This can then be used to parse the server.xml, supplied as an InputStream.

    Digester digester = extendedCatalina.createStartDigester();
    digester.push(this); // couple the digester with this Tomcat instance
    try {
        digester.parse(serverXmlInputStream); // the server.xml content as an InputStream
    } catch (IOException | SAXException e) {
        log.error("Error while parsing server.xml", e);
    }

You can see that the Digester is coupled with the Tomcat instance, in order to configure it with Tomcat during start-up. The above code segment should be executed in the initialization part of the extended Tomcat instance.

Tip 2 – Global deployment descriptor (web.xml) support

This is known as the root/default deployment descriptor for all webapps. Webapps inherit the settings in this file and override them with their webapp specific deployment descriptor values, if any. Having this file gives advantages such as defining global properties for all webapps, for example a default session timeout.
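For instance, a global web.xml could set a default session timeout that every webapp inherits (a minimal illustrative fragment):

```xml
<web-app>
    <!-- default session timeout (in minutes), inherited by all webapps -->
    <session-config>
        <session-timeout>30</session-timeout>
    </session-config>
</web-app>
```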

To get this global web.xml added to all webapps, you have to override the “addWebapp” method of the Tomcat class and add the following into the overridden method.

    Context context = null;
    try {
        context = new StandardContext();
        context.setPath(contextPath);
        context.setDocBase(docBase);
        ContextConfig contextConfig = new ContextConfig();
        context.addLifecycleListener(contextConfig);
        if (new File(pathToGlobalWebXml).exists()) {
            contextConfig.setDefaultWebXml(pathToGlobalWebXml);
        } else {
            contextConfig.setDefaultWebXml("org/apache/catalina/startup/NO_DEFAULT_XML");
        }
        host.addChild(context);
    } catch (Exception e) {
        log.error("Error while deploying webapp", e);
    }

The above code segment adds/deploys a webapp to the server host as a child, which is what happens with the Tomcat class's addWebapp as well. But the important thing to note is where we set the global web.xml on the current webapp's context configuration using the setDefaultWebXml method. With the normal addWebapp method of the Tomcat class, this is always set to NO_DEFAULT_XML.

Tip 3 – Webapp specific context descriptor (context.xml) support

As with the global web.xml, there is another useful feature, where each webapp can carry some meta information in a context descriptor placed in the META-INF directory of the webapp. It is commonly used to add context-specific resources such as DataSources, JNDI resources, Valves, ClassLoaders, etc. which are specific to that webapp only. To get this support, add the following code segment to the same overridden addWebapp method of the Tomcat class, before calling the addChild method on the host object.
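As an example, a webapp’s META-INF/context.xml declaring a DataSource might look like this (the resource name, driver, and URL are illustrative):

```xml
<Context>
    <!-- A JNDI DataSource visible only to this webapp -->
    <Resource name="jdbc/MyDB" auth="Container" type="javax.sql.DataSource"
              driverClassName="org.h2.Driver" url="jdbc:h2:mem:mydb"
              username="sa" password="" maxActive="10"/>
</Context>
```

The webapp then looks the resource up under java:comp/env/jdbc/MyDB.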

    JarFile webappWarFile = new JarFile(webappFilePath);
    JarEntry contextXmlFileEntry = webappWarFile.getJarEntry("META-INF/context.xml");
    if (contextXmlFileEntry != null) {
        context.setConfigFile(new URL("jar:file:" + webappFilePath + "!/" + "META-INF/context.xml"));
    }
    webappWarFile.close();

The point to note here is the call to the setConfigFile method of the webapp’s context object, which in turn adds the context.xml descriptor to the webapp’s context.


The above three are the most commonly used features in a normal Tomcat environment. By following these tips, you can overcome the limitations we face when using Tomcat in embedded mode. This can be further extended to support other features, such as a global context.xml, etc.


Transaction support with RabbitMQ

In a previous post of mine, I explained the new AMQP transport developed for WSO2 ESB based on the RabbitMQ Java Client. In this post I will explain how we use the transaction support in the RabbitMQ Java Client in our consumer implementation, which is also used in the new AMQP transport for WSO2 ESB.

Say that while processing a message consumed from a RabbitMQ AMQP queue, an unexpected error occurs. This results in a state where the message was consumed from the queue but was not used for its intended purpose. To avoid this kind of situation, the RabbitMQ Java Client provides built-in support for manual acknowledgements and transactions.

The following is a way to achieve this:

    ConnectionFactory factory = new ConnectionFactory();
    Connection connection = factory.newConnection();
    Channel channel = connection.createChannel();
    channel.queueDeclare(QUEUE_NAME, false, false, false, null);
    channel.exchangeDeclare(EXCHANGE_NAME, "direct", true);
    // put the channel into transactional mode
    channel.txSelect();
    QueueingConsumer consumer = new QueueingConsumer(channel);
    channel.basicConsume(QUEUE_NAME, false, consumer);

    while (true) {
        QueueingConsumer.Delivery delivery = consumer.nextDelivery();
        String message = new String(delivery.getBody());
        if (delivery.getProperties().getContentType() != null) {
            // processing succeeded: acknowledge the message and commit
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            channel.txCommit();
        } else {
            // processing failed: roll back, leaving the message unacknowledged
            channel.txRollback();
        }
    }

Here I explicitly set autoAcknowledgement to false when consuming, so that the message remains in the queue while it is being processed. During processing, I check a condition. If that condition is not satisfied, I roll back the transaction; if it is satisfied, I commit the transaction after sending the acknowledgement.

This is a basic way to handle transactions with the RabbitMQ Java Client. We can extend this to support complex transaction processing as well.


AMQP Transport for WSO2 ESB based on RabbitMQ Java Client

WSO2 ESB is a high-performance, lightweight, open source Enterprise Service Bus. It has inbuilt support for integrating different technologies that use different transport protocols. Some of the well-known transports that WSO2 ESB supports are HTTP, HTTPS, POP, IMAP, SMTP, JMS, FIX, TCP, UDP, FTPS, SFTP, CIFS, MLLP and SMS.

AMQP is an application-layer messaging protocol for message-oriented architectures. It operates in the same way as HTTP, FTP, SMTP, etc. to make systems interoperate with each other. It addresses the issues faced by systems where interoperability is achieved through well-defined APIs (e.g. JMS). For example, if your system wants to talk to another system over JMS, you have to implement the JMS API. AMQP, in contrast, is a wire-level protocol which describes the format of the data sent across the network. So, irrespective of the implementation language, any system or tool that can create/send/consume/read AMQP messages adhering to the AMQP data format gains the ability to interoperate with the others.

The RabbitMQ Java Client is one such tool, which allows you to send or receive AMQP messages. Apart from the client, RabbitMQ itself is an AMQP broker implementation and can be used as the AMQP broker here. In this post I will explain the new AMQP transport for WSO2 ESB, which is implemented using the RabbitMQ Java Client.

Applies To

WSO2 ESB 4.5.1
RabbitMQ AMQP Java Client 3.0.3

The Scenario of Interest


The above scenario is considered here. It demonstrates the ability of WSO2 ESB to consume/publish AMQP messages from/to an AMQP broker. The “Sender” here can be anything capable of publishing messages to an AMQP queue. Similarly, the “Receiver” can be anything that can consume messages from an AMQP queue. In this post I will be using the RabbitMQ Java Client library for sending/receiving AMQP messages in both the Sender and Receiver implementations.

A proxy service in the ESB will be listening on Q1. When a message becomes available in Q1, the ESB will consume it. If the proxy is defined in such a way that messages should be sent to an AMQP destination, say Q2, then the consumed messages will be sent to Q2. The proxy service configuration for this scenario is explained later in this post.

Installing RabbitMQ AMQP transport feature into WSO2 ESB

The RabbitMQ AMQP transport is developed as a separate module of the transports project. It is also available as an installable p2 feature, which can be installed via the WSO2 Feature Manager in WSO2 ESB.

Following are the steps to install this feature.

1. Start the ESB server.
2. Download and unzip the p2-repo.zip file to some location. Copy the path (say /home/esb/p2-repo).
3. Go to Configure > Features from the management console view and add a new local repository by giving the path copied above.
4. Select the added repository and tick “Show only the latest versions” and click on Find features. You will see the “Axis2 Transport RabbitMQ AMQP” feature listed. Select that and install it.
5. After successful installation, shutdown the server.

Alternatively, you can just copy both jars found in the plugins directory to the {ESB_HOME}/repository/components/dropins/ directory.

Now the next thing is to configure the transport (listener and sender) in axis2.xml.

Configuring RabbitMQ AMQP Transport in WSO2 ESB

1. Add the following configuration items to axis2.xml found in {ESB_HOME}/repository/conf/axis2/axis2.xml

(i) Under transport listeners section add the following RabbitMQ transport listener.

<transportReceiver name="rabbitmq" class="org.apache.axis2.transport.rabbitmq.RabbitMQListener">
   <parameter name="AMQPConnectionFactory" locked="false">
      <parameter name="rabbitmq.server.host.name" locked="false"></parameter>
      <parameter name="rabbitmq.server.port" locked="false">5672</parameter>
      <parameter name="rabbitmq.server.user.name" locked="false">user</parameter>
      <parameter name="rabbitmq.server.password" locked="false">abc123</parameter>
   </parameter>
</transportReceiver>

The parameters, which are self-explanatory, are used to create the connection to the AMQP broker. You can define any number of connection factories under the <transportReceiver/> definition.
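For instance, a second connection factory for another broker can sit next to the first, inside the same <transportReceiver/> element (the factory name and values here are made up for illustration):

```xml
<parameter name="AMQPConnectionFactory2" locked="false">
   <parameter name="rabbitmq.server.host.name" locked="false">broker2.example.org</parameter>
   <parameter name="rabbitmq.server.port" locked="false">5672</parameter>
   <parameter name="rabbitmq.server.user.name" locked="false">user2</parameter>
   <parameter name="rabbitmq.server.password" locked="false">pass2</parameter>
</parameter>
```

A proxy service then selects which factory to use by name, via its rabbitmq.connection.factory parameter.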

(ii) Under transport senders section add the following RabbitMQ transport sender.

<transportSender name="rabbitmq" class="org.apache.axis2.transport.rabbitmq.RabbitMQSender"/>

This is the transport sender which is used for sending AMQP messages out to a queue.

2. Start the ESB server.

Creating a proxy service which works with RabbitMQ AMQP transport

A sample proxy service which consumes AMQP messages from, and sends them to, a RabbitMQ AMQP broker:

<proxy xmlns="http://ws.apache.org/ns/synapse" name="AMQPProxy"
       transports="rabbitmq" statistics="disable" trace="disable" startOnLoad="true">
   <target>
      <inSequence>
         <log level="full"/>
         <property name="OUT_ONLY" value="true"/>
         <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
         <send>
            <endpoint>
               <address uri="rabbitmq:/AMQPProxy?rabbitmq.server.host.name=localhost&amp;rabbitmq.server.port=5672&amp;rabbitmq.queue.name=queue2&amp;rabbitmq.exchange.name=exchange2"/>
            </endpoint>
         </send>
      </inSequence>
   </target>
   <parameter name="rabbitmq.queue.name">queue1</parameter>
   <parameter name="rabbitmq.exchange.name">exchange1</parameter>
   <parameter name="rabbitmq.connection.factory">AMQPConnectionFactory</parameter>
</proxy>

1. Transport configuration (transports="rabbitmq")

The proxy is defined on the rabbitmq transport as an OUT_ONLY proxy service; it does not expect a response from the endpoint.

2. RabbitMQ AMQP Connection factory configuration (AMQPConnectionFactory)

Here we specify, as a parameter, the name of the connection factory used to listen on the queue and consume messages. This connection factory is defined in the AMQP transport listener configuration of axis2.xml. The proxy reads messages from the specified queue and sends them to the queue defined in the endpoint address.

3. RabbitMQ Queue name to listen for messages (rabbitmq.queue.name)

This is the queue that the listener configured with AMQPConnectionFactory will be listening on. If no name is specified, the name of the proxy service is used as the queue name.

4. RabbitMQ Endpoint configuration


The endpoint address should be of the format shown above. All the parameters needed for the proxy service to send messages to the endpoint should be given in the address URI, as key=value pairs. If the exchange name is not specified, an empty value (the default exchange) is used as the exchange name.
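To make the address format concrete, here is a small, hypothetical helper (not part of the transport — purely for illustration) that splits the parameter portion of such an address into key/value pairs:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AmqpAddressParser {

    // Splits the query part of a "rabbitmq:/Service?key=value&..." address
    public static Map<String, String> parse(String address) {
        Map<String, String> params = new LinkedHashMap<String, String>();
        int idx = address.indexOf('?');
        if (idx < 0) {
            return params; // address carries no parameters
        }
        for (String pair : address.substring(idx + 1).split("&")) {
            String[] kv = pair.split("=", 2);
            params.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        return params;
    }

    public static void main(String[] args) {
        String address = "rabbitmq:/AMQPProxy?rabbitmq.queue.name=queue2"
                + "&rabbitmq.exchange.name=exchange2";
        System.out.println(parse(address));
    }
}
```

Running this prints the two parameters the transport would see: the target queue name and exchange name.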

5. RabbitMQ AMQP Transport properties

  • rabbitmq.server.host.name – Host name of the machine the RabbitMQ server is running on.
  • rabbitmq.server.port – Port the server is listening on.
  • rabbitmq.server.user.name – User name used to connect to the RabbitMQ server.
  • rabbitmq.server.password – Password of the account used to connect to the RabbitMQ server.
  • rabbitmq.server.virtual.host – Virtual host of the RabbitMQ server, if any.
  • rabbitmq.queue.name – Queue name to send messages to or consume messages from.
  • rabbitmq.exchange.name – Exchange name the queue is bound to.

There can be situations where you only want to send AMQP messages out, or only want to receive AMQP messages. The above proxy can be changed to suit both scenarios. You can also change it to work with a different transport, for example a proxy which listens for messages over AMQP and sends them over HTTP or JMS, etc.
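As a sketch of that last variant, a proxy which listens over AMQP and forwards over HTTP could look like the following (the proxy name and endpoint URL are illustrative, not from this post):

```xml
<proxy xmlns="http://ws.apache.org/ns/synapse" name="AMQPToHttpProxy"
       transports="rabbitmq" startOnLoad="true">
   <target>
      <inSequence>
         <send>
            <endpoint>
               <address uri="http://localhost:9000/services/SampleService"/>
            </endpoint>
         </send>
      </inSequence>
   </target>
   <parameter name="rabbitmq.queue.name">queue1</parameter>
   <parameter name="rabbitmq.exchange.name">exchange1</parameter>
   <parameter name="rabbitmq.connection.factory">AMQPConnectionFactory</parameter>
</proxy>
```

The AMQP side (queue, exchange, connection factory) is unchanged; only the endpoint changes to an HTTP address.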

Sample Proxy Service in operation

Note: this feature is currently tested for SOAP messages which are sent to and consumed from AMQP broker queues with content type “text/xml”.

1. When the proxy service created by following the above steps is deployed in the ESB, it will be listening on the queue specified for the connection factory (rabbitmq.connection.factory) under the RabbitMQ AMQP transport listener. When a message becomes available in the queue, the listener consumes it.

2. The consumed message will then be sent to the endpoint queue specified in the endpoint configuration of the proxy.

A sample Java client to send a SOAP XML message to an AMQP queue

In a previous post of mine, I explained how to use the RabbitMQ Java Client to send and receive messages. Here I’m using the same approach to send messages to, and receive them from, a RabbitMQ queue.

Note: give the correct queue name when publishing messages. This is the same queue that the AMQP transport listener will be listening on.

ConnectionFactory factory = new ConnectionFactory();
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.queueDeclare(queueName, false, false, false, null);
channel.exchangeDeclare(exchangeName, "direct", true);
channel.queueBind(queueName, exchangeName, routingKey);

// The message to be sent
String message = "<soapenv:Envelope " +
                 "xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">\n" +
                 "<soapenv:Header/>\n" +
                 "<soapenv:Body>\n" +
                 "  <p:greet xmlns:p=\"http://greet.service.kishanthan.org\">\n" +
                 "     <in>" + name + "</in>\n" +
                 "  </p:greet>\n" +
                 "</soapenv:Body>\n" +
                 "</soapenv:Envelope>";

// Populate the AMQP message properties
AMQP.BasicProperties.Builder builder = new AMQP.BasicProperties().builder();
builder.contentType("text/xml");

// Custom user properties
Map<String, Object> headers = new HashMap<String, Object>();
headers.put("SOAP_ACTION", "greet");
builder.headers(headers);

// Publish the message to the exchange, using the routing key the queue was bound with
channel.basicPublish(exchangeName, routingKey, builder.build(), message.getBytes());

A sample Java client to consume the message from an AMQP queue

Note: give the correct queue name when consuming messages. Here the queue name will be the one configured in the endpoint configuration of the proxy service.

ConnectionFactory factory = new ConnectionFactory();
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.queueDeclare(queueName, false, false, false, null);
channel.exchangeDeclare(exchangeName, "direct", true);
channel.queueBind(queueName, exchangeName, routingKey);

// Create the consumer
QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume(queueName, true, consumer);

// Start consuming messages
while (true) {
   QueueingConsumer.Delivery delivery = consumer.nextDelivery();
   String message = new String(delivery.getBody());
   System.out.println("Message Received : '" + message + "'");
}

The above two Java clients can be used to test the scenario where you publish a message to a RabbitMQ AMQP queue, which is consumed by the ESB and published to another queue, which is in turn consumed by a Java client. When running both clients, the messages sent by the Sender will arrive at the Receiver side. If this works as expected, you have configured the RabbitMQ AMQP transport correctly in WSO2 ESB.


This new RabbitMQ AMQP transport implementation solves the problem of calling an AMQP broker, such as RabbitMQ, directly, without having to go through other transport mechanisms such as JMS. The underlying protocol used by this transport is AMQP.

The RabbitMQ Java Client is available under the Mozilla Public License 1.1. WSO2 ESB is available under the Apache License v2.


Using RabbitMQ Java Client to send AMQP Messages to an AMQP Broker


RabbitMQ is an open source message broker which implements the AMQP messaging protocol. It also has multiple client libraries (Java, .NET, Erlang) which can be used to send/receive AMQP messages to/from an AMQP broker.

In this post I’m going to explain how to use the RabbitMQ Java Client library to send and receive messages. Since RabbitMQ can also act as an AMQP server, I’ll be using it in this post.

Configuring RabbitMQ Server

This documentation explains how to configure the RabbitMQ server in multiple OS environments. Follow the instructions for your environment to configure it.

Using RabbitMQ Maven dependency

Creating a Maven project using the RabbitMQ Maven dependency is relatively easy. The library is available in the Maven central repository. Use the following dependency in your project.
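The dependency coordinates look like this (shown with the 3.0.3 client version used elsewhere on this blog; use whichever version you need):

```xml
<dependency>
    <groupId>com.rabbitmq</groupId>
    <artifactId>amqp-client</artifactId>
    <version>3.0.3</version>
</dependency>
```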


Publishing an AMQP message

The following code segment shows the relevant parts needed in order to publish an AMQP message to the broker.

ConnectionFactory factory = new ConnectionFactory();
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();

try {
    channel.queueDeclare("Queue1", false, false, false, null);
    channel.exchangeDeclare("Exchange1", "direct", true);
    channel.queueBind("Queue1", "Exchange1", routingKey);
    String message = "This is a test message";
    AMQP.BasicProperties.Builder builder = new AMQP.BasicProperties().builder();
    // publish with the same routing key the queue was bound with
    channel.basicPublish("Exchange1", routingKey, builder.build(), message.getBytes());
} catch (IOException e) {
    e.printStackTrace();
} finally {
    try {
        channel.close();
        connection.close();
    } catch (IOException e) {
        // ignore failures while closing
    }
}

Here we declare the queue on the channel. If the queue is not found on the broker, a new queue is created. Then you have to declare an exchange and bind the queue to it with a routing key.

We can also add properties to the message before publishing. This is done using an AMQP.BasicProperties.Builder instance. The following are the properties that can be added to an AMQP message.

“contentType, contentEncoding, headers, deliveryMode, priority, correlationId, replyTo, expiration, messageId, timestamp, type, userId, appId, clusterId”

Apart from the above properties, if you want to add custom properties, you can do so with the “headers” map as key-value pairs.

Consuming an AMQP Message

The following code segment shows how to consume messages from a queue in an AMQP broker.

ConnectionFactory factory = new ConnectionFactory();
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.queueDeclare("Queue1", false, false, false, null);
channel.exchangeDeclare("Exchange1", "direct", true);
channel.queueBind("Queue1", "Exchange1", routingKey);
QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume("Queue1", true, consumer);

while (true) {
     QueueingConsumer.Delivery delivery = consumer.nextDelivery();
     String message = new String(delivery.getBody());
     System.out.println("Message Received : '" + message + "'");
}

Using the above, you can consume the messages that arrive on the queue named “Queue1”. You have to use the correct queue and exchange names. The QueueingConsumer instance is what does the actual consuming of messages from the queue. The call consumer.nextDelivery() is a blocking call which waits until a message is delivered from the queue.

There can be situations where the AMQP server is not running on the default port or with the default user account. When configuring the connection factory, you can set those values accordingly.

For example:

ConnectionFactory factory = new ConnectionFactory();
// illustrative values; use the ones matching your broker setup
factory.setHost("localhost");
factory.setPort(5673);
factory.setUsername("user");
factory.setPassword("abc123");
factory.setVirtualHost("/");

This shows how to set different configuration values, such as host, port, username and password, for a connection factory.
