In the first post, I wrote about JMS basics and how to write a simple JMS client. In this post, I'll delve deeper into the code and try to reveal the secrets of the magic :)

One can write a JMS client following the three steps below…

1. Setting up the JNDI context

2. Validating the JMS destinations

3. Creating and using the connection for the data exchange

Let us see each step in detail…

Setting up the JNDI context

Java Naming and Directory Interface (JNDI) is a standard, implementation-independent API that allows applications to discover and look up data and objects using a name. As a JMS client, I need to know where to find the ConnectionFactory and the Destinations (topic/queue) that act as intermediaries between the producer and consumer for the data exchange. The JMS API itself doesn't provide a way to do this: unlike connections, sessions, producers, consumers and messages, ConnectionFactory and Destination objects cannot be obtained through the JMS API. JNDI comes to the rescue and provides a dynamic, portable and configurable mechanism to obtain these objects.

First, we need to create a connection to the JNDI naming service and obtain the ConnectionFactory and Destination objects from it. JNDI provides the class javax.naming.InitialContext for this purpose; it is the starting point for any JNDI lookup. The properties we put into the InitialContext depend on the JNDI service provider we are using.

In the previous post, the initial context was pretty simple…

jndiContext = new InitialContext();

Ideally, we should put all the information required to connect to the JNDI service into the context, for example:

Hashtable<String,String> environment = new Hashtable<String,String>();
environment.put("java.naming.provider.url", "ormi://machine1:12401");
environment.put("java.naming.security.principal", "UserName");
environment.put("java.naming.security.credentials", "Password");
environment.put("java.naming.factory.initial", "com.evermind.server.ApplicationClientInitialContextFactory");
InitialContext jndiContext = new InitialContext(environment);

Most JNDI lookups require all four of these properties to be defined in the initial context. The URL tells us where to locate the registry that holds the directory information. The "java.naming.factory.initial" property selects the service provider for the initial context: it specifies the class name of the provider's initial context factory, and the jar file containing this class must be on the client's classpath at runtime. In the example above, where we are writing a client for the OC4J JMS server, we use "com.evermind.server.ApplicationClientInitialContextFactory", which is part of oc4j-client.jar. The username and password are the credentials used to connect to the JMS server. Some JMS providers support an anonymous security context, while most assume the credentials can be obtained from JNDI or from the current thread.
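If typing the raw property names feels error-prone, the same environment can be built using the constants defined on javax.naming.Context; this is standard JNDI and is just a different spelling of the snippet above. A minimal sketch, reusing the same hypothetical host, port and credentials:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class JndiContextFactory {

    // Builds an InitialContext for the OC4J JMS server; the URL and credentials are placeholders.
    public static InitialContext create() throws NamingException {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.PROVIDER_URL, "ormi://machine1:12401");
        env.put(Context.SECURITY_PRINCIPAL, "UserName");
        env.put(Context.SECURITY_CREDENTIALS, "Password");
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.evermind.server.ApplicationClientInitialContextFactory");
        return new InitialContext(env);
    }
}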

 Validating the JMS destinations

Once the initial context is set, we need to get the JMS connection factory and the destination objects.

connectionFactory = (ConnectionFactory) jndiContext.lookup("jms/TopicConnectionFactory");
dest = (Destination) jndiContext.lookup(destName);

The javax.jms.ConnectionFactory is used to create a connection object to a JMS server. A ConnectionFactory is a type of administered object, which means its attributes and behavior are configured by the system administrator responsible for the messaging server. The connection created from the ConnectionFactory in the next step represents a connection to the message server.

The javax.jms.Destination is an interface that encapsulates a provider-specific address. Queues and Topics are two different kinds of destination, and they are administered objects just like the ConnectionFactory. We get the destination object by looking up these administered objects in the JNDI namespace/registry; the consumer will then be created against this specific object.
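As a small illustration, looking up a queue and a topic is the same JNDI call with a different name and target type (javax.jms.Queue and javax.jms.Topic); the names below are hypothetical and depend entirely on what the administrator registered on the server:

// Hypothetical JNDI names; the real ones come from the server configuration.
Queue ordersQueue = (Queue) jndiContext.lookup("jms/demoQueue");
Topic newsTopic = (Topic) jndiContext.lookup("jms/demoTopic");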

Creating and using the connection for the data exchange

connection = connectionFactory.createConnection();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
consumer = session.createConsumer(dest);
listener = new TextListener();
consumer.setMessageListener(listener);
connection.start();

// delay to read the messages in onMessage()

connection.stop();

As mentioned before, the connection represents a connection to the JMS server. Every connection created from the ConnectionFactory is unique. The connection is managed using the start(), stop() and close() methods. Once start() is invoked, the JMS server begins delivering messages; if we haven't subscribed to any topic, those messages are simply discarded, so it is better to subscribe to a destination before starting the connection. stop() pauses inbound messages on that connection until start() is called again, and close() closes the connection to the JMS server.

A connection object is used to create session objects. A Session object is responsible for creating message, consumer and producer objects. To gain finer-grained control over consumers, producers and their transactions, we can create multiple sessions from a single connection. We could also create multiple connections to the server for the same purpose, but connections are expensive, so it is always better to create multiple sessions than multiple connections.

The first parameter of createSession() indicates whether the session is transacted. Confused? Let me explain… In JMS, a transaction groups a message or a set of messages into one atomic unit of processing: failure to deliver a single message can result in the redelivery of the whole set. This is also why we have to include jta.jar when running a JMS client, since the provider may expect some JTA classes. In the code snippet above the parameter is set to false, which means the session is not transacted. The second parameter indicates the acknowledgment mode used by the JMS client. An acknowledgment is simply a notification to the message server that the JMS client has received the message. Here we chose AUTO_ACKNOWLEDGE, which means the message is automatically acknowledged after it has been received by the client.
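For comparison, here is a minimal sketch of a transacted session. When the first parameter is true, the acknowledgment mode is ignored, and nothing becomes visible to consumers until commit() is called; the destination and message texts are placeholders:

// Transacted session: the second argument is ignored when the first is true.
Session txSession = connection.createSession(true, Session.SESSION_TRANSACTED);
MessageProducer txProducer = txSession.createProducer(dest);
txProducer.send(txSession.createTextMessage("first part"));
txProducer.send(txSession.createTextMessage("second part"));
txSession.commit();   // both messages become visible to consumers together
// on a failure we would call txSession.rollback() instead, and neither message is delivered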

The session object can also be used to create a producer.

producer = session.createProducer(dest);

 It is also used to create a message and send it using the producer like below…

TextMessage message = session.createTextMessage();
message.setText("Mymessage");
producer.send(message);

"dest" is the Topic object, which is a handle to the physical topic on the messaging server. A topic is essentially a kind of newsgroup to which many message consumers can subscribe. When a publisher publishes a message to the topic, the message is sent to all the subscribers of that topic.

Finally, the consumer's setMessageListener() method registers a listener on the consumer. Once it is set, the onMessage() method of the "listener" object is invoked whenever the JMS server pushes a message to the subscriber. Note that the JMS specification leaves the behavior undefined if a message listener is set on a consumer that is already being used to receive messages, so it is better to avoid that situation.
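If you don't need asynchronous delivery at all, the standard MessageConsumer API also offers a synchronous receive(timeout) call that blocks for at most the given number of milliseconds and returns null on timeout. A small sketch as an alternative to the listener:

// Poll the destination instead of registering a listener.
connection.start();
Message m = consumer.receive(5000); // wait up to five seconds
if (m instanceof TextMessage) {
    System.out.println("Received: " + ((TextMessage) m).getText());
} else if (m == null) {
    System.out.println("No message arrived within the timeout");
}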

I guess this post would help you in understanding how a JMS client works and would help you in writing one. Happy coding :)

JMS (Java Message Service) is a standard that allows applications to create, send, receive and read messages.

Let us go through some basics of JMS before delving into the code…

JMS was designed by Sun and several partner companies to standardize messaging. It defines a common set of interfaces and associated semantics that allow programs written in the Java programming language to communicate with other messaging implementations.

Why JMS?
JMS makes life easy for a programmer writing complex messaging components. Any JMS implementation promises maximum portability, asynchronous delivery of messages and reliable communication out of the box. JMS supports both point-to-point and publish/subscribe messaging. Anyone who has written basic socket code to achieve the same will appreciate JMS.

A few JMS elements we need to understand…

A JMS provider is the party that provides the implementation of the JMS interfaces.
A JMS client is an application that produces and/or receives messages.
A JMS producer is an application that publishes messages.
A JMS consumer is an application that receives messages.
A JMS message is a standard message that contains the data to be transferred.
A JMS queue is where messages are sent by the producer so that they can be consumed by the consumer. Note that this is a point-to-point solution: each message on a queue is delivered to only one consumer.
A JMS Topic is a publish/subscribe solution which can have multiple consumers. A JMS client needs to subscribe to a topic to consume its messages.
A durable subscriber/consumer is a consumer that, once subscribed to a topic, has its messages saved by the provider until they are consumed, even if the subscriber is down.

Done!! Too much of theory… Time to get our hands dirty…

// Creating a JNDI API InitialContext object if none exists yet.
try {
    jndiContext = new InitialContext();
} catch (NamingException e) {
    System.out.println("Could not create JNDI API context: " + e.toString());
    System.exit(1);
}

/*
 * Look up connection factory and destination. If either
 * does not exist, exit. If you look up a
 * TopicConnectionFactory or a QueueConnectionFactory,
 * program behavior is the same.
 */
try {
    connectionFactory = (ConnectionFactory) jndiContext.lookup(
            "jms/TopicConnectionFactory");
    dest = (Destination) jndiContext.lookup(destName);
} catch (Exception e) {
    System.out.println("JNDI API lookup failed: " + e.toString());
    System.exit(1);
}

/*
 * Create connection.
 * Create session from connection; false means session is
 * not transacted.
 * Create consumer.
 * Register message listener (TextListener).
 * Receive text messages from destination.
 * When all messages have been received, type Q to quit.
 * Close connection.
 */
try {
    connection = connectionFactory.createConnection();
    session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    consumer = session.createConsumer(dest);
    listener = new TextListener();
    consumer.setMessageListener(listener);
    connection.start();
    System.out.println("To end program, type Q or q, then <return>");
    inputStreamReader = new InputStreamReader(System.in);

    while (!((answer == 'q') || (answer == 'Q'))) {
        try {
            answer = (char) inputStreamReader.read();
        } catch (IOException e) {
            System.out.println("I/O exception: " + e.toString());
        }
    }
} catch (JMSException e) {
    System.out.println("Exception occurred: " + e.toString());
} finally {
    if (connection != null) {
        try {
            connection.close();
        } catch (JMSException e) {
        }
    }
}

The above piece of code, with its comments, is self-explanatory at a high level. The snippet is useful for writing a message consumer application. It requires a TextListener class that implements MessageListener; the onMessage method, which is called whenever a message is delivered to the Topic/Queue, needs to be implemented in that class. The code looks like this…

class TextListener implements MessageListener {
    /*
     * @param message the incoming message
     */
    public void onMessage(Message message) {
        TextMessage msg = null;

        try {
            if (message instanceof TextMessage) {
                msg = (TextMessage) message;
                System.out.println("Reading message: " + msg.getText());
            } else {
                System.out.println("Message is not a TextMessage");
            }
        } catch (JMSException e) {
            System.out.println("JMSException in onMessage(): " + e.toString());
        } catch (Throwable t) {
            System.out.println("Exception in onMessage(): " + t.getMessage());
        }
    }
}

If you want to write a durable subscriber, all you need to do is replace the session.createConsumer() call with


private TopicSubscriber subscriber = null;
subscriber = session.createDurableSubscriber((Topic) destTopic, "Identifier");

and, once we are done with the subscriber, close it and unsubscribe the same.

session.unsubscribe("Identifier");
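Note that a provider will refuse to delete a durable subscription while a subscriber is still active on it, so the subscriber should be closed first and unsubscribe() called while the session is still open. A rough sketch of the full lifecycle, reusing the identifier from above:

TopicSubscriber subscriber = session.createDurableSubscriber((Topic) destTopic, "Identifier");
// ... consume messages; the server retains them even while we are offline ...
subscriber.close();                 // close the active subscriber first
session.unsubscribe("Identifier");  // then remove the durable subscription
connection.close();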

And finally, to write a JMS producer application all we have to do is create a producer and send a message:

producer = session.createProducer(dest);
producer.send(message);
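Putting the producer side together, a minimal sketch of a sender would look roughly like this, assuming connectionFactory and dest were looked up through JNDI exactly as in the consumer above:

Connection connection = connectionFactory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = session.createProducer(dest);

TextMessage message = session.createTextMessage();
message.setText("Hello from the producer");
producer.send(message);

connection.close(); // closing the connection also closes its sessions and producers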

Since the JMS client runs as a J2EE application client, it requires an application-client.xml file in its META-INF directory. The application-client.xml file contains the JNDI information necessary for accessing the server application.

For a simple JMS client application, the content of the file would be something like this…

<application-client>

</application-client>

We also need ejb.jar at runtime to interpret these files.

With all the above instructions together, you should be able to write a JMS client application that can send and receive messages. Sounds simple!! Yes!! It is…
All those who believe in "magic" can stop reading the post here and go ahead with writing their application. For those who don't… wait for my next blog post to see how the magic's biggest secrets are finally revealed :P …

My first post on Java Technologies… :)

One of the initial challenges I faced when I started coding in Java was writing a JMS client, about which I will write shortly. But today there is another interesting challenge to talk about: loading classes dynamically.

I had a requirement to take a bunch of jars as input from the user and use them in my code. Sounds simple??

Java lets you read and modify system properties, such as the classpath, through the System class. So can't we just edit the classpath property and solve the issue? No!! The problem is that the system class loader is initialized at the very beginning of the startup sequence and reads the classpath at that point. Changing the classpath dynamically won't help, because the system class loader has already captured the old classpath and loads classes accordingly.

There should be some approach to solve this…. Yes!! There is… class loaders… Before getting into the details, let me give a brief description of class loaders.

Class loading in the JVM is managed by class loaders. The bootstrap loader is the primordial loader that loads the core Java classes during startup. As most Java programmers already know, there is no separate linking step in Java: when the JVM loads a class through a class loader, a lot happens along with the linking. Operations such as decoding the binary class format, compatibility checking, verifying the sequence of operations and constructing the java.lang.Class instance are all handled by the JVM itself. This brings a lot of flexibility to load classes at runtime, even though it adds some overhead when a class is first loaded. The bootstrap loader isn't the only class loader in the JVM; as mentioned before, there is also the system class loader, which loads classes from the general classpath, including all the application classes.

Apart from these, Java also lets applications define their own class loaders. Each class is owned by the class loader that loaded it. This post is all about how to write your own class loader, how to add jars to the system loader, and the things you need to understand before using either approach…

To begin with… the import statements that you need to add are…

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;

I need a Uniform Resource Locator (URL) to point to my jar. We can create it like below…

URL myJarFile = new URL("jar", "", "file:" + myfile.getAbsolutePath() + "!/");

The first argument is the protocol used to access the data; several protocols are supported, such as http, https, file, ftp and jar. The second argument is the host where the source lives. The third argument is the absolute location of the source. This statement builds the URL that points to our jar.
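If building the jar: URL by hand feels error-prone, an equivalent URL can be obtained from a java.io.File with the standard toURI().toURL() calls (this also needs import java.io.File). A small sketch with a placeholder path:

File chosenJar = new File("/path/chosen/by/user/mylib.jar"); // placeholder path supplied by the user
URL jarLocation = chosenJar.toURI().toURL(); // a file: URL that URLClassLoader also accepts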

We can add this URL to a class loader so that all the classes in the jar become usable. We can either define a new class loader or get the instance of the system class loader and add the URL to it. I'll show you both methods here…

The getSystemClassLoader() method can be used to get the system class loader object.

URLClassLoader sysLoader = (URLClassLoader)ClassLoader.getSystemClassLoader();

To add a jar to the system class loader, we need to add its URL to the sysLoader object. Once we have the system class loader, we get the declared addURL method and invoke it as below…

Class sysClass = URLClassLoader.class;
Method sysMethod = sysClass.getDeclaredMethod("addURL", new Class[] {URL.class});
sysMethod.setAccessible(true);
sysMethod.invoke(sysLoader, new Object[]{myJarFile});

Every Class object exposes hooks to access the basic metadata of the class, such as the package it is in, its superclass, its interfaces, constructors, fields, methods, etc.
As you can see above, first I get the Method instance for addURL, which is declared in the URLClassLoader class, and then I invoke that method with the jar file URL as an argument. The purpose of this call is to add the URL to the system class loader, which makes all the classes in that jar visible to the application.
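Wrapped up as a helper, the whole reflection dance looks roughly like the sketch below. Keep in mind that it relies on the system class loader actually being a URLClassLoader (true on the Sun JVMs of this era) and on setAccessible() being allowed by the security manager, so treat it as a sketch rather than a portable utility:

import java.io.File;
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;

public class ClasspathHack {

    // Adds a jar to the system class loader at runtime via the protected addURL method.
    public static void addJarToSystemLoader(File jar) throws Exception {
        URL jarUrl = jar.toURI().toURL();
        URLClassLoader sysLoader = (URLClassLoader) ClassLoader.getSystemClassLoader();
        Method addUrl = URLClassLoader.class.getDeclaredMethod("addURL", URL.class);
        addUrl.setAccessible(true); // addURL is protected, so open it up
        addUrl.invoke(sysLoader, jarUrl);
    }
}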

Using the system class loader is appropriate for simple applications. But for more complex applications, such as application servers where we don't want one application interfering with another, defining a separate class loader for each application makes sense. Java lets us derive class loaders from java.lang.ClassLoader. Every class loader has a reference to its parent, so whenever a class loader tries to load a class, it first checks whether the parent has already loaded it. As a result, any class loaded by a class loader is visible not only to itself but also to all its descendants. By default, the system class loader is the parent of all user-defined class loaders.
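Because of this delegation, you can also pick the parent explicitly when creating your own loader; classes already loaded by the parent are shared, while anything that exists only in the jar stays isolated to the child. A minimal sketch:

// Child loader that delegates to the system class loader (which is the default parent anyway).
URLClassLoader childLoader = new URLClassLoader(new URL[] { myJarFile }, ClassLoader.getSystemClassLoader());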

Now, to create a new class loader of our own:

URLClassLoader cl = URLClassLoader.newInstance(new URL[] {myJarFile});

Now the jar is added to the class loader. The next step is to load the class, create an instance of it, get the method that needs to be executed and invoke it. Let us assume we have a class myclass with a method "String printMe(String, String)" that needs to be invoked.

The source looks like below…

Class MyClass = cl.loadClass("com.mycomp.proj.myclass");
Method printMeMethod = MyClass.getMethod("printMe", new Class[] {String.class, String.class});
Object MyClassObj = MyClass.newInstance();
Object response = printMeMethod.invoke(MyClassObj, "String1", "String2");

What if I need a non-default constructor to create the myclass object, say "myclass(String)"? As I said before, Java provides all the hooks to get the metadata from the jar. To create an object of such a class, we get the constructor and create an instance from it like below:

Constructor MyClassConstruct = MyClass.getConstructor(new Class[] {String.class});

Object MyClassObj = MyClassConstruct.newInstance("myString");

Once the object is created using the constructor, you can invoke all the methods of that class like above.

These features of Java are a great tool for building flexible code that can be hooked in at runtime without any source-level links between classes.

This might be one of my simplest posts, but I guess these details are worth noting…

Developers tend to forget simple things during application development, and one of the most ignored aspects is the installer. Installation issues are pretty common and quite irritating. An installation failure can leave traces of the product on the machine which won't let the product be installed again.

There are multiple places where a product might leave its traces.

1. %Program Files% folder : Go to this folder and delete all the files/folders related to the application that are of no use.

2. %temp% folder: This normally doesn't create any problem, but it's good to delete these files too.

3. Services: The installation might create a service, and we may have no way of uninstalling it because the uninstaller never got copied. Fortunately, Windows provides a command-line utility with which you can create or delete a service.

To delete a service, open the command prompt and type "sc delete <service name>".

Now you can guess the command to create a service. Yes!! It is "sc create <service name>" (followed by a binPath= argument pointing at the service executable). Pretty interesting utility.

4. Registry: This is another place, and you need to be very careful while working on the registry. Run a search for keywords that are unique to the product and delete those keys.

A reboot after these steps would be helpful.

Lately we faced an issue where the communication from a remote client to the server was failing. We used to bind our server to 127.0.0.1. Here is why the communication failed…

127.0.0.1 is the localhost address on the loopback interface. Only local processes can communicate with a server bound to it, which is exactly what we saw in our application: local clients could talk to the server, but remote ones could not. There are many situations where such a restriction is exactly what we want.

0.0.0.0 – Binding to this address indicates that our server is listening on all configured IPv4 addresses on all interfaces. On the downside, the server is now accessible over the wider network, which should be considered a security threat, especially if it is exposed to the internet.
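In Java terms, the difference is just the local address the server socket is bound to. A small sketch (the port numbers are arbitrary placeholders):

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BindExamples {
    public static void main(String[] args) throws Exception {
        // Loopback only: reachable from local processes, invisible to remote clients.
        ServerSocket localOnly = new ServerSocket();
        localOnly.bind(new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 15001));

        // All interfaces: reachable from remote clients as well.
        ServerSocket allInterfaces = new ServerSocket();
        allInterfaces.bind(new InetSocketAddress("0.0.0.0", 15002));

        System.out.println("Bound " + localOnly + " and " + allInterfaces);
        localOnly.close();
        allInterfaces.close();
    }
}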

This is one of the most difficult issues we have resolved, and it took us through a lot of trouble. The issue was first reported on an application A (name changed): it would not start during a reboot of the machine. Once the machine had started, the customer was able to start the application using the same commands that SCM uses.

Application A depends on our application C. In the SCM logs we see this… (esecuzione = execution, avvio = start)

06/09/2007       12:32:54 Service Control Manager Information        None     7036     N/A       34100PC023      The C service entered the esecuzione state.

06/09/2007        12:32:53 Service Control Manager Information        None     7036     N/A       34100PC023      The A service entered the esecuzione state.

At this point, C and then A are able to complete their initialization.  A starts before C and presumably is none the worse for wear for doing so.

06/09/2007        12:32:51            Service Control Manager Information        None     7036     N/A       34100PC023      The Servizi terminal service entered the esecuzione state.

06/09/2007        12:32:51            Service Control Manager Information        None     7036     N/A       34100PC023      The Windows Installer service entered the esecuzione state.

…..

06/09/2007        12:32:50            Service Control Manager Information        None     7036     N/A       34100PC023      The NLA (Network Location Awareness) service entered the esecuzione state.

06/09/2007        12:32:50            Service Control Manager Information        None     7035     NT AUTHORITY\SYSTEM            34100PC023      The NLA (Network Location Awareness) service was successfully sent a avvio control.

06/09/2007        12:32:50            Service Control Manager Error     None     7022     N/A       34100PC023      The A service hung on starting.

A hung. It also does something that can cause it to block BEFORE telling SCM that it has started.

06/09/2007        12:32:10            Service Control Manager Error     None     7022     N/A       34100PC023      The C service hung on starting.

Application C hung because communications are not available. We were reasonably sure from C's logs that it hangs on a WSAStartup call.

So in our scenario, application C doesn't fail to start; it times out during startup. This results in the failure of application A's startup. Our analysis at this point was that on that particular machine there was an unknown service which needs to start before a WSAStartup call will complete (when IP is initialized). Application C calls WSAStartup early in its initialization, before it tells SCM it has started. This call blocks until the unknown service has asked other optional services to complete IP initialization. As SCM will only start one service at a time while booting, service startup freezes until SCM times out application C's startup.

When C's startup times out, whatever is necessary to complete IP initialization can continue, but as A is dependent on C, SCM fails A. Eventually, IP initializes, the blocks on A and C are removed, and everything starts up.

With the above analysis, we decided to fix it by reporting to SCM that the service has started before calling WSAStartup. We made the changes, and when we ran them in the customer environment we faced the same problem again. :(

The WSAStartup call didn't hang, but it failed with an error code.

Completely fed up, we asked the customer to run BootLogXP. This tool gives us better information about the loading of DLLs during boot. It gave the following output…

C:\WINDOWS\SYSTEM32\IMM32.DLL Start: 28.401 sec

C:\Programmi\C\bin\C.dll Start: 28.173 sec

C:\WINDOWS\SYSTEM32\DNSAPI.DLL Start: 173.365 sec

This, along with C's logs, didn't tell us anything new, but it confirmed one thing: WSAStartup doesn't hang, but the subsequent socket calls do hang until comms are up.

The basic problem here was that the Service Control Manager (SCM) starts one service at a time, and if one blocks, it slows down the whole boot sequence. A blocked service will eventually be timed out, at which point SCM lets it continue and starts the next one it has selected. To get around this, we changed things so that we report to SCM that we have started at an earlier point in our initialization than we would like. The downside is that we may be relying on service dependencies as a means of getting things started in the right order, so that the services we need are available before we start. Unfortunately, reporting to SCM early means that we re-introduce the race conditions we were trying to avoid, which makes service dependencies less useful.

Another thing we did was to remove the dependency between A and C. After making these changes and running the applications on the customer’s machine, we encountered the same problem again.

Finally everyone lost hope and started blaming the system. Everyone suggested that it could be machine-specific. The customer re-imaged another machine, and it showed the same problem. Dead end…

The solution to this problem came in a quite interesting way… Our L2 team tried to get more information about the system's environment and by chance found that, in the customer's environment, the system PATH variable contained entries for two network shares.

This answered a lot of questions. The explanation seems to be this: when A starts up, it tries to load E.DLL using LoadLibraryEx. Unfortunately, this DLL is not present on the system, so the loader starts looking in all the directories specified in %PATH%. This takes time because the network is not yet up on the machine. Once these entries were removed, everything started working fine.

Before this, I didn't know we could put network shares on the system path, as network shares tend to be visible to particular users and would normally be invisible to the 'system' user that services run under. I've learned quite a lot from this issue, about how SCM works and about network paths in the system PATH variable. This information helped me resolve a few more issues in the same space. I hope it helps everyone else…

*** glibc detected *** free(): invalid next size (fast): 0x083de008 ***

One of the most frustrating problems I have ever seen. We first observed it in one of our applications on a test machine running RHEL 4 with glibc 2.3.x.

From the error message it is pretty clear that the problem is caused by freeing some invalid pointer. Since our application is multi-threaded, an obvious fix was to protect the area where the deletion happens. We did that. The fix didn't solve the issue, but it delayed it by 8 hours.

The variable being deleted is a char array that was allocated with the new operator. We copy contents into it using memcpy. Since the new operator handles memory allocation properly, we weren't very worried about the contents being copied; if the app tried to copy something out of bounds, we expected it to crash right there. So we kept looking and tried to lock the areas where the variable is created and initialized. This didn't solve the problem either, but it delayed it by 16 days :)..

Yes!! Now comes the major problem… How on earth do you solve an issue that happens only after a 16-day run?

We went ahead and looked at all the possibilities, but couldn't find a clue about what might have gone wrong.

For some reason I suspected that the memcpy was copying something out of bounds. I had this nagging feeling, so I went ahead and wrote a small test sample that replicates the code where we see the crash. It gave us surprising results…

Here is the code that I used…

#include <stdio.h>
#include <string.h>

int main()
{
    char *temp;
    int len = 12;

    temp = new char[len]; // Allocate 12 bytes of memory

    printf("Got the new object\nAllocating the memory now\n");
    temp[0] = 'A'; // used the first byte
    printf("Allocated size =1\n");
    memcpy(temp+1, "BBBB", 4); // used 5 bytes
    printf("Allocated size =5\n");
    memcpy(temp+5, "CCCC", 4); // using 9 bytes
    printf("Allocated size =9\n");
    memcpy(temp+9, "DDDD", 4); // using 13 bytes... I expected it to crash here. But it didn't :(
    printf("Allocated size =13\n");
    memcpy(temp+13, "EEEE", 4); // using 17 bytes... No crash either
    printf("Allocated size =17\n");
    printf("%s", temp); // Printing all the 17 bytes... Yes it does

    delete[] temp; // deleting the variable
    return 1;
}

I ran this piece of code on SUSE Linux with glibc 2.2 and it didn't core. I saw the same results on HP-UX, Solaris and AIX machines: no crash either at memcpy or at delete. That was very surprising to me. I went ahead and ran the test program on a RHEL 4 box.

Bingo!! It cored, and it cored with the same error. The machine has glibc 2.3.

*** glibc detected *** free(): invalid next size (fast): 0x083de008 ***

Initially I thought it might depend on the glibc version, as the SUSE Linux box I had was using glibc 2.2.

I tried running the test on every machine I could lay my hands on, and the results weren't conclusive.

The most surprising thing was when I ran the tests on two RHEL 5 machines with glibc 2.5.34: one of them cored with the same error and the other didn't. I am still not sure of the exact reason for this behavior, but I made a few changes in my code to ensure that we don't copy out of bounds, something like this…

// temp1 must be a real array here (not a pointer) for sizeof to give the buffer size
if (sizeof(temp1) > k) {
    memcpy(temp1, temp2, k);
} else {
    return fail;
}

This solved our issue. I hope this helps
