This article on WebSphere Portal and portlets looks at best practices when designing portlets, either for new applications or for incorporating Domino applications into WebSphere Portal.
Now that you're ready to start building portlets, you need to understand a few key considerations in portlet design: performance tracking, caching, resource sharing and device independence.
A portlet must cohabit a portal page with other portlets, which it may not be aware of, so performance becomes a key issue in portlet design. To compound the issue, the portal is the main entry point for users to access other applications. From the users' perspective, any performance issue is the fault of the portal. It is the portal that's slow -- or worse, "broken" -- not the ERP or sales application. Therefore it is extremely important for administrators to be able to trace the performance of the portlets in order to quickly and correctly isolate bottlenecks.
To determine whether the performance of your application is adequate or not, the first critical step is to document acceptable response times for existing applications. You can then test your system against those times. I call it a "critical" step because determining user expectations and testing against those expectations is the most important, yet overlooked, factor in the success of a portal. After all, perception is a key component of performance.
The easiest way to trace the performance of individual portlets is to add timers to your code. Timers are an excellent way to track response times and pinpoint the source of a performance bottleneck. Plant the timers at critical operations, such as calling third-party APIs or enterprise systems.
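A minimal sketch of such a timer follows. The class name and the simulated backend call are illustrative, not portal APIs; in a real portlet you would wrap the call to the third-party API or enterprise system and write the elapsed time to your log.

```java
// Minimal elapsed-time tracking around a critical operation.
// PortletTimer is an illustrative name, not part of any portal API.
public class PortletTimer {
    public static long timeMillis(Runnable operation) {
        long start = System.nanoTime();           // high-resolution start mark
        operation.run();                          // e.g. a third-party API call
        long elapsedNanos = System.nanoTime() - start;
        return elapsedNanos / 1_000_000;          // convert to milliseconds
    }

    public static void main(String[] args) {
        // Simulate a 50 ms backend call and report its duration.
        long ms = timeMillis(() -> {
            try { Thread.sleep(50); } catch (InterruptedException e) { }
        });
        System.out.println("backend call took " + ms + " ms");
    }
}
```

Logging the elapsed time at each critical operation lets you compare portlet-level timings against the portal's overall response time and isolate the slow tier.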
Logging is another tool for identifying bottlenecks. The first step is to enable the response time logger within the IBM HTTP server. That allows you to compare the HTTP server's access log to the client's response time log to determine whether a bottleneck exists in front of, or behind, the HTTP server. Be sure to log all executed SQL statements on the database. SQL commands can also be run through a command analyzer to see how they might be modified to improve efficiency. An example of such a tool is the SQL Explain facility in DB2.
Memory usage and garbage collection
When faced with dramatic performance failures, such as a frozen screen or unresponsive server, take a look at garbage collection performance in the Java virtual machine (JVM). Garbage collection kicks in when the JVM cannot find sufficient memory to execute a request. It is a stop-the-world (STW) activity: the JVM pauses all application threads, then releases the memory used by objects that are no longer referenced, freeing up heap space.
IBM's JVMs are tuned for the best out-of-the-box performance and use multiple helper threads on multiprocessor systems to minimize garbage collection time. Nevertheless, if you experience extremely slow response times, you should enable garbage collection logging to determine whether or not those times correspond to garbage collection activities. If that is the case, consider increasing the size of the heap -- the memory space allocated to the JVM -- for temporary relief until you find out what is causing garbage collection. When massive amounts of objects are created and discarded (no longer referenced), perhaps by a poorly designed portlet application, garbage collection activity will increase.
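GC logging itself is enabled with JVM arguments (for example, `-verbose:gc`, with the heap ceiling set via `-Xmx`). To correlate slow responses with memory pressure from inside the application, a quick heap snapshot using only the standard `Runtime` API can help; the report format below is illustrative:

```java
// Quick heap snapshot, useful for correlating slow responses with memory
// pressure. Uses only the standard java.lang.Runtime API.
public class HeapSnapshot {
    public static String report() {
        Runtime rt = Runtime.getRuntime();
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long maxMb  = rt.maxMemory() / (1024 * 1024);
        return "heap used " + usedMb + " MB of " + maxMb + " MB max";
    }

    public static void main(String[] args) {
        System.out.println(report());
    }
}
```

Logging such a snapshot alongside your portlet timers makes it easy to see whether response-time spikes line up with a nearly full heap.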
Caching is vital to Web application performance, but it's difficult to implement. That's partly because data varies considerably in whether it should be cached and for how long. For instance, stock quotes aren't good candidates for extended caching because they change too frequently; user preferences, on the other hand, cache well because they are typically set once and changed infrequently.
Caching offers big gains in response times but must be carefully thought out before implementing. If portlet output is static in nature, or remains valid for a fairly lengthy period before it is updated, then the portlet should enable the "display cache" by setting appropriate cache values in the portlet deployment descriptor.
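As a sketch, assuming the IBM portlet API deployment descriptor of this era, the display cache settings in portlet.xml look roughly like the fragment below; the values shown are illustrative (JSR 168 portlets express the same idea with an `<expiration-cache>` element instead):

```xml
<!-- Illustrative cache settings in portlet.xml (IBM portlet API).
     expires: seconds the rendered markup stays valid
              (-1 = never expires, 0 = do not cache).
     shared:  whether one cached copy may be shared across users. -->
<cache>
    <expires>300</expires>
    <shared>no</shared>
</cache>
```

Sharing a single cached copy across users is only safe when the output contains no per-user data, so `shared` should stay `no` for personalized portlets.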
In a typical work environment, multiple users may need to access the same content. If too many users are accessing the same content, however, limited resources can significantly slow performance. If a portlet must wait for a session object to become available, that creates a bottleneck in the flow of the application.
An object pool is a set of limited resources, such as sessions, that can be reserved for use by portlets and then returned to the pool. Reserving and returning pooled objects avoids the overhead of having to create and destroy an object each time a portlet requests it. Some common object pools might contain session connection objects or view objects. Object pools are scalable, accessible by any number of threads, enable load balancing and are easy to use. However, developing and maintaining the code for an object pool takes time. So if your objective in pooling is to share view beans, consider caching the portlet content instead of the objects that drive it.
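A minimal object pool can be sketched with a `BlockingQueue`; the generic type here stands in for whatever expensive resource (a session or view object) is being pooled, and the class and method names are illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Minimal fixed-size object pool. Threads block in acquire() when every
// object is checked out -- exactly the bottleneck described above -- so
// size the pool for the expected concurrency.
public class ObjectPool<T> {
    private final BlockingQueue<T> pool;

    public ObjectPool(int size, Supplier<T> factory) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            pool.add(factory.get());   // pre-create the limited resources
        }
    }

    public T acquire() throws InterruptedException {
        return pool.take();            // waits if the pool is empty
    }

    public void release(T obj) {
        pool.offer(obj);               // return the object for reuse
    }
}
```

A portlet would call `acquire()` before invoking the backend and `release()` in a `finally` block, so a failed request never leaks a pooled object.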
When designing portlet applications, it's best to use a device-independent method, which means it's not designed for any specific device but can be extended to support multiple devices as needed. There are several ways to support device independence in WebSphere Portal. One technique is to package Java Server Pages (JSPs) and associated resources in directories that match specific client types and use the getMarkupName() method to check the markup supported by the client at the time of the request. So when a portlet uses a JSP for rendering portlet content, the portal selects the proper JSP for the client, markup language and location (for example, Japan, Germany and the U.S.) indicated in the request.
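The selection logic can be sketched as a plain helper that maps the markup name (as returned by the portal's getMarkupName() call) to a JSP directory. The class name and directory layout are illustrative; the markup names ("html", "wml", "chtml") follow WebSphere Portal's convention:

```java
// Illustrative helper: choose a JSP directory from the client's markup name.
// In a real portlet the markup name would come from the request; here it is
// passed in directly so the mapping can be shown in isolation.
public class JspSelector {
    public static String jspPath(String markupName, String jspName) {
        switch (markupName) {
            case "wml":   return "/jsp/wml/" + jspName;    // WAP phones
            case "chtml": return "/jsp/chtml/" + jspName;  // i-mode devices
            default:      return "/jsp/html/" + jspName;   // desktop browsers
        }
    }
}
```

Because each device type gets its own directory, adding support for a new client later means adding a directory and a case, not rewriting the portlet.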
No matter how closely you've adhered to best practices, you can never be sure of performance until an application has been deployed in the real world. But a good indication of performance comes from thorough testing. A pre-production test is, therefore, a critical reality check before deployment.
Once a portlet application is created and tested, it should then be moved to a staging environment that resembles, as closely as possible, the real production environment. In this sort of pre-rollout test, make sure your application performs correctly in the real world -- especially if the application will be interacting with third-party applications and enterprise systems.
For more information about the best practices and development options available for WebSphere Portlets, see Mobile Applications with IBM WebSphere Everyplace Access Design and Development, SG24-6259 and Patterns: Pervasive Portals Patterns for e-Business Series, SG24-6876.
Tony Higham is part of the WebSphere Portal team at IBM and an expert on Lotus, WebSphere and Java technologies. He can be reached at email@example.com.
Sue Hildreth is a contributing writer and editor based in Waltham, Mass. She can be reached at Sue.Hildreth@comcast.net.