You can find the full source code for this website in the Seam package in the directory /examples/wiki. It is licensed under the LGPL.
For any Seam-powered website, optimizing the communication between the web browser client and the server is crucial. Some of these optimization strategies are generic, and some are handled automatically by Seam on the server side, but many others depend on the content that is being served. In other words, Seam can only do so much automatically; you, as the developer of a website, will have to tune your system manually for optimal performance and scalability.
This document outlines some common strategies useful for Seam application developers.
Your best tool for analyzing the flow of information and the related performance characteristics is the Firebug plugin for Firefox. This plugin allows you to analyze HTTP request and response messages. For example, you can use it to see which resources are cached on the client, or which resources have been sent from the server using compression.
In addition, you should install Google Page Speed, a plugin for Firebug. It can analyze the performance characteristics of a particular webpage and summarize potential optimization strategies, such as recommended changes to the caching of resources, compression, and so on.
Keep in mind, however, that although tools such as Page Speed are useful for getting an overview, you still have to use your own judgement and should not follow their recommendations blindly. The tool only groups the analysis results by a predefined metric of importance. It is entirely possible that a particular optimization that Page Speed rates as highly recommended will, in the end, do little to improve the perceived performance of your website.
As an example consider dynamically served images, e.g. from a database. Although Page Speed will show you that better compression of PNGs, JPGs, and so on could save you 50% bandwidth and that this would be a worthy and important optimization, actually implementing it would be quite a bit of work. And, because browsers (should) cache these images, the only time you'd see any real effect is on the first HTTP hit (cold cache); subsequently the browser would use the cached version (warm cache).
This distinction between cold and warm browser cache behavior is a fundamental aspect of HTTP communication optimization. Always know what will be affected and balance your effort and the amount of work with the expected behavior and perception of your website visitors.
Another example is Page Speed's recommendation to allow parallel requests for CSS, Javascript, images, and other dependencies by distributing them across different hostnames. Unless you have the infrastructure (DNS, virtual webserver hosts, etc.) to act on this recommendation, you should simply ignore it. Although the cold-cache rendering speed of your website is important for attracting visitors, keeping them requires that these resources be cached anyway, so that subsequent visits are faster. Again, weigh the effort against the expected result.
Especially for text-based content such as CSS and Javascript files, gzip compression before transferring the data to the client has a major impact on the perceived cold-cache performance of your website. In other words, when a visitor opens your website for the first time, the speed with which it renders depends on how much data has to be downloaded. Your goal is to reduce the number of kilobytes that have to be transferred across the wire.
Ideally, content compression is enabled globally and not on a per-application basis. This is the job of the webserver or servlet container. If your servlet container does not support compression, you can always write your own servlet filter; note that no such filter is built into Seam or JSF.
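The following is a minimal sketch of what such a filter could look like, targeting the Servlet 2.5 API that Seam 2.x runs on. The class names GzipFilter and GzipResponseWrapper are made up for this example, and a production filter would also have to handle the Vary header, the getWriter() path, and already-committed responses:

import java.io.IOException;
import java.util.zip.GZIPOutputStream;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletOutputStream;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Hypothetical example, not part of Seam: compresses responses for clients
// that advertise gzip support. Map it in web.xml to the URL patterns you
// want compressed.
public class GzipFilter implements Filter {

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String acceptEncoding = request.getHeader("Accept-Encoding");
        if (acceptEncoding != null && acceptEncoding.contains("gzip")) {
            GzipResponseWrapper wrapped = new GzipResponseWrapper(response);
            chain.doFilter(request, wrapped);
            wrapped.finish(); // writes the gzip trailer
        } else {
            chain.doFilter(request, response);
        }
    }

    public void init(FilterConfig config) {}
    public void destroy() {}

    static class GzipResponseWrapper extends HttpServletResponseWrapper {

        private GZIPOutputStream gzipStream;

        GzipResponseWrapper(HttpServletResponse response) {
            super(response);
        }

        @Override
        public ServletOutputStream getOutputStream() throws IOException {
            if (gzipStream == null) {
                // Tell the client that the content is compressed
                ((HttpServletResponse) getResponse()).setHeader("Content-Encoding", "gzip");
                gzipStream = new GZIPOutputStream(getResponse().getOutputStream());
            }
            final GZIPOutputStream out = gzipStream;
            return new ServletOutputStream() {
                public void write(int b) throws IOException { out.write(b); }
                public void write(byte[] b, int off, int len) throws IOException {
                    out.write(b, off, len);
                }
            };
        }

        @Override
        public void setContentLength(int len) {
            // Ignored: the compressed size differs from the original
        }

        void finish() throws IOException {
            if (gzipStream != null) gzipStream.finish();
        }
    }
}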
By default, Tomcat does not compress content before transferring it to the client. You have to enable compression in your Tomcat server.xml for the content types you want to serve compressed:
<Connector port="80" address="${jboss.bind.address}" ... compressableMimeType="text/html,text/xml,text/plain,text/css,text/javascript" compression="on"/>
If supported by the client, Tomcat will now respond to any request for such a resource with a gzip-compressed version. Note that compression requires more CPU power on your server! You can therefore also set a minimum content length in bytes, disabling compression for resources that are very small; remember that even if Page Speed tells you it is important, you won't gain much from compressing a 1kb resource to 800 bytes. On the other hand, if CPU load on your server is not your primary concern, you can easily compress all text content.
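Tomcat supports this with the compressionMinSize attribute on the connector; its default is 2048 bytes:

<Connector port="80" address="${jboss.bind.address}" ...
           compressableMimeType="text/html,text/xml,text/plain,text/css,text/javascript"
           compression="on"
           compressionMinSize="2048"/>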
For reasons unknown, compression in Tomcat doesn't work for some content. We've observed this behavior with content produced by servlets as well as with simple static files served from the WAR webroot. Unfortunately, no clear failure pattern could be established. Therefore, we have added some convenience methods in Seam 2.2.1 that allow you (and Seam itself) to selectively enable compression on implementations of Seam's AbstractResource. This is a Seam extension point you should consider instead of writing and configuring your own custom servlets. Your content will be handled through a single configured SeamResourceServlet:
import java.io.IOException;
import java.io.OutputStreamWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.jboss.seam.ScopeType;
import org.jboss.seam.annotations.Name;
import org.jboss.seam.annotations.Scope;
import org.jboss.seam.annotations.intercept.BypassInterceptors;
import org.jboss.seam.web.AbstractResource;

@Scope(ScopeType.APPLICATION)
@Name("myResource")
@BypassInterceptors
public class MyResource extends AbstractResource {

    // The final resource path is (depending on web.xml): /seam/resource/myresource
    public String getResourcePath() {
        return "/myresource";
    }

    public void getResource(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/plain");
        OutputStreamWriter outStream =
            new OutputStreamWriter(selectOutputStream(request, response));
        outStream.write("Some text content!");
        outStream.close();
    }

    @Override
    protected boolean isGzipEnabled() {
        return true;
    }

    @Override
    protected boolean isCompressedMimeType(String mimeType) {
        return mimeType.matches("text/.+");
    }
}
The selectOutputStream() method automatically chooses a compressed or uncompressed output stream and, when the stream is closed, adds the right HTTP response headers to inform the client about the content encoding. You can override the isGzipEnabled() and isCompressedMimeType() methods to control compression dynamically.
Compression of transferred content is an important optimization, and it greatly influences how visitors perceive the speed of your website when they arrive for the first time, with a cold browser cache. You also have to consider how the cache is used and what happens on subsequent page views from the same browser. Most content can be cached by the browser, as it is static and rarely, if ever, changes.
You can control the browser's caching behavior with HTTP response headers. The most important header is Cache-Control: max-age=<number of seconds>. A browser will consider the content of the resource to be fresh as long as it hasn't expired. The browser will only make another roundtrip to the server to revalidate the content - that is, potentially retrieve it again - after it has expired in the local cache.
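For example, if you serve content through your own servlet or a Seam resource, you could set this header yourself; a minimal sketch:

// E.g. inside a servlet's doGet() or an AbstractResource's getResource():
// mark the response as fresh for one day (86400 seconds)
response.setHeader("Cache-Control", "max-age=86400");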
The appropriate expiration and validation strategy depends on the resource that is being requested. For example, a static PNG or CSS file served from your WAR webroot can probably be cached for hours, maybe even days or weeks. On the other hand, dynamically generated content, be it text or image data that is typically rendered from database content, requires fine-grained control of expiration timeouts.
The first resource you will probably consider caching is the rendered HTML output of the actual webpage. By design, a Seam application renders HTML dynamically and the content of the page changes all the time, so no browser caching is used here.
Next you might find some CSS, Javascript, and image dependencies in your main HTML code that trigger additional HTTP roundtrips. These dependencies are either your own, or have been added by the Seam framework automatically because you enabled certain features such as client-side AJAX.
If you use RichFaces, its resources will be served automatically with a cache expiration timeout of 24 hours (at the time of writing). You can disable this with the <web:ajax4jsf-filter enable-cache="false"/> setting in your components.xml.
More problematic are static resources served from your WAR webroot, typically your own CSS, Javascript, and image files. You also have to consider resources served by Seam through AbstractResource implementations and through your own custom servlets. None of these resources will be served with cache control headers, and it is up to the browser's caching heuristics to decide whether the content can be cached and for how long. Clearly, this is not an optimal situation.
Starting with Seam 2.2.1, you can configure, in your components.xml, a Seam filter that applies cache control headers to responses:
<web:cache-control-filter name="imageCacheControlFilter"
                          regex-url-pattern=".*(\.gif|\.png|\.jpg|\.jpeg)"
                          value="max-age=86400"/>

<web:cache-control-filter name="textCacheControlFilter"
                          regex-url-pattern=".*(\.css|\.js)"
                          value="max-age=1400"/>
Note that you do not have to name the filter if you only have one. However, in most cases you will need different expiration and cache control options for different resource types.
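A single unnamed filter covering all static resource types could, for example, look like this:

<web:cache-control-filter regex-url-pattern=".*(\.gif|\.png|\.jpg|\.jpeg|\.css|\.js)"
                          value="max-age=86400"/>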
Caching content on the browser and keeping the cache warm is your first priority. But when a cached item expires on the browser, the next page visit will again trigger loading of the resources. If the resource state has still not been modified on the server after the browser cache expired the item, the same data would be transferred to the browser again. So instead of demanding that the resource be sent under any circumstances, browsers can execute a conditional GET or HEAD request first, to check whether the resource has been modified. This is also called cache revalidation.
If such a conditional request arrives at your server, you can verify the condition(s) sent by the browser and respond accordingly: either you return 304 Not Modified with no content, which tells the browser to keep using the cached version, or you send back a fresh representation if the resource state has been modified and the conditions did not match.
The conditions are either content-based (If-None-Match and ETag) or time-based (If-Modified-Since and Last-Modified); which you use depends on whether it is easier to calculate a unique hash of the resource state or to find the last modification timestamp of the resource on the server.
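For example, a content-based revalidation exchange looks like this (the entity tag value is made up):

GET /seam/resource/myresource HTTP/1.1
Host: www.yoursite.tld
If-None-Match: "a64df551"

HTTP/1.1 304 Not Modified

The 304 response carries no body; the browser keeps serving the copy from its cache.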
In Seam 2.2.1 you can use the ConditionalAbstractResource superclass instead of the AbstractResource superclass for this purpose:
// Imports as in the previous example, with
// org.jboss.seam.web.ConditionalAbstractResource instead of AbstractResource

@Scope(ScopeType.APPLICATION)
@Name("myResource")
@BypassInterceptors
public class MyResource extends ConditionalAbstractResource {

    // The final resource path is /seam/resource/myresource
    public String getResourcePath() {
        return "/myresource";
    }

    public void getResource(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/plain");

        String textContent = "This is some text.";

        // Hashes the text content to produce an entity tag
        String entityTag = createEntityTag(textContent, false);

        // Or, if you want to work with time-based validation:
        // Long lastModified = getLastModifiedTimestamp(<path of resource>);

        // Pass lastModified instead of null for time-based validation
        if (!sendConditional(request, response, entityTag, null)) {
            OutputStreamWriter outStream =
                new OutputStreamWriter(selectOutputStream(request, response));
            outStream.write(textContent);
            outStream.close();
        }
    }
}
This example uses an entity tag, a hash that uniquely identifies the current state of the resource. The sendConditional() method returns true if the browser's conditions could be satisfied, in which case it automatically sends a 304 Not Modified response. If the browser's conditions could not be satisfied, for example because the browser transmitted a different entity tag or because the last modification time is later than the one the browser has, you need to send the full representation of the resource back to the browser.
RichFaces content, especially the Javascript and CSS for client-side AJAX, is served compressed and cached if the settings above are applied.
However, RichFaces resources are not bundled. For example, if you use the Tree component of RichFaces, many more resources will be downloaded from the server as dependencies of the HTML page:
tree.js
tree-selection.js
tree-item.js
tree-item-dnd.js
...
All in all, a page that uses a few RichFaces components may download dozens of dependencies. Note that the content of these resources is cached, and subsequent page requests will not download the dependencies again until the cached items expire. The problem is the initial cold-cache page request and the time it takes to download these resources sequentially.
So our goal is to minimize the number of requests, then parallelize the remaining requests that are absolutely necessary. Finally, we need to order the requests properly so that CSS files can be downloaded in parallel.
First, RichFaces needs to bundle related resources. Bundling by type is one option: for example, all Javascript required to execute a page could be bundled into a single resource. However, although that would create a cacheable resource, it would be bound to a particular page - and a unique identifier for that resource would have to be generated. This means that subsequent hits on other pages would require downloading a different bundle.
Bundling resources by AJAX functionality seems to make more sense. For example, all Javascript files for a Tree component should be served with one GET request. All CSS files required for the Tree component should be served with one request. All Javascript files related to drag-and-drop should be bundled up so they can be served with one request. This requires categorization of RichFaces resources and changes to the automatically generated HTML header, where dependencies are listed.
After minimizing the number of requests, we need to parallelize and order the requests properly for optimal performance.
A browser will download CSS files in parallel if they are declared before any Javascript dependencies in the HTML header. Today RichFaces does not do that.
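The generated page header would have to look roughly like this, with all stylesheets declared before any scripts (the CSS file names are only illustrative):

<head>
    <link rel="stylesheet" type="text/css" href="/css/tree.css"/>
    <link rel="stylesheet" type="text/css" href="/css/panel.css"/>
    <script type="text/javascript" src="/js/tree.js"></script>
    <script type="text/javascript" src="/js/tree-selection.js"></script>
</head>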
Also, a browser will not issue parallel requests for Javascript dependencies if they are served from the same host (www.yoursite.tld), which they usually are. The only way to optimize this is outside the scope of RichFaces, as it would require setting up virtual hosts and making major changes to the deployment of the application. However, it is worth considering how RichFaces could support that kind of configuration.