Jan Burse, created Oct 24, 2011
Seeing that his ink was running out, Robinson Crusoe might have taken one page of his diary, fashioned a message, put it into a bottle and consigned it to the sea. Our sales system could not work with this form of communication. Not only would the delivery mechanism be unreliable and far too slow; a simple inked call for help is also probably not what we would like our sales system to communicate. The sales system should be able to automatically create letters that are specific to the customer case and that form part of a business dialog.
The automatic letters are driven by a template processor. The requirements for the template processor are manifold. Since customers can individually choose their communication language, an important aspect is internationalization. Furthermore, the template processor should be able to generate letters ranging from small plain text notifications to larger HTML based multimedia flyers. Finally, we would like to have a quick deployment cycle. These requirements quickly ruled out the current main technology of the application: one problem is that the Java classes dynamically generated by JSP clutter the permanent generation space.
The custom template processor supports two use cases. The first use case is preview and validation of a template. For this use case the template processor only needs the template and the data schema as input; the output is a preview of the template and a list of errors. The second use case is the generation of the text output. This time the input consists of the template and the actual data, not only the data schema; the output is the plain text respectively the HTML mark-up of the letter. Even a validated template might still produce an exception during generation when some actual data is missing.
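The two use cases can be pictured with a toy sketch. This is an illustration only: the placeholder syntax, class and method names are assumptions for this example, not the actual template processor, which works on XML templates and the entity JSON encoding.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ToyTemplates {
    // Toy placeholder syntax ${field}; purely illustrative.
    private static final Pattern FIELD = Pattern.compile("\\$\\{(\\w+)\\}");

    // Use case 1: validation needs only the template and the data schema.
    // Output: a list of errors (the preview is omitted in this sketch).
    public static List<String> validate(String template, Map<String, String> schema) {
        List<String> errors = new ArrayList<>();
        Matcher m = FIELD.matcher(template);
        while (m.find())
            if (!schema.containsKey(m.group(1)))
                errors.add("unknown field: " + m.group(1));
        return errors;
    }

    // Use case 2: generation needs the actual data; even a validated
    // template may still fail here when some actual data is missing.
    public static String generate(String template, Map<String, String> data) {
        Matcher m = FIELD.matcher(template);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String value = data.get(m.group(1));
            if (value == null)
                throw new IllegalStateException("missing data: " + m.group(1));
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String t = "Dear ${name}, your order ${order} has shipped.";
        System.out.println(validate(t, Map.of("name", "", "order", "")));
        System.out.println(generate(t, Map.of("name", "Anna", "order", "42")));
    }
}
```

The point of the split is that validation is schema-driven and can run at template authoring time, while generation is data-driven and can still fail at run time.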
We might compare the custom template processor with Extensible Stylesheet Language Transformations (XSLT). Concerning the representation of the actual data we don't use an XML object. Instead we make use of the Matula database layer together with the entity JSON encoding for complex data types and sub types, as described in a previous blog entry. An initial batch of data is fetched by the template processor before processing the template. During template processing, when reference objects are encountered, further batches of data might be fetched. Since the data might reside in different sources, integration can happen along the way.
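The batched fetch strategy can be sketched as a small cache: an initial batch is loaded up front, and a further batch is fetched lazily when a not-yet-loaded reference is encountered. All names here are illustrative assumptions, not the actual Matula database layer API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class BatchCache {
    private final Map<String, Object> loaded = new HashMap<>();
    private final Function<String, Map<String, Object>> fetchBatch;

    public BatchCache(Map<String, Object> initialBatch,
                      Function<String, Map<String, Object>> fetchBatch) {
        loaded.putAll(initialBatch);
        // The fetcher could hit a different data source per reference,
        // which is where integration can happen.
        this.fetchBatch = fetchBatch;
    }

    // Resolve a reference; fetch its batch only when it is not yet loaded.
    public Object resolve(String reference) {
        if (!loaded.containsKey(reference))
            loaded.putAll(fetchBatch.apply(reference));
        return loaded.get(reference);
    }
}
```

A usage might look like `new BatchCache(initial, ref -> source.fetch(ref)).resolve("order")`, where only the second, on-demand batch touches the order source.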
The templates themselves are represented as XML objects. A template defines a mixture of raw output text and control constructs. The text is produced by sequentially combining the raw output text with the execution of the control constructs. Among the control constructs we find loops and branching. A template might refer to sub templates, which are looked up from the template store by matching a condition. Contrary to XSLT, the loops, the branching and the conditions are currently not based on XPath expressions. Since the entity JSON encoding descends from Vector and Hashtable, our expressions are based on array element and field access. Results of entity JSON expressions can even be stored in template block variables.
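As a sketch, a template of this kind might look as follows. The element names and the expression syntax shown here are assumptions for illustration, not the actual template schema; only the general shape (raw text interleaved with loops, branching and field-access expressions) follows the description above.

```xml
<!-- Illustrative only: element names and expression syntax are assumed. -->
<template>
  <text>Dear </text>
  <value expr="customer.name"/>
  <text>, the following items have shipped:</text>
  <foreach var="item" expr="order.items">
    <value expr="item.label"/>
  </foreach>
  <if cond="order.discount">
    <text>A discount of </text>
    <value expr="order.discount"/>
    <text> has been applied.</text>
  </if>
</template>
```

Here `order.items` stands for field access into the entity JSON encoding, with the loop iterating over the array elements.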
The template processor already serves us very well in the actual sales system, but there are a few items on our wish list. The finding that an enhanced schema could serve the management of structured data well also translates to the template processor. The preview validation currently uses a weak type system, where sub types are modelled as union types. With an enhanced schema and a stronger type system we could spot more errors early on. Incidentally, we have lent an additional benefit to the template processor: by using conditions as filters and expressions as selectors, we recently started using templates as a customer base query language.
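The query use can be pictured with a fragment along the lines of the earlier sketch; again, the element names and the condition and selector syntax are assumptions, not the actual template schema.

```xml
<!-- Illustrative only: a template used as a query over the customer base.
     The condition acts as a filter, the expression as a selector. -->
<template>
  <foreach var="customer" expr="customers">
    <if cond="customer.balance">
      <value expr="customer.name"/>
      <text>;</text>
    </if>
  </foreach>
</template>
```

Running such a template against the customer base then produces a delimited list of selected customers instead of a letter.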