One Xpand user less, one happy Xtend user more

This week I am consulting for a customer who introduced a model-based development approach almost 10 years ago and has used it successfully ever since. Back then, Xpand was the most powerful code generation engine, and UML models were commonly used as the source for code generation. Xtext did not even exist at that time. The customer uses Enterprise Architect and the Enterprise Architect exporter from the components4oaw. Its last release was in 2011, and the project has not been developed further. For my customer the component just did what it should, so there was no direct need to change anything in the process. The EA exporter has its flaws, especially since it needs an Enterprise Architect installation, which means it only works on Windows machines. For Enterprise Architect users who do model-based development, we therefore offer the YAKINDU Enterprise Architect Bridge, which scales better and can process .eap models on any platform.

My task was to help the customer modernize their tool chain. To be fair, Xpand is not the right choice anymore. The Xtend language combines the strengths of Xpand (great templating support, functional programming, static typing, polymorphic dispatch, mature Eclipse tooling) and resolves some of its weaknesses (performance, compiled instead of interpreted code, Java integration, extensibility of expressions).

For the customer, who had never used Xtend before but was quite familiar with Xpand, it was quite a surprise how close both languages really are. Most of the concepts can be mapped 1:1 from good old Xpand to Xtend. For demonstration we created a small generator from scratch, copied functions and templates over, and translated them manually. They understood Xtend within minutes. Most of the work is monkey-see-monkey-do. For such cases I wrote a small migration script which can translate Xpand, Xtend(1) and Check code to Xtend(2) classes. We used that script for an initial translation.

One of the reasons why this migration script is not published is that it cannot translate Xpand code completely to Xtend. The tool parses Xpand templates and traverses the AST to transform the expressions into Xtend equivalents. But it does not know about the type system used by the generator.

And here Xtend is not as powerful as Xpand, especially when using UML. In Xpand, the UML type system adapter analyzed the applied profiles of a model and created virtual types for stereotypes. Elements with applied stereotypes could be processed as if they were instances of a subtype of the extended type, and tagged values became attributes. In Xtend, there is only the Java type system, and for processing UML models this means that templates have to use the UML metamodel directly. The old “feeling” of a real type system can be simulated with a set of extension functions. I usually introduce an extension class per profile which offers, for example, a method per tagged value. The creation of such an extension class can again be automated by generating it from a profile .uml model.
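To illustrate the idea, here is a minimal sketch of such an extension class in plain Java; the profile name "MyProfile", the stereotype "Entity" and its tagged value "persistent" are made-up examples, and in Xtend the static methods can be pulled in with an import static extension:

import org.eclipse.uml2.uml.Element;
import org.eclipse.uml2.uml.Stereotype;

// Hypothetical extension class for a profile "MyProfile" with a stereotype
// "Entity" that owns a boolean tagged value "persistent".
public class MyProfileExtensions {

	private static final String ENTITY = "MyProfile::Entity";

	/** Simulates the "instanceof" check against the virtual stereotype type. */
	public static boolean isEntity(Element element) {
		return element.getAppliedStereotype(ENTITY) != null;
	}

	/** Reads the tagged value "persistent" as if it were an attribute of the virtual type. */
	public static boolean isPersistent(Element element) {
		Stereotype stereotype = element.getAppliedStereotype(ENTITY);
		return stereotype != null && Boolean.TRUE.equals(element.getValue(stereotype, "persistent"));
	}
}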

Another disadvantage compared to the Xpand framework is that Xtend is not a code generation framework, but a general purpose programming language. What a generator component that invokes the templates looks like, where to write output to, how to integrate constraint checks: none of this is provided.

What I usually do here is use Xtext’s infrastructure, namely the IGenerator interface and validation based on Xtext’s AbstractDeclarativeValidator and @Check annotations. I reuse Xtext’s MWE Reader and Generator components. However, this requires some work and advanced knowledge. The basic approach is to have UML models recognized by Xtext as a generic EMF resource. To actually use the Xtext components, a generator-specific Guice module has to be created which extends AbstractGenericResourceRuntimeModule and satisfies several dependencies that an Xtext language already has configured by default. A good part of the approach was described by Christian Dietrich in his blog posts “Xtend2 Code Generators with Non-Xtext Models” and “Xtext 2.0 and UML”. I may go into details in a later blog post.
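For illustration, a rough sketch of such a module; the class UmlGeneratorModule, its nested generator and the editor ID used as language name are assumptions, and the set of bindings you actually need depends on your generator:

import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.xtext.generator.IFileSystemAccess;
import org.eclipse.xtext.generator.IGenerator;
import org.eclipse.xtext.resource.generic.AbstractGenericResourceRuntimeModule;

// Hypothetical module that lets Xtext's generator infrastructure handle .uml resources.
public class UmlGeneratorModule extends AbstractGenericResourceRuntimeModule {

	@Override
	protected String getLanguageName() {
		// the ID under which the resource type is registered, e.g. the UML editor ID
		return "org.eclipse.uml2.uml.editor.presentation.UMLEditor";
	}

	@Override
	protected String getFileExtensions() {
		return "uml";
	}

	public Class<? extends IGenerator> bindIGenerator() {
		return UmlCodeGenerator.class;
	}

	// Dummy generator; in a real project this would be an Xtend class using template expressions.
	public static class UmlCodeGenerator implements IGenerator {
		@Override
		public void doGenerate(Resource input, IFileSystemAccess fsa) {
			fsa.generateFile("readme.txt", "generated from " + input.getURI().lastSegment());
		}
	}
}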

Although Xtend is such a natural choice for writing code generators and is well integrated into the Xtext ecosystem, framework support for non-Xtext models is missing. It is easy to write a trivial framework, but why should everyone start writing their own when good infrastructure already exists in the Xtext framework? What I experienced once again is that there is a need for more framework support for this use case. I doubt that it could become part of Xtext itself, but maybe we will provide a framework for Xtend based code generators in the future.

In the end we were able to demonstrate a migration path in a single workshop day. The customer was happy to save several days or weeks of time; for them, the workshop costs were more than compensated. From a sales perspective it might not be wise to leave a customer in a state where he will not depend on our services in the near future, but this is not how we work. For me it feels right.

Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime.

Xbase Customization: Redefining operator keywords

If you use Xbase in your Xtext based DSL, you are usually satisfied with the set of operators the expression language defines. They are closely related to what you are used to in Java or similar languages.

However, in a workshop a customer wished to have custom keywords for some operators. For example, the operator && should alternatively be written as AND, and the || operator as OR.

To demonstrate this customization we’ll start with Xtext’s famous Domainmodel Example. The Domainmodel.xtext grammar is derived from org.eclipse.xtext.xbase.Xbase and the Operation rule uses an XBlockExpression for the Operation’s body:

grammar org.eclipse.xtext.example.domainmodel.Domainmodel with org.eclipse.xtext.xbase.Xbase


Operation:
	'op' name=ValidID '(' (params+=FullJvmFormalParameter (',' params+=FullJvmFormalParameter)*)? ')' (':' type=JvmTypeReference)?
		body=XBlockExpression;

Unit Test

We’ll extend the language test-driven: first we create a unit test that uses the new feature; it will fail until we have implemented the feature successfully. Fortunately there is already a suitable test class in the project org.eclipse.xtext.example.domainmodel.tests.

We extend the class ParserTest.xtend:

	@Test
	def void testOverriddenKeyword() {
		val model = '''
			package example {
			  entity MyEntity {
			    property : String
			    op foo(String s) {
			      return s != null && s.length > 0 AND s.startsWith("bar")
			    }
			  }
			}
		'''.parse
		model.assertNoErrors
		val pack = model.elements.head as PackageDeclaration
		val entity = pack.elements.head as Entity
		val op = entity.features.last as Operation
		val method = op.jvmElements.head as JvmOperation
		Assert::assertEquals("boolean", method.returnType.simpleName)
	}

Note the Operation body: the expression uses both representations of the And operator.

return s!= null && s.length > 0 AND s.startsWith("bar")

When the ParserTest is executed it will now fail, of course, but only for the AND keyword:

java.lang.AssertionError: Expected no errors, but got :
ERROR (org.eclipse.xtext.diagnostics.Diagnostic.Linking) 
'The method or field AND is undefined' 
on XFeatureCall, offset 119, length 3
ERROR (org.eclipse.xtext.xbase.validation.IssueCodes.unreachable_code) 'Unreachable expression.' on XFeatureCall, offset 119, length 3

	at org.eclipse.xtext.junit4.validation.ValidationTestHelper.assertNoErrors(
	at org.eclipse.xtext.example.domainmodel.tests.ParserTest.testOverriddenKeyword(

Overloading the operator rules

Looking at Xbase.xtext shows that Xbase defines separate data type rules for the operators, for example:

OpOr:
	'||';

OpAnd:
	'&&';
Xtext’s grammar mixin feature allows us to redefine those rules. The obvious customization in Domainmodel.xtext is:

OpOr:
	'||' | 'OR';

OpAnd:
	'&&' | 'AND';

Regenerating the language’s Xtext implementation makes those keywords available to the syntax. However, the unit test still fails, but now with a different error:

java.lang.AssertionError: Expected no errors, but got :
ERROR (org.eclipse.xtext.diagnostics.Diagnostic.Linking) 
'AND cannot be resolved.' on XBinaryOperation, offset 119, length 3

	at org.eclipse.xtext.junit4.validation.ValidationTestHelper.assertNoErrors(
	at org.eclipse.xtext.example.domainmodel.tests.ParserTest.testOverriddenKeyword(


The missing piece is a customization of class OperatorMapping. Thus we create a subclass OperatorMappingCustom with constant QualifiedNames for the additional operator keywords and bind it in DomainmodelRuntimeModule:

public class OperatorMappingCustom extends OperatorMapping {
	public static final QualifiedName AND_2 = create("AND");
	public static final QualifiedName OR_2 = create("OR");
}

public class DomainmodelRuntimeModule extends AbstractDomainmodelRuntimeModule {
	public Class<? extends OperatorMapping> bindOperatorMapping() {
		return OperatorMappingCustom.class;
	}
}
A naive approach here is to override initializeMapping(), as the Javadoc suggests (“Clients may want to override #initializeMapping() to add other operators.”):
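Such an override would look roughly like this (a sketch; it assumes OperatorMapping keeps its operator-to-method mapping in the protected BiMap field map, as the error below suggests):

	@Override
	protected void initializeMapping() {
		super.initializeMapping();
		// map the new keywords to the same method names as the existing operators
		map.put(AND_2, create("operator_and"));
		map.put(OR_2, create("operator_or"));
	}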

But this fails again, this time with Guice creation errors:

1) Error injecting constructor, java.lang.IllegalArgumentException: value already present: operator_and
  at org.eclipse.xtext.example.domainmodel.OperatorMappingCustom.<init>(Unknown Source)
  at org.eclipse.xtext.example.domainmodel.OperatorMappingCustom.class(Unknown Source)
  while locating org.eclipse.xtext.example.domainmodel.OperatorMappingCustom
  while locating org.eclipse.xtext.xbase.scoping.featurecalls.OperatorMapping
    for field at org.eclipse.xtext.xbase.util.XExpressionHelper.operatorMapping(Unknown Source)
  while locating org.eclipse.xtext.xbase.util.XExpressionHelper
    for field at org.eclipse.xtext.xbase.validation.XbaseValidator.expressionHelper(Unknown Source)
  at org.eclipse.xtext.service.MethodBasedModule.configure(
  while locating org.eclipse.xtext.example.domainmodel.validation.DomainmodelJavaValidator
Caused by: java.lang.IllegalArgumentException: value already present: operator_and
	at org.eclipse.xtext.example.domainmodel.OperatorMappingCustom.initializeMapping(
	at org.eclipse.xtext.xbase.scoping.featurecalls.OperatorMapping.<init>(
	at org.eclipse.xtext.example.domainmodel.OperatorMappingCustom.<init>(
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

Instead, the method getMethodName() must be overridden to delegate invocations for the new operators to the existing ones. The resulting OperatorMappingCustom then looks like this:

public class OperatorMappingCustom extends OperatorMapping {
	public static final QualifiedName AND_2 = create("AND");
	public static final QualifiedName OR_2 = create("OR");

	@Override
	public QualifiedName getMethodName(QualifiedName operator) {
		if (AND_2.equals(operator)) {
			return getMethodName(AND);
		}
		if (OR_2.equals(operator)) {
			return getMethodName(OR);
		}
		return super.getMethodName(operator);
	}
}

Finally, the unit test executes successfully.


Xbase allows the customization of operators by overriding the corresponding data type rules from Xbase.xtext. This adds the new operator keywords to the language syntax, but linking them fails at runtime. Additionally, the class OperatorMapping must be customized and its method getMethodName() overridden.

Fornax MWE Workflow Maven Plugin 3.5.1 published on Maven Central

The Fornax Workflow plugin is a Maven plugin that executes MWE/MWE2 workflows within Maven. It has been around for quite some years now, and whoever needed to integrate MWE/MWE2 workflows into a headless build was likely using it. The Fornax Platform has been a place where components around openArchitectureWare, Xpand and Xtext have been developed. While all the other subprojects do not play any role anymore, the Workflow plugin is still in frequent use.

Over the years we had to change the underlying infrastructure a few times. The plugin was hosted on the project’s own repository server, and projects using the plugin had to configure an additional plugin repository either in their POMs or in settings.xml. This was undesirable, but in the end not really a blocker. However, with a recent change of the repository manager, users experienced problems accessing the Fornax repository. Currently, this URL is redirected to a server hosted at itemis, and users might run into problems with the HTTPS connection.

It always bothered me that we had to host this plugin on a separate repository, and since it is a widely used component, it is only logical that it should be available from the Central repository. But it was never a blocker for me. Now I finally had a reason to change this.

Long story short, the plugin is now published on Maven Central as version 3.5.1. I highly recommend upgrading to this version and removing the Fornax Maven repository from your configuration. The coordinates did not change, they are still org.fornax.toolsupport:fornax-oaw-m2-plugin. I would like to change this sometime in the future (e.g. the name parts “oaw” and “m2” are not up-to-date anymore), maybe when moving development to another project hosting platform.

Version 3.5.1 does not differ much from 3.4.0, which is the version likely used by the world today. The main work was on refactoring the POM and its parent in order to meet the requirements for deployment on Maven Central. Further, I worked on automation of the release process with the maven-release-plugin.

There is one additional feature in 3.5.x: the new property useTestScope can be used to skip test-scope dependencies when computing the Java classpath used to execute a workflow in forked mode. On Windows systems the classpath sometimes exceeds the allowed command line length, especially since the local Maven repository lives below the user’s home directory by default, which already has a rather long path prefix. By default, the plugin now excludes these test-scope dependencies unless the property is configured explicitly. In 3.5.0 there was a small logical bug in this feature which made the plugin unusable, so please do not use that version. Version 3.5.1 can be used without problems by everyone who has been using 3.4.0 so far.
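For illustration, a minimal configuration sketch with just the coordinates and the new property (execution/goal configuration omitted):

<plugin>
  <groupId>org.fornax.toolsupport</groupId>
  <artifactId>fornax-oaw-m2-plugin</artifactId>
  <version>3.5.1</version>
  <configuration>
    <!-- set to true to keep test-scope dependencies on the forked workflow
         classpath; since 3.5.x they are excluded by default -->
    <useTestScope>true</useTestScope>
  </configuration>
</plugin>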

Enabling Spring in Scout applications

Today I am attending the first Scout User Day 2014 in Ludwigsburg, which is aligned with EclipseCon Europe 2014 starting tomorrow. Yesterday we had a pre-event dinner with some attendees and the organizers at the Rossknecht restaurant. I got into a chat with Nejc Gasper, who will give a talk titled “Build a Scout backend with Spring” today. I was a bit surprised when he told me that he had not managed to get Spring’s classpath scanning working yet. Since we are doing this in our application, I think it is worth writing down what we had to do to get it working. The goal in our application is primarily to use Spring as a dependency injection container, since the customer uses Spring in all their other Java based applications, too, and wanted us to do so as well.

Spring Configuration

The Spring configuration files are located in the folder META-INF/spring of the *.client, *.shared and *.server projects. In these configuration files, we mainly activate classpath scanning:

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:beans="http://www.springframework.org/schema/beans" xmlns:p="http://www.springframework.org/schema/p"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">

    <context:annotation-config />
    <context:component-scan base-package="com.rhenus.fl" />

    <beans:bean id="conversionService" class="" />

</beans:beans>

The next important thing is to copy the files spring.handlers, spring.schemas and spring.tooling into the META-INF folder. These files can be found in the META-INF directory of the bundle org.springframework.context. Without doing this, you will get errors like the following while loading the Spring configuration:

Caused by: 
org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Unable to locate Spring NamespaceHandler for XML schema namespace []|Offending resource: URL [bundleresource://9.fwk1993775065:1/META-INF/spring/fl_client.xml]|
	at org.springframework.beans.factory.parsing.FailFastProblemReporter.error(
	at org.springframework.beans.factory.parsing.ReaderContext.error(
	at org.springframework.beans.factory.parsing.ReaderContext.error(
	at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.error(
	at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(
	at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(
	at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseBeanDefinitions(
	at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.doRegisterBeanDefinitions(
	at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.registerBeanDefinitions(
	at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(
	at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(
	at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(
	at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(

Bundle Activator

The Spring configuration files are loaded in the bundle Activator classes of the three “main” Scout projects (client/shared/server). The Activator can then also be used to access the ApplicationContext. We use the GenericXmlApplicationContext to initialize the context from the XML configuration above. One important detail is that this context must use the ClassLoader of the Activator; otherwise you will again get the error mentioned in the section above. The Activator class then looks as follows:

import java.net.URL;

import org.eclipse.core.runtime.Plugin;
import org.osgi.framework.BundleContext;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.AbstractApplicationContext;
import org.springframework.context.support.GenericXmlApplicationContext;
import org.springframework.core.io.UrlResource;

public class Activator extends Plugin {

  // The plug-in ID
  public static final String PLUGIN_ID = "com.rhenus.fl.application.client";
  public static final String SPRING_CONFIG_FILE = "META-INF/spring/fl_client.xml";

  // The shared instance
  private static Activator plugin;
  private ApplicationContext ctx;

  @Override
  public void start(BundleContext context) throws Exception {
    super.start(context);
    plugin = this;
    init(context);
  }

  @Override
  public void stop(BundleContext context) throws Exception {
    plugin = null;
    super.stop(context);
  }

  public static Activator getDefault() {
    return plugin;
  }

  private void init(BundleContext bundleContext) {
    URL url = getClass().getClassLoader().getResource(SPRING_CONFIG_FILE);
    UrlResource usr = new UrlResource(url);

    // force the context to use the Activator's class loader, otherwise the
    // Spring namespace handlers are not found
    ctx = new GenericXmlApplicationContext() {
      @Override
      public ClassLoader getClassLoader() {
        return Activator.class.getClassLoader();
      }
    };
    ((GenericXmlApplicationContext) ctx).load(usr);
    ((AbstractApplicationContext) ctx).refresh();
  }

  public ApplicationContext getContext() {
    return ctx;
  }
}

Service Factory

In order to use dependency injection in Scout services, the services themselves must be instantiated through the Spring ApplicationContext. The default implementation is of course not aware of Spring, so we need to customize this. Unfortunately, we have to copy the default service factory class: we only need to exchange one single line in the method updateInstanceCache(), where the service is instantiated, but this method is private in Scout. The line

m_service = m_serviceClass.newInstance();

is replaced by

m_service = getContext().getBean(m_serviceClass);

Since we have to provide different ApplicationContexts in the different plugins, we put this into the abstract class AbstractSpringAwareServerServiceFactory (full code):

public abstract class AbstractSpringAwareServerServiceFactory implements IServiceFactory {

  // fields (m_serviceLock, m_service, m_serviceClass, LOG, ...) and the remaining
  // methods are taken over unchanged from the copied Scout factory class

  private void updateInstanceCache(ServiceRegistration registration) {
    synchronized (m_serviceLock) {
      if (m_service == null) {
        try {
          // m_service = m_serviceClass.newInstance();
          m_service = getContext().getBean(m_serviceClass);
          if (m_service instanceof IService2) {
            ((IService2) m_service).initializeService(registration);
          }
          else if (m_service instanceof IService) {
            ((IService) m_service).initializeService(registration);
          }
        }
        catch (Throwable t) {
          LOG.error("Failed creating instance of " + m_serviceClass, t);
        }
      }
    }
  }

  protected abstract ApplicationContext getContext();
}


The concrete classes implement the method getContext() by accessing the context from the bundle Activator:

public class ServerServiceFactory extends AbstractSpringAwareServerServiceFactory {

  /**
   * @param serviceClass
   */
  public ServerServiceFactory(Class<?> serviceClass) {
    super(serviceClass);
  }

  @Override
  protected ApplicationContext getContext() {
    return Activator.getDefault().getContext();
  }
}



The service factory class implemented above must now be used to create the services. This is done in the plugin.xml file:
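For illustration only, such a registration could look roughly like the following; the extension point id and attribute names are quoted from memory and may differ between Scout versions, and the class names are just examples:

<extension point="org.eclipse.scout.service.services">
   <!-- hypothetical service registration using the Spring-aware factory -->
   <service
         class="com.rhenus.fl.application.server.services.TMIRLN010Service"
         factory="com.rhenus.fl.application.server.ServerServiceFactory"/>
</extension>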


Use Dependency Injection

Now we are finally able to use dependency injection with javax.inject.Inject in Scout services.

import org.springframework.stereotype.Component;
import javax.inject.Inject;

@Component
public class TMIRLN010Service extends AbstractTMIRLN010Service {
  @Inject
  protected ConversionService conversionService;
  // ...
}


If everything is correct, you will see the following lines in the console when starting the Scout application:

Okt 27, 2014 8:45:28 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from URL [bundleresource://9.fwk1993775065:1/META-INF/spring/fl_client.xml]
Okt 27, 2014 8:45:29 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from URL [bundleresource://10.fwk1993775065:1/META-INF/spring/fl_shared.xml]
Okt 27, 2014 8:45:29 AM prepareRefresh
INFO: Refreshing com.rhenus.fl.application.shared.Activator$1@b40d694: startup date [Mon Oct 27 08:45:29 CET 2014]; root of context hierarchy
Okt 27, 2014 8:45:29 AM org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor 
INFO: JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
Okt 27, 2014 8:45:29 AM prepareRefresh
INFO: Refreshing com.rhenus.fl.application.client.Activator$1@292f062b: startup date [Mon Oct 27 08:45:29 CET 2014]; root of context hierarchy
Okt 27, 2014 8:45:29 AM org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor 
INFO: JSR-330 'javax.inject.Inject' annotation found and supported for autowiring

Scout tables with fixed columns

For tables with many columns it is often better if some of the first columns stay fixed when the user scrolls the table content horizontally. Since RAP 2.0, tables and trees support fixed columns, but Scout (current: 4.0/Eclipse Luna) does not provide this feature for its Table implementation. Since Scout supports different UI frameworks (Swing, SWT, RAP) and only RAP provides a table implementation with fixed columns support, this might be the reason the feature is missing.

The necessary additions can be made without changing the Scout sources themselves: Scout allows customizing UI components, including existing ones, and we can add this feature with fragments. The code mentioned below can be fetched from GitHub (“scout-experimental”).

Client Extension Plugin


First, we need a Table implementation that provides an additional configuration property fixedColumns. The value is stored using the element’s Property Support:

package org.eclipse.scout.rt.extension.client.ui.basic.table;

import org.eclipse.scout.commons.annotations.ConfigProperty;

/**
 * A table that supports Fixed Columns.
 * @see
 */
public class AbstractTableWithFixedColumns extends AbstractExtensibleTable {
	public static final String PROP_FIXED_COLUMNS = "org.eclipse.rap.rwt.fixedColumns"; // =RWT.FIXED_COLUMNS

	/**
	 * Configures how many of the visible columns should be fixed.
	 * @return A value greater than 0
	 */
	@ConfigProperty(ConfigProperty.INTEGER)
	protected int getConfiguredFixedColumns() {
		return -1;
	}

	public void setFixedColumns(int n) {
		propertySupport.setPropertyInt(PROP_FIXED_COLUMNS, n);
	}

	public int getFixedColumns() {
		return propertySupport.getPropertyInt(PROP_FIXED_COLUMNS);
	}

	@Override
	protected void initConfig() {
		super.initConfig();
		setFixedColumns(getConfiguredFixedColumns());
	}
}


This custom class can be specified as the default base class for tables in the Scout workspace preferences.

In the client code, the table field would be defined like this:

public class CtyTableField extends
    AbstractTableField<CtyTableField.Table> {

  public class Table extends AbstractTableWithFixedColumns {

    @Override
    protected int getConfiguredFixedColumns() {
      return 2;
    }
  }
}

Scout RAP UI extension fragment


The class that must be customized is org.eclipse.scout.rt.ui.rap.basic.table.RwtScoutTable. I’m not good at coming up with names, so I simply call its extension RwtScoutTableExt.

package org.eclipse.scout.rt.ui.rap.basic.table;

import org.eclipse.rap.rwt.RWT;
import org.eclipse.scout.rt.client.ui.basic.table.columns.IColumn;
import org.eclipse.scout.rt.ui.rap.ext.table.TableEx;
import org.eclipse.swt.widgets.Composite;

public class RwtScoutTableExt extends RwtScoutTable {

	@Override
	protected void initializeUi(Composite parent) {
		super.initializeUi(parent);
		TableEx table = getUiField();
		// Fixed Columns Support:
		// the configured fixed columns refer to visible columns only, but for the
		// RWT table the number of fixed columns also includes the non-visible ones.
		// Compute how many columns should be fixed in the RWT table.
		Integer configuredFixedColumns = (Integer) getScoutObject().getProperty(
				RWT.FIXED_COLUMNS); // same key as AbstractTableWithFixedColumns.PROP_FIXED_COLUMNS
		if (configuredFixedColumns != null && configuredFixedColumns > 0) {
			int fixedColumns = 0;
			int visibleColumns = 0;
			for (IColumn<?> column : getScoutObject().getColumns()) {
				if (column.isDisplayable() && column.isInitialVisible()) {
					visibleColumns++;
				}
				fixedColumns++;
				if (visibleColumns == configuredFixedColumns) {
					break;
				}
			}
			table.setData(RWT.FIXED_COLUMNS, new Integer(fixedColumns));
		}
	}
}

After Scout has initialized the table component (the call to super), the table has to be configured with the number of fixed columns:

table.setData(RWT.FIXED_COLUMNS, new Integer(fixedColumns));

The number of configured fixed columns is specified via a property of the Scout table. You might wonder why we do not simply pass on the configured value directly. The reason is that some columns might not be visible, and when configuring the fixed columns you would expect that only the visible ones count.


The table class itself is constructed by RwtScoutTableField. In order to construct our customized class, RwtScoutTableField must be subclassed and the createRwtScoutTable() method overridden:

package org.eclipse.scout.rt.ui.rap.form.fields.tablefield;

import org.eclipse.scout.rt.client.ui.form.fields.smartfield.IContentAssistFieldProposalForm;
import org.eclipse.scout.rt.ui.rap.basic.table.IRwtScoutTable;
import org.eclipse.scout.rt.ui.rap.basic.table.RwtScoutTable;
import org.eclipse.scout.rt.ui.rap.basic.table.RwtScoutTableExt;
import org.eclipse.scout.rt.ui.rap.util.RwtUtility;

public class RwtScoutTableFieldExt extends RwtScoutTableField {

	@Override
	protected IRwtScoutTable createRwtScoutTable() {
		if (getScoutObject().getForm() instanceof IContentAssistFieldProposalForm) {
			return new RwtScoutTable(RwtUtility.VARIANT_PROPOSAL_FORM);
		}
		else {
			return new RwtScoutTableExt();
		}
	}
}



The extension is packaged as a fragment of the bundle org.eclipse.scout.rt.ui.rap. The solution described here should work at least from Scout version 3.9.0 on, likely earlier.

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Scout UI RAP - Extension
Bundle-SymbolicName: de.kthoms.scout.rt.ui.rap;singleton:=true
Bundle-Version: 1.0.0.qualifier
Fragment-Host: org.eclipse.scout.rt.ui.rap;bundle-version="3.9.0"
Bundle-RequiredExecutionEnvironment: JavaSE-1.7
Export-Package: org.eclipse.scout.rt.ui.rap.form.fields.tablefield


The extended field class has to be configured in the fragment’s descriptor fragment.xml by using the org.eclipse.scout.rt.ui.rap.formfields extension point:

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>
            name="Table field"


Note the scope="global" configuration. The interface org.eclipse.scout.rt.ui.rap.extension.IFormFieldExtension defines 3 scopes: default, global, local. The description for the global scope is:

IFormFieldExtension.SCOPE_GLOBAL to indicate this extension to have a global scope (whole eclipse). Global defined extensions overwrite the default implementation.

Overwriting the default implementation is exactly what we try to achieve here.


In the example below, the first two columns are marked as fixed. In fact, in front of the first visible column the table has a non-displayable Id column. When scrolling the table horizontally, the columns “Iso 2 Code” and “Description” stay fixed.

[Screenshots: the table before and after horizontal scrolling, with the first columns fixed]

Remove “Build path specifies execution environment…” warnings from Problems View

I often have workspaces with projects that specify Java 1.5 as the minimal execution environment. On my machine there is no JDK 1.5 installed, and it turns out that getting one for Mac OS X Mountain Lion is not trivial. Actually I don’t need a JDK 1.5, since the standard 1.6 JDK is compatible. However, this raises these annoying warnings in the workspace:

[Screenshot: the execution environment warnings in the Problems View]

In the Execution Environments preference page it is possible to mark the Java 1.6 installation as compatible with the J2SE-1.5 execution environment:

[Screenshot: Execution Environments preference page]

Although the JDK 1.6 is now marked as compatible, it is not "strictly compatible", so the warning message remains.

Next you could try to disable the warning message. There is a setting for this on the preference page Java/Compiler/Building:

[Screenshot: Java/Compiler/Building preference page]

After changing the setting you are asked to rebuild the projects. But again, the warnings do not disappear. I suspect this to be a bug and raised Bug#408317 for it.

So the last resort is to filter these warnings. To do so, open the options menu of the Problems View and open the “Configure Contents” dialog. In the “Types” selection tree, expand the “Java Build Path Problems” node and uncheck “JRE System Library Problem”.

[Screenshot: Configure Contents dialog of the Problems View]

Finally the warning messages disappear from the Problems View. However, they are just filtered from the view; the projects themselves still have these resource markers, so you will see a warning overlay icon on the project in the Package Explorer View even though the Problems View might be empty.

But this raises the next problem: the warning is now gone, yet the code is still compiled with Java 1.6 and thus against the 1.6 API. This leads to the problem that you could accidentally use API from Java 1.6 or later. For example, a usage of String#isEmpty() would compile even if the execution environment is set to J2SE-1.5 (the execution environment only defines the lowest requirement anyway) and also if Java source compatibility is set to 1.6 in the compiler settings.
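For illustration, a trivial (made-up) example that compiles fine against a 1.6 JRE although the project declares J2SE-1.5:

// Hypothetical class: compiles against a 1.6 JRE although the project's
// execution environment is J2SE-1.5.
public class ApiUsageExample {
	public static boolean isBlank(String s) {
		// String#isEmpty() exists only since Java 6; on a real 1.5 runtime this
		// line would fail with a NoSuchMethodError although it compiled fine.
		return s == null || s.trim().isEmpty();
	}
}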

We need to detect this unwanted use of API that is not 1.5 compatible. For this, the PDE tooling offers support: install an Execution Environment Description for J2SE-1.5 and set up API Tooling. This finally allows us to detect illegal API use:

[Screenshot: API Tooling reporting the illegal API use]

I would like to thank Laurent Goubet and Mikael Barbero for their valuable comments on the potential API problems.

Preparations for CodeGeneration 2013

As I am addicted to code generation and DSLs, the CodeGeneration conference in Cambridge is a must for me every year. Last year I could not make it, since I had the chance to speak at EclipseCon North America, which was in the same week. This year Mark took EclipseCon into consideration (it was last week), so my colleagues from itemis and I will be there again. Actually, this year there will be more of us from itemis than ever. Mark already assumed in his opening words at CG2011 that almost everyone from itemis was there; this year we prove that itemis is larger. I think our company is so closely related to the conference theme that it is only natural that we have lots to present, and much interest in hearing what others are doing and have learned in the past.


Before the actual CodeGeneration conference starts on Wednesday, there are some pre-conference activities. My colleagues Holger Schill and Moritz Eysholdt will hold an intensive two-day Xtext workshop on Monday and Tuesday.

I will arrive Monday noon, since I take the early flight directly from my hometown Dortmund to London Luton. From there, I have to take a two-hour bus ride to Cambridge. In the evening, I plan to meet Holger, Moritz, Meinte Boersma and hopefully some others in the Castle Inn pub. If you arrive on Monday, drop into the Castle Inn at around 8 PM (I guess we will go to a restaurant before). You can reach me there on my mobile phone.


On Tuesday this year’s Language Workbench Challenge summit takes place. We have 14 submissions (wow!) for the LWC13 assignment. I have been working on the Xtext submission together with two colleagues, Johannes Dicks and Thomas Kutz. The results are available as the open-source project lwc13-xtext at EclipseLabs. We have prepared a detailed step-by-step tutorial as the submission paper; the resulting document LWC13-XtextSubmission.pdf is available for download, and today I placed a quick start tutorial on the project homepage. Oh boy, this project did cost some time. The actual solution is not much code, but as so often, it is harder to write less code than more. It could be even less, but we took care that the code stays readable and understandable. And writing the document was at least as much work as the implementation.


Every presenter has only 15 minutes to present their approach. 15 minutes of presentation for that much work! I guess the other participants also invested quite some time. Both of my co-authors got the chance to visit the conference, and Thomas will support me with the presentation. He will demo the resulting JSF application and the DSL source code while I do the main talking. We did a test run of the talk yesterday evening and easily exceeded 20 minutes. I think Angelo will bring his egg timer again, which shows no mercy with speakers’ talk time. But only that way will we be able to run 14 talks in one day. We will have to restrict ourselves to the most important aspects.

Besides my colleagues Thomas and Johannes, Sven Lange will also join us. I have the pleasure of working with him in my long-time project at Deutsche Börse (German Stock Exchange), now for over half a year. Sven is a highly motivated, skilled and smart person. It is still the same project I reported about at CodeGeneration 2009 together with my former colleague Heiko Behrens. Sven works full-time on this project, while I am scheduled for 20%. Here we have migrated a huge code generator project from Xpand to Xtend. This alone would be worth an experience report session. Sven is working on Xtend support for IntelliJ, which he might present in a Lightning Talk on Wednesday.

On Wednesday the conference starts. I will give my main talk “Alive and Kicking with Code Generation” together with Dr. Boris Baginski from ATOSS Software AG after lunch at 13:45.


Currently we are finalizing our presentation slides. Boris has been ill for some days and busy with a new release of their ASES product, a workforce management suite. This is a really interesting customer and project. They have been evolving this product for 25 years now, and they have made use of code generation for ages. I think one can say that it helped them to survive in their business; some competitors did not manage to make larger platform shifts and died. Most of them tried a big-bang replacement, but the business evolves so fast that the target is constantly moving. Boris and I will speak about this product and how it has evolved over the years. ATOSS was one of the first major projects using openArchitectureWare 4 (which mainly means Xpand), and they are currently preparing a shift to Xtend.

I am glad that this talk is already on Wednesday; I never come to rest until I have finished a talk. After it, I can just relax and enjoy the conference. I am expecting some interesting insights into different approaches. Experience reports are especially interesting for me. I have not finally decided which sessions I will attend. At the moment I plan to see John Hutchinson with “The Use of Model-Driven Development in Industry” in the morning, and Darius Silingas with “Why MDA Fails: Analysis of Unsuccessful Cases” in the afternoon.

In the evening it is again time for the punting boat tour. I have already attended three times, but it will surely be great fun again. Let’s hope the weather is not too bad; I saw a prediction of ~10°C and a chance of light showers. In the past we were lucky, and on a warm, sunny day the tour is twice the fun. However, I’d better put an umbrella in the suitcase.

On Thursday I again have an active part in the hands-on session “Have Your Language Built While You Wait”, hosted by Risto Pohjonen from MetaCase. The idea of this session is that attendees can get a DSL built with the language workbench of their choice, with the help of experts for that workbench. Of course I will assist with Xtext. If you had no chance to visit the Xtext workshop, this might be your chance to get some hands-on experience with Xtext. This session was already run successfully last year; back then my colleague Benjamin Schwertfeger took over the Xtext part, since we were at EclipseCon.

There are also some other talks around Xtext and Xtend. Both were released in version 2.4 on March 20th, which brings some interesting new features. Most notable with regard to code generation are the Active Annotations. I guess this is also part of what Sven Efftinge will address as the future of code generation in his keynote “The Past, Present and Future of Code Generation” on Wednesday morning. He will present more details together with Sebastian Zarnekow in the tutorial “Internal DSLs with Xtend” (Thursday 10:45-12:00). The last Xtext related talk will be by Moritz Eysholdt, called “Executable Specifications for Xtext Languages” (Friday 10:45-12:15). I am actually not sure which of these talks I will attend personally. They are most relevant for my work, and I don’t work closely enough with these features to catch everything new in Xtext on my own, so I would definitely learn important aspects. On the other hand, there are also other interesting talks in parallel.

The coming week will be an intensive experience with lots to learn and interesting people to meet. Although I will really enjoy this time, I will be glad when I finally come back home. At the moment my family is ill, and I hope that I do not get infected these days. I have been looking forward to this event and worked for it, so I am crossing my fingers that I can board healthy on Monday morning.

I am sure the organizing team around Mark and Jacqui will again do a great job.


See you there and let’s make this event special!