Nowadays DSLs seem to be everywhere. If five years ago DSL was an exotic term in the UML-dominated model-driven world, today it has established a strong following. A recent study of how MDE is used in industry [1] indicated that nearly 40% of respondents use in-house DSLs (alongside other languages). The in-house qualifier is important, as such DSLs are very likely to be developed with metamodels. In these cases, a quality benchmark could help language development. Yet it is not easy to find such a benchmark, let alone one that is widely accepted.

Five levels of metamodelling

One quality benchmark that I found useful is described by Tony Clark et al. in [2]. The authors define five levels of quality; briefly, these are:

  1. The lowest level: a simple abstract syntax is defined, but not yet implemented in a tool. The static and dynamic semantics of the language are informal and incomplete. There is no specific tool support: an existing language is repurposed, compliance with the DSL is maintained manually, and models are mostly interpreted by users.
  2. The abstract syntax and static semantics have been largely defined, implemented in a tool, and validated (see the sketch after this list). The dynamic semantics is still informally defined.
  3. The abstract syntax is completely implemented and tested. A concrete syntax has been defined for the language, but not yet implemented. Optimization of the language architecture has started.
  4. The concrete syntax of the language has been implemented and tested. Users create models either visually or textually. The language architecture has been optimized for reuse and extensibility. Tool support for dynamic semantics begins to appear.
  5. The topmost level: all aspects of the language have been modelled, including its semantics. Models written in the language can be processed by tools: examples include code generation, execution, simulation, and verification. The language architecture is well optimized for reuse.

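To make the lower and upper ends of this scale concrete, here is a minimal sketch of my own (not taken from [2]; the toy state-machine DSL and all names in it are illustrative). It shows an abstract syntax implemented as plain Python classes, a mechanical static-semantics check of the kind a level 2 tool performs, and a tiny interpreter standing in for the executable dynamic semantics of level 5.

```python
# A minimal sketch (mine, not from [2]); all names are illustrative.
# Abstract syntax of a toy state-machine DSL as plain Python classes.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class State:
    name: str

@dataclass
class Transition:
    source: State
    target: State
    event: str

@dataclass
class StateMachine:
    states: list[State] = field(default_factory=list)
    transitions: list[Transition] = field(default_factory=list)
    initial: Optional[State] = None

def check_static_semantics(m: StateMachine) -> list[str]:
    """Well-formedness rules checked mechanically, as a level 2 tool would."""
    errors = []
    names = [s.name for s in m.states]
    if len(names) != len(set(names)):
        errors.append("state names must be unique")
    if m.initial is not None and m.initial not in m.states:
        errors.append("initial state must belong to the machine")
    for t in m.transitions:
        if t.source not in m.states or t.target not in m.states:
            errors.append(f"transition on '{t.event}' references unknown states")
    return errors

def run_events(m: StateMachine, events: list[str]) -> Optional[State]:
    """A toy interpreter: executable dynamic semantics, as at level 5."""
    current = m.initial
    for e in events:
        for t in m.transitions:
            if t.source == current and t.event == e:
                current = t.target
                break
    return current

# Usage: ill-formed models are rejected mechanically, well-formed ones execute.
idle, run = State("idle"), State("run")
machine = StateMachine(states=[idle, run],
                       transitions=[Transition(idle, run, "start")],
                       initial=idle)
assert check_static_semantics(machine) == []
assert run_events(machine, ["start"]) == run
```

In an actual language workbench the metamodel, constraints, and interpreter would of course be expressed in the tool's own facilities rather than as hand-written classes, but the division of responsibilities is the same.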
While the original intention of the benchmark was to assess metamodels, I have found it also useful for estimating the metamodelling capabilities of MD tools. If a tool cannot support the development needs of a certain level, then that level becomes the quality ceiling for all metamodels defined with the tool. In my experience, DSLs in traditional (fixed-method) CASE tools do not get beyond level 1. Metamodels in UML tools often do not reach level 4 (and often lack static semantics and a concrete syntax). Language workbenches can typically produce level 5 metamodels.

Conclusion

The referenced benchmark provides a first-order approximation of metamodel quality. Furthermore, these five levels give those looking for MDx technology a simple framework with which to at least question the marketing information of tool vendors. In my opinion, this benchmark may be a useful ingredient in answering the more general question of how to compare MD technologies.

What are your experiences with measuring the quality of metamodels or comparing the metamodelling capabilities of MDx tools? Which aspects are you interested in, and how do you measure them?

References

[1] John Hutchinson, Jon Whittle, Mark Rouncefield, and Steinar Kristoffersen. Empirical Assessment of MDE in Industry. In Proceedings of ICSE 2011.

[2] Tony Clark, Andy Evans, Paul Sammut, and James Willans. Applied Metamodelling: A Foundation for Language Driven Development. Version 0.1. Xactium Ltd., 2004.
