Archives for posts with tag: language

Due to the increasing use of domain-specific languages (DSLs), the declarative style of modeling is quietly spreading among users of MDE tools. Indeed, it is easy to find examples of declarative DSLs, e.g. at DSM Forums or on this blog. There is, however, a group of users among which the declarative style of modeling has not managed to spread – transformation developers. I am not sure whether this has something to do with the group itself or with the fact that the majority of today’s transformation definition languages (TDLs) are still more imperative in style (I am aware of QVT Relations and ATL, but these are rather exceptions than the norm).

There are quite a few good reasons why one would consider using a declarative language for transformation definition: reduction of information content in transformation definitions (and hence higher productivity of transformation developers), more agile DSL evolution, transformation definitions as models, higher compatibility with parallel computing, etc.

Today I would like to share some practical results that illustrate the reduction of information content due to the use of a declarative language.

CHART vs. Java

The following examples are kindly provided by Maarten de Mol and Arend Rensink from the University of Twente. In the CHARTER project, they are working on certifiable transformation technology for the development of safety-critical embedded systems.

Before proceeding to the examples, here are a few relevant highlights of their technology:

  • Partially declarative transformation definition language (CHART): based on graph transformation and intended to be usable by Java programmers.
  • Transformation compiler (RDT): given a transformation definition written in CHART, it generates an executable implementation in Java. The produced code runs against and transforms user-provided data.

Figures 1 and 2 present the transformation rules findRich and addPicture, respectively. Each figure shows its rule written both in CHART and in Java. The important Java methods are match() and update(), which are the translations of the similarly named blocks in CHART rules.


Figure 1: Rule findRich written in CHART (a) and Java (b)

In Figure 1a, the match block counts 10 LOC against 41 LOC in Java (Figure 1b), which constitutes a reduction of information content by about 75% (1 - 10/41 ≈ 0.76).


Figure 2: Rule addPicture written in CHART (a) and Java (b)

In Figure 2a, the update block counts 12 LOC against 65 LOC in Java, an approximately 80% reduction.

Both examples show a significant reduction of information content in CHART rules. The reduction is even stronger if one takes into account that the Java implementations also have to address technical concerns which do not exist in CHART rules. In that case the reduction is 92% (13 LOC vs. 160 LOC for rule findRich).
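To make the contrast concrete, here is a generic Python sketch. It is not CHART syntax and not RDT-generated code, and the Person/Account domain is only guessed from the rule name findRich. It illustrates why a declarative match specification carries less information than the equivalent hand-written search: the developer states the pattern, while a compiler such as RDT derives the navigation and guard code.

```python
# Illustrative sketch only -- not CHART syntax and not RDT output.
# Declarative style: state WHAT to find; a compiler derives the search.
FIND_RICH_PATTERN = {
    "person": {"type": "Person"},
    "account": {"type": "Account", "owner": "person"},
    "guard": "account.balance > 1_000_000",
}

# Imperative style: spell out HOW to find it (navigation, type checks, guards).
def find_rich(model):
    matches = []
    for person in model:
        if person.get("type") != "Person":
            continue
        for account in model:
            if account.get("type") != "Account":
                continue
            if account.get("owner") is not person:
                continue
            if account.get("balance", 0) <= 1_000_000:
                continue
            matches.append((person, account))
    return matches

# Tiny usage example.
model = [{"type": "Person", "name": "Ada"}]
model.append({"type": "Account", "owner": model[0], "balance": 2_000_000})
print(find_rich(model))
```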

In the experience of another CHARTER partner, who evaluates CHART/RDT in practice, a CHART transformation definition counted 1024 lines of code against 8000 in Java, an 87% reduction of information content [1]. The author’s own industrial experiences elsewhere, with AToM3 GG rules (declarative) and QVT Operational (imperative), agree with the above results as well.

While the exact reduction numbers are certainly arguable, the overall trend in the above experiences is that the use of a declarative TDL can result in a dramatic reduction of information content and a manifold increase in development productivity.

Conclusion

Despite industrial successes of MDE (which are often hidden), it is my experience that model-driven methods have a hard time keeping up as organizations evolve. One factor behind this lag is the slow speed of transformation development. Practical industrial experiences such as the above show that declarative languages have the potential to significantly improve the agility of transformation development.

What are your experiences with declarative TDLs and agile language development? Can you share concrete examples or provide references to declarative TDLs?

References

[1] de Mol, M.J., Rensink, A. and Hunt, J.J. (2012). Graph Transforming Java Data. In: Proceedings of the 15th International Conference on Fundamental Approaches to Software Engineering (FASE 2012), 26-29 March 2012, Tallinn, Estonia. Lecture Notes in Computer Science. Springer-Verlag.


[Image: the AToM3 logo]

AToM3 is a language workbench developed at the Modelling, Simulation and Design Lab (MSDL) in the School of Computer Science of McGill University. Please note that the version reviewed here is not the latest one (0.3).

The focus of this review is on the language workbench capabilities, that is, everything related to the specification of modeling languages and the automated processing of models.

Freeform Multilingual Modeling

In AToM3, models (and metamodels) are visually described as graphs. There is no support for spatial relationships, such as containment or touch. While the position of modeling elements may seem to imply spatial relationships among them (e.g. between a software component and a port), AToM3 does not recognize, maintain or process such relationships.

Modeling is performed by means of a visual editor: one selects a modeling concept from one of possibly multiple language toolbars (to the left of the canvas) and places (instantiates) it on the canvas. Any language toolbar can easily be removed or added by closing or opening its language-specification file. Furthermore, the toolbar itself is defined by a model in a so-called “Buttons” DSL (see Figure 1). At any time, the modeler is free to edit this model, e.g. to arrange buttons in one or multiple rows, to remove language concepts, or to specify additional buttons that launch transformations frequently used with the given language. Both the language-specification and toolbar files are generated by AToM3 from the language model (aka the metamodel). Language-independent tools such as Edit, Connect and Delete form the general modeling toolbar (above the canvas).


Figure 1: A “Buttons” model for a DSL

A special feature of AToM3 is its freeform multi-language canvas. AToM3 breaks with the tradition of “strongly typed” diagrams, which prevent intermixing modeling elements unless explicitly allowed by the diagram’s metamodel. The AToM3 canvas can be considered a diagram that allows any modeling elements; elements can, however, only be connected if their metamodel allows this. Such a canvas provides users with a high degree of modeling freedom. (As an illustration of this freedom, the AToM3 logo itself is a freeform model created with 5-6 DSLs.) Furthermore, because models are not fragmented among islands of diagrams, information access is optimal. Another benefit is less effort on the metadeveloper’s part, because a freeform model can be handled by a transformation without prior metamodel integration.

Unfortunately, as models grow in size and number, the single canvas does not scale well, nor does AToM3 provide the user with means to manage this growth.

AToM3 uses this editor and the freeform canvas in a few different contexts. The primary role is model editing; however, the same editor is used for metamodeling and for specifying transformation definitions. Such reuse reduces the learning curve and, more importantly, brings the benefits of a domain-specific modeling environment and the freeform canvas to metadevelopers as well.

Language Specification

AToM3’s metalanguage is based on the Entity-Relationship (ER) formalism. In order to provide complete metamodeling capabilities, the concepts Entity and Relationship are extended with Constraints and Appearance properties (see Figure 2). The Constraints property is used to define static semantics. Appearance defines the visual presentation, or concrete syntax, of a language concept.


Figure 2: Features of an Entity or Relationship; the Appearance editor

AToM3 provides overall excellent metamodeling capabilities that enable metadevelopers to produce level 5 quality metamodels (in terms of the five-level quality benchmark discussed later in this archive). The following sections detail these capabilities.

Abstract Syntax

For this task, metadevelopers are equipped with the ER-based metalanguage, which is very close to conceptual modeling techniques such as ORM. This means that there is a minimal gap between conceptual, business-oriented models and AToM3 metamodels. In fact, AToM3 abstract syntax models are surprisingly simple and devoid of the technical details typical of metamodels, which makes the models very readable for subject experts. Figures 2 and 4 of the Curriculum Content Sequencing (CCS) demo illustrate this point.

Concrete Syntax

A simple but sufficient editor allows one to define a vector presentation for a language concept. Figure 2 shows all that the editor has to offer.

Static Semantics

The Constraints property contains rules that control how a modeling element can be connected to another element to form a meaningful composition. Such rules can be defined per language concept or per model, and can be triggered by editor events (e.g. edit, save, transformation start) or on demand by the user, thus covering all imaginable ways to invoke model checking.

AToM3’s constraint language is Python, which is an unusual choice. Indeed, Python is not a constraint language, is not formal (in the model-driven sense), and has side effects (AToM3 itself is written in Python too). However, my experience with AToM3 showed that none of these are real disadvantages in practice: Python is known for its concise and easy-to-read syntax, and as a constraint language it is intended for metadevelopers (who know how to deal with side effects). In this role, Python proved to be powerful, flexible and efficient.
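To give a feel for what such a constraint can look like, here is a small, self-contained Python sketch. The element classes, helper names and the convention of returning an error string are my own illustration, not AToM3’s actual API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical illustration only -- this is NOT AToM3's actual constraint API.
@dataclass
class Port:
    name: str

@dataclass
class Component:
    name: str
    ports: List[Port] = field(default_factory=list)

def check_component_ports(component: Component) -> Optional[str]:
    """Return None if the constraint holds, or an error message to display."""
    if not component.ports:
        return f"Component '{component.name}' must expose at least one port."
    names = [p.name for p in component.ports]
    if len(names) != len(set(names)):
        return f"Port names of '{component.name}' must be unique."
    return None

# Example: a violating model element triggers a message.
print(check_component_ports(Component("Controller")))
```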

Dynamic Semantics

AToM3 supports the common approach of defining DSL semantics by translating language concepts to concepts in another target domain with predefined dynamic semantics (e.g. C++, Java). This approach is known as translational.

Another, less common, approach supported by AToM3 is modeling the operational behavior of language concepts [1]. The operational semantics approach specifies how models can be directly executed, typically by an interpreter. Such specifications are expressed in terms of operations on the language itself, in contrast to translating the language into another form. The advantage is that operational semantics are easier to understand and write. The disadvantage is that interpreters are normally not available for DSLs, due to the very specific nature of the latter. (For an AToM3 illustration of how to build a custom interpreter in a model-driven way, please refer to this article.)

In AToM3, both the translational and the operational approach are implemented as transformations.
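As a toy illustration of the difference between the two approaches (my own sketch, not AToM3 code): for a minimal state-machine DSL, an operational definition executes the model directly, while a translational one generates an artifact in a target language whose own semantics then give the DSL its meaning.

```python
# My own minimal contrast of the two approaches for a toy state-machine DSL.
machine = {"Off": {"press": "On"}, "On": {"press": "Off"}}   # a tiny model

# Operational semantics: an interpreter executes the model directly.
def run(model, start, events):
    state = start
    for e in events:
        state = model[state].get(e, state)
    return state

print(run(machine, "Off", ["press", "press", "press"]))      # -> "On"

# Translational semantics: the model is translated to a target language
# (here, Python source text) which is then executed by that language's tools.
def generate(model):
    lines = ["def step(state, event):"]
    for state, transitions in model.items():
        for event, target in transitions.items():
            lines.append(f"    if state == {state!r} and event == {event!r}: return {target!r}")
    lines.append("    return state")
    return "\n".join(lines)

print(generate(machine))
```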

Transformation

AToM3 employs the graph rewriting approach to transform models. Transformations themselves are declaratively expressed as graph-grammar (GG) models. My experience with transformation models written in imperative languages (e.g. QVT Operational, MERL) is that more time is spent figuring out how to navigate the host model structure to access the right information than actually specifying what to do with that information. A declarative approach like that of AToM3 frees the metadeveloper from having to specify navigation, thus drastically reducing the complexity of transformation modeling.


Figure 3: A GG transformation model, a rule, an LHS and an element’s properties

To define a transformation in AToM3, one needs to create a graph grammar and specify one or more GG rules. Figure 3 shows a GG model for the export transformation in the CCS demo. Each rule specifies how a (sub)graph of a so-called host graph can be replaced by another (sub)graph. These (sub)graphs are called the left-hand side (LHS) and the right-hand side (RHS), respectively. A rule is assigned an order (priority), a condition and an action. In AToM3, conditions and actions are programmed in Python. As with the constraint language, Python performs very well in these roles too.
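The following is my own minimal Python sketch of the rule anatomy just described (priority, LHS pattern, condition, action). It is not AToM3 code, and the single-node LHS is a simplification of real (sub)graph patterns.

```python
# Generic sketch of a graph-grammar rule and its application; not AToM3 code.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Node:
    type: str
    attrs: Dict[str, object] = field(default_factory=dict)

@dataclass
class Rule:
    priority: int
    lhs_type: str                               # pattern: a single node type, for brevity
    condition: Callable[[Node], bool]           # extra Python condition on the match
    action: Callable[[Node, List[Node]], None]  # rewrite/side effects on the host graph

def apply_rules(graph: List[Node], rules: List[Rule]) -> None:
    # Try rules in priority order; apply the first one that matches somewhere.
    for rule in sorted(rules, key=lambda r: r.priority):
        for node in graph:
            if node.type == rule.lhs_type and rule.condition(node):
                rule.action(node, graph)
                return

# Example rule: mark an unvisited Task node as visited.
mark = Rule(
    priority=1,
    lhs_type="Task",
    condition=lambda n: not n.attrs.get("visited", False),
    action=lambda n, g: n.attrs.update(visited=True),
)
host = [Node("Task"), Node("Task")]
apply_rules(host, [mark])
print([n.attrs for n in host])   # first Task marked; rerun to mark the next
```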

A special feature of AToM3 is that both the LHS and the RHS can be modeled with the DSL(s) of the host graph. In fact, the (sub)graph editor is based on the above-mentioned model editor and provides the metadeveloper with the freeform multilingual canvas, customizable language toolbars and transformations. The consequence is that it is very easy to construct subgraphs and verify them with subject experts.


Figure 4: A host model together with a “parameter” model

An interesting aspect of the AToM3 transformation system is that it does not feature transformation parameters. This may seem limiting; however, an equally effective alternative is to store “parameter” information in a model. The AToM3 canvas makes it extremely easy to mix such “parameter” model(s) with a host model and pass them to a transformation. Figure 4 shows a sequencing model from the CCS demo together with a repository model (top left corner of the canvas). Given both, an export transformation can access the remote model repository, pass authentication, and store the sequencing model in the repository.

Another interesting feature is that the transformation input can also be an element selected by the user (unfortunately, multiple selection does not work in this version). A promising application thereof is user-defined in-place transformations that automate frequent and routine modeling operations, for example, decomposing a group element into its constituent objects (and vice versa) with the click of a button. Industrial users who often work with large models would really appreciate the resulting reduction of repetitive strain.

Finally, AToM3 supports nearly all transformation kinds known to the author [2, 3]. It is easier to list what is unsupported: text-to-model and text-to-text (a consequence of the graphical nature of the language workbench), and the more exotic synchronization and bidirectional kinds. Due to its graph rewriting system, AToM3 is very strong in model-to-model (M2M) and model-to-text (M2T) transformations. GG-based support for the latter, very popular, category is not obvious and therefore warrants extra explanation.

M2T Transformation

In AToM3, M2T means producing textual structures from graph structures. One way of doing this is via a transformation in which the source and the target models are the same. The rules of such a transformation do not perform any significant rewriting, but use the graphical nature of the source language to traverse the source model and annotate it with temporary information needed for text generation. The text itself is generated by side effects encoded in the rules’ actions, which can access the annotations.
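To show the idea of the annotate-then-emit scheme in miniature, here is my own Python sketch (not AToM3 code, and a made-up course/module model): a first pass records temporary annotations on the model, and a second pass generates text as a side effect using those annotations.

```python
# Minimal sketch of M2T via annotation plus side-effecting actions; not AToM3 code.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Element:
    name: str
    children: List["Element"] = field(default_factory=list)
    notes: Dict[str, object] = field(default_factory=dict)   # temporary annotations

output: List[str] = []

def annotate_depth(elem: Element, depth: int = 0) -> None:
    # "Annotation" pass: record information needed later for text generation.
    elem.notes["depth"] = depth
    for child in elem.children:
        annotate_depth(child, depth + 1)

def emit(elem: Element) -> None:
    # "Action" pass: generate text as a side effect, using the annotations.
    output.append("  " * elem.notes["depth"] + f"section {elem.name}")
    for child in elem.children:
        emit(child)

root = Element("Course", [Element("Module A"), Element("Module B", [Element("Lab")])])
annotate_depth(root)
emit(root)
print("\n".join(output))
```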

A typical M2T application is code generation. An example of non-trivial code generation made with AToM3 is ZCase, a software factory for Zope. In the CCS demo, ZCase is part of the ERZope transformation chain.

Conclusion

There is no escaping the fact that AToM3 is a research tool and is not suitable for demanding industrial use. The workbench does not scale well to large models (both in terms of performance and user controls) and its tools are basic. There is no reliable support, no up-to-date exhaustive documentation, no collaborative development, no integration with version control or requirements management systems, and naturally plenty of bugs and annoyances. In short, the tool is far from being mature and ready for industrial users.

However, metadevelopers may find the above drawbacks quite tolerable, because they are better prepared to deal with technical issues, and metamodels typically do not stress the tool’s scalability. On the positive side, AToM3 provides simple but optimal tools and a set of features that work together to create one of the most robust and powerful language workbenches I know. This makes AToM3 extremely suitable for agile, responsive and timely development. Due to the maturity level of the workbench, its application is best limited to proofs of concept. To date, AToM3 is the language workbench of my choice for quick prototyping.

AToM3 is recommended to MDE students, analysts in need of quick prototyping, and tool vendors seeking to improve their language workbenches. In my opinion, AToM3’s metamodeling and transformation technology is nearly optimal, and it is still ahead of the larger and more inert commercial workbenches. While its problems are numerous, they are run-of-the-mill, and the knowledge and technologies to address them are commonly available. If these problems were removed, AToM3 would be a tool I could easily recommend to industrial customers too.

References

[1] Tony Clark, Andy Evans, Paul Sammut, and James Willans. Applied Metamodelling: A foundation for Language Driven Development. Version 0.1. Xactium Ltd., 2004.

[2] Krzysztof Czarnecki and Simon Helsen. Classification of model transformation approaches. In Jorn Bettin, Ghica van Emde Boas, Aditya Agrawal, Ed Willink, and Jean Bezivin, editors, 2nd OOPSLA Workshop on Generative Techniques in the Context of Model-Driven Architecture, Anaheim, CA, October 2003. ACM Press.

[3] Tom Mens, Krzysztof Czarnecki, and Pieter Van Gorp. Discussion – a taxonomy of model transformations. In Jean Bezivin and Reiko Heckel, editors, Language Engineering for Model-Driven Software Development, volume 04101 of Dagstuhl Seminar Proceedings, Dagstuhl, Germany, 2005. Internationales Begegnungs- und Forschungszentrum fuer Informatik (IBFI), Schloss Dagstuhl, Germany.


Nowadays, DSLs seem to be everywhere. If five years ago DSL was an exotic word in the UML-dominated model-driven world, today it has established a strong following. A recent study of how MDE is used in industry [1] indicated that nearly 40% of respondents use in-house DSLs (alongside other languages). The in-house qualifier is important, as these DSLs are very likely to be developed with metamodels. In such cases, a quality benchmark may help language development. Yet it is not easy to find such a benchmark, let alone one that is widely accepted.

 

Five levels of Metamodelling

One quality benchmark that I have found useful is described by Tony Clark et al. in [2]. The authors define five levels of quality. Briefly, these are:

  1. The lowest level: a simple abstract syntax is defined, but not yet implemented in a tool. The static and dynamic semantics of the language are informal and incomplete. There is no specific tool support: an existing language is repurposed, compliance with the DSL is manually maintained, and models are mostly interpreted by users.
  2. At this level, the abstract syntax and static semantics have been largely defined, implemented in a tool and validated. The dynamic semantics are still informally defined.
  3. The abstract syntax is completely implemented and tested. A concrete syntax has been defined for the language, but not yet implemented. Optimization of the language architecture has started.
  4. The concrete syntax of the language has been implemented and tested. Users create models either visually or textually. The language architecture has been optimized for reuse and extensibility. Tool support for dynamic semantics begins to appear.
  5. The topmost level: all aspects of the language have been modeled, including its semantics. Models written in the language can be processed by the tool; examples include code generation, execution, simulation and verification. The language architecture is well optimized for reuse.

While the original intention of the benchmark was to assess metamodels, I have found it also useful for estimating the metamodelling capabilities of MD tools. The highest level whose development needs a tool can support becomes the quality ceiling for all metamodels defined with that tool. In my experience, DSLs in traditional (fixed-method) CASE tools do not achieve a level greater than 1. Metamodels in UML tools often do not reach level 4 (and often lack static semantics and concrete syntax). Language workbenches can typically produce level 5 metamodels.

Conclusion

The referenced benchmark provides a first-order approximation of metamodel quality. Furthermore, these five levels provide those looking for MDx technology with a simple framework, at the very least, to question the marketing information of tool vendors. In my opinion, this benchmark may be a useful ingredient in answering the more general question of how to compare MD technologies.

What are your experiences with measuring quality of metamodels or comparing metamodelling capabilities of MDx tools? Which aspects are you interested in and how do you measure them?

References

[1] John Hutchinson, Mark Rouncefield, Jon Whittle, and Steinar Kristoffersen. Empirical Assessment of MDE in Industry. ICSE 2011.

[2] Tony Clark, Andy Evans, Paul Sammut, and James Willans. Applied Metamodelling: A foundation for Language Driven Development. Version 0.1. Xactium Ltd., 2004.



The MetaEdit+ DSM Environment by MetaCase is a commercial language workbench that, in contrast to inflexible CASE tools, enables users to build their own modeling and code generation tools (aka DSM tools). It comes in two main product components:

  • MetaEdit+ Modeler provides customizable DSM functionality for multiple users, multiple projects, running on all major platforms.
  • MetaEdit+ Workbench 1) allows building custom modeling languages (DSLs) and text generators, and 2) includes the functionality of MetaEdit+ Modeler and the MetaEdit+ API (the latter is not reviewed in this document).

This review is written from the MDE perspective and covers the major MDE functionality related to the specification of modeling languages. For a complete picture of MetaEdit+, readers are advised to consider other aspects (e.g. collaboration, versioning, etc.) as well. This review covers MetaEdit+ Workbench version 4.5.

Language Specification


MetaEdit+ supports graph-like visual languages represented as diagrams, matrices or tables. There is limited support for spatial languages: touch and containment relationships are derived from the canvas coordinates of modeling elements. There is no actual tool support to preserve these relationships: for example, as a modeller moves a “container” element, the contained elements do not move along as expected, but remain at their old coordinates.

In MetaEdit+, languages are specified with a set of specialized tools. In the following, we describe the tools for each aspect of a visual language definition: abstract syntax, concrete syntax, static semantics and dynamic semantics.

Abstract Syntax

This aspect is defined with the GOPPRR metatypes. GOPPRR is an acronym for the metatypes Graph, Object, Property, Port, Role and Relationship. For each metatype there is a form-based tool, e.g. the Object tool allows the specification of object types, and the Graph tool allows assembling the types produced with the other tools into a specification of abstract syntax. The GOPPRR tools support single inheritance.

The Graph tool also allows linking DSL objects to graphs of other DSLs through decomposition and explosion structures. Furthermore, through sharing language concepts (of any OPPRR metatype) among graphs, DSLs can be integrated so that changes in one model are automatically reflected in models based on different languages.

An alternative to these form-based tools for abstract syntax specification is a visual metamodeling DSL. However, this functionality is best used as an easy start-up leading to the automated generation of barebone GOPPRR metamodels. Once a language developer changes a GOPPRR metamodel (which is inevitable), visual metamodeling is best discontinued to avoid a manual round-trip between the two metamodels.
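To summarize how the six metatypes relate to one another, here is a rough conceptual sketch in plain Python. It is my own reading for orientation only, not MetaEdit+’s actual data model or API.

```python
# A rough conceptual reading of the GOPPRR metatypes as plain data classes.
# My own sketch for orientation only -- not MetaEdit+'s data model or API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Property:            # a named, typed value slot
    name: str
    datatype: str = "string"

@dataclass
class Port:                # an explicit connection point on an object type
    name: str

@dataclass
class ObjectType:          # a modeling concept that is placed on the canvas
    name: str
    properties: List[Property] = field(default_factory=list)
    ports: List[Port] = field(default_factory=list)

@dataclass
class Role:                # one end of a relationship, attached to an object type
    name: str
    target: ObjectType

@dataclass
class RelationshipType:    # a connection between object types, via roles
    name: str
    roles: List[Role] = field(default_factory=list)

@dataclass
class GraphType:           # a DSL: the assembly of object and relationship types
    name: str
    objects: List[ObjectType] = field(default_factory=list)
    relationships: List[RelationshipType] = field(default_factory=list)
```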

Concrete Syntax

By default, MetaEdit+ provides generic symbols. However, language developers are free to specify custom symbols for objects, roles and relationships. These symbols are either defined with a WYSIWYG vector drawing tool or imported from vector graphics (SVG) or bitmap files. Symbols can display text, property values and dynamic outputs produced by text generators (more on generators in section M2T Transformation). Moreover, symbols or their parts can be conditionally displayed. Finally, symbols can be reused among different DSLs via a symbol library.

MetaEdit+ does not directly support multiple concrete syntaxes per language; the lack of such support is still common among language workbenches. However, its capability to display symbols based on conditions makes it possible to work around this limitation.

Static Semantics

This aspect covers constraints and business rules, whose purpose is to ensure a consistent and valid model.

In general, DSM tools should verify a model against the static semantics of its DSL at different times. These times can be classified as ‘live’ (i.e. while a user is modelling) and ‘batch’ (i.e. invoked on events caused by actions such as user demand, model saving or transformation). Furthermore, tool actions following the violation of a constraint can be classified as preventive (i.e. the violating action is canceled and a warning message is displayed) or merely informative (i.e. the violating action is allowed, but the model displays clues about invalid constructions until the effect of the action is corrected).

MetaEdit+’s Constraint tool (available from the Graph tool) allows ‘live’ checks against constraints and preventive protection of models (‘live’ and ‘preventive’ in terms of the above classification). The tool is very expressive and easy to use, but covers only a limited number of constraint types, namely:

  • object connectivity in a relationship
  • object occurrence in a model
  • ports involved in a relationship
  • property uniqueness

More advanced constraints have to rely on the MERL generator (see section M2T Transformation), which can inform users about invalid constructions during modeling (‘live’ and ‘informative’ in terms of the above classification). A MERL generator can also be used for ‘batch informative’ and ‘batch preventive’ checks: a checking report can be run on demand or included as a preventive check before any other transformation is carried out.

Dynamic Semantics

MetaEdit+ can define dynamic semantics by translating DSL concepts to concepts in another target domain with defined dynamic semantics. Examples of target domains in code generation applications are C++ and Java. A major benefit of language workbenches is that they are capable of automating this and other useful kinds of processes.

Process Automation

MDE applications need capabilities to automate processes in which models are inputs and outputs. MetaEdit+ provides various levels of support for model-to-model (M2M), model-to-text (M2T) (e.g. in code generation applications) and text-to-model (T2M) (import of legacy code, data type definitions, etc. into models) types of transformations. (The latter transformation type is not reviewed.)

M2T Transformation

Text (and more specifically code) generation is accomplished with the Generator tool, which can efficiently navigate models, filter and access information, and output text to external files, to the Generator Output tool and to DSL symbols. All these tasks are specified with the imperative language MERL. While MERL is very concise and efficient for most of these tasks, I think that navigation and access tasks are better accomplished in a declarative way.

MERL generators are defined per graph type (i.e. per DSL) and can be acquired from the supertypes of a given graph type via an inheritance hierarchy. If a generator has to be used for different graph types, it should be defined for their common parent graph type. Conversely, a DSL developer can define new generators or redefine generators already provided by parent graph types.

Finally, MERL provides support for modularization by allowing generators to include other generators. Making generators modular pays off well, as there are many reuse opportunities in MetaEdit+: generators can be reused not only for text generation but also in concrete syntax (symbols) and for validation/reporting purposes (symbols, Generator Output tool).

M2M Transformation

Models can be transformed 1) programmatically, via the SOAP- and web-services-based API of MetaEdit+ (this option requires the MetaEdit+ API product component), or 2) through code generation of an intermediate external representation (in XML format) and its subsequent import as a new model.

These two options amount to generic support at the minimum level that is commonly provided by nearly all language workbenches. Moreover, code generation of an intermediate representation cannot implement in-place M2M transformations, examples of which are model optimization, model layout, model interpretation, model weaving and any repeatable model manipulation in general.

Other

  • DSL evolution: MetaEdit+ updates existing models instantly yet non-destructively to reflect changes in DSLs. The update policy ensures that models created with older DSL versions are not broken and remain usable with existing generators. Instant update is also very useful when fine-tuning a DSL with end users.
  • According to MetaCase, a MetaEdit+ project can hold over 4 billion objects. A typical project would contain about 40-100 models (graphs).
  • In the multi-user version, users can simultaneously access and share all models within a repository. Locking is done at the object level, so several users can collaboratively work on the same model at the same time.
  • Multi-user collaboration in MetaEdit+, product-line analysis of commonality and variability, and a proper separation of concerns reduce the need for version control as it is known in software engineering. Therefore MetaEdit+ does not provide its own versioning system. Best practices for versioning with MetaEdit+ can be found here.
  • Model interoperability: by default, all models and DSLs can be exported in an XML format. The schemas are very simple, which makes it easy to post-process such files if needed (a minimal parsing sketch follows this list). Moreover, the M2T transformation capabilities of MetaEdit+ enable DSL developers to easily create custom export generators.
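To illustrate what post-processing an export can look like, here is a minimal Python sketch using only the standard library. The element and attribute names are invented for illustration and do not reflect MetaEdit+’s actual export schema.

```python
# Hypothetical example: the element/attribute names below are invented and
# are NOT MetaEdit+'s actual export schema.
import xml.etree.ElementTree as ET

EXPORT = """
<graph name="WatchApplication">
  <object type="State" name="Idle"/>
  <object type="State" name="Running"/>
  <relationship type="Transition" from="Idle" to="Running"/>
</graph>
"""

root = ET.fromstring(EXPORT)
print("Graph:", root.get("name"))
for obj in root.findall("object"):
    print(" object", obj.get("type"), obj.get("name"))
for rel in root.findall("relationship"):
    print(" relationship", rel.get("type"), rel.get("from"), "->", rel.get("to"))
```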

Conclusion

MetaEdit+ is a versatile language workbench that enables building high-quality visual DSLs for any kind of domain, be it technical or business. Another key quality of MetaEdit+ is its efficient DSL/GOPPRR tools, which allow lightweight, agile and fast DSL development and evolution. A testament to this quality is the fact that MetaCase is one of the few language workbench makers that routinely design and build DSLs in improvisation with the audience at conferences, workshops, etc. In my opinion, this impressive productivity is possible because the GOPPRR tools are based on paradigms that are optimal for DSL development (DSM for DSM, so to speak).

Highlights of MetaEdit+ are:

  • Proper level of abstraction: DSL developers are completely shielded from details of how DSM-tools are implemented. DSL development tools focus on essential abstractions for specification of languages and generators.
  • High level of automation: DSM-tools are completely and automatically generated from abstract language specifications.
  • High quality of tools: each DSL development task has its own dedicated tool.
  • Numerous enhancements: high integration of tools, non-destructive evolution of languages, inheritance mechanism, reuse opportunities for types, symbols and generators, visual metamodeling, etc.
  • Very cheap introductory license.

Naturally, there are a few drawbacks as well:

  • No specific support for model-to-model transformation.
  • Somewhat limited constraints support.
  • Limited support for spatial relations.
  • Uncommon user interface.
  • Form-based GOPPRR tools prevent a global overview of a metamodel.
  • Expensive standard licenses.

Code generation applications are the oldest tradition in MDE, and this is where MetaEdit+ excels. As MDE discovers new applications, my experience is that the code generation specialization becomes restrictive. Admittedly, it is possible to implement some types of M2M transformations with code generation (via an intermediate representation). However, the problem with this workaround is that it introduces accidental complexity both for MDE developers and, more importantly, for end users (who have to keep repeating the generate/import routine, sometimes complicated by model merges).

That said, in my opinion MetaEdit+ gets the big things right. Whether its shortcomings are little things is a subjective matter that is best evaluated in the context of a concrete problem domain.