Archives for posts with tag: metric

While effort reduction and quality increase are both commonly recognized benefits of MDE, the former in particular has become its trademark, thanks to numerous generative uses in model-driven software development (MDSD). Examples include the generation of code and configurations from models written in UML, DSLs and XML.
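As a minimal illustration of such a generative use, here is a small sketch in Python; the model format (a plain dict standing in for a parsed UML/DSL/XML model) and all names are hypothetical, not taken from any particular tool:

    # Illustrative sketch of generative MDE: derive a code stub from a model.
    # The entity model below is a stand-in for a model parsed from UML/DSL/XML.

    entity_model = {
        "name": "Customer",
        "attributes": [("name", "str"), ("email", "str"), ("age", "int")],
    }

    def generate_class(model: dict) -> str:
        """Generate a Python class stub from a simple entity model."""
        params = ", ".join(f"{attr}: {typ}" for attr, typ in model["attributes"])
        lines = [f"class {model['name']}:", f"    def __init__(self, {params}):"]
        for attr, _typ in model["attributes"]:
            lines.append(f"        self.{attr} = {attr}")
        return "\n".join(lines)

    if __name__ == "__main__":
        print(generate_class(entity_model))

Running the sketch prints a class with one constructor parameter and field per modelled attribute; real generators differ in scale, not in kind.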


Figure: The effort in MDE approach with partial manual coding (adapted from [1])

Generative MDE automates well-defined, routine activities. An effective metric for depicting its economic benefit is effort. The figure above illustrates the effort reduction due to automation and reuse in an MDE approach with partial manual coding. Of course, generative MDE improves quality as well: error reduction, enforced architecture conformance, and up-to-date documentation are common factors that have a positive effect on software quality. But these are usually considered the icing on the cake that is effort reduction. In my experience, this perspective on the economic value of MDE is common among both customers and MDE professionals. It can be summarized as “the same with less”.

More with the same

Recently, a client tasked me, together with its domain experts, with assessing the benefits of applying MDE to a difficult process within the organization. Having analyzed the before and after situations, we came up with an estimated economic benefit expressed in effort savings. The estimate was hard to quantify, but “should” have been OK. Although I wanted to share this optimism, I felt that in practice the effort saving would be negligible, if not negative. The paradox was due to the fact that the largest activity in the problem domain was inherently creative and exploratory.

In the figure, the output of a single exploration in this activity is shown as an intermediate result, corresponding to line ad. As the figure suggests, code generation directly from this output is not possible (that happens further downstream in development). You may have noticed that the modelling curve rises more steeply towards point a. This is because modelling requires a deeper level of domain understanding, and because semantically rich operations such as simulation, verification and (eventually) code generation need more information. On the other hand, the figure shows an effort reduction, indicated by the distance cd, which results from providing end users with proper abstractions, faster access to the right knowledge, separation of concerns, DRY modelling, and maintained consistency and integrity.

While working efficiency per exploration is likely to increase (compare ab and cd), the leading concern is the quality of the output. Here the benefits are early detection of design errors, deeper exploration of design choices, better communication and documentation, and maximized reuse of domain-specific platforms in further development. Moreover, the domain experts noted that any saved effort would be re-invested in exploring more alternatives in search of a better output. This increased number of explorations is likely to balance out any savings due to higher efficiency.
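To make this trade-off concrete with purely illustrative numbers: if modelling were to cut the effort per exploration from, say, ten to six person-days, but the freed capacity were used to run five explorations instead of three, the total effort would stay at roughly thirty person-days, while the explored design space would grow considerably.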

With these insights, the economic benefits were expressed in quality metrics and linked to different business goals than initially thought.

Conclusion

The described MDE assessment targets a highly creative engineering activity that explores alternative design choices. In such an extreme case, the main benefit is not effort reduction but increased product and process quality. The icing on the cake is that machine-processable models can open opportunities for generative uses as well.

In my experience, such quality-driven cases, and certainly less extreme ones, are not exotic. In recent years, quite a few MDE projects I have participated in had benefits strongly linked to quality improvement. What are your MDE experiences with creative activities? What were the economic benefits and how were they conveyed?

References

[1] Thomas Stahl, Markus Voelter, and Krzysztof Czarnecki. Model-Driven Software Development: Technology, Engineering, Management. Wiley, 1st edition, 2006.


Nowadays, DSLs seem to be everywhere. If five years ago DSL was an exotic word in the UML-dominated model-driven world, today it has established a strong following. A recent survey of how MDE is used in industry [1] indicated that nearly 40% of respondents use in-house DSLs (alongside other languages). The in-house qualifier is important, as these DSLs are very likely to be developed with metamodels. In such cases, a quality benchmark can help language development. Yet it is not easy to find such a benchmark, let alone one that is widely accepted.


Five levels of metamodelling

One quality benchmark that I have found useful is described by Tony Clark et al. in [2]. The authors define five levels of quality. Briefly, these are:

  1. The lowest level: a simple abstract syntax is defined, but not yet implemented in a tool. The static and dynamic semantics of the language are informal and incomplete. There is no specific tool support: an existing language is repurposed, compliance with the DSL is maintained manually, and models are mostly interpreted by users.
  2. At this level, the abstract syntax and static semantics have been largely defined, implemented in a tool, and validated (see the sketch after this list for a rough illustration). The dynamic semantics is still only informally defined.
  3. The abstract syntax is completely implemented and tested. A concrete syntax has been defined for the language, but not yet implemented. Optimization of the language architecture has started.
  4. The concrete syntax of the language has been implemented and tested. Users create models either visually or textually. The language architecture has been optimized for reuse and extensibility. Tool support for dynamic semantics begins to appear.
  5. The topmost level: all aspects of the language have been modelled, including its semantics. Models written in the language can be processed by the tool, for example through code generation, execution, simulation, or verification. The language architecture is well optimized for reuse.
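To give a rough feel for what the lower levels mean in practice, here is a minimal sketch of a toy abstract syntax for a state-machine language plus a single static-semantics check, roughly the territory of levels 1–2. All names are hypothetical; this is not code from [2] or from any particular workbench:

    # Toy abstract syntax for a hypothetical state-machine DSL, plus one
    # static-semantics rule: every transition must reference declared states.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class State:
        name: str

    @dataclass
    class Transition:
        source: str
        target: str
        event: str

    @dataclass
    class StateMachine:
        name: str
        states: List[State] = field(default_factory=list)
        transitions: List[Transition] = field(default_factory=list)

        def check_static_semantics(self) -> List[str]:
            """Return constraint violations; an empty list means the model is valid."""
            declared = {s.name for s in self.states}
            errors = []
            for t in self.transitions:
                for end in (t.source, t.target):
                    if end not in declared:
                        errors.append(
                            f"Transition '{t.event}' refers to undeclared state '{end}'")
            return errors

    if __name__ == "__main__":
        door = StateMachine(
            name="Door",
            states=[State("Open"), State("Closed")],
            transitions=[Transition("Closed", "Open", "open"),
                         Transition("Open", "Locked", "lock")],  # 'Locked' is undeclared
        )
        print(door.check_static_semantics())

A tool aiming at the higher levels would add, for example, a concrete (textual or graphical) syntax and tool-supported dynamic semantics on top of such a core.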

While the original intention of the benchmark was to assess metamodels, I have found it also useful for estimating the metamodelling capabilities of MD tools. If a tool cannot support the development needs of a certain level, then that level becomes the quality ceiling for all metamodels defined with the tool. In my experience, DSLs in traditional (fixed-method) CASE tools do not achieve a level greater than 1. Metamodels in UML tools often do not reach level 4 (and often lack static semantics and concrete syntax). Language workbenches can typically produce level 5 metamodels.

Conclusion

The referenced benchmark provides a first-order approximation of metamodel quality. Furthermore, these five levels give those looking for MDx technology a simple framework with which, at the very least, to question the marketing information of tool vendors. In my opinion, this benchmark may be a useful ingredient in answering the more general question of how to compare MD technologies.

What are your experiences with measuring quality of metamodels or comparing metamodelling capabilities of MDx tools? Which aspects are you interested in and how do you measure them?

References

[1] John Hutchinson, Mark Rouncefield, Jon Whittle, and Steinar Kristoffersen. Empirical Assessment of MDE in Industry. ICSE 2011.

[2] Tony Clark, Andy Evans, Paul Sammut, and James Willans. Applied Metamodelling: A foundation for Language Driven Development. Version 0.1. Xactium Ltd., 2004.
