Archives for posts with tag: development

Due to the increasing use of domain-specific languages (DSLs), the declarative style of modeling is quietly spreading among users of MDE tools. Indeed, it is easy to find examples of declarative DSLs, e.g. at DSM Forums or on this blog. There is, however, one group among which the declarative style of modeling has not managed to spread – transformation developers. I am not sure whether this has to do with the group itself or with the fact that the majority of today's transformation definition languages (TDLs) are still imperative in style (I am aware of QVT Relations and ATL, but these are the exception rather than the norm).

There are quite a few good reasons to consider a declarative language for transformation definition: reduced information content in transformation definitions (and hence higher productivity of transformation developers), more agile DSL evolution, transformation definitions as models, better compatibility with parallel computing, etc.

Today I would like to share some practical results that illustrate the reduction of information content achieved by using a declarative language.

CHART vs. Java

The following examples were kindly provided by Maarten de Mol and Arend Rensink from the University of Twente. In the CHARTER project, they are working on certifiable transformation technology for the development of safety-critical embedded systems.

Before proceeding to the examples, here are a few relevant highlights of their technology:

  • Partially declarative transformation definition language (CHART): based on graph transformation and intended to be usable by Java programmers.
  • Transformation compiler (RDT): given a transformation definition written in CHART, it generates an executable implementation in Java. The produced code runs against and transforms user-provided data.

Figures 1 and 2 present transformation rules findRich and addPicture respectively. Each figure shows its rule written both in CHART and in Java. The important Java methods are match() and update(), which are the translations of the similarly named blocks in the CHART rules.


Figure 1: Rule findRich written in CHART (a) and Java (b)

In Figure 1a, the match block counts 10 LOC against 41 LOC in Java (Figure 1b), which constitutes a reduction of information content by 75%.


Figure 2: Rule addPicture written in CHART (a) and Java (b)

In Figure 2a, the update block counts 12 LOC against 65 LOC in Java (Figure 2b), an 80% reduction.

Both examples show a significant reduction of information content in the CHART rules. The reduction is even stronger if one takes into account that the Java implementations also have to address technical concerns that do not exist in CHART rules. In that case the reduction is 92% (13 LOC vs. 160 LOC for rule findRich).
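To make the character of this reduction concrete, here is a small illustration in Python; it is deliberately not CHART or the Java from the figures, and the rule shown is hypothetical. The point is structural: in the declarative style the rule is only a pattern, while traversal and bookkeeping live in a generic engine.

```python
# Hypothetical, language-agnostic illustration (not CHART/RDT output):
# the same "find every person with savings above a threshold" rule,
# written imperatively and declaratively.

people = [
    {"name": "Ann", "savings": 120_000},
    {"name": "Bob", "savings": 3_000},
]

# Imperative style: the rule author also writes the search machinery.
def find_rich_imperative(model, threshold):
    matches = []
    for person in model:
        if person["savings"] > threshold:
            matches.append(person)
    return matches

# Declarative style: the rule is just a predicate (a pattern);
# a generic matcher does the traversal for every rule alike.
def match(model, pattern):
    return [element for element in model if pattern(element)]

find_rich = lambda person: person["savings"] > 100_000

print(find_rich_imperative(people, 100_000))  # [{'name': 'Ann', ...}]
print(match(people, find_rich))               # same result
```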

In the experience of another CHARTER partner, who is evaluating CHART/RDT in practice, a CHART transformation definition counted 1024 lines of code against 8000 in Java, an 87% reduction of information content [1]. The author's own industrial experience elsewhere, with AToM3 GG rules (declarative) and QVT Operational (imperative), agrees with these results as well.

While the exact reduction numbers are certainly debatable, the overall trend in the above experiences is that using a declarative TDL can result in a dramatic reduction of information content and a manifold increase in development productivity.

Conclusion

Despite the industrial successes of MDE (which are often hidden), it is my experience that model-driven methods have a hard time keeping up as organizations evolve. One factor behind this lag is the slow pace of transformation development. Practical industrial experiences such as those above show that declarative languages have the potential to significantly improve the agility of transformation development.

What are your experiences with declarative TDLs and agile language development? Can you share concrete examples or provide references to declarative TDLs?

References

[1] de Mol, M.J., Rensink, A. and Hunt, J.J. (2012). Graph Transforming Java Data. In: Proceedings of the 15th International Conference on Fundamental Approaches to Software Engineering (FASE 2012), 26-29 March 2012, Tallinn, Estonia. Lecture Notes in Computer Science. Springer Verlag.


This month, Luminis has started development of a surveillance use case. The purpose of the use case is the industrial assessment and validation of the tools and technologies developed in the "Critical and High Assurance Requirements Transformed through Engineering Rigor" (CHARTER) project. The ultimate goal of CHARTER is to ease, accelerate and reduce the cost of the certification of embedded systems. The CHARTER tool-suite employs real-time Java, Model Driven Development (MDD), rule-based compilation and formal verification. The coming series of articles on this blog will describe evaluation experiences from the surveillance use case.

The CHARTER project includes user partners from four key industries: aerospace, automotive, surveillance and medical, each of which develops embedded systems that require high assurance or formal certification in order to meet business or governmental requirements. Each of the four user partners will validate the CHARTER tools and methodology on industrial applications and actual development scenarios. This will provide feedback for the project and ensure that the tools and technologies perform as expected and deliver the anticipated improvements in embedded systems development. As part of the evaluation process, metrics will be used to quantify the industrial experiences in terms of development effort, cost savings, verification time, etc., documenting the achieved benefits for others.

CHARTER is an ARTEMIS JU project that was established to improve the software development process for critical embedded systems. Critical embedded software systems assist, accelerate and control various aspects of society and are increasingly common in cars, aircraft, medical instruments and major industrial and utility plants. These systems are critical to human life and need to be held to the highest standards of performance through formal certification procedures. Improving their quality and robustness is paramount to their widespread adoption.

The best way to explain MDE to someone without any model-driven experience is to involve them in MDE development. The second best way, for which a blog is a suitable medium, is to show examples. Today I would like to share an example of a simple MDE application and give a glimpse into the work items and process behind its development.

The application in question is an MDE implementation of a sequencing system described in [1]. The purpose of this MDE exercise was to quickly build something concrete that would help Luminis learn a new vertical domain.

Problem Domain and Demo System

Curriculum content sequencing (CCS) is an important pedagogical service. Its purpose is the management of learning routes that help students achieve curriculum goals. An emerging trend in education is adaptive learning, tailored to the backgrounds and preferences of individual students. This contrasts with the traditional approach to content sequencing, which usually prescribes a single learning route for a whole group of individuals. Advances in e-learning provide an excellent platform for adaptive content sequencing.


Figure 1: Use case diagram of the demo system

A use case diagram in Figure 1 shows actors and important services of a simple content sequencing system. These services are:

Capture sequencing content provides Sequencing Expert with tools to design the content sequencing knowledge necessary to reach a curriculum goal.

Register learning units enables Materials Provider to connect learning materials to nodes of a content sequencing.

Suggest a learning route provides Student with a learning route tailored to the student's curriculum competences, background and preferences. The service also suggests the necessary learning units and their substitutions from other providers.

Problem Domain Ontology

A domain ontology is a conceptualization of domain knowledge. Figure 2 shows an information model that captures an ontology for the CCS domain. The model was created using the Object Role Modelling (ORM) methodology. Some readers may also be familiar with ORM's close relatives from the Netherlands: NIAM and FCO-IM.


Figure 2: Ontology for the curriculum content sequencing domain

At this point a few disclaimers are due:

The model loosely conforms to ORM: the main principles are followed, but the notation is based on the more popular UML and ER languages, and not all ORM techniques are employed (one ORM technique that is used in Figure 2 is objectification: the relationship 'succession' is objectified as an entity).

The model itself is not complete. Moreover, it cannot be claimed correct, as it was not verified with subject matter experts. The model is, however, sufficiently accurate for the purpose of the demo (see above).

Design

The system illustrated in Figure 1 is partitioned into two parts: a domain-specific modeling environment (DSME) for Sequencing Expert and a web-based CMS for Student and Materials Provider. The two parts can import and export curriculum content sequencing designs from each other. Each part provides the actors with a rich environment and enables reuse of generic services relevant to the application, e.g. user management, document flow, security, content search. Application-specific functionality that changes frequently during the application's life-cycle was engineered with models; the rest was programmed. Such points of frequent change are known as variation points.

Figure 3 shows the system parts, the variation points (shown as boxes) and the activities (diamonds) related to changes in the variation points. These are laid over OMG's four modeling layers (labeled M0-M3). Variations at level M1 are part of normal application operation: here Sequencing Expert and Materials Provider define sequencing designs and learning units (LU) respectively. Variations at level M2 occur when the domain definitions (see Figure 2) change, e.g. as the developers' understanding of the CCS domain evolves. These are changes to the system itself and are carried out by developers during development or subsequent system updates. The grayed-out shapes at M0 and M3 are system- and technology-related variation points that do not change once chosen.


Figure 3: System parts, modeling architecture, variation points and change processes

Changes at level M2 are expensive and (given the experimental nature of the demo) frequent, so they are best reduced or avoided. The direction of the development activities implies that there is a single source of M2 changes, from which all the others can be derived. MDE is applied to isolate this source of change within a model (see Metamodel in the figure) and to automate the related development activities by means of transformations.

The selected technologies are:

  • AToM3 ─ a language workbench. AToM3 uses Entity-Relationship (ER) and Graph Grammar (GG) as its metalanguage and transformation definition language respectively. AToM3 is used both as a language workbench in (model-driven) development and as the domain-specific modeling environment (DSME) for Sequencing Expert.
  • Zope ─ a web application server that is typically used as an intranet and extranet server, document publishing system, portal server and platform for collaboration. In this application, Zope provides the web-based CMS for Student and Materials Provider.
  • ZCase ─ a model-driven software factory for code generation of Zope document types.
  • Python ─ for all development other than the model-driven parts.

With these solutions and technologies in place, the following is left to be developed:

  1. Metamodels for sequencing designs and learning units (see Figure 2).
  2. A transformation bridging the gap between AToM3 metamodels and ZCase, thus forming a complete Model→Code transformation chain.
  3. An import/export routine between AToM3 and Zope.
  4. Simple web interfaces and document search for Student and Materials Provider.

The last two items were trivial to implement. Item 2, while simple, requires the introduction of new material (namely ER, class diagrams, Zope, ZCase, Python) to explain. For these reasons, items 2, 3 and 4 will not be covered in this article.

Capturing Sequencing Designs

With the selected technology in place, we can now define a DSL in AToM3 for capturing sequencing designs.


Figure 4: CCS metamodel

Figure 4 shows the CCS metamodel written in ER. Note that the screenshot shows only the abstract syntax; the other aspects (see metamodelling quality) are defined as well. The quality of the metamodel is at level 5.

Using its language workbench capabilities and the above metamodel, AToM3 configures itself into a dedicated modeling environment for the curriculum expert. (This configuration is implemented by AToM3's own transformation, Generate DSME.)


Figure 5: A learning sequencing design for the first grade mathematics course

Figure 5 is a screenshot of the CCS modeling environment at work. The left vertical toolbar contains a button for every modeling concept defined in the CCS metamodel. The top horizontal toolbar contains generic modeling tools, in particular Edit and Connect. In AToM3, modeling is performed in a visual editor: one selects a modeling concept, places it on the canvas and modifies its properties with the Edit tool. Instances of modeling concepts created this way can then be coupled using the Connect tool. An example of such a visual model, a sequencing design for the first grade mathematics course, is shown on the canvas of the DSME.

Sequencing Designs in CMS

Given the CCS metamodel, the ER→Zope transformation chain generates an implementation of the corresponding document types for Zope. Figure 6 shows these document types as seen in the CMS (see the options in the drop-down list). You may recognize the entities and relationships from the metamodel shown in Figure 4.


Figure 6: Sequencing document types available in CMS

These types allow curriculum experts to build sequencing designs in a web browser. A more convenient way, however, is to transfer sequencing designs from the DSME part. This is achieved by executing an export transformation on a sequencing design in AToM3. The result of this transformation is a fully searchable CCS document stored as a graph structure in Zope. For example, given the AToM3 model in Figure 5, the export creates a Zope document of type Sequencing multigraph, as shown in the figure below.


Figure 7: Sequencing expert view on a CCS document

Figure 7 shows the web interface that represents a CCS document as a graph structure, i.e. a set of edges and nodes. (A viewer that can display graph-like structures in HTML pages would be more appropriate, but is not necessary for this demo.) Note that the structure and properties of the AToM3 model from Figure 5 are transferred without loss of information.

The document shown in Figure 7 can be changed online and downloaded (via the export tab seen in the screenshot) as an AToM3 model. The downloaded file can then be loaded into AToM3 for editing, thus closing the import/export round-trip.
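To give a feel for what such an export transformation does, here is a minimal Python sketch. It is not the actual AToM3/Zope code; all names and structures in it are hypothetical.

```python
# A sequencing design held in memory as nodes and edges is flattened
# into dictionaries that a CMS could store as one "Sequencing
# multigraph" document. Structure and properties are kept intact,
# which is what makes the round-trip lossless.

design = {
    "name": "First grade mathematics",
    "nodes": [
        {"id": "counting", "title": "Counting to 100"},
        {"id": "addition", "title": "Single-digit addition"},
    ],
    "edges": [
        {"from": "counting", "to": "addition", "type": "succession"},
    ],
}

def export_design(design):
    """Flatten an in-memory sequencing design into CMS-ready documents."""
    contents = [{"doc_type": "SequencingNode", **node}
                for node in design["nodes"]]
    contents += [{"doc_type": "Succession", **edge}
                 for edge in design["edges"]]
    return {"doc_type": "SequencingMultigraph",
            "name": design["name"],
            "contents": contents}

print(export_design(design))
```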

Learning Unit Registration


Figure 8: Learning Unit metamodel

Learning Unit is a simple, atomic Zope document type, whose AToM3 metamodel is shown in Figure 8. The LU Zope type is generated, and its instances are created, searched, read and modified similarly to those of the CCS document types. A simple web interface is sufficient for Materials Provider.
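As an aside, the following Python sketch hints at what a metamodel-to-code step can look like. It is a toy with hypothetical names, far simpler than what ZCase actually generates for Zope: an ER-style entity description is turned into Python source for a document class.

```python
# Toy code generation from an entity description (hypothetical names).
entity = {"name": "LearningUnit",
          "attributes": ["title", "provider", "url"]}

def generate_document_type(entity):
    """Emit Python source for a simple document class from an entity."""
    lines = [f"class {entity['name']}:"]
    lines.append("    def __init__(self, "
                 + ", ".join(entity["attributes"]) + "):")
    for attr in entity["attributes"]:
        lines.append(f"        self.{attr} = {attr}")
    return "\n".join(lines)

source = generate_document_type(entity)
print(source)          # the generated class definition
exec(source)           # define the generated type in this module
lu = LearningUnit("Counting", "ACME Learning", "http://example.org")
print(lu.title)        # Counting
```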

Obtaining Learning Routes

Currently, this service has a very basic recommendation mechanism:

  • Sequencing designs are searched to match query parameters.
  • No account is taken of the student's curriculum competences, background and preferences (and there is no student profile).

(Screenshots: searching for a course sequencing; reader view of a sequencing model; substitutive materials per sequencing design.)

The screenshots above illustrate the current functionality. The first shows the web interface that allows students to define search parameters and displays the search results. The second is the web view of the sequencing design previously shown in Figure 5. Selecting any node of the design displays information about that sequencing node and suggests learning units and their substitutions from alternative materials providers (see the last screenshot).

Summary

In conclusion, I would like to highlight a few points about this MDE application:

The complete development of the application took about two man-weeks. The real power, however, is how fast the application can be updated as knowledge of the application domain evolves: a matter of an hour, even given drastic changes in the metamodels. This efficiency is possible due to the application of the single source of truth (SSoT) principle, proper abstractions and automation: effectively, two separate PSM developments were replaced by a single PIM development, from which code is generated for two platforms (AToM3 and Zope).

In the development of this application, off-the-shelf MDE frameworks were reused (namely AToM3 and ZCase), and the traditional development process was interwoven with the development of the missing model-driven assets (metamodels, the ER→ZCase transformation). The latter is a software development process too and, as a matter of fact, followed OMG's CIM/PIM/PSM process architecture.

Naturally, the development of model-driven assets has its particularities as well. For one, there is a much stronger isomorphism between objects in the world of application users and domain concepts in model-driven artifacts. Therefore, the role of analysis models, such as the one shown in Figure 2, is more important than in more requirements-oriented traditional development.

This demo provides an extremely simplified service for learning route recommendation. The recommendation algorithm depends on the information structures and is sensitive to changes in those structures as well. The service is in fact model interpretation, where the model is a CCS design plus a student profile. A possible MDE approach to building an application-specific interpreter is described here (but be sure to read about the approach's limitations as well).
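As a sketch of this interpretation idea, the short Python program below derives a learning route from the succession edges of a sequencing design. Like the demo, it ignores the student profile; the design is assumed acyclic, and all names are hypothetical.

```python
from collections import defaultdict

# A sequencing design reduced to its succession edges (hypothetical).
succession = [("counting", "addition"), ("addition", "subtraction")]

def learning_route(edges, goal):
    """Interpret the design: walk succession edges backwards from the
    goal, emitting prerequisites before the units that need them."""
    prerequisites = defaultdict(list)
    for before, after in edges:
        prerequisites[after].append(before)

    route = []
    def visit(unit):                      # assumes an acyclic design
        for prerequisite in prerequisites[unit]:
            visit(prerequisite)
        if unit not in route:
            route.append(unit)

    visit(goal)
    return route

print(learning_route(succession, "subtraction"))
# ['counting', 'addition', 'subtraction']
```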

The developed metamodels reach level 5 of metamodelling quality. This topmost level means that all aspects of the language have been modeled, including its semantics. In this demo, the semantics of ER was defined using the translational approach, that is, by translating ER concepts to Zope and AToM3 concepts that have precise semantics. Figure 3 shows these translations as the Model→Code and Generate DSME activities. This also illustrates that a language can have multiple semantics.

The demo also showed that modular transformations can be reused to form new transformation chains. A case in point is how ZCase was integrated into the ER→Zope transformation by means of a relatively simple bridge. This is a counterexample to the claim that MDE solutions are inflexible.

Do you have experience with MDE applications within your organization? Can you share these experiences as well?

References

[1] Yu-Liang Chi. Ontology-based curriculum content sequencing system with semantic rules. Expert Systems with Applications 36(4), May 2009, 7838-7847.


Nowadays DSLs seem to be everywhere. If five years ago DSL was an exotic word in the UML-dominated model-driven world, today it has established a strong following. Recent research on how MDE is used in industry [1] indicated that nearly 40% of respondents use in-house DSLs (alongside other languages). The in-house qualifier is important, as these DSLs are very likely to be developed with metamodels. In such cases, a quality benchmark may help language development. Yet it is not easy to find such a benchmark, let alone one that is widely accepted.

 

Five levels of Metamodelling

One quality benchmark that I have found useful is described by Tony Clark et al. in [2]. The authors define five levels of quality. Briefly, these are:

  1. The lowest level: a simple abstract syntax is defined, but not yet implemented in a tool. The static and dynamic semantics of the language are informal and incomplete. There is no specific tool support: an existing language is repurposed, compliance with the DSL is maintained manually and models are mostly interpreted by users.
  2. At this level, the abstract syntax and static semantics have been largely defined, implemented in a tool and validated. The dynamic semantics is still informally defined.
  3. The abstract syntax is completely implemented and tested. A concrete syntax has been defined for the language, but not yet implemented. Optimization of the language architecture has started.
  4. The concrete syntax of the language has been implemented and tested. Users create models either visually or textually. The language architecture has been optimized for reuse and extensibility. Tool support for dynamic semantics begins to appear.
  5. The topmost level: all aspects of the language have been modeled, including its semantics. Models written in the language can be processed by the tool; examples include code generation, execution, simulation and verification. The language architecture is well optimized for reuse.

While the original intention of the benchmark was to assess metamodels, I have found it also useful for estimating the metamodelling capabilities of MD tools. If a tool cannot support the development needs of a certain level, then that level is the quality ceiling for all metamodels defined with the tool. In my experience, DSLs in traditional (fixed-method) CASE tools do not achieve a level greater than 1. Metamodels in UML tools often do not reach level 4 (and frequently lack static semantics and concrete syntax). Language workbenches can typically produce level 5 metamodels.

Conclusion

The referenced benchmark provides a first-order approximation of metamodel quality. Furthermore, the five levels give those looking for MDx technology a simple framework with which to at least question the marketing information of tool vendors. In my opinion this benchmark may be a useful ingredient in answering the more general question of how to compare MD technologies.

What are your experiences with measuring quality of metamodels or comparing metamodelling capabilities of MDx tools? Which aspects are you interested in and how do you measure them?

References

[1] John Hutchinson, Mark Rouncefield, Jon Whittle, and Steinar Kristoffersen. Empirical Assessment of MDE in Industry. ICSE 2011.

[2] Tony Clark, Andy Evans, Paul Sammut, and James Willans. Applied Metamodelling: A foundation for Language Driven Development. Version 0.1. Xactium Ltd., 2004.


Blog posts by Johan den Haan, Stefano Butti and Jordi Cabot raised interesting discussions about code generation (CG) and model interpretation (MI). One observation I took from these discussions is that MI is still little known. Previously, I demonstrated the operation of a custom-made model interpreter for a so-called weighbridge domain. Today I would like to share my experience of building this interpreter in a model-driven way.

MDE Approach

Two main choices underpin the process and technology used to develop the interpreter:

  1. The execution semantics of the interpreter is defined within the problem domain itself (the weighbridge, in this case), without translating it to another domain (e.g. .NET or Java), as is the case with CG. Such a definition of semantics is also known as operational semantics. The advantage is a reduction of development complexity: of the at least two domains needed for CG, only the single, more abstract one is sufficient.
  2. The operational semantics is defined within an MDE framework. This enables customization of the modeling language for problem domains other than that of the weighbridge example. Moreover, transformation capabilities are used to define the operational semantics.


Figure 1: Domain-specific, nested interpretation (DSNI) MDE framework

Figure 1 shows the MDA framework [1] after it has been adapted to reflect the choices above. (If MDA and MDE confuse you, you might find this article useful.) In contrast to MDA, there is no PIM or PSM, but a single computation independent model (CIM) written in a DSL. The CIM is both the source and the target of the Transformation Tool, which carries out a transformation classified as same language, same model. The Transformation Definition defines the operational semantics. It is not important whether the transformation definition language (TDL) extends the metalanguage, as in MDA, or is customizable by means of meta-specification. Therefore the TDL is omitted from the framework and TDL selection criteria are defined instead (see below). Finally, a new concept, System Context, is connected to the Transformation Tool. This reflects the fact that the interpreter, as a system, exhibits external behavior through communication with other systems.

This approach can be described as nested interpretation: a domain-specific interpreter is executed (nested) by a generic interpreter. From this perspective, the Transformation Tool assumes the role of the generic interpreter, and the execution of the Transformation Definition fills the role of the domain-specific interpreter.
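A toy Python rendering of this nesting, with hypothetical names, may help: the generic interpreter knows only how to apply rewrite rules to a model, while the rule set itself plays the role of the domain-specific interpreter. Swapping the rules re-targets the same engine to another domain.

```python
def generic_interpreter(model, rules, max_steps=100):
    """Generic layer: repeatedly apply the first rule that matches."""
    for _ in range(max_steps):
        for matches, rewrite in rules:
            if matches(model):
                rewrite(model)
                break
        else:
            break          # no rule applies: interpretation terminates
    return model

# Domain-specific layer: one tiny weighbridge-flavoured rule as data.
def vehicle_waiting(m):
    return bool(m["queue"]) and m["bridge"] is None

def serve_vehicle(m):
    m["bridge"] = m["queue"].pop(0)

model = {"queue": ["truck-1", "truck-2"], "bridge": None}
print(generic_interpreter(model, [(vehicle_waiting, serve_vehicle)]))
# {'queue': ['truck-2'], 'bridge': 'truck-1'}
```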

TDL Selection Criteria

Selection criteria for the transformation definition language are:

  • declarative modeling 
  • support for domain-specific notation

These criteria help to reduce development complexity and improve communication with problem domain experts.

Selected MD Technology

AToM3 is a free language workbench written in Python and under development at the Modelling, Simulation and Design Lab (MSDL) in the School of Computer Science of McGill University. The workbench closely matches the DSNI framework and meets the TDL selection criteria. 

In AToM3, DSLs and models are described as graphs. From a language specification written in the metalanguage (the ER formalism), AToM3 generates a tool to visually manipulate (create and edit) models written in the specified DSL. Model transformations are performed by graph rewriting; the transformations themselves can thus be declaratively expressed as graph-grammar models. However, AToM3 provides no communication infrastructure, which the framework requires.

Proof of Concept

As a demonstration, a language specification for the weighbridge domain was defined (see the domain and weighbridge DSL sections here) and graph rewriting was used to model the operational semantics (see below). The source code of AToM3 itself, being written in Python, was extended with web services to support the required communication.

Operational Semantics

As a blueprint for the operational semantics of the interpreter, we took πDemos [2], a small process-oriented discrete event simulation language. There are a number of πDemos events that change the state of a weighbridge system. For each such event, [2] defines the transitions induced on the system state. While the original used a functional programming language, this work uses graph rewriting, and a graph grammar (GG) rule is defined per event.

Priority | GG Rule         | Description
---------|-----------------|------------------------------------------------------------
50       | importProcess   | Adds an external vehicle to EL
25       | removeProcess   | Deletes a vehicle that has completed its todos (events)
40       | newR            | Creates a new weighbridge
40       | decP            | Creates a new vehicle class
40       | newP            | Creates a vehicle from a vehicle class
40       | getR            | Acquires a non-busy weighbridge
40       | blockProcess    | Blocks a vehicle acquiring a busy weighbridge
40       | promoteProcess  | Unblocks a delayed vehicle
40       | useR            | Moves a vehicle on a weighbridge until service is complete
30       | releaseResource | Moves a served vehicle from a weighbridge to EL
41       | putR            | Releases an occupied weighbridge

Table 1: Graph grammar rules of weighbridge events

Table 1 lists the minimum set of required events and their corresponding GG rules. Execution of these rules needs to be globally orchestrated through proper sequencing. The rules, together with the execution sequencing, form an operational semantics model of the interpreter.

For a complete description of the model, please refer to [3]. In the following, we present a detailed description of an example rule, followed by the execution sequencing model.

Example GG Rule


Figure 2: Subgraphs of the promoteProcess rule

Rule promoteProcess handles a busy weighbridge (the bluish box in Figure 2.1) that delays at least one vehicle (the yellow box labelled 5). In the new state, the weighbridge remains busy, and the blocked vehicle (5) is removed from the head of queue Delay and inserted into waiting queue EL.

The rule is executed if:

  1. The left-hand side (LHS) shown in Figure 2.1 is matched in the host graph (the CIM model).
  2. The associated condition is true: the weighbridge in the LHS is the one referred to by the imminent event putR (a todo) in the body (a todo list) of the first vehicle (label 21) in queue EL.

If the above holds, the matched part of the CIM model is substituted with the right-hand side (RHS) shown in Figure 2.2. Note that the new objects are labelled 10, 11 and 13. The entities and relationships in the RHS are initialized as follows (a code sketch follows the list):

  1. Objects copied from the LHS keep all their properties.
  2. The imminent event putR (a todo) of the current vehicle (21) is completed.
  3. All properties of the blocked vehicle (5) are copied to vehicle (10).
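For readers who prefer code to diagrams, here is a Python sketch of promoteProcess that mirrors the description above. The data structures are hypothetical; the actual rule is a visual graph-grammar model in AToM3.

```python
def match(state):
    """LHS plus condition: a busy weighbridge delaying a vehicle, where
    the current vehicle's imminent event is putR on that weighbridge."""
    wb, el = state["weighbridge"], state["EL"]
    return (wb["busy"] and wb["delay"] and el
            and el[0]["todos"][:1] == [("putR", wb["id"])])

def update(state):
    """RHS: the current vehicle's putR is completed; the blocked vehicle
    keeps its properties and moves from queue Delay to queue EL."""
    wb = state["weighbridge"]
    state["EL"][0]["todos"].pop(0)          # imminent putR is done
    state["EL"].append(wb["delay"].pop(0))  # unblock the delayed vehicle

state = {
    "weighbridge": {"id": "wb1", "busy": True,
                    "delay": [{"name": "vehicle-5", "todos": []}]},
    "EL": [{"name": "vehicle-21", "todos": [("putR", "wb1")]}],
}
if match(state):
    update(state)
print(state)  # vehicle-5 now waits in EL; the weighbridge stays busy
```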

Execution Sequencing

The execution sequencing is based on the next-event approach: the next event to execute is always the imminent event in the body of the current vehicle. Informally, the operational semantics of execution sequencing is as follows: if EL is empty, the interpreter idles until at least one vehicle is inserted into EL. Such a vehicle becomes the current one. If the body of the current vehicle is empty, the vehicle is removed from EL and EL is examined again. Otherwise, the interpreter executes the GG rule corresponding to the imminent event of the current vehicle, and EL is examined again. Note that whenever the interpreter is idle, EL is being updated with new vehicles that may have arrived from the system context in the meantime.

The execution sequencing is implemented by organizing the GG rules into groups, each group having its own base priority. These groups, in descending order of priority, are: vehicle removal, weighing activities and vehicle arrival. Within a group, each rule is assigned a relative priority. If the pattern matching of two or more rules within a group is deterministic on the basis of their LHSs and conditions, then those rules can share the same priority level. Example rule priorities are given in Table 1.
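The loop below is a compact Python sketch of this sequencing; the Rule objects and the model shape are hypothetical, and a real run would also poll the system context for newly arrived vehicles while idling.

```python
class Rule:
    def __init__(self, name, priority, match, execute):
        self.name, self.priority = name, priority
        self.match, self.execute = match, execute

def run(model, rules):
    """Next-event loop: fire the highest-priority matching rule."""
    ordered = sorted(rules, key=lambda r: r.priority, reverse=True)
    while model["EL"]:            # a real interpreter would idle and
        for rule in ordered:      # poll the system context instead
            if rule.match(model):
                rule.execute(model)
                break
        else:
            break                 # no rule matched: stop the sketch

rules = [
    Rule("removeProcess", 25,
         match=lambda m: not m["EL"][0]["todos"],
         execute=lambda m: m["EL"].pop(0)),
]
model = {"EL": [{"name": "vehicle-1", "todos": []}]}
run(model, rules)
print(model)  # {'EL': []}
```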

Conclusion

The demonstrated development approach is characterized by a very high level of abstraction, direct involvement of problem domain experts and the absence of conventional software development. All these factors contribute to fast development times: the lead time of this one-man project, including research and development, was three weeks. Admittedly, tests of the produced model interpreter showed a noticeable performance penalty due to 1) repurposing an MD technology that was not designed for use as an interpreter and 2) the overhead introduced by nested interpretation. In my opinion there is much room for performance improvement, and I wonder whether MDE can prove useful here again. An important message from this experience is that model interpretation does not have to be the prerogative of big commercial tools and can get closer to code generation in terms of accessibility.

References

[1] Anneke G. Kleppe, Jos Warmer and Wim Bast. “MDA Explained: The Model Driven Architecture: Practice and Promise”. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, April 2003.

[2] Graham Birtwistle and Chris Tofts. "An operational semantics of process-oriented simulation languages: Part 1 πDemos". ACM Transactions on Modeling and Computer Simulation, 10(4):299–333, December 1994.

[3] Andriy Levytskyy. “Model Driven Construction and Customization of Modeling & Simulation Web Applications”. PhD thesis, Delft University of Technology, Delft, The Netherlands, January 2009.


In MDD, explicit knowledge of the domain is crucial for the successful development of domain-specific modeling languages (DSMLs). Yet many starting DSL developers lack the skill of domain knowledge conceptualization. The main symptoms are difficulty coming up with good language concepts and struggling with levels of abstraction. While language design remains an art, there are a number of paradigms, techniques and guidelines that can make the creation of DSLs easier. These aids form the core of the DSL design tutorial developed at Luminis Software Development. The tutorial was given for the first time during the PPL2010 conference, which took place on November 17 & 18 at Océ R&D, Venlo, NL. A small group of participants learned the basics of domain analysis, participated in domain definition and implemented a simple metamodel of their own. The general feedback was very positive. The slides for the tutorial can be downloaded here from the Bits&Chips website.

26-10-2013 Update: the slides are now available via SlideShare.