\section{Introduction}
One of the shortcomings of current (meta-)modelling tools is their strong reliance on the implementation level.
This reliance ranges from exposing the general-purpose implementation language used (\textit{e.g.}, Java) to requiring some operations to work directly on the internal data representation of the models (\textit{e.g.}, XMI).
Such strong reliance on the implementation level does offer some benefits, such as higher efficiency in both execution (as the code is likely compiled) and development (tool developers are familiar with the programming language).
The disadvantages, however, are significant: models and algorithms become highly specific to the current state of the implementation, making it difficult, if not impossible, to port models and algorithms from one tool to another.
Not only does this prevent porting models between tools; models can also become incompatible with newer versions of the same tool.
Should, for example, the internal data representation change ever so slightly, all previously created models become incompatible.
Similarly, if the tool is ported from one implementation language to another (\textit{e.g.}, from Python to C, for efficiency), all fragments of Python code in all models would have to be updated to C as well.
Even more important for the scientific community is the portability of algorithms.
Every year, new algorithms are introduced to further the state of the art in model management operations.
These algorithms, however, often rely strongly on the internal representation of models, such as XMI or graphs.
Implementing such an algorithm in a tool with a different internal representation can prove challenging, due to the differing assumptions that can be made.
Similarly, these algorithms are written in the implementation language of the tool, making their reimplementation in a tool with a different implementation language a non-trivial task.
Yet another disadvantage relates to tool semantics: each tool has its own interpretation of what it means to instantiate a model and to check its conformance.
While the semantics is obvious to experienced users, switching between tools frequently necessitates lookups in the tool documentation to understand it.
Relatedly, porting a model between tools also requires adapting it to the changed semantics, making porting even more difficult than it already is.
Tool semantics is therefore non-obvious and, more problematically, hardcoded inside the tool.
To completely understand the semantics, it thus becomes necessary to read the source code of the tool, which is an artifact completely separate from the models it operates on (\textit{i.e.}, it requires a different viewer/editor, is written in a different language, and sits at a completely different level of abstraction).
To counter these disadvantages, we intend to break this strong reliance on the implementation level of tools.
First, we consider why tool developers voluntarily chose this strong reliance.
One of the main reasons is undoubtedly purely pragmatic: tool developers are (very) familiar with specific programming languages, and thus use them wherever they can.
As general-purpose programming languages are a well-developed field, advanced tooling is available, such as efficient compilers, debuggers, and code analyzers.
Additionally, many libraries are available for use, such as graphical libraries, parsers, data structures, and so on.
This results, at least initially, in efficient code and fast development of new tools.
But even adventurous tool developers, who want to model as much as possible, quickly hit a wall:
the complete system, including the model management algorithms, needs to become independent of the implementation language, and should thus be explicitly modelled.
It is these algorithms, however, that impose constraints such as conformance and strict metamodelling.
And while it is possible to bootstrap these algorithms, the model of these algorithms will, by definition, violate strict metamodelling.
A simple example is the conformance algorithm: to determine whether a model conforms to another, information is required from both the metamodel and the model, thus combining two modelling layers.
This is exactly what strict metamodelling prevents, as it forbids models from spanning multiple levels.
This problem is shown in Figure~\ref{fig:spanning_algorithm}, where a Petri net model linguistically conforms to a simple Petri net metamodel.
A conformance algorithm, however, has to access parts of the metamodel (\textit{e.g.}, \textit{Place}) and parts of the model (\textit{e.g.}, \textit{a place}) to check whether they conform.
This makes it difficult to state at which level the algorithm itself resides: the model level or the metamodel level.
In the figure, this is shown through the use of a gradient: the algorithm sits somewhere in between the levels.
Note, however, that each element does conform to the physical level in a strict way.
The physical level is part of the implementation and is therefore not (meant to be) user-accessible.
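To make this level-crossing concrete, consider a minimal sketch of a conformance check, here written in Python; the representation (dictionaries of typed elements and links) and all names are hypothetical, chosen only to show that the algorithm must read the model and the metamodel simultaneously.
\begin{verbatim}
# Hypothetical representation: a model is a dictionary of elements, each
# tagged with the name of its intended type, plus a list of typed links.
def conforms(model, metamodel):
    for element in model["elements"].values():
        # Level-crossing access: the element's type is looked up in the
        # METAmodel, one linguistic level above the model itself.
        if element["type"] not in metamodel["elements"]:
            return False
    for link in model["links"]:
        # The link's type must exist one level up as well...
        link_type = metamodel["links"].get(link["type"])
        if link_type is None:
            return False
        # ...and its endpoints must be typed by that link type's endpoints.
        source = model["elements"][link["source"]]
        target = model["elements"][link["target"]]
        if (source["type"] != link_type["source"]
                or target["type"] != link_type["target"]):
            return False
    return True

# Usage on the Petri net example from the figure:
metamodel = {"elements": {"Place": {}, "Transition": {}},
             "links": {"arc": {"source": "Place", "target": "Transition"}}}
model = {"elements": {"a_place": {"type": "Place"},
                      "a_transition": {"type": "Transition"}},
         "links": [{"type": "arc", "source": "a_place",
                    "target": "a_transition"}]}
assert conforms(model, metamodel)
\end{verbatim}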
The natural solution to this problem, followed by most tools, and in particular deep metamodelling tools, is to shift these operations that violate strict metamodelling to the physical conformance dimension.
In that dimension, all models, even metamodels, become part of a single level: the implementation level.
As the implementation level is written in the implementation language, no restrictions whatsoever are imposed: models and metamodels are both just elements in the data structure.
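Concretely, a tool might store all layers in one flat data structure at the physical level; the sketch below is hypothetical (not the internals of any particular tool) and illustrates why nothing at this level stops an algorithm from crossing linguistic levels.
\begin{verbatim}
# Hypothetical physical-level encoding: every element, at every linguistic
# level, is just a node in one flat graph, and type/instance relations are
# ordinary edges.  Nothing here enforces strict metamodelling.
physical_graph = {
    "nodes": {"Class", "Place", "Transition", "a_place", "a_transition"},
    "edges": {
        ("a_place", "instance_of", "Place"),          # crosses M1 -> M2
        ("a_transition", "instance_of", "Transition"),
        ("Place", "instance_of", "Class"),            # crosses M2 -> M3
        ("Transition", "instance_of", "Class"),
    },
}
\end{verbatim}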
\begin{figure}
\includegraphics[width=\columnwidth]{figures/spanning_algorithm.pdf}
\caption{Illustration of the problem: the algorithm spans both the metamodel (M2) and model (M1) layer in the linguistic dimension, thus violating strict metamodelling.}
\label{fig:spanning_algorithm}
\end{figure}
While this approach has served current (meta-)modelling tools well, it prevents the explicit modelling of the complete system, resulting in the strong reliance on the implementation level, with all the problems described above.
This problem is aggravated in deep metamodelling, as no assumptions whatsoever can be made in the linguistic dimension: users can create an arbitrary number of layers.
Deep metamodelling raises further problems related to strict metamodelling, such as how to specify deep constraints~\cite{DCL}, which also span multiple levels.
While we would prefer to model these explicitly, this becomes impossible due to the explicitly level-crossing nature of the algorithms.
In this paper, we therefore tackle this problem of strict metamodelling, allowing models of model management operations while keeping strict metamodelling in its original meaning.
We do this by shifting parts of the physical conformance level, normally hardcoded in the tool, to the linguistic conformance level, where users can access them just like any other model.
Not only does this allow for linguistic modelling of tool algorithms, it also decouples these algorithms from implementation details.
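As a first intuition of what this shift enables (a hypothetical sketch only; our actual encoding is presented in Section~\ref{sec:conformancebottom}), once type/instance links become ordinary, explicitly modelled elements, a conformance check reduces to a query over a single explicit graph rather than logic hardcoded in the tool:
\begin{verbatim}
# Hypothetical sketch: with type/instance links modelled explicitly,
# checking conformance is a plain query over one explicit graph.
graph = {
    ("a_place", "instance_of", "Place"),
    ("Place", "instance_of", "Class"),
}

def typed_by(graph, instance, expected_type):
    return (instance, "instance_of", expected_type) in graph

assert typed_by(graph, "a_place", "Place")   # M1 -> M2
assert typed_by(graph, "Place", "Class")     # M2 -> M3, the same query
\end{verbatim}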
Apart from explicitly modelling the tool itself, we believe this approach combines well with megamodelling~\cite{megamodelling}, where we start reasoning about inter-model relations.
Currently, most megamodel management operations are still implemented in implementation languages, such as Java~\cite{MegamodelManagement}.
In particular, the combination with runtime models~\cite{MegamodelsRuntime} has the potential to profit greatly from our approach.
The remainder of this paper is organized as follows.
Section~\ref{sec:background} summarizes the required background on explicitly modelled type/instance relations, and how this technique enables multiple conformance relations.
Section~\ref{sec:conformance} touches upon the three main conformance dimensions and presents how we encode each of these relations.
Section~\ref{sec:conformancebottom} presents our approach to merging the physical conformance level into the linguistic conformance level through the use of $\mathit{conformance}_\perp$.
Related work is explored in Section~\ref{sec:relatedwork}.
Section~\ref{sec:conclusion} concludes the paper and outlines future work.