\section{Conclusion}
\label{sec:conclusion}
We observed that current (meta-)modelling tools rely heavily on their implementation level: through code fragments in the implementation language (\textit{e.g.}, Java code as an action in a Statechart transition), explicit reliance on internal data structures (\textit{e.g.}, XMI), hardcoded tool semantics (\textit{e.g.}, model management operators in Java), hardcoded semantics of model attributes (\textit{e.g.}, a fixed potency attribute that is always present and further restricts instances), or a combination thereof.
This results in models that are tied to a specific implementation language, specific tool semantics, and even implementation details such as data structures.
While this is an easy solution with high performance, these disadvantages are significant.
It is, however, not trivial to explicitly model the complete tool within the tool itself, due to the many constraints imposed on metamodelling, in particular strict metamodelling.
And while some previous approaches have done away with strict metamodelling completely, we believe that strict metamodelling still offers significant value and should be kept.
To tackle this problem, we shifted relevant parts from the physical dimension over to the linguistic dimension.
The internal data structure was made explicit, so that it can be altered in an explicitly modelled way.
To further break the link between these tool services and the internal data structure, which should be an implementation detail, we allow the linguistic and physical metamodels to differ, as long as there is a mapping between them.
Operations are performed on the linguistic metamodel, and the data structure in use translates them to its own operations.
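This separation can be sketched as follows. The sketch is illustrative only: the class and method names are hypothetical and not the tool's actual API. The point is that models are written against a fixed set of linguistic operations, while the physical store behind them can be swapped freely.

```python
class DictGraph:
    """One possible physical representation: an adjacency dictionary.
    Any other store offering the same three operations would do."""
    def __init__(self):
        self.edges = {}

    def create_node(self, name):
        self.edges.setdefault(name, set())

    def create_edge(self, src, dst):
        self.edges[src].add(dst)

    def read_outgoing(self, name):
        return set(self.edges[name])


class LinguisticModel:
    """Fixed linguistic interface: operations issued here are
    translated to whatever the physical store implements."""
    def __init__(self, store):
        self.store = store  # implementation detail, replaceable

    def instantiate(self, name):
        self.store.create_node(name)

    def link(self, src, dst):
        self.store.create_edge(src, dst)

    def outgoing(self, name):
        return self.store.read_outgoing(name)


m = LinguisticModel(DictGraph())
m.instantiate("A")
m.instantiate("B")
m.link("A", "B")
print(m.outgoing("A"))  # {'B'}
```

Because only `LinguisticModel` is visible to explicitly modelled algorithms, replacing `DictGraph` with a different representation changes nothing at the linguistic level.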
On the one hand, these changes allow all algorithms to be modelled explicitly over a metamodel that is known to be capable of representing every possible model, and that never changes, even when the internal data structure does.
On the other hand, they allow us to cope with strict metamodelling by switching to a different linguistic metamodel representation.
In this low-level linguistic representation, all data is shown as a single graph, as was also the case in the physical dimension, such that every possible link is within the same level by definition: every element resides at the same level.
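A minimal sketch of why this flattening sidesteps strict-metamodelling violations (the element names and the `instance_of` label are hypothetical, chosen only for illustration): links that would cross levels in a level-based representation become ordinary edges in one graph, where a level check is trivially satisfied.

```python
# Flattened bottom representation: classes, instances, and the
# instantiation relation itself are all plain nodes and edges
# of a single graph.
graph = {
    "nodes": {"Class", "Animal", "Cat"},
    "edges": [
        ("Animal", "instance_of", "Class"),  # would cross a level
        ("Cat", "instance_of", "Animal"),    # boundary if levels existed
    ],
}

def same_level(a, b):
    # Every element of the flattened graph is at the same level by
    # construction, so any link between known nodes is level-respecting.
    return a in graph["nodes"] and b in graph["nodes"]

# No edge can violate strict metamodelling in this representation.
assert all(same_level(src, dst) for (src, _, dst) in graph["edges"])
```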
We envision multiple directions for future work.
First, we intend to completely bootstrap our tool, now that the conceptual limitations have been removed.
Currently, the tool still relies on a Python implementation of the action language interpreter, which should now be replaced by an interpreter written in the action language itself.
Subsequently, a code generator should be defined to export the action language to any other implementation language.
This is very similar to the way Squeak~\cite{Squeak}, an efficient and self-describing Smalltalk~\cite{Smalltalk} interpreter, was defined.
We also wish to further investigate the relation of our tool to Smalltalk.
Second, we only considered physical and linguistic conformance in this paper, whereas we left ontological conformance untouched.
Ontological conformance might, thanks to multiple conformance relations and the break from the implementation level, also be integrated in this vision.
Third, decoupling the internal data structures from the tool itself makes it possible to dynamically change the data structure depending on access patterns, similar to the use of activity in simulation tools~\cite{PythonPDEVSActivity}.
Fourth, after bootstrapping our tool, implementations in different implementation languages should be easy to create, simply by defining another code generator.
We intend to generate our tool for several different platforms, to clearly show that there is no dependence whatsoever on the platform.
Finally, we believe that the $\mathit{conformance}_\perp$ relation is an enabler for megamodelling~\cite{megamodelling}, and want to investigate this further.