-One of the shortcomings of current (meta-)modelling tools is their strong reliance on their implementation level.
This reliance ranges from exposing the general-purpose implementation language used (\textit{e.g.}, Java) to requiring some operations to operate directly on the internal data representation of the models (\textit{e.g.}, XMI).
-Such strong reliance on the implementation level offers some benefits though, such as higher efficiency both in execution (as it is likely compiled) and development (tool developers are familiar with the programming language).
-
-The disadvantages, however, are significant: models and algorithms become highly specific to the current state of the implementation, making it impossible, or at least difficult, to port models and algorithms from one tool to the other.
These disadvantages not only prevent porting models between tools; models can also become incompatible with newer versions of the same tool.
-Should, for example, the internal data representation be changed ever so slightly, all previously created models become incompatible.
-Similarly, if the tool is ported from one implementation language to the other (\textit{e.g.}, from Python to C, for efficiency), all fragments of Python code in all models would have to be updated to C code too.
Even more important for the scientific community is the portability of algorithms.
-Every year, new algorithms are introduced to further the state of the art in model management operations.
-These algorithms, however, often strongly rely on the internal representation of models, such as XMI or graphs.
-Implementing these algorithms on a tool with a different internal representation could prove challenging due to the different assumptions that can be made.
Similarly, these algorithms are implemented in the implementation language of the tool, making a reimplementation in a tool with a different implementation language a non-trivial task.
-Yet another disadvantage relates to tool semantics: each tool has its own interpretation of what it means to instantiate a model and check its conformance.
-While the semantics is obvious to experienced users, switching between tools will frequently necessitate lookups in tool documentation to understand the semantics.
-Related to this, porting a model between tools also requires adaptation to this changed semantics, making it even more difficult than it already is.
Tool semantics are therefore non-obvious and, more problematically, hardcoded inside of the tool.
So in order to completely understand the semantics, it becomes necessary to read the source code of the tool, which is a completely separate entity from the tool itself (\textit{i.e.}, it requires a different viewer/editor, is in a different language, and is at a completely different level of abstraction).
-
-To counter these disadvantages, we intend to break the strong reliance on the implementation level of tools.
First, we observe why tools voluntarily chose this strong reliance.
One of the main reasons is undoubtedly purely pragmatic: tool developers are (very) familiar with specific programming languages, and thus use them wherever they can.
As general purpose programming languages are a well-developed field, advanced tools are available, such as efficient compilers, debuggers, and code analyzers.
-Additionally, many libraries are available for use, such as graphical libraries, parsers, data structures, and so on.
-This results, at least at first, in efficient code and fast development of new tools.
-
-But even adventurous tool developers, who want to model as much as possible, quickly hit a wall:
-the complete system, thus including the model management algorithms, needs to become independent of the implementation language, and should thus be explicitly modelled.
-It is these algorithms, however, that impose constraints such as conformance and strict metamodelling.
-And while it is possible to bootstrap these algorithms, the model of these algorithms will, by definition, violate strict metamodelling requirements.
-A simple example is the conformance algorithm: to determine whether a model conforms to another, information is required from both the metamodel and the model, thus combining two modelling layers.
-This is the exact thing that strict metamodelling prevents, as it prevents models from spanning multiple levels.
-
-This problem is shown in Figure~\ref{fig:spanning_algorithm}, where a petri net model linguistically conforms to a simple petri net metamodel.
-A conformance algorithm, however, has to access parts of the metamodel (\textit{e.g.}, \textit{Place}) and parts of the model (\textit{e.g.}, \textit{a place}), to check whether they conform.
-This makes it difficult to state on which level the algorithm itself has to reside: at the model or metamodel level.
-In the figure, this is shown through the use of a gradient: it sits somewhere in between the levels.
-Note, however, that each element does indeed conform to the physical level in a strict way.
-The physical level is part of the implementation and is therefore not (meant to be) user-accessible.
-
The natural solution to this problem, as followed by most tools, and in particular deep metamodelling tools, is to shift these operations that violate strict metamodelling to the physical conformance dimension.
-In that dimension, all models, even metamodels, become part of a single level: the implementation level.
-As the implementation level is implemented in the implementation language, no restrictions are imposed whatsoever, as models and metamodels are both just elements in the data structure.
\begin{figure}
 \caption{Illustration of the problem: the algorithm spans both the metamodel (M2) and model (M1) layer in the linguistic dimension, thus violating strict metamodelling.}
 \label{fig:spanning_algorithm}
\end{figure}
-
-While this approach has served current (meta-)modelling tools well, it prevents the explicit modelling of the complete system, resulting in the strong reliance on the implementation level, with all previously associated problems.
This problem is aggravated in deep metamodelling, as there can be absolutely no assumptions in the linguistic dimension: users can create an arbitrary number of layers.
-Deep metamodelling further raises problems related to strict metamodelling, such as how to specify deep constraints~\cite{DCL}, which also span multiple levels.
-While we would prefer to model these explicitly, this becomes impossible due to the explicit level-crossing nature of the algorithms.
-
-In this paper, therefore, we plan to tackle this problem of strict metamodelling, allowing models of model management operations, while still keeping strict metamodelling in its original meaning.
-We do this by shifting parts of the physical conformance level, normally hardcoded in the tool, to the linguistic conformance level, where users can access it just like any other model.
-Not only does this allow for linguistic modelling of tool algorithms, but it decouples these algorithms from implementation details.
-
-Apart from explicitly modelling the tool itself, we believe that this approach serves well in combination with megamodelling~\cite{megamodelling}, where we start reasoning about inter-model relations.
-Currently, most megamodel management operations are still implemented in implementation languages, such as Java~\cite{MegamodelManagement}.
Certainly the combination with runtime models~\cite{MegamodelsRuntime} has the potential to profit greatly from our approach.
-
-The remainder of this paper is organized as follows.
-Section~\ref{sec:background} summarizes the required background in supporting explicitly modelled type/instance relations, and how multiple conformance relations are enabled through the application of this technique.
-Section~\ref{sec:conformance} touches upon the three main dimensions in terms of conformance, and presents how we encode each of these relations.
-Section~\ref{sec:conformancebottom} presents our approach to merging the physical conformance level into the linguistic conformance level through the use of $\mathit{conformance}_\perp$.
-Related work is explored in Section~\ref{sec:relatedwork}.
-Section~\ref{sec:conclusion} concludes the paper and gives future work.
\section{Background}
\label{sec:background}
In this section, we briefly introduce the required concepts for the remainder of the paper.
-As our contribution is focussed on moving away from the implementation layer, we present our previous work in doing so.
-First, we discuss our explicitly modelled, neutral action language.
It is a simple procedural action language, but its constructs are themselves explicitly stored as a model (\textit{i.e.}, an abstract syntax graph).
-Second, we use this action language to explicitly model the type/instance relations, showing the benefits we gain from doing so.
-This explicit type/instance relation offers us multiple possible typings for a single model, which sits at the core of our approach.
-
Each part already briefly touches upon the problems mentioned earlier in this paper.
Solutions to these were always rather ad-hoc, as our primary focus was on bootstrapping our tool: the Modelverse~\cite{MULTI_Modelverse}.
-After the implementation of the enabling technology, presented in this section, we have come to an elegant solution, presented in this paper, to a recurring problem.
-
-\subsection{Explicitly Modelled Action Language}
A first step in breaking free from the implementation layer is removing all code snippets in the implementation language that are scattered throughout the models.
-Since we don't want to depend on a single implementation language, a neutral action language was defined.
-
-While our action language serves a similar purpose as other neutral languages, like the OCL, its major difference is that it is explicitly modelled.
-Whereas an OCL interpreter follows the OCL standard, a 262-page specification, issued by the OMG~\cite{OCL}, our action language is formalized at a much lower level.
-Only a handful of low-level operations are supported, which are very similar to the primitive operations in any procedural language.
-As we store models as graphs internally, action language models are also represented as a graph.
-Semantics are formally defined through the use of graph transformations, defining a translation from one graph pattern (before execution of the instruction) to another (after execution of the instruction).
Our interpreter itself is therefore only a few hundred lines of code, to which a set of graph transformation rules is passed.
-Only these few hundred lines of code are dependent on the underlying platform, and should be ported when porting the complete application.
-
-Apart from explicitly modelling the semantics, this also makes sure that the execution state is explicitly modelled, as that too is modified by the graph transformation.
-The major disadvantage, however, is performance, which is very low at the moment.
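To make this concrete, the following is a minimal sketch of how one such rule could be interpreted, assuming a simple dictionary-based graph encoding; both the encoding and the helper function are purely illustrative and do not correspond to the actual Modelverse representation or rule format.
\begin{verbatim}
# Minimal sketch of one interpreter step over a graph-encoded program state.
# The encoding below is illustrative only; it is not the actual Modelverse
# representation or rule syntax.
graph = {
    "nodes": {"w1": "While", "c1": True, "b1": "Statement", "ip": None},
    "edges": {("w1", "condition"): "c1", ("w1", "body"): "b1",
              ("ip", "current"): "w1"},   # "ip" is the instruction pointer
}

def apply_while_true(g):
    """If the instruction pointer sits on a While whose condition evaluates
    to True, rewrite the graph so execution continues with the loop body."""
    current = g["edges"].get(("ip", "current"))
    if current is None or g["nodes"].get(current) != "While":
        return False                      # left-hand side does not match
    condition = g["edges"].get((current, "condition"))
    if g["nodes"].get(condition) is not True:
        return False
    # Right-hand side: redirect the instruction pointer to the loop body.
    g["edges"][("ip", "current")] = g["edges"][(current, "body")]
    return True

apply_while_true(graph)   # one interpreter step: match the rule, then rewrite
\end{verbatim}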
-
An example rule is shown in Figure~\ref{fig:rule}: the graph transformation rule that applies when a \texttt{while} element is to be executed and its condition evaluates to \texttt{True}.
-Note that not only the action language constructs are explicitly modelled, but also the complete execution context (\textit{i.e.}, execution stack) is explicitly represented in the Modelverse and is transformed according to these rules.
\begin{figure}
 \caption{Rule to execute when a \texttt{while} is executed with a condition evaluating to \texttt{True}.}
 \label{fig:rule}
\end{figure}
-
-It is our intention to create a library of operations, containing, for example, the operations defined by OCL, which is modeled using this minimal action language.
-But while this is a noble plan, we quickly encounter the same problems that were raised in the beginning of this paper: strict metamodelling does not allow for models that span multiple levels.
-An explicit model of, let's say, a conformance relation, needs access to both the type and instance level, as shown in Figure~\ref{fig:spanning_algorithm}.
We temporarily solved this problem by ignoring strict metamodelling requirements, so that we could bootstrap the tool anyway.
-
The contribution of a new action language by itself is therefore not sufficient to address the problems raised at the start of this paper.
-While some code snippets in models can be replaced with implementation-independent action code, such as OCL or even our own action language, this action language is not sufficient to model low-level operations that need to work across multiple levels in the modelling hierarchy.
-Model management operations, in particular, frequently require cross-level access to the model.
\subsection{Explicitly Modelled Type/Instance Relation}
The next step towards our contribution is to explicitly model the type/instance relation between models.
-By explicitly modelling the type/instance relation, we have shown that users gain more insight in tool semantics and can alter the semantics if desired.
-This resulted in the possibility for multiple types of type/instance relations.
For example, a single model can be typed by multiple metamodels, either because they are different but similar metamodels, or because a less restrictive conformance check is being used.
Indeed, there is no common agreement on how restrictive a conformance relation should be (\textit{e.g.}, should it take into account potency, cardinality, and multiplicity, and how should it behave if these are violated?), so explicitly offering this choice to users is valuable.
-
We achieved this explicit semantics through explicit modelling of the type/instance relation, which consisted of several steps.
-
-First, the metamodel was stripped to only contain structural restrictions.
-As such, there were no longer any additional attributes that were undefined at the level above, such as potency, multiplicity, cardinalities, and so on.
Attributes with those names could still be present, if they were allowed by the metamodel, but they had no associated semantics and thus did not restrict instances.
-
Second, type information from the model was split off into a separate model.
-A model was thus reduced to a mere graph, which structurally conformed to another graph (the metamodel).
-The type information contained links between both graphs, indicating the type of each element in the model.
Due to this separation, a model could possibly have multiple type mappings, together with multiple metamodels.
To determine whether a model was typed by another model, both models were required, but additionally a mapping also needed to be present.
-
-Third, all semantics was shifted to the instantiation and conformance checking algorithms, which make up the type/instance relation.
-The instantiation algorithm checks all necessary constraints during instantiation, and, for example, prevents further instantiation if the potency has already reached zero.
Similarly, the conformance check verifies the structure, as defined by the graphs, the types, as defined by the type mapping, and the additional constraints imposed by giving semantics to special attributes like potency and cardinalities.
-It is thus now the conformance algorithm which gives the semantics to these attributes, and no longer the tool internals, which are hidden from the user and non-modifiable.
-To make the algorithms independent of the implementation, they are implemented using the previously defined neutral action language, which is explicitly modelled.
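As an illustration of this separation, the sketch below checks a model against a metamodel through an explicitly stored type mapping. The dictionary encoding and the function shown are hypothetical simplifications of the explicitly modelled algorithm, restricted to the structural part only.
\begin{verbatim}
# Illustrative sketch: the type mapping is a separate artefact linking model
# elements to metamodel elements, so a single model can carry several mappings.
metamodel = {
    "Place": {}, "Transition": {},
    "P2T": ("Place", "Transition"), "T2P": ("Transition", "Place"),
}
model = {"p1": None, "t1": None, "a1": ("p1", "t1")}   # a1: arc from p1 to t1
type_mapping = {"p1": "Place", "t1": "Transition", "a1": "P2T"}

def conforms(model, metamodel, mapping):
    """Purely structural check; attribute semantics would be checked on top."""
    for element, mm_type in mapping.items():
        if mm_type not in metamodel:
            return False
        value = model.get(element)
        if isinstance(value, tuple):                   # the element is an edge
            source_type, target_type = metamodel[mm_type]
            source, target = value
            if mapping.get(source) != source_type:
                return False
            if mapping.get(target) != target_type:
                return False
    return all(element in mapping for element in model)  # every element typed

print(conforms(model, metamodel, type_mapping))           # True
\end{verbatim}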
-
-The most important aspect, in the context of this paper, is that it allows a single model to conform to multiple metamodels simultaneously, possibly even through different conformance checking algorithms.
-This is shown in Figure~\ref{fig:multi_conformance}, where a single petri net model is shown (as a graph; in the middle) which conforms to two different metamodels: an ordinary place/transition net metamodel (top), and one extended with an inhibitor arc (bottom).
-Additionally, minor differences exist between the two metamodels, such as different naming for the \textit{weight} of the transition.
When a user uses the model, the corresponding type mapping, of which there are now two, thus has to be passed as well.
\begin{figure}
 \caption{A single petri net model with multiple conforming metamodels, each through a different type mapping (excerpt).}
 \label{fig:multi_conformance}
\end{figure}
-
-Note again that we encounter the problem of strict metamodelling here.
-The conformance checking algorithm needs access to both the model and metamodel, thus crossing over multiple levels in the modelling hierarchy.
-This problem was again alleviated by not caring too much about strict metamodelling, thus again moving away from normal modelling, and into a realm between the tool implementation and the actual models in the tool.
-In this paper, we finally shift all parts from the implementation to the models in the tool.
\section{Conformance Dimensions}
\label{sec:conformance}
Previous work found that there is not a single kind of type/instance relation, but actually multiple, to cope with the different kinds of relations~\cite{OCA}.
-A distinction was made between conformance to the physical implementation, and to the linguistic metamodel.
-This classification architecture was termed the Orthogonal Classification Architecture (OCA).
-Other work~\cite{Bruno,Ken} has shown the presence of yet another type/instance relation, which links back to the semantics of the model.
-Depending on which dimension is used, different type/instance hierarchies emerge, changing the implications of strict metamodelling.
-Sadly, terminology is rather inconsistent in the literature, making it difficult to clearly communicate about the different dimensions.
-This section serves to clear up the terminology we will use throughout this paper.
-
We show a concise example of a simple petri net model, which conforms to three different metamodels with respect to these different conformance relations.
-
-\subsection{Physical Conformance}
-The low-level view on conformance is what we call \textit{physical conformance}.
-This is the kind of conformance used by tool-developers and model management operations, as it does not impose many constraints.
-It sits at the physical level, where it is responsible for how data is represented physically in memory.
-Physical representation is unrelated to language engineering, and is completely managed by the implementation language of the tool.
Physical conformance, therefore, is checked by the compiler of the implementation language (\textit{e.g.}, GCC or the Java compiler).
-
-But since it is checked only by the compiler, no restrictions are placed upon the crossing of modelling levels.
-Indeed, all models, metamodels, metametamodels, and so on, are similar at this level: they are data created by the user that can be manipulated by the tool.
-Tool internals, as well as model management operations, are mostly implemented at this level, as it allows all possible modifications to the model.
-Additionally, this level is efficient due to its close relation to the implementation, and use of advanced tools such as efficient compilers.
-Since this is the lowest level of conformance, to which every model conforms by definition, all operations need to be applicable to all models.
-Most of the time, the only common representation of all models is the data structure used to store them.
-These operations thus alter the internal data structure directly, without any kind of domain-specific algorithm in between.
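The following sketch illustrates this style of direct manipulation: one generic store of nodes and edges, modified with the same primitive calls regardless of the formalism the model belongs to. The class and its methods are invented for illustration and do not correspond to any particular tool's API.
\begin{verbatim}
# Illustrative sketch of a physical-level store: every model, whatever its
# formalism, is manipulated through the same generic node/edge primitives.
class GraphStore:
    def __init__(self):
        self.nodes, self.edges, self._next_id = {}, {}, 0

    def create_node(self, value=None):
        self._next_id += 1
        self.nodes[self._next_id] = value
        return self._next_id

    def create_edge(self, source, target):
        edge_id = self.create_node()      # edges are stored as nodes too
        self.edges[edge_id] = (source, target)
        return edge_id

store = GraphStore()
# The same calls are used for a petri net, a class diagram, a statechart, ...
place = store.create_node("a place")
transition = store.create_node("a transition")
store.create_edge(place, transition)
\end{verbatim}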
-
-While these modifications are again efficient, and furthermore easy to implement for the tool developer, the strong link to the implementation is made obvious.
Porting all tool internals to a different tool not only requires porting between programming languages, but also between different internal data representations.
-Changing implementation details, which should be transparent to the user, will have significant implications on these algorithms as well.
-
-In our petri net example, this relates to how the petri net is stored in memory: with all elements being instances of a generic \textit{Class} defined by the implementation language.
-This is shown in Figure~\ref{fig:pn_physical}: the model conforms to a simple graph representation, where places and transitions are represented as nodes, and the edge between them is mapped to the edge of a graph.
-Depending on the implementation, a different physical metamodel can be used, for example that of an SQL database.
-The metamodel is the same for every model, be it petri nets, class diagrams, object diagrams, statecharts, or even a domain-specific language.
-The reason for this is simple: if the model doesn't conform to this metamodel, the tool has no way of representing it in memory.
\begin{figure}
 \caption{Physical conformance of a petri net model.}
 \label{fig:pn_physical}
\end{figure}
-
-\subsection{Linguistic Conformance}
-\textit{Linguistic conformance} is the traditional view on conformance, which is heavily used for domain specific modelling.
-It is also the view offered to users: if a user creates a metamodel, and subsequently instantiates it, this is through linguistic conformance.
-The relation defines whether or not a user-defined model is a valid instance of another user-defined (possibly by another user) metamodel.
Checks mostly have a structural notion (\textit{e.g.}, is a link between these entities allowed and are all required attributes present), though minor semantic constraints are also possible.
These semantic constraints are, for example, range checks on attributes (\textit{e.g.}, an integer must be larger than 5), global restrictions on the values of attributes (\textit{e.g.}, the sum of these two attributes needs to be greater than 10), or global restrictions on the structure (\textit{e.g.}, no loop possible for a certain association).
-Whereas structural checks are easily implemented through the use of a metamodel, the additional semantic constraints are often implemented using some kind of executable language.
-Constraints are written in constraint languages, such as OCL, but are sometimes already shifted to the implementation language (\textit{e.g.}, implemented in Python as in AToM$^3$~\cite{AToM3} and AToMPM~\cite{AToMPM}).
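For illustration, such constraints might look as follows when written as Python snippets, in the spirit of AToM$^3$ and AToMPM; the attribute access shown is hypothetical and does not reflect the actual constraint API of either tool.
\begin{verbatim}
# Hypothetical constraint snippets in the style of Python-coded constraints.
def check_tokens_positive(place):
    # local range check: the number of tokens may not be negative
    return place["tokens"] >= 0

def check_weight_sum(arc_a, arc_b):
    # global constraint over the values of two attributes
    return arc_a["weight"] + arc_b["weight"] > 10

print(check_tokens_positive({"tokens": 3}))             # True
print(check_weight_sum({"weight": 4}, {"weight": 8}))   # True
\end{verbatim}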
-
-Contrary to the physical conformance dimension, users can extend the linguistic conformance dimension by adding, modifying, or deleting metamodels.
-As such, a model is not, by definition, always conforming to its provided metamodel.
-This conformance relation is also not checked by the implementation language, but by the tool itself.
-But whereas most programming languages are standardized and have a clear definition on what it means to conform, this is not always the case in the linguistic dimension.
-Each tool has its own interpretation of what it means for models to conform to another model.
-Making this relation explicit, and thus offering the user the choice, was the primary effort of our previous work~\cite{MultiConformance}.
-
It is also at this level that strict metamodelling comes into play: no links, except for the \textit{instanceOf} link, are allowed to cross the levels defined by this relation.
-Tool users should ideally only be concerned with this dimension, as it offers support for domain-specific languages and makes use of all the features implemented by the tool (\textit{e.g.}, strict metamodelling, type checking, model transformations, and consistency management).
-
-In our petri net example, this relates to the metamodel of petri nets which constrains the structure of the net.
-Example constraints are that no direct link between places is possible (specified by the omission of an association from \textit{Place} to itself), and the number of tokens in a place needs to be positive (specified by a static semantics constraint).
-This is shown in Figure~\ref{fig:pn_linguistic}, where each element conforms to a metamodel that is specific to the model.
-Contrary to physical conformance, this metamodel is only valid for petri net instances.
-At this level, it is impossible to create, for example, a link from the place instance to the place type, due to strict metamodelling.
-There is also no mention of the implementation: this relation does not imply anything on how a place is represented in memory (\textit{e.g.}, as a node in a graph or as an ID in a SQL database).
-It does, however, constrain the structure of the model.
The metamodel contains an association from \textit{Place} to \textit{Transition}, and vice versa, but no association from \textit{Place} or \textit{Transition} to itself.
-This constrains the instances more than was the case with physical conformance.
-Furthermore, it is often this dimension of conformance that is used to specify concrete syntax.
-Strictly speaking, the representation of the model in physical conformance would have to be the actual graph that is stored in memory.
\begin{figure}
 \caption{Linguistic conformance of a petri net model.}
 \label{fig:pn_linguistic}
\end{figure}
-
-\subsection{Ontological Conformance}
-The final conformance dimension is \textit{ontological conformance}, which relates purely to the semantics of the model.
-It is also one of the views offered to users, but is not related to the structure of the model, only to the properties the model satisfies.
-As it relates to semantics, execution of the model is required.
-Depending on the property of interest, the algorithm executed varies from, for example, simulation (\textit{e.g.}, trace satisfies some property) to state space analysis (\textit{e.g.}, deadlocking system).
-
The algorithm to be executed often relates back to the physical dimension again, as that is where the implementation is defined whenever the implementation language is used.
-
-In our petri net example, this relates to the properties satisfied by the petri net, such as deadlocking, safety, or reachability.
Figure~\ref{fig:pn_ontological} represents a petri net without any tokens and without generators, so the petri net is clearly deadlocking and not live.
-Ontologically speaking, the petri net thus conforms to the \textit{deadlocking} property and not to the \textit{live} property.
This concept is again broader than petri nets only, and might also be applicable to formalisms that have similar semantics or properties.
-But while previous conformance relations focussed purely on static aspects of the model (\textit{i.e.}, structure and static semantics), this conformance dimension focusses exclusively on the semantics through execution.
-\section{Explicit Modelling of Physical Conformance}
-\label{sec:conformancebottom}
-Recall that relying on the physical conformance relation was the cause of the problems we have previously observed.
The theoretical limitation preventing explicit modelling of these algorithms was the restriction imposed by strict metamodelling: a model cannot span multiple levels.
These problems led to the obfuscation of tool semantics, and the strong reliance on implementation details for all algorithms.
-
-\subsection{Moving Away from Physical Conformance}
-As all problems seem to be situated in the physical conformance dimension, the most direct solution would be to do away with this dimension completely.
-This is, however, not possible, as each model still requires a physical representation in memory, as well as model management operations defined over it.
-
The closest we can get is shifting many responsibilities of the (hidden) physical conformance dimension into the (explicitly modelled) linguistic conformance dimension.
-There is a natural relation between both physical and linguistic conformance, as both are related to the structure of the model.
-
-To do this, we define a new metamodel, which is identical to the (implicit) metamodel of the implementation layer.
-This metamodel, however, is defined in the linguistic dimension, thus making it explicit.
-For clarity in our discussion, we call this metamodel $\mathit{LTM}_\perp$, shown in Figure~\ref{fig:LTM_bottom}.
-It can be seen that it is a metamodel for basic graphs, where nodes might have values.
These possible values are \textit{Type} (the type of any value type, including itself), \textit{Action} (the type of all action language constructs, such as \textit{While}, \textit{If}, and \textit{FunctionCall}), \textit{Integer}, \textit{Float}, \textit{Boolean}, and \textit{String}.
Additionally, edges are a subclass of nodes, meaning that they can have incoming and outgoing edges themselves.
-Since every element is a subclass of \textit{Node}, an edge can start and end at any element, including itself.
This is a purely conceptual choice, made to make reasoning about edges that start or end at other edges clearer.
-The leftmost association from \textit{Node} to itself represents the type of inheritance relations: since inheritance relations are also explicitly modelled~\cite{MultiConformance}, they require their own metamodel.
-And since the $\mathit{LTM}_\perp$ should be self-describing, it contains this type too.
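A schematic rendering of $\mathit{LTM}_\perp$ as described above could look as follows; the dictionary encoding is purely illustrative and only serves to summarize the figure.
\begin{verbatim}
# Schematic, illustrative summary of LTM_bottom: everything is a Node, an
# Edge is a Node with a source and a target, and nodes may carry a value.
LTM_BOTTOM = {
    "classes": {
        "Node": {"superclass": None},
        "Edge": {"superclass": "Node", "from": "Node", "to": "Node"},
    },
    # possible node values; Type is the type of all value types, itself included
    "value_types": ["Type", "Action", "Integer", "Float", "Boolean", "String"],
    # inheritance links are explicitly modelled, so their type is included too
    "associations": {"inherits_from": ("Node", "Node")},
}
\end{verbatim}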
-
Since any model conforms to the (often implicit) physical metamodel in the physical dimension, it should also, by definition, conform linguistically to $\mathit{LTM}_\perp$.
We call this new linguistic conformance to $\mathit{LTM}_\perp$ $\mathit{conformance}_\perp$.
-While it is actually the same as conformance in the physical dimension, we shift this to the linguistic dimension to offer it to the users.
Thanks to the possibility of multiple metamodels for a single model~\cite{MultiConformance}, it is possible for the model to be typed by multiple linguistic metamodels: $\mathit{LTM}_\perp$, and the original linguistic metamodel(s).
-Figure~\ref{fig:moving_ltm} shows the 1-to-1 mapping of the Physical Type Model (PTM) to the linguistic dimension.
-As each element necessarily conforms to the PTM, it will also, by definition, conform to the new $\mathit{LTM}_\perp$.
\begin{figure}
 \caption{$\mathit{LTM}_\perp$ added in the linguistic dimension, which is identical to the one in the physical dimension.}
 \label{fig:moving_ltm}
\end{figure}
-
-\subsection{Coping with Strict Metamodelling}
-By lifting the physical conformance relation up to the linguistic conformance dimension, we achieve a way of explicitly modelling, albeit indirectly, in the physical dimension.
-Users are therefore able to, using their normal linguistic modelling tools, alter the physical dimension.
The physical representation of the model is thus seen as an instance of a linguistic metamodel.
-
While the tool still complies with strict metamodelling in the linguistic dimension, $\mathit{LTM}_\perp$ is kept so general that the complete metamodelling hierarchy can be expressed as a direct instance of it.
-This effectively flattens the original metamodelling hierarchy into a single level: $\mathit{LTM}_\perp$ at the metamodelling level, and everything else at the modelling level.
-In this single model level, which is only a different view on the same model, strict metamodelling does not restrict anything, even links between different levels (of the original hierarchy).
-Figure~\ref{fig:different_hierarchies} represents the two possible views on the modelling hierarchy: either through the usual conformance relation (Figure~\ref{fig:different_hierarchies_A}), or the new $\mathit{conformance}_\perp$ relation (Figure~\ref{fig:different_hierarchies_B}).
-
-Depending on the used metamodel and conformance relation, strict metamodelling can thus be interpreted differently.
-Note that this is still distinct from dropping strict metamodelling completely: strict metamodelling is still used throughout the complete environment, and still imposed on instances, even with the $\mathit{conformance}_\perp$ relation.
-But the implications of strict metamodelling depend entirely on the metamodel: for normal linguistic metamodels, strict metamodelling is as it was originally designed, but for the special metamodel $\mathit{LTM}_\perp$, strict metamodelling does not constrain anything because every element is at the same level.
-
-Coping with strict metamodelling alone does not solve all problems.
-While the limitation of not being able to model executable models across levels was removed, these executable models still directly interact with the underlying data structure.
-This is still a lingering aspect of the physical dimension, which we tackle next.
\begin{figure*}
 \caption{Different modelling hierarchies for the model \textit{my\_PN}, as seen through two different linguistic views.}
 \label{fig:different_hierarchies}
\end{figure*}
-
-\subsection{Abstracting Implementation Details}
-The 1-to-1 mapping between the physical metamodel and $\mathit{LTM}_\perp$ made it possible to linguistically access the physical dimension.
-But the physical dimension is still part of the implementation, and could therefore change in subsequent versions.
-This would bring us to language evolution, as $\mathit{LTM}_\perp$, and possibly $\mathit{conformance}_\perp$, would also have to be updated, together with all saved models.
While some advances have been made in language evolution to perform these changes automatically, we don't want to expose users to these problems.
-
-Users should therefore not be bothered with the internals of the tool, not even the physical data representation.
-And while users do need access to a physical-like representation, it can certainly be a different one than that which was implemented, as long as there exists a mapping between them.
-$\mathit{LTM}_\perp$ is thus merely a wrapper, or an abstraction of the actual data structure being used.
-Modifications on instances of $\mathit{LTM}_\perp$ are mapped over to changes in the physical dimension, and vice versa.
-This can be done by having the actually implemented data structure implement an interface as if it were conforming to $\mathit{LTM}_\perp$.
-This requires a mapper between $LTM_\perp$ and the physical metamodel, which is similar to physical mappers~\cite{MULTI_Modelverse}.
-Now, however, the mapping is only defined for a single metamodel, instead of for each metametamodel individually, greatly relieving users.
-This is the mapping shown in Figure~\ref{fig:change_physical}.
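The sketch below illustrates such a mapper under assumed, simplified interfaces (both classes and their methods are invented): the backend could be replaced by, for example, an SQL database without the graph-level callers noticing.
\begin{verbatim}
# Illustrative sketch of a mapper: the storage backend (here a flat table
# standing in for, e.g., an SQL database) is hidden behind an LTM_bottom-like
# graph interface, so explicitly modelled operations never see the real format.
class TableBackend:
    def __init__(self):
        self.rows = []                               # (id, value, src, dst)
    def insert(self, element_id, value, src=None, dst=None):
        self.rows.append((element_id, value, src, dst))
    def select(self, element_id):
        return next(row for row in self.rows if row[0] == element_id)

class LTMBottomView:
    """Presents any backend as nodes and edges conforming to LTM_bottom."""
    def __init__(self, backend):
        self.backend, self._next_id = backend, 0
    def create_node(self, value=None):
        self._next_id += 1
        self.backend.insert(self._next_id, value)
        return self._next_id
    def create_edge(self, source, target):
        self._next_id += 1
        self.backend.insert(self._next_id, None, source, target)
        return self._next_id
    def read(self, element_id):
        return self.backend.select(element_id)

view = LTMBottomView(TableBackend())
place = view.create_node("a place")      # callers never touch the table layout
\end{verbatim}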
-
-Decoupling the implementation of algorithms from the actual internal data structure makes it possible to perform drastic changes internally (\textit{e.g.}, switching between database technologies), without any change whatsoever to the explicit models of model management operations, nor to $\mathit{LTM}_\perp$ or $\mathit{conformance}_\perp$.
-Related to this, different tools can implement exactly the same algorithms, which were explicitly modelled, even if their implementation language and internal data structure is completely different.
-They only need to agree on $\mathit{LTM}_\perp$ and the corresponding $\mathit{conformance}_\perp$, and an explicitly modelled action language to go along with it.
All other implementation choices become truly that: choices made in the implementation that don't affect functionality at all.
\begin{figure}
 \caption{Changing the physical metamodel to something else, as long as there is still a mapping to $\mathit{LTM}_\perp$. SQL metamodel not expanded due to space constraints.}
 \label{fig:change_physical}
\end{figure}
-
-\subsection{Overview}
-We now relate back to the problems we initially observed.
The strong reliance on the physical dimension was caused by both pragmatic reasons (\textit{i.e.}, developers are more familiar with programming languages) and theoretical limitations (\textit{i.e.}, strict metamodelling prevents a model from referencing two different levels).
-While we can't do much about the pragmatic reasons, we have used multi-conformance to offer a different view on the model: instead of being an instance of a user-defined metamodel, it becomes an instance of $\mathit{LTM}_\perp$.
-Using the $\mathit{conformance}_\perp$ relation, strict metamodelling does not constrain the user anymore because all model elements reside at the same level.
-
-Similarly, the physical implementation and mapping to $\mathit{LTM}_\perp$ were decoupled from the linguistic metamodel, making it possible to alter the implementation without affecting $\mathit{LTM}_\perp$ or its instances at all.
-As a simple example of our approach, we present here the implementation of an instantiation model management operation.
-The operation is invoked on an element in a metamodel that has to be instantiated, using the existing model to which the instantiation should be added.
-
Current tools implement this using code written in the implementation language and hardcoded in the tool, even though the tool may support a neutral language (\textit{e.g.}, OCL).
-A normal neutral language is unfit for this purpose for several reasons:
-\begin{enumerate}
-\item Any representable model in the tool might become subject to instantiation, so we don't want to define the operation over the linguistic metamodel defined by the user.
- If we were to do this, only instances of that exact metamodel could ever be instantiated.
- Creating a new metamodel would also require users to reimplement all instantiation operations over and over again, to make them applicable to the model used.
- Most of the time, instantiation is very similar, so a default should be provided.
- By implementing this operation at the physical level, tools avoid this problem as they now work on the internal representation, which is identical for all models.
-\item Instantiating a model element is an invasive operation, which can greatly disturb the linguistic dimension by, for example, breaking conformance to the linguistic metamodel.
- Implementing instantiation based on the linguistic metamodel, defined by the user, would therefore also be unwise, as conformance might break halfway through the operation, making the function not applicable anymore.
-\item Strict metamodelling prevents users from crossing between levels.
 Even if the previous two problems were to be solved, a single model (the instantiation algorithm) cannot have links to the model (to add the instantiated element), the metamodel (to read out the element to instantiate), and even the metametamodel (to find subtyping information) at the same time.
-\end{enumerate}
-
-With our approach, each of these reasons is solved as follows:
-\begin{enumerate}
-\item Instead of shifting the algorithm to the physical dimension to get access to the physical representation, we shift the physical representation to the linguistic dimension as $\mathit{LTM}_\perp$.
- This way, the low-level representation of the model is also an explicit instance, for which each possible model conforms to one and the same metamodel: $\mathit{LTM}_\perp$.
 If the instantiation operation is defined using $\mathit{conformance}_\perp$, it will be applicable to every possible model (a sketch of such an operation follows after this list).
-\item The type of a model is not visible in normal circumstances, as it is part of the conformance check.
- It is indeed even dangerous to change the type of a model, while operating on that specific type.
- By operating on a different type, however, of which it is known that the model will always conform to it, there are no risks involved at all.
- While it might be possible that some of the other previous conformance relations are broken (\textit{e.g.}, to a user-defined metamodel), $\mathit{conformance}_\perp$ is not invalidated by the operation as it holds by definition.
-\item As previously shown, our approach just changes views to $\mathit{conformance}_\perp$, in which strict metamodelling is still valid, but it doesn't actually restrict anything, since every element is at the same level.
-\end{enumerate}
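To illustrate, an instantiation operation written against $\mathit{conformance}_\perp$ can be sketched as follows; the single-dictionary encoding of the flattened hierarchy and all names are illustrative only, not the actual Modelverse operation.
\begin{verbatim}
# Illustrative sketch: under conformance_bottom the whole hierarchy is one
# flat graph, so one operation can read the metamodel element and extend the
# model, whatever formalism the model belongs to.
def instantiate(graph, type_mapping, mm_element, new_name):
    """Add an instance of mm_element to the graph and record its type."""
    if mm_element not in graph:
        raise ValueError("unknown metamodel element")
    graph[new_name] = None                 # create the new model element
    type_mapping[new_name] = mm_element    # a level-crossing link, allowed here
    return new_name

graph = {"Place": None, "Transition": None}            # metamodel elements
type_mapping = {}                                      # element -> type
instantiate(graph, type_mapping, "Place", "a place")   # instantiate a Place
\end{verbatim}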
-
-The algorithm is related to how models are represented internally: all models are subgraphs of a single coherent graph.
-This format of model representation is itself already level-crossing, as there are edges for both navigation and instantiation.
-As it contains level-crossing links, it is an invalid model when viewed through an ordinary linguistic typing relation.
It is, however, viewable and even modifiable using $\mathit{conformance}_\perp$, as the model fully complies with $\mathit{LTM}_\perp$.
-
-During the execution of the algorithm, the model is viewed not through the usual conformance relation, but through the $\mathit{conformance}_\perp$ relation.
-As such, the model can be modified as if it were merely a graph, without any additional semantics or imposed restrictions.
-Apart from just allowing any kind of structural change, inconsistencies in the usual conformance relation are also possible: cardinalities, multiplicities, potencies, and so on, can all be invalidated as their semantics is not checked at this level.
-Operations defined by the user, using the normal linguistic conformance relation, will just reinterpret the graph to the usual linguistic dimension, thus again checking all additional constraints such as cardinalities.
-
We use this operation to instantiate a new petri net place, as specified by the petri nets metamodel.
-The example is visualized in Figure~\ref{fig:example}.
Figure~\ref{fig:example_A} indicates the problem with the instantiation algorithm: it accesses itself as well as three different modelling levels:
-the model level to write out the instantiated model,
-the metamodel level to read out the allowed attributes and all constraints,
-and the metametamodel level to know about inheritance links and how to handle them.
-Accessed elements are highlighted in the figure, indicating that the algorithm requires access (and thus, links) to all these levels.
It is therefore impossible to add it at any of these levels: adding it to one level would cause violations at the other levels.
-By taking the $\mathit{conformance}_\perp$ view, the modelling hierarchy changes from Figure~\ref{fig:example_A} to Figure~\ref{fig:example_B}, in which there are no level-crossing links anymore.
In Figure~\ref{fig:example_B}, all accessed elements are again highlighted, but they are now within the same level in the modelling hierarchy.
-There is therefore no longer any violation of strict metamodelling.
\begin{figure*}
 \caption{Two different linguistic views on the same model. The elements accessed by the algorithm are shown in light blue. Only $\mathit{conformance}_\perp$ complies with strict metamodelling.}
 \label{fig:example}
\end{figure*}
-
-The complete procedure is shown in Figure~\ref{fig:overview}:
-first the $\mathit{conformance}_\perp$ view is taken on the model, where it is shown as a graph instead of a petri net model and metamodel.
-Second, this graph is traversed and the requested changes are performed.
-Finally, the modified graph model is again interpreted using the original conformance relation, where users use their own metamodel and corresponding type mapping to interpret the graph.
\begin{figure}
 \caption{Overview of the complete procedure: (1) reinterpret the model as an instance of $\mathit{LTM}_\perp$, (2) execute the algorithm on the graph representation, (3) reinterpret the model again using the initial metamodel. All steps happen in the background and the user only sees the composite operation.}
 \label{fig:overview}
\end{figure}
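A sketch of this composite operation is given below; the function names are illustrative, and the conformance check is passed in as a placeholder rather than being the actual algorithm.
\begin{verbatim}
# Illustrative sketch of the composite operation the user sees: (1) view the
# model as a plain graph, (2) run the graph-level operation, (3) reinterpret
# the result under the original metamodel and type mapping.
def run_under_conformance_bottom(model, metamodel, mapping, check, operation):
    graph = dict(model)                    # (1) the conformance_bottom view
    operation(graph, mapping)              # (2) arbitrary graph modifications
    if not check(graph, metamodel, mapping):   # (3) re-check the original view
        raise ValueError("result no longer conforms to the original metamodel")
    return graph

def add_place(graph, mapping):
    graph["p2"] = None
    mapping["p2"] = "Place"

result = run_under_conformance_bottom(
    {"p1": None}, {"Place": {}}, {"p1": "Place"},
    check=lambda g, mm, m: all(t in mm for t in m.values()),
    operation=add_place,
)
\end{verbatim}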
\section{Related Work}
\label{sec:relatedwork}
First, our approach builds upon the support for multiple linguistic types.
While we have used our own approach~\cite{MultiConformance}, another possible direction is a-posteriori typing~\cite{aposteriori}.
-In a-posteriori typing, a model is constructed with a single \textit{constructive} type~\cite{constructiveType}, which cannot be changed.
-When a model is used in a different context, however, multiple additional types can be added afterwards (\textit{a posteriori}) through the use of concepts~\cite{concepts}.
-These additional types don't influence the original constructive type, but can make the model applicable for use in other algorithms.
-Supporting our $\mathit{conformance}_\perp$ relation through the use of a-posteriori typing should be similar.
-The constructive type could simply be part of $\mathit{LTM}_\perp$, with all ``real'' linguistic types specified as a posteriori types.
-Our approach varies a bit though, since we don't make the constructive type a special kind of type: the $\mathit{conformance}_\perp$ is just another relation like any other.
-The OCA~\cite{OCA} is rather similar to our approach, as it identified the distinction between two conformance relations.
-But whereas the OCA shifts one of these relations to the implementation level, we merge the physical type model into the linguistic dimension.
-We therefore still completely comply to the OCA: we have both a linguistic dimension (used for user modelling), and a physical dimension (used during tool building).
-Parts of our physical dimension are, however, exposed to the linguistic dimension, such that all operations from the physical dimension also become available in the linguistic dimension.
-With the OCA it is not necessary to support multiple linguistic types for a single model, which is a necessary requirement when shifting more parts to the linguistic dimension.
-
-Second, strict metamodelling has been the subject of several debates, both in favor~\cite{StrictProfilesWhyAndHow,ConceptsForComparing}, and against~\cite{LevelAgnostic,XMF-Mosaic}.
Opponents of strict metamodelling argue that it makes certain models impossible to express, as we have also shown in this paper.
-Their solutions, however, often completely throw away all notions of strict metamodelling.
-And while we agree that strict metamodelling can be overly restrictive, it certainly has its advantages in protecting ordinary users and simplifying algorithms.
So in contrast to tools like XMF-Mosaic~\cite{XMF-Mosaic}, which completely flatten the modelling hierarchy, we still enforce strict metamodelling, though users can switch to the ``unrestricted mode'' by taking on a different linguistic type model.
-Since the unrestricted mode is at a much lower level of abstraction than the usual linguistic metamodels, users will now have more powerful tools at their disposal, and are able to circumvent strict metamodelling in a controlled way.
-
-Third, many tools rely explicitly on the implementation level.
-For example, MMINT~\cite{MMINT}, MetaDepth~\cite{MetaDepth}, DISTIL~\cite{DISTIL}, AToM$^3$~\cite{AToM3}, and AToMPM~\cite{AToMPM} all explicitly allow users to inject code, for example as parts of models, or to extend the capabilities of the tool.
-This code is not explicitly modelled, and is simply injected in the actual application code that is being executed.
-There is thus no checking as to what is happening and if the inserted application code is actually valid code, since it is only treated as mere text by the tool.
-This code is subsequently only checked by the compiler or parser of the language that is being used, further delaying user feedback.
And since this code depends on the application programming interface (API), the implementation language, and the internal data structures, it is not portable at all.
-Furthermore, it does away with the notion of ``model everything explicitly'', as it introduces unmodeled aspects in the models and even in the tool.
-The importance of the physical dimension was previously highlighted~\cite{TechnologicalSpaces,TechnologicalSpaces2}, where the physical storage was mentioned as a technological space.
-Different ways of representing this data were presented, though each of these can, with our approach, be abstracted away as an implementation detail.
-
-Similarly, megamodel management~\cite{MegamodelManagement} is often implemented purely at the implementation level instead of explicitly modelled.
-And while there is some work on making generic model management possible~\cite{TypingModelManagement,GenericityForModelManagement}, these approaches often remain specific to the problem under study.
One of the shortcomings of current (meta-)modelling tools is their strong reliance on their implementation level.
-While it does offer its benefits, certainly for tool developers, it seriously impedes portability of models.
-Model management operations are handcoded in the implementation language of the tool, making it difficult for users to grasp their semantics.
-Furthermore, model management operations themselves have strong reliance on the internal data structures used by the tool, making comparison of algorithms, even at a conceptual level, difficult.
In this paper, we analyze the reasons for, and effects of, this strong reliance on the implementation level.
-We offer a solution which allows the explicit modelling of model management operations in a strict metamodelling framework.
-Even those operations that require access to multiple levels in the modelling hierarchy are supported.
-To aid in this effort, we furthermore break the strong link between model management operations and the data structures in use by the (meta-)modelling tool.
-Our technique is illustrated through the explicit modelling of a retyping operation on a petri net.
-None of the current plethora of (meta-)modelling tools include a complete model of themselves.
-Such a model, a precise specification of the tool's syntax and semantics, allows for introspection and reflection.
-This enables features such as debugging.
-Without such a model, it is harder to decompose a tool into components for distribution, reason about efficiency, and reuse components of existing implementations.
-In this technical report, we present the foundations of the Modelverse, a self-describable environment for multi-paradigm modelling (supporting multi-formalism and multi-abstraction modelling and explicitly modelled processes).
-The foundations describe a class of Modelverse realizations, which satisfy our identified set of requirements.
-Conceptually, all information in the Modelverse is stored in a graph, and model management operations transform this graph.
Parts of the graph also describe action constructs which, amongst others, can be used to define (linguistic) conformance relations, the basis for multi-layer multi-level modelling.
-We define a set of requirements for a Modelverse.
-These requirements, or axioms, will be used during our formalization to motivate our decisions.
Although implementation-related requirements are not needed for our formalization, they are mentioned because every implementation should conform to them.
-
-After an explanation of what each axiom represents, we give an overview of how all these axioms are related to each other.
-
-\section{\axiomForeverRunning}
-The Modelverse should always be able to continue running.
-As such, no modifications to the behaviour should require a restart, except for changes to the (minimal) kernel (and thus the action language semantics).
-An (authorized) user should be able to alter all core concepts, with changes automatically applied for all connected Modelverse Interfaces.
-
Forever running also implies that the Modelverse runs as a service, separate from the MvI program used by the user, possibly even on a different machine.
A more drastic interpretation is that it should be parallelized and distributed, so as to cope with possible hardware failure.
-We do not require this more drastic interpretation, though it is certainly a feature to take into account in an implementation evaluation.
-
The forever running does not apply to the MvI, of course, as the MvI is a tool run on the system of the end-user.
-It is the whole of MvK and MvS that should run as if it is running forever.
-
-\section{\axiomScalability}
-The Modelverse should be scalable in terms of computation, memory, number of users, number of models, and the size of individual models.
-Related to the previous axiom, scalability should still be maintained even if the Modelverse is forever running.
-Combined with scalability is performance: even if operations are scalable in terms of complexity, the total time taken by execution should also be as low as possible.
-
-Due to our split in multiple components, we can also split up our scalability requirements over these components:
-\begin{itemize}
-\item The MvI needs to be scalable in performance, of course, though the size of models will be relatively small compared to those processed by the MvK or MvS, because the models being worked on will always be submodels of the \textit{complete} Modelverse model.
-More important for the MvI is the scalability in the size of the model for visualization and presentation.
-Depending on the domain, an implementation might provide further methods for abstraction of components.
-\item The MvK needs to be scalable in performance, again, but mainly in the processing of action code constructs.
-An MvK instance should be easily parallelizable up to the ``1 MvK per user'' threshold.
-Beyond that limit, multiple MvKs would have to cooperatively work on a single block of action code, which is likely to hamper performance.
-An MvK also needs to be scalable in the number of users it is able to handle.
-\item The MvS needs to be scalable in performance, mainly in terms of the size of the complete Modelverse state.
-It is non-trivial to distribute or parallelise, as operations are small and atomic, and all data needs to be shared between users.
-The MvS should therefore be offloaded as much as possible, shifting all computation to the MvK.
-This reduces the functionality of the MvS to that of a simple, but high-performance, data structure library.
-Again, it should be scalable in the number of, possibly simultaneous, requests made, which differs from the total number of users.
-\end{itemize}
-
-\section{\axiomMinimalContent}
-A minimal amount of content should be available in the Modelverse by default.
-The content consists of the models necessary for bootstrapping, but also some default formalisms, such as \textsf{Petri Nets}, \textsf{Parallel DEVS}, \textsf{Statecharts}, \textsf{FTG+PM}~\cite{FTG+PM},~\ldots
-
-For bootstrapping, the Modelverse contains a model of itself, which can then be compiled to a binary, executable outside of the Modelverse, or interpreted by the currently running MvK.
-From this viewpoint, the Modelverse will be similar to Squeak~\cite{Squeak}, which is a Smalltalk interpreter written in Smalltalk.
-
-Apart from formalisms, some models should also be present in the Modelverse.
-These include the Formalism Transformation Graph (FTG), and the corresponding Process Model (PM), forming the FTG+PM.
The FTG model can be automatically constructed from the formalisms detected in the Modelverse.
-Combined with detecting the formalisms, it should also be possible to automatically detect all transformations defined between these formalisms, thus completing the FTG.
-The PM model will be the driving force of the MvK and defines which operations to execute.
-It can therefore be written in an action language, which defines the behaviour of the MvK, and thus the communication with the user.
-
-\section{\axiomModelEverything}
-Every element in the Modelverse needs to be explicitly modelled, using the most appropriate formalism.
-This does not only include the typical elements, such as the models and metamodels, but should also go down to the level of the primitives such as Integer and Float.
-This will allow for stronger model transformations, as they can transform (and access) literally everything.
-
-Ultimately, a model of the Modelverse should also be present in the Modelverse, which closes the loop.
-In the end, a compiled version needs to be used for pragmatic reasons, though this compiled version can be (automatically) compiled from the model that lives in the Modelverse.
-
-Features like debugging, introspection, reflection, and self-modifiability will come from this axiom, as every part of execution is accessible for both reading and writing.
-
-\section{\axiomHumanInteraction}
-All interaction with the human user of the Modelverse needs to be explicitly modelled.
-This includes timed behaviour of the Modelverse (\textit{e.g.}, time-out of requests), or even the complete communication protocol.
-It is actually the MvI which will communicate with the Modelverse, though it will be guided by the user.
-
-It should also be taken into account that the MvK will be (mainly) used by humans, and as such should be usable.
-While most of this will be handled by the MvI, which provides the tool to the user, the fact that a human is behind all of it should be taken into account.
A possible application of this is performance evaluation: a human user has completely different (and likely slower) access patterns than an automated tool.
-The predefined constructs and design of the system should also be usable by humans, specifically those that are non-experts in design of the Modelverse.
-Enforcing strict metamodelling is part of the solution, as this offers users (and tools) a limited scope to worry about~\cite{StrictMetamodelling}.
-
-\section{\axiomTestDriven}
-Development on the Modelverse should happen using the model of the Modelverse, which can be simulated, and placed in a variety of circumstances which are hard to replicate in real-life situations.
-A similar approach was taken by~\cite{DEVSinDEVS}, where a \textsf{DEVS} model was made of a distributed \textsf{DEVS} simulation kernel.
-Modelling allowed them to replicate, among other things, sudden disconnects, high-latency connections, and different network topologies.
-Furthermore, detailed, and perfectly deterministic, performance insights can be gained by the simulation of the model.
-Certainly for parallel execution, this gives us deterministic thread interleavings, which can be crucial to debugging and performance analysis.
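-
-The benefit of deterministic interleavings can be mimicked with a simple cooperative scheduler; the Python sketch below is purely illustrative and assumes that simulated processes are written as generators.
-
-\begin{lstlisting}[language=Python]
-# Minimal sketch of deterministic interleaving: simulated processes are Python
-# generators, stepped in a fixed round-robin order, so every run of the
-# "parallel" execution produces exactly the same trace.
-from collections import deque
-
-def run_deterministically(processes):
-    trace = []
-    queue = deque(enumerate(processes))
-    while queue:
-        pid, proc = queue.popleft()
-        try:
-            trace.append((pid, next(proc)))  # execute one step of this process
-            queue.append((pid, proc))
-        except StopIteration:
-            pass
-    return trace
-
-def worker(name, steps):
-    for i in range(steps):
-        yield f"{name}:{i}"
-
-print(run_deterministically([worker("A", 2), worker("B", 3)]))
-\end{lstlisting}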
-
-Functionality also needs to be checked as exhaustively as possible.
-Certainly for the first axiom, critical bugs should be avoided as much as possible.
-Because the Modelverse will have to communicate with a variety of tools, its interface will also have to be tested for conformance with the specifications.
-
-\section{\axiomMultiView}
-The Modelverse should support different views on the same model.
-Examples include hiding parts of a model, or aggregating different elements into a composite element.
-This gives rise to consistency management, as changes in one view will have to be propagated to all other views.
-
-Multi-view must be handled by all components, as each of them needs to support it.
-The MvI needs to provide operations to use the different views, the MvK needs to update the views and keep them consistent, and the MvS needs to provide these operations efficiently.
-The MvS is least concerned with multi-view, as it sits at a lower level.
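-
-As a minimal illustration of such consistency management (and not the actual Modelverse design), the following Python sketch keeps derived views in sync with a base model through a simple notification mechanism.
-
-\begin{lstlisting}[language=Python]
-# Illustrative only: a base model notifies registered views of every change,
-# and each view re-derives its own (here: filtered) representation.
-
-class BaseModel:
-    def __init__(self):
-        self.elements = {}
-        self.views = []
-
-    def register(self, view):
-        self.views.append(view)
-        view.refresh(self.elements)
-
-    def update(self, key, value):
-        self.elements[key] = value
-        for view in self.views:            # propagate the change to all views
-            view.refresh(self.elements)
-
-class HidingView:
-    """A view that hides all elements whose name starts with '_'."""
-    def refresh(self, elements):
-        self.visible = {k: v for k, v in elements.items()
-                        if not k.startswith("_")}
-
-model, view = BaseModel(), HidingView()
-model.register(view)
-model.update("p1", "Place")
-model.update("_impl_detail", "hidden")
-print(view.visible)                         # {'p1': 'Place'}
-\end{lstlisting}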
-
-\section{\axiomMultiFormalism}
-The Modelverse should support models which combine different formalisms.
-Models should therefore be able to have a metamodel which is the combination of multiple (meta)models.
-Inter-formalism links should also be possible, even if those cannot be typed within the respective formalism.
-While the semantics of such a link depends on the domain, and therefore has to be provided by the user, the Modelverse should allow such links to be created and used.
-Consequently, links between models should also be possible, which can then act as the type for those inter-formalism links.
-
-Relatedly, a single model should be able to have multiple metamodels.
-A model could therefore be typed by a metamodel, but would also have to conform to a larger metamodel, which contains the original metamodel as one of its elements.
-This allows the reuse of models, even if the context surrounding the metamodel has changed.
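-
-The following Python sketch illustrates this idea with hypothetical data: a single model carries several typing mappings, one per metamodel, and an inter-formalism link only receives a type in the combining metamodel.
-
-\begin{lstlisting}[language=Python]
-# Hypothetical sketch: one model, several typing mappings (one per metamodel).
-# The inter-formalism link ('state_a', 'p1') is only typable in the combined
-# metamodel, not in either individual formalism.
-
-model = {
-    "nodes": ["p1", "t1", "state_a"],
-    "edges": [("p1", "t1"), ("state_a", "p1")],
-}
-
-typings = {
-    "PetriNets":   {"p1": "Place", "t1": "Transition", ("p1", "t1"): "P2T"},
-    "Statecharts": {"state_a": "BasicState"},
-    "PN+SC":       {"p1": "Place", "t1": "Transition", "state_a": "BasicState",
-                    ("p1", "t1"): "P2T", ("state_a", "p1"): "MarksPlace"},
-}
-
-def type_of(element, metamodel):
-    """The type of an element under the chosen metamodel, or None."""
-    return typings.get(metamodel, {}).get(element)
-
-print(type_of(("state_a", "p1"), "PetriNets"))  # None: not typable here
-print(type_of(("state_a", "p1"), "PN+SC"))      # 'MarksPlace'
-\end{lstlisting}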
-
-\section{\axiomMultiAbstraction}
-The Modelverse should support systems which are expressed using a set of models, all at a different level of abstraction.
-Consistency management will again have to be handled here.
-
-As was the case for multi-view, each component needs to think about multi-abstraction separately.
-The exception is again the MvS, as it is at a lower level.
-However, it can still (internally) use optimizations, knowing that some requests will be related to multi-abstraction.
-
-\section{\axiomMultiUser}
-The Modelverse should be able to serve multiple Modelverse Interfaces simultaneously.
-A main concern here is fairness between users: no user should have to wait indefinitely for its turn.
-If a single user monopolizes the computational resources at the expense of other users, the code executed by that user will have to be automatically paused, marked as ``low priority'', or terminated.
-
-User Access Control is related to this, as users should be able to configure the Read/Write/Execute status of their models.
-As such, groups of users, with specific privileges, should also be supported.
-
-If their access control allows it, users should also be able to read the state of the execution of other users.
-This will allow for debugging with multiple users: user \textit{A} can execute code, with user \textit{B} being an automated debugging bot, which examines the state of user \textit{A}.
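-
-As a sketch of what such access control could look like (purely illustrative, with hypothetical names), consider Read/Write/Execute permissions per model, grantable to individual users and to groups:
-
-\begin{lstlisting}[language=Python]
-# Illustrative Read/Write/Execute check; not the actual Modelverse design.
-
-class AccessControl:
-    def __init__(self):
-        self.groups = {}        # group name -> set of member users
-        self.permissions = {}   # (model, user-or-group) -> set of 'r'/'w'/'x'
-
-    def add_to_group(self, user, group):
-        self.groups.setdefault(group, set()).add(user)
-
-    def grant(self, model, subject, modes):
-        self.permissions.setdefault((model, subject), set()).update(modes)
-
-    def allowed(self, model, user, mode):
-        if mode in self.permissions.get((model, user), set()):
-            return True
-        return any(mode in self.permissions.get((model, group), set())
-                   for group, members in self.groups.items() if user in members)
-
-acl = AccessControl()
-acl.add_to_group("userB", "debuggers")
-acl.grant("stateOfUserA", "debuggers", {"r"})     # debugging bot may read A's state
-print(acl.allowed("stateOfUserA", "userB", "r"))  # True
-print(acl.allowed("stateOfUserA", "userB", "w"))  # False
-\end{lstlisting}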
-
-\section{\axiomInteroperability}
-Different implementations of the Modelverse and its interface should be possible.
-These implementations should all be able to communicate with each other, as long as they follow the same specification.
-This is one of our main goals for specifying the interfaces between components.
-
-Additionally, because the semantics of action code and its corresponding execution context is defined, different MvKs should be able to continue each other's execution, or interpret the execution context of other tools.
-This is useful for different tools (\textit{e.g.}, a debugger, a compiler, or an interpreter) which might be developed independently, yet are able to understand each other's information.
-
-\section{Interconnections}
-All of these axioms are related in some way, as the graph in Figure~\ref{fig:axioms_overview} shows.
-We now continue by explaining the links between all concepts, using their labels:
-\begin{enumerate}
-\item As the Modelverse will be forever running, there is a need for garbage collection or periodical maintenance to guarantee a decent performance.
-\item Having everything explicitly modelled allows us to create a self-modifiable Modelverse, which helps us with the forever running axiom.
-\item In the presence of multiple users, it is necessary to have the Modelverse running as a service, which implies that it should run forever.
-\item Using the performance tests, combined with the MvK being modelled explicitly, it becomes possible to assess the scalability of the Modelverse algorithms under specific workloads.
-\item Scalability is deeply connected with interoperability, as there is often a trade-off: increasing interoperability will decrease scalability and vice versa.
-\item Having everything modelled explicitly requires the presence of at least a few basic formalisms. Ultimately, it also includes having a model of the Modelverse in the minimal content of the Modelverse.
-\item By modelling everything, we will inevitably also have to model the interaction with the human.
-\item The performance tests will use a performance model of the Modelverse, which is contained in the Modelverse. To that end, the Modelverse will simulate its own performance.
-\item Multi-view requires the ability to model everything, as we will have to model all different views separately.
-\item By modelling everything explicitly, we also need to model links between different formalisms, which is a requirement for multi-formalism models.
-\item Interoperability between different Modelverse components becomes easier if each component is modelled explicitly, as it clearly defines the expected semantics.
-\item Interoperability is an essential part of human interaction, as otherwise the user and the Modelverse would be unable to communicate.
-\item Multi-view and multi-formalism are related due to a view being possibly expressed in a different formalism.
-\item Multi-view and multi-abstraction are related, as different views might be at different levels of abstraction.
-\end{enumerate}
-
-\section{Conclusions}
-In this paper, we described the Modelverse: a self-describable multi-paradigm modelling tool.
-Several axioms were presented, which served as guidelines while making decisions on the specification of the models.
-Our architecture was briefly presented, showing the distinction between the Interface (MvI), Kernel (MvK), and State (MvS).
-
-We presented a model of the Modelverse, which defines how an implementation has to behave.
-The model covers both the way data is represented (in the MvS), and the semantics of its action language constructs (in the MvK).
-
-Concerning data representation, we leave open how the graph could be physically implemented.
-This allows for a variety of implementations, allowing the developer to choose between available technologies.
-And as all implementations will be interoperable, users can try out different implementations and check whether one better matches their goals.
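-
-To make this concrete, the following sketch shows one possible shape of such a storage back-end: a small set of CRUD operations behind which any physical graph representation (in-memory dictionaries, a relational database, a graph database) could sit. Names and signatures are illustrative, not the specified MvS interface.
-
-\begin{lstlisting}[language=Python]
-# Illustrative sketch of a swappable graph-storage back-end; the real MvS
-# interface is defined by the specification, not by this code.
-
-class InMemoryGraphStore:
-    """One possible physical representation: plain Python dictionaries."""
-    def __init__(self):
-        self._next_id = 0
-        self.nodes = {}          # id -> optional primitive value
-        self.edges = {}          # id -> (source_id, target_id)
-
-    def create_node(self, value=None):
-        node_id = self._next_id
-        self._next_id += 1
-        self.nodes[node_id] = value
-        return node_id
-
-    def create_edge(self, source, target):
-        edge_id = self._next_id   # edge ids are larger than their endpoints
-        self._next_id += 1
-        self.edges[edge_id] = (source, target)
-        return edge_id
-
-    def read_value(self, node_id):
-        return self.nodes.get(node_id)
-
-    def delete(self, element_id):
-        self.nodes.pop(element_id, None)
-        self.edges.pop(element_id, None)
-
-store = InMemoryGraphStore()
-a = store.create_node(5)
-b = store.create_node("Integer")
-store.create_edge(a, b)
-\end{lstlisting}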
-
-Concerning the action language, we described the execution context representation, and how language primitives modify this execution context.
-This needs to be explicitly specified if multiple tools need to interoperate on the same piece of execution data.
-For example, an external debugger can now access all internal execution data, as its representation has been specified.
-For performance, we allow implementations to ignore updates to the execution context, allowing for optimized execution of primitive operations.
-This allows users to achieve higher efficiency, for example through compiled functions, although this limits debuggability.
-
-Tools can create and use additional elements in the execution context, which can be interpreted by compatible tools.
-However, tools have no obligation to support all these additional elements.
-An example is additional debugging information, such as tracing information.
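-
-As a rough illustration only (the actual representation is the one specified in the model, not this Python dictionary), an execution context exchanged between tools could look as follows, where the \texttt{trace} entry is an optional extra that other tools are free to ignore:
-
-\begin{lstlisting}[language=Python]
-# Illustrative only: a serialisable execution context with mandatory entries
-# and an optional, tool-specific 'trace' entry that other tools may ignore.
-
-execution_context = {
-    "frame": {
-        "instruction_pointer": 42,       # next action-language statement
-        "symbols": {"x": 3, "result": None},
-        "caller": None,                  # link to the calling frame, if any
-    },
-    # Optional extras: present only if a tool (e.g., a debugger) added them.
-    "trace": [
-        {"ip": 40, "event": "assign", "symbol": "x"},
-        {"ip": 41, "event": "call", "target": "factorial"},
-    ],
-}
-
-def step_summary(ctx):
-    """A tool that understands only the mandatory part simply ignores extras."""
-    frame = ctx["frame"]
-    return f"at instruction {frame['instruction_pointer']}, {len(frame['symbols'])} symbols"
-
-print(step_summary(execution_context))
-\end{lstlisting}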
-
-By splitting up the components of the Modelverse, and requiring that all parts be explicitly modelled, we arrived at different notions of conformance.
-We distinguished between a conformance closer to the physical level (conformance$_\perp$), and a linguistic type of conformance closer to the user level (conformance$_L$).
-Whereas the physical notion allows users to circumvent strict metamodelling, by switching to a graph representation, linguistic conformance allows the MvK, and ultimately the user through the MvI, to reason about the model at a level that is close to the problem domain.
-
-In future work, we will create a reference implementation of this specification.
-Apart from the reference implementation, multiple variations of components will be created, each with a different goal.
-
-After the creation of the reference implementation, the implementation will be scaled up to a distributed and parallel version.
-
-Multiple Modelverse Interfaces will also be created, each with a different kind of user in mind.
-First, a textual HUTN interface will be created.
-Afterwards, a graphical tool will be created.
-
-Our different notions of conformance will also be further extended with the introduction of ontological conformance.
-This would allow us to have three different kinds of mappings: physical, linguistic, and ontological~\cite{BrunoOntology}.
-
-Model management operations (\textit{e.g.}, conformance checking or versioning) frequently act upon both the model and the metamodel, or should be applicable to all models, independently of their metamodel.
-As such, they are often in conflict with the principle of strict metamodelling.
-We try to find a balance between strict metamodelling (\axiomHumanInteraction), and the principle of modelling everything explicitly (\axiomModelEverything).
-
-By introducing multiple definitions of conformance, we can keep strict metamodelling while still implementing such model management functions.
-The basic idea is to allow a single model to conform to multiple metamodels.
-The conceptual graph, representing the model, is interpreted depending on the metamodel being used.
-Examples of metamodels might be a domain-specific metamodel (\textit{e.g.}, a Petri Net metamodel), or a more physically-oriented metamodel (\textit{e.g.}, a Graph metamodel).
-Depending on the interpretation given to the levels, different level hierarchies are constructed.
-It is these level hierarchies that impose the restrictions on strict metamodelling.
-Fig.~\ref{fig:conformance} presents some different notions of conformance that can be devised on the Modelverse.
-While the number of conformance relations can vary, each model will have a mapping to the PTM, which is required to physically represent the model.
-And it will have a mapping to the Linguistic Type Model (LTM), using conformance$_\perp$.
-This relates to \axiomMultiView, as a single model can be seen from different views, and relates to \axiomInteroperability, as it allows for the uniform representation of all data.
-
-\section{Graph conformance}
-As all our data is (conceptually) represented using a graph, the graph instance can also be interpreted as a linguistic instance of a graph metamodel.
-Because all defined CRUD operations constrain the result to a well-formed graph, all models in the Modelverse conform to this metamodel by construction.
-Every model represented in the Modelverse is conceptually representable as a graph.
-Knowing this, the complete Modelverse can be flattened to a single level, which conforms to the graph formalism.
-Within this single level, all operations and links between elements are non-level crossing, and are therefore correctly typed (by the graph metamodel).
-Note, however, that this flattening cannot guarantee conformance to any linguistic metamodel apart from the graph metamodel.
-As such, all multi-formalism models can also be represented using this single metamodel, thus partially addressing \axiomMultiFormalism and \axiomMultiAbstraction.
-Multiple users (\axiomMultiUser) can also use this view to collaborate on a single model, while having different interpretations of it.
-
-\section{Linguistic conformance}
-Finally, there is the linguistic conformance between the model and the metamodel, which is necessary to complete the support for our axioms (\axiomHumanInteraction).
-It is the highest level, and offers the most features to the user, but is also the most fragile.
-Linguistic conformance cannot be guaranteed by construction, and requires continuous checking to make sure it holds for the desired model and metamodel.
-In contrast, conformance$_\perp$ was guaranteed by design.
-
-Because a conformance$_L$ view is only a specific view on a model, a single model can conform to multiple metamodels.
-The function $conforms : \mathcal{G} \times \mathcal{G} \times 2^{IDS \rightarrow IDS} \rightarrow \mathbb{B}$ is defined to determine linguistic conformance, and can be implemented by the user.
-It takes three parameters: two graphs --- a model and a metamodel, both subgraphs of the MvS graph --- and a mapping between them.
-This mapping encapsulates all typing information, thus typing is completely separated from the model and metamodels.
-Since multiple mappings can be stored, multiple typing relations are supported.
-During syntax-directed editing, a mapping will be constructed (and used) with the information provided by the user.
-Retyping can be done by modifying the mapping, and checking conformance afterwards.
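-
-A minimal sketch of this separation is given below; the names are hypothetical, but it shows that the typing information lives in a mapping next to the model, and that retyping only touches that mapping.
-
-\begin{lstlisting}[language=Python]
-# Illustrative only: conformance takes the model, the metamodel and a separate
-# typing mapping, so retyping a model never touches the model itself.
-
-def conforms(model, metamodel, mapping):
-    """Placeholder standing in for a user-provided conformance predicate."""
-    return all(elem in model and typ in metamodel
-               for elem, typ in mapping.items())
-
-model = {"n1", "n2", "e1"}
-petrinet_mm = {"Place", "Transition", "P2T"}
-
-typing_a = {"n1": "Place", "n2": "Transition", "e1": "P2T"}
-typing_b = dict(typing_a, n2="Place")      # retyping: only the mapping changes
-
-print(conforms(model, petrinet_mm, typing_a))  # True
-print(conforms(model, petrinet_mm, typing_b))  # True (structure not checked here)
-\end{lstlisting}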
-
-We define a possible conformance$_L$ relation, to be seen as an example of our approach.
-This relation is based on the top-level model: the Model at the MetaCircular Level (MMCL).
-In this model, we have three basic elements: a \textit{Class} (mapped to nodes), an \textit{Association} (connecting classes; mapped to edges), and \textit{Inheritance} between classes (mapped to the union type).
-Using \textit{Inheritance}, the \textit{Association} becomes a special kind of \textit{Class} in our MMCL.
-We call our example conformance relation \textit{conformance$_\alpha$} to indicate that it is one of many possible implementations.
-
-First, we define a helper relation capturing the transitive closure of inheritance links, where $A \leq B$ means that $A$ is a (possibly indirect) subclass of $B$.
-$A \xrightarrow{{} \inheritancearrow {}} B$ means that there is an association, typed by the \textit{Inheritance} link, from $A$ to $B$.
-
-\begin{align*}
-A \xrightarrow{{} \inheritancearrow {}} B &\Rightarrow A \leq B \\
-A \leq C \land C \leq B &\Rightarrow A \leq B \\
-A == B &\Rightarrow A \leq B
-\end{align*}
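-
-Operationally, the same closure can be computed as in the short Python sketch below, which merely restates the three rules above.
-
-\begin{lstlisting}[language=Python]
-# Compute A <= B: the reflexive-transitive closure of direct inheritance links.
-
-def subclasses_of(direct):
-    """direct: set of (A, B) pairs meaning 'A directly inherits from B'.
-    Returns the <= relation as a set of pairs."""
-    leq = set(direct)
-    leq |= {(a, a) for pair in direct for a in pair}   # A == B  =>  A <= B
-    changed = True
-    while changed:                                     # A<=C and C<=B => A<=B
-        changed = False
-        for (a, c) in list(leq):
-            for (c2, b) in list(leq):
-                if c == c2 and (a, b) not in leq:
-                    leq.add((a, b))
-                    changed = True
-    return leq
-
-print(subclasses_of({("Association", "Class"), ("Attribute_", "Class")}))
-\end{lstlisting}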
-
-A conformance relation for the primitive elements is defined, constraining the provided map.
-\begin{gather*}
-conforms(x, y, m) = \left\lbrace
-\begin{array}{ll}
-N_T(N_V(x)) == N_V(y) & if~x, y \in dom(N_V) \land (x,y) \in m \\ % Primitives
-True & if~x, y \in N \land x, y \not\in dom(N_V) \land (x,y) \in m \\ % Nodes
-conforms(x_s, y_s, m) \land conforms(x_t, y_t, m) & if~(x, y), (x_s, y_s), (x_t, y_t) \in m \land (x_s, x, x_t), (y_s, y, y_t) \in E \\ % Edges
-False & else % Else
-\end{array}
-\right.
-\end{gather*}
-
-The first line is for nodes with a primitive value: a node $x$ conforms to a node $y$ if both nodes have a value, with the type of the value of node $x$ being the value of node $y$.
-The second line is for nodes without a primitive value: a node $x$ conforms to a node $y$ if such a mapping exists in the provided mapping.
-Neither node is allowed to have a primitive value.
-The third line is for edges: an edge $x$ conforms to an edge $y$ if their sources and targets conform to each other.
-It is thus basically a recursive call.
-However, there is no possibility for an infinite loop, because of our restriction on the IDs of edges: the source and target ID are always smaller than the ID of the edge.
-The final line is for all other cases (\textit{e.g.}, comparing nodes to edges, or primitives to non-primitives), in which case there is no conformance possible.
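-
-The following Python sketch mirrors this case analysis; the graph encoding it uses (node values in a dictionary, edges as triples) is a simplification of the MvS representation and is meant purely as an illustration.
-
-\begin{lstlisting}[language=Python]
-# Illustrative Python rendering of the four conformance cases; nodes carry an
-# optional primitive value, edges are (source, edge_id, target) triples.
-
-def type_name(value):
-    return type(value).__name__          # stand-in for N_T
-
-def conforms(x, y, mapping, values, edges):
-    if (x, y) not in mapping:
-        return False
-    if x in values and y in values:      # case 1: primitive-valued nodes
-        return type_name(values[x]) == values[y]
-    edge_x = next(((s, t) for s, e, t in edges if e == x), None)
-    edge_y = next(((s, t) for s, e, t in edges if e == y), None)
-    if edge_x and edge_y:                # case 3: edges, recurse on endpoints
-        return (conforms(edge_x[0], edge_y[0], mapping, values, edges) and
-                conforms(edge_x[1], edge_y[1], mapping, values, edges))
-    if edge_x is None and edge_y is None and x not in values and y not in values:
-        return True                      # case 2: plain nodes without a value
-    return False                         # case 4: everything else
-
-values = {"v": 5, "T": "int"}            # node v holds 5; node T is the type 'int'
-print(conforms("v", "T", {("v", "T")}, values, set()))   # True
-\end{lstlisting}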
-
-Finally, $conforms_{\alpha,G}: \mathcal{G} \times \mathcal{G} \times 2^{IDS \rightarrow IDS} \rightarrow \mathbb{B}$ is the actual conformance function being called.
-It tries to find a mapping between the specified model and metamodel, for which the conforms function holds.
-
-\begin{gather*}
-conforms_{\alpha,G}(M, MM, map) = True \\
-\Leftrightarrow \\
-map' = \lbrace (a, b) \; \vert \; a \in IDS_M, b \in IDS_{MM} \rbrace \\
-\forall n \in N_M : \exists n' \in N_{MM} . conforms(n, n', map') \\
-\forall e \in IDS_{E, M} : \exists e' \in IDS_{E, MM} . conforms(e, e', map')
-\end{gather*}
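-
-Operationally, this search can be sketched as a brute-force enumeration of candidate mappings, as below; the element-level predicate (such as the conforms sketch above) is passed in as a parameter, and the enumeration is of course only feasible for very small models.
-
-\begin{lstlisting}[language=Python]
-# Illustrative brute-force search for a typing mapping under which every model
-# element conforms to some metamodel element. Exponential; tiny examples only.
-from itertools import product
-
-def conforms_alpha_G(model_elems, mm_elems, element_conforms):
-    """Try every total mapping from model elements to metamodel elements and
-    return the first one under which all elements conform, or None."""
-    model_elems = list(model_elems)
-    for images in product(mm_elems, repeat=len(model_elems)):
-        mapping = dict(zip(model_elems, images))
-        if all(element_conforms(x, y, mapping) for x, y in mapping.items()):
-            return mapping
-    return None
-
-# Toy usage with a trivial element-level predicate.
-found = conforms_alpha_G(
-    ["n1", "n2"], ["Place", "Transition"],
-    lambda x, y, m: (x, y) != ("n2", "Place"),   # hypothetical constraint
-)
-print(found)   # {'n1': 'Place', 'n2': 'Transition'}
-\end{lstlisting}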
-\begin{lstlisting}[caption=HUTN$_\perp$ construction of the MMCL,float,label=listing:mmcl0]
-Node Class()
-Value Type(Type)
-Value String(String)
-Edge Attribute_ (Class, Type)
-Edge AttributeAttrs (Attribute_, Type)
-Edge Attribute (Class, Type)
-Edge Name (Attribute, String)
-Edge Association (Class, Class)
-Edge Inheritance (Class, Class)
-Edge inherit_association (Association, Class)
-Edge inherit_attribute (Attribute_, Class)
-\end{lstlisting}
-
-\begin{lstlisting}[caption=HUTN$_L$ construction of the MMCL,float,label=listing:mmcl1]
-Class Class()
-Type Type(Type)
-Type String(String)
-Attribute_ Attribute_ (Class, Type)
-Attribute_ AttributeAttrs (Attribute_, Type)
-Attribute_ Attribute (Class, Type)
-AttributeAttrs Name (Attribute, String)
-Association Association (Class, Class)
-Association Inheritance (Class, Class)
-Inheritance (Association, Class)
-Inheritance (Attribute_, Class)
-\end{lstlisting}
-
-We present an encoding of our MMCL, in Listing~\ref{listing:mmcl0}, using the HUTN language respecting conformance$_\perp$.
-The action code in this language is translated to an abstract syntax graph in the Modelverse by a HUTN compiler.
-The HUTN compiler lives in a Modelverse Interface (MvI).
-
-Using this MMCL, we can now re-encode it, as in Listing~\ref{listing:mmcl1}, now using the HUTN language with conformance$_L$.
-Alternatively, it is possible to directly use the definition in Listing~\ref{listing:mmcl1}, as elements can directly be typed by themselves in the Modelverse.
-
-Finally, we encode our conformance checking algorithm, in Listing~\ref{listing:conforms}, using the HUTN action language.
-With this example we show
-(1) an example of modelling, as the action code is a model, and thus an element of the Modelverse;
-(2) an example of our action code;
-(3) the possibility for reflection and introspection, as the conformance check can also run on itself, to check whether or not it conforms to some kind of metamodel; and
-(4) the possibility for metamodelling, as type hierarchies can be built using the provided conformance function.