- \section{Explicit Type/Instance Relations}
- \label{sec:explicit_conformance}
- We now present our solution to the previously identified problems.
- As all problems were caused by the hardcoded algorithms, our approach explicitly models the type mapping, the instantiation algorithm, and the conformance check algorithm.
- For this, we introduce (1) a semantics-free representation of models (and subsequently metamodels), (2) an explicit type mapping model, (3) an explicit instantiation algorithm and corresponding (4) conformance algorithm in an executable modelling language.
- We also show how this approach naturally allows for multiple metamodels.
- Finally, we present the advantages and disadvantages of this approach, when compared to hardcoded type/instance relations.
- We use a simple Petri net model and corresponding metamodel, shown in Figure~\ref{fig:pn_model} and Figure~\ref{fig:pn_metamodel}, respectively, to illustrate our approach.
- \begin{figure}
- \centering
- \includegraphics[width=0.5\columnwidth]{figures/pn_model.pdf}
- \caption{Petri net model (concrete syntax for readability) that will be encoded.}
- \label{fig:pn_model}
- \end{figure}
- \begin{figure}
- \centering
- \includegraphics[width=0.8\columnwidth]{figures/pn_metamodel.pdf}
- \caption{Metamodel (concrete syntax for readability) of Figure~\ref{fig:pn_model}.}
- \label{fig:pn_metamodel}
- \end{figure}
- \subsection{Models}
- Since all semantics need to be shifted to the type/instance relation, the model representation becomes essentially semantics-free.
- As there are no longer any attributes with a special purpose (which would thus always have to be present), nor any special kinds of elements (such as inheritance links), the representation boils down to a simple graph.
- Because now the complete structure of the model can be described only through nodes and edges, existing graph databases can be reused.
- Apart from the applicability of previously defined algorithms, this lowers the burden of users trying to understand how and what data is stored.
- As there is also no longer any distinction between the semantics of models (cannot be instantiated) and metamodels (can be instantiated), both are reduced to the same representation.
- This further unifies the core implementation of a (meta-)modelling tool, and avoids the problems previously identified by storing a model twice~\cite{ConceptsForComparing}.
- For our example, Figure~\ref{fig:pn_model_bottom} shows how Figure~\ref{fig:pn_model} is represented in the Modelverse as a graph.
- The graph contains only structural information on what the instance will look like, and does not contain any attributes that have semantic meaning in the context of the type/instance relation.
- Note also that the instance model does not contain names on the links, but only their values.
- It is the metamodel, and corresponding typing relation, which gives the name to the attributes, making them identifiable.
- This was done for several reasons, most importantly because it allows the graph structure to be more restrictive, as the metamodel explicitly stores the names of the allowable attributes.
- Additionally, this is very similar to how most general purpose object-oriented programming languages work.
- Furthermore, it makes the stored data more independent of the names used in the metamodel, such that, for example, different names for the same attribute can be used interchangeably (\textit{e.g.}, due to translation or different terminology).
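- To make this concrete, the graph of Figure~\ref{fig:pn_model_bottom} can be sketched in plain data structures; the sketch below is a hypothetical encoding (not the Modelverse's actual storage format), with all identifiers chosen for illustration:

```python
# A minimal sketch, NOT the Modelverse's actual storage format: a
# semantics-free model is just nodes and edges. All identifiers
# ("p1", "t1", "v_tokens") are illustrative.

# Nodes are opaque; value nodes carry a payload but no attribute name.
nodes = {
    "p1": None,     # a place (its type lives in a separate type mapping)
    "p2": None,     # another place
    "t1": None,     # a transition
    "v_tokens": 1,  # anonymous value node holding an attribute value
}

# Edges are plain (source, target) pairs; even the "tokens" attribute
# link is an ordinary, unnamed edge -- the metamodel supplies its name.
edges = [
    ("p1", "t1"),        # arc from place to transition
    ("t1", "p2"),        # arc from transition to place
    ("p1", "v_tokens"),  # attribute edge, nameless in the model itself
]
```

- Note how nothing in this structure distinguishes the attribute edge from the arcs: that distinction only exists once a type mapping is consulted.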
- \begin{figure}
- \centering
- \includegraphics[width=0.5\columnwidth]{figures/pn_model_bottom.pdf}
- \caption{Petri net model representation in the Modelverse.}
- \label{fig:pn_model_bottom}
- \end{figure}
- \subsection{Type Mapping}
- To explicitly represent the typing relation, it needs to be represented as a model.
- Typing links are not to be included within the model itself (\textit{e.g.}, as some kind of association) for three reasons:
- \begin{enumerate}
- \item Distinguishing between normal links (instances of associations) and typing links becomes hard.
- This bears significant similarities to the problem we were trying to avoid with inheritance links being of a special kind.
- As such, direct type links need to be avoided, as they would otherwise qualify as normal links, unless, of course, our instantiation and conformance algorithms can cope with this.
- \item Only a single type is possible if it is linked directly to the elements.
- While it would be possible to create multiple outgoing type links from a single element, these links would be ambiguous, as it is unknown which mapping each belongs to.
- Type links are frequently interrelated: if there are two elements, each with two possible types, a total of four different combinations becomes possible.
- As the intended combination depends on the situation, no general assumptions can be made, and this should therefore be avoided.
- \item Type links should ideally be stored in a separate model, such that they can have their own constrained metamodel.
- If these links were part of the original model (as outgoing links), they would need to be part of the metamodel of the model.
- With type mappings as separate models, they can have a simple metamodel, which largely resembles a kind of dictionary.
- \end{enumerate}
- To circumvent these problems, we have opted to create separate type mapping models.
- These models are rather similar to a dictionary, where the keys are the model elements, and the values are their types.
- And while they resemble a dictionary, they are explicitly modelled, and thus user-accessible.
- Users can thus open this type mapping just like any other model, and modify it if desired.
- In particular, model transformations can now also query the type of elements, or create elements of specific types, by manually modifying this dictionary.
- Figure~\ref{fig:pn_typing_bottom} presents an excerpt of a possible typing relation between the Petri net model and metamodel.
- Most importantly, the typing relation can be accessed as a single node (root node of all typing links), making it easy to use a different one.
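- The dictionary-like nature of such a type mapping can be sketched as follows; the encoding and names are illustrative assumptions, not the Modelverse's actual representation:

```python
# Hypothetical sketch of a type mapping as a dictionary-like model:
# keys are model elements, values are their types in the metamodel.
# All names are illustrative, not the Modelverse's actual encoding.
type_mapping = {
    "p1": "Place",
    "p2": "Place",
    "t1": "Transition",
    ("p1", "t1"): "P2T",                 # arcs are typed as well
    ("t1", "p2"): "T2P",
    ("p1", "v_tokens"): "Place_tokens",  # this link names the attribute
}

# Because the mapping is itself a model, a transformation can query the
# type of elements, or retype them, without any dedicated API:
places = sorted(e for e, t in type_mapping.items() if t == "Place")
```

- Observe that the attribute edge only acquires its name (\textit{tokens}) through the mapping, consistent with the nameless links in the model representation.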
- \begin{figure}
- \centering
- \includegraphics[width=0.8\columnwidth]{figures/pn_typing_bottom}
- \caption{Petri net typing relation with dashed lines (excerpt).}
- \label{fig:pn_typing_bottom}
- \end{figure}
- \subsection{Instantiation Algorithm}
- When an instance is made of a metamodel, several things need to happen.
- Most importantly, the instance itself needs to be created.
- Furthermore, however, the instance needs to be registered in the type mapping, effectively updating two separate models.
- Also, the operation needs to be checked for validity: is it even possible to perform this operation?
- Updating the type mapping is intimately related to the representation of the type mapping, which we have, up to now, presented as if it were inherent to our approach.
- Our approach, however, admits any possible relation.
- As such, a type mapping in itself is not mandatory, but merely an artifact of our example type/instance relation.
- Type/instance relations without a type mapping, or with a very different one, are equally possible.
- Therefore, information on the type mapping, such as its representation and encoding, is necessarily encoded in the instantiation algorithm.
- Similarly, information on the semantics of additional constraints is necessary.
- The instantiation algorithm needs to be aware of the things that need to be checked when a new instance is created, such as potency, cardinality, multiplicity, and so on.
- Attributes can also be read out from the metamodel (and possibly supertypes of the found type), and presented to the user.
- Users would then, instead of manually specifying the name of the elements to create, be provided with a list they need to fill in, similar to AToMPM~\cite{AToMPM}.
- All these operations need to be explicitly defined in the instantiation algorithm, where they are available for users to look up or modify.
- Depending on the front-end the user uses, different instantiation algorithms might be ideal.
- Indeed, a textual front-end that runs in batch should not prompt the user, whereas an interactive visual modelling environment should prompt users if information is missing.
- For our example, this means that the instantiation algorithm will only ask users which element they want to instantiate (\textit{e.g.}, \textit{Place}), and give it (optionally) a name (\textit{e.g.}, \textit{p1}) for later reference in the model.
- Afterwards, users can specify attributes to instantiate (\textit{e.g.}, \textit{tokens}), for which the algorithm will automatically resolve the types from the metamodel and subsequently check for conformance to the required type.
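- The essence of such an instantiation step can be sketched as follows, assuming the dictionary-style type mapping of our running example; all function and field names are hypothetical:

```python
# Sketch of an explicit instantiation step, assuming the example
# type/instance relation of this section: a model graph plus a separate
# type-mapping dictionary. Function and field names are illustrative.
def instantiate(model, type_mapping, metamodel, type_name, name):
    """Create an instance of `type_name` named `name` and register it."""
    # Validity check: can this operation be performed at all?
    if type_name not in metamodel["nodes"]:
        raise ValueError("unknown type: " + type_name)
    if name in model["nodes"]:
        raise ValueError("name already in use: " + name)
    # 1. Create the instance in the model itself...
    model["nodes"][name] = None
    # 2. ...and register it in the separate type mapping.
    type_mapping[name] = type_name
    return name

metamodel = {"nodes": {"Place": None, "Transition": None}}
model = {"nodes": {}, "edges": []}
type_mapping = {}

instantiate(model, type_mapping, metamodel, "Place", "p1")
```

- The key point is that both updates happen in one explicitly modelled function, which a user or front-end may replace (\textit{e.g.}, by a variant that also prompts for attributes).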
- \subsection{Conformance Algorithm}
- Finally, an algorithm needs to be devised which takes a previously defined model, metamodel, and mapping between them, and determines whether or not the model conforms.
- This algorithm will, for each element in the model, check whether the type mapping points to an element in the specified metamodel.
- Additional constraints, such as potency and cardinalities, also need to be checked.
- For each edge, the source and target are checked: the source (target) of the model needs to be an instance of the source (target) of the edge in the metamodel.
- To determine whether an element is an instance of another element, we consult the type mapping.
- In addition to ``direct types'', an element may be typed by a subtype of the expected type.
- Inheritance links are therefore followed during the conformance check, to determine the relation between the found type and the expected type.
- Recall that there was no longer any way of identifying the inheritance link at the physical level, as it was just another association.
- For this reason, this specific conformance algorithm takes an additional parameter: the \textit{inheritance} association.
- This is the \textit{type} of each inheritance link, of which the instances are the actual inheritance links.
- The conformance algorithm therefore only takes a single inheritance association as parameter, and can automatically find all of its instances, the actual inheritance links.
- Inheritance semantics is provided by the conformance algorithm, which knows that following inheritance links is allowed when finding instances.
- The conformance algorithm also searches for constraints to execute, multiplicities and cardinalities to check, and potencies to update.
- All semantics is now explicitly modelled in the conformance algorithm, resulting in several degrees of freedom.
- For example, it becomes possible for users to encode any of these restrictions wherever they seem best suited.
- We avoid the need for every model to have a mandatory attribute, like \textit{potency}, as this is up to the conformance algorithm to decide.
- Other alternatives are equally valid, as long as they are explicitly modelled in this algorithm.
- Our approach offers much more flexibility to the users, and allows for models better suited for the problems they are trying to solve.
- For our example, this means that the conformance algorithm will read out all elements of the model and check whether they are typed correctly: is the \textit{tokens} attribute indeed an \textit{integer}, is there no edge going directly between two \textit{Place}s, etc.
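- The core of such a conformance check can be sketched as follows, with the inheritance association passed in explicitly; the encodings (sets of type names, a child-to-parent dictionary for inheritance) are illustrative assumptions, and constraints, multiplicities, and potency are omitted:

```python
# Sketch of the conformance check described above; the inheritance
# association is passed explicitly, here encoded as a child-to-parent
# dictionary. All encodings and names are illustrative assumptions.
def is_instance_of(element, expected, type_mapping, inherits_from):
    """Follow inheritance links from the mapped type up to `expected`."""
    t = type_mapping.get(element)
    while t is not None:
        if t == expected:
            return True
        t = inherits_from.get(t)  # walk one inheritance link upward
    return False

def conforms(model, metamodel, type_mapping, inherits_from):
    # Every element must be mapped onto an element of the metamodel.
    for node in model["nodes"]:
        if type_mapping.get(node) not in metamodel["nodes"]:
            return False
    # Each edge's source (target) must be an instance of the source
    # (target) of its type in the metamodel.
    for src, tgt in model["edges"]:
        edge_type = type_mapping.get((src, tgt))
        if edge_type not in metamodel["edges"]:
            return False
        mm_src, mm_tgt = metamodel["edges"][edge_type]
        if not is_instance_of(src, mm_src, type_mapping, inherits_from):
            return False
        if not is_instance_of(tgt, mm_tgt, type_mapping, inherits_from):
            return False
    return True  # constraints, multiplicities, potency: omitted here

metamodel = {"nodes": {"Place", "Transition"},
             "edges": {"P2T": ("Place", "Transition")}}
model = {"nodes": ["p1", "t1"], "edges": [("p1", "t1")]}
type_mapping = {"p1": "Place", "t1": "Transition", ("p1", "t1"): "P2T"}

ok = conforms(model, metamodel, type_mapping, inherits_from={})
```

- Since the inheritance association is an ordinary parameter, swapping it (or the whole function) changes the conformance semantics without touching the tool's core.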
- \subsection{Multiple Metamodels}
- With all the pieces in place, we can now discuss the possibility of multiple metamodels.
- As each aspect of the type/instance relation is explicitly modelled, and thus accessible by the user, models can conform in different ways.
- For example, users might provide a different type mapping, a different metamodel, or a different conformance algorithm altogether.
- We will only provide a simple example, which was already hinted at, where a single Petri net model conforms to two distinct metamodels: normal place/transition nets and place/transition nets with inhibitor arcs.
- The conformance algorithm, when passed the two different type mappings and their corresponding metamodels, will state that a Petri net without inhibitor arcs conforms to both metamodels.
- This is shown in Figure~\ref{fig:pn_multi_conformance}.
- A Petri net containing at least one inhibitor arc will only conform to the metamodel with inhibitor arcs.
- Further differences between the metamodels are possible (even structurally, by using a different conformance check), though these are not shown here to prevent confusion.
- One of the remaining problems is one of consistency: both type mappings are updated independently, and must also be maintained separately.
- As a result, if users add an additional place to the model, they would have to update both type mappings.
- If a type mapping was not, or incorrectly, updated, subsequent conformance checks will fail until the problem is resolved.
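- The idea of one model conforming to several metamodels through separate type mappings can be sketched with a deliberately simplified conformance predicate; all names are illustrative:

```python
# Illustrative sketch of one model conforming to two metamodels via two
# separate type mappings. `conforms_to` is a deliberately simplified
# conformance predicate; all names are assumptions for this example.
def conforms_to(model, metamodel, type_mapping):
    return all(type_mapping.get(el) in metamodel for el in model)

model = ["p1", "t1"]                        # a net without inhibitor arcs
mm_plain = {"Place", "Transition"}          # place/transition nets
mm_inhib = {"Place", "Transition", "InhibitorArc"}  # ...with inhibitors

# Two independent type mappings, one per metamodel, both maintained
# separately -- hence the consistency problem noted above.
tm_plain = {"p1": "Place", "t1": "Transition"}
tm_inhib = {"p1": "Place", "t1": "Transition"}

both = (conforms_to(model, mm_plain, tm_plain)
        and conforms_to(model, mm_inhib, tm_inhib))
```

- Adding an element typed \textit{InhibitorArc} would break conformance to the plain metamodel while preserving it for the other, mirroring Figure~\ref{fig:pn_multi_conformance}.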
- \begin{figure}
- \centering
- \includegraphics[width=0.8\columnwidth]{figures/pn_multi_conformance}
- \caption{A single Petri net model conforming to two different metamodels simultaneously. The top metamodel is without inhibitor arcs, whereas the bottom one has an inhibitor arc. Names of attributes also vary slightly.}
- \label{fig:pn_multi_conformance}
- \end{figure}
- \subsection{Advantages and Disadvantages}
- As with any approach, ours implies some disadvantages, mainly related to usability:
- \begin{enumerate}
- \item \textit{Explicit management}.
- While the explicit modelling of the type/instance relation has its advantages, users might be bothered with the additional complexity.
- We believe, however, that this complexity can be hidden from users who do not require it (novice users), while remaining accessible to advanced users.
- Whereas other tools simply offer a built-in instantiation and conformance function, users now have full control over this, since it is just another function.
- This does not necessarily need to be a disadvantage: the conformance function is considered as any other function, and can also be used as such.
- Use of specific APIs is thus avoided, and users need less documentation on what exactly this function is and how it is implemented: they are already familiar with how to use it (just like any other function), and its semantics can easily be seen by opening the relevant model (instead of wading through the source code of the tool).
- \item \textit{Managing multiple concepts of type/instance relations}.
- The concept of allowing for multiple types of type/instance relations is an attractive one, but can also stand in the way of users.
- We must acknowledge that most users will probably never need to manage the use of different kinds of type/instance relations.
- This is a trade-off: do we limit the functionality of our tool, such that it is easy to use for all users, or do we open all aspects of the tool, potentially confusing many users?
- Again, we believe that an adequate interface will help users in managing this complexity.
- \item \textit{Tool is no longer a ``real'' metamodelling tool}.
- What generally identifies a tool as a metamodelling tool, is its support for instantiation and conformance, and optionally support for model management operations.
- By removing all these aspects from the core of the tool and shifting them a level higher, the tool essentially no longer has built-in support for modelling.
- Instead, tools become simple model interpreters, which will have to interpret the provided instantiation and conformance algorithms to become capable of modelling.
- While there is the danger of becoming too general, this clearly separates the core of the tool from its additional functionality.
- \item \textit{Efficiency}.
- Up to now, efficiency has not been a significant criterion when evaluating metamodelling tools.
- While it is true that some are more efficient than others, certainly for extremely large models, most tools cope reasonably well with small to medium-sized models.
- Interpretation of one of the core functionalities of the tool, however, is detrimental for performance.
- With naive implementations, tools become too slow to use, even for small models.
- Currently, we see this as one of the primary limitations of our approach, as it likely necessitates substantial tuning of model interpretation performance.
- Nonetheless, we find this very similar to Smalltalk~\cite{Smalltalk}, where most functions are also provided as library functions, written again in Smalltalk.
- While Smalltalk itself was not efficient, the Squeak~\cite{Squeak} environment proved that high speedups are possible, even for this kind of language.
- \end{enumerate}
- We believe that these disadvantages can be dealt with by increasing usability in general.
- A clear syntax greatly aids users in managing this additional complexity.
- Furthermore, sane defaults should be provided, such that users can hide the complexity if they do not need it.
- It should be possible to offer users a simple syntax, in which defaults are used, and a more advanced syntax, in which users have full control.
- From the point of view of efficiency, efficient model interpreters are required, possibly through the use of Just-In-Time compilation (JIT).
- Previous interpreters have seen significant speedups through the use of JIT compilation, such as Squeak~\cite{Squeak} for Smalltalk~\cite{Smalltalk}, a language well-known for its philosophy of making every aspect explicit.
- Compilation of the model might also be possible, such that efficiency becomes comparable to that of hand-crafted code.
- Should these disadvantages be overcome, our approach offers several advantages.
- These advantages are related to the previously identified problems of a hardcoded type/instance relation:
- \begin{enumerate}
- \item \textit{Explicitly modelled semantics}.
- By explicitly modelling the semantics of the tool, it becomes independent of the implementation platform and navigable to users.
- Users no longer need to consult separate tool documentation to know the semantics: it is explicitly browsable, just like any other model.
- Furthermore, it becomes manageable like any other function, making it susceptible to model transformations or modification.
- Additionally, users only need to know one language: the modelling language.
- Previous tools with an explicitly modelled action language, still required users to work with their implementation language to extend the tool (\textit{e.g.}, through plug-ins or extension points).
- \item \textit{Dynamic type/instance relation}.
- The algorithms and type mapping not only become visible to users from within the tool itself, but they can also be modified dynamically.
- As the function is interpreted, changes are immediately visible to users, stimulating rapid prototyping.
- We do, however, acknowledge that there should be some restriction to this high degree of freedom, in order to prevent absurd situations.
- \item \textit{Use of pre-existing libraries}.
- By removing the need for special elements at the implementation level, well-known data structures can be used, such as graphs.
- There is no longer any need to implement specific kinds of graphs (\textit{e.g.}, with special ``inheritance'' links, or even Typed Attributed Graphs), as every link, even the inheritance and type link, will be an ordinary edge.
- Instead of through the model database, semantics is given by the interpretation of the algorithms.
- Many tools and algorithms exist for managing extremely large graphs, forming a research domain on its own.
- All these tools and algorithms can be used essentially as-is.
- Only very minimal wrappers are required, to have the tools communicate with each other, though these wrappers hold no conversion logic, nor do they alter the semantics of the stored graph.
- \item \textit{Multiple possible metametamodels}.
- As there are no longer any ``special'' metametamodels, with hardcoded parts in the core of the tool, any model can potentially become a metametamodel, or even meta-circular.
- Each model that is sufficiently expressive can serve as the new root of a modelling hierarchy.
- The instantiation and conformance algorithms still need references to the model (\textit{e.g.}, to know about the inheritance relation), but it can be fully customized.
- So while some changes are still required, these changes stay within the tool, and do not force the metametamodeller to leave the tool even once.
- Multiple dimensions to conformance exist, as identified in the OCA~\cite{OCA}.
- In this paper, we limit ourselves to the linguistic dimension, but the need for multiple metamodels is even stronger in the ontological dimension, where it relates back to properties a given model satisfies~\cite{Bruno,Ken}.
- \item \textit{Full support for multi-level modelling}.
- Taking the previous advantage a step further, any modelling hierarchy becomes possible, as long as the conformance relation is made to cope with it.
- All attributes that influence conformance, such as potency and cardinality, become explicitly modelled at each level of a multi-level hierarchy.
- Multiple kinds of instantiation semantics can be implemented, for example using potency~\cite{potency}, or the unified version which also applies to edges~\cite{UnifyingApproach}.
- \item \textit{Flexible types}.
- Type mappings are also explicitly stored as models, making it possible to use them like any other.
- Possible use cases of this are to query the types of elements, or to modify the types at runtime.
- Should types be coded somewhere in the core of the tool, this becomes impossible without the use of a dedicated API.
- Similar to our arguments for the explicit modelling of semantics, reusing existing interfaces is more familiar to users than creating new ones.
- \item \textit{Multiple types}.
- In addition to making it possible to have multiple possible metamodels, or even metametamodels, a single element can be typed by several different elements of potentially different metamodels.
- These type relations are stored in separate type mappings, such that the user can decide which typing relation to use for a specific operation.
- \end{enumerate}