
\section{Type/Instance Relation}
\label{sec:conformance}
A language consists of four main components: an abstract syntax, a concrete syntax, a semantic domain, and a semantic mapping.
Of chief interest to this paper is the abstract syntax, which defines the set of allowed constructs in the language.
Generally, this set is described through the use of a metamodel, defining the allowed types, their interconnections, multiplicities, and cardinalities.
The metamodel, however, is limited to structural constraints (\textit{e.g.}, in Petri nets a place can have outgoing links only to transitions), and does not consider semantic constraints (\textit{e.g.}, in Petri nets a place cannot have a negative number of tokens).
A metamodel can therefore often be augmented with constraints, expressed using a constraint language such as the Object Constraint Language (OCL).
\subsection{Bidirectionality}
The type/instance relation consists of two relations in opposite directions: instantiation (creating a new model from an existing metamodel) and conformance (checking an existing model against an existing metamodel).
The two are complementary: a newly instantiated model should, by construction, conform to the metamodel it was instantiated from (at least structurally).
Going from the metamodel to the model is called instantiation.
Through instantiation, an instance is created that conforms (structurally) to the provided metamodel.
A general instantiation method cannot be created, as it strongly depends on what the conformance relation checks, and on how it expects the model to be physically represented.
For example, if the conformance relation supports subtyping, instantiation should instantiate not only the element's own attributes, but also all inherited attributes.
From the model to the metamodel, we have the conformance relation, often called ``verify'' in tools (\textit{e.g.}, metaDepth~\cite{MetaDepth} and AToMPM~\cite{AToMPM}).
A model is said to conform to its metamodel if each element of the model is an instance of (conforms to) an element of the metamodel.
What it means for a single element to conform to another is much less clear-cut.
For example, if the conformance relation supports subtyping, an element might conform to types other than its own, if there is an inheritance relation between them.
If one wants to explicitly model the type/instance relation, as is the goal of this paper, one needs to explicitly model both the instantiation and the conformance checking functions.
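The complementarity of the two directions can be sketched in code. The following Python fragment is a deliberately minimal sketch, not our actual implementation: it assumes a simplified representation in which a type is a list of attribute names and an instance records its type name and attribute values; all names are hypothetical.

```python
# Hypothetical representation: a metamodel maps type names to their
# declared attributes; an instance stores its type and attribute values.
def instantiate(metamodel, type_name):
    """Instantiation: create an instance that, by construction,
    conforms (structurally) to the given type."""
    attrs = metamodel[type_name]
    return {"type": type_name, "attrs": {a: None for a in attrs}}

def conforms(metamodel, instance):
    """Conformance: check an existing instance against the metamodel,
    i.e., the opposite direction of instantiate()."""
    attrs = metamodel.get(instance["type"])
    if attrs is None:
        return False  # typed by an element that is not in the metamodel
    return set(instance["attrs"]) == set(attrs)

mm = {"Place": ["name", "tokens"], "Transition": ["name"]}
p = instantiate(mm, "Place")
assert conforms(mm, p)  # a fresh instance conforms by construction
```

Note that a change to what \texttt{conforms} checks (\textit{e.g.}, adding subtyping) immediately forces a corresponding change in \texttt{instantiate}, illustrating why a general instantiation method cannot exist independently of the conformance relation.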
\subsection{Metamodelling Hierarchy}
The metamodelling hierarchy, as popularized by the OMG in its four-layered architecture, makes explicit use of the relation between types and their instances.
This architecture consists of four layers, as shown in Figure~\ref{fig:four_layered_architecture}: the metametamodel ($M3$), the metamodel ($M2$), the model ($M1$), and the real world ($M0$).
Each of these models is said to conform to the model at the layer above, meaning that the lower level is an instance of the higher level.
As a result, $M1$ conforms to $M2$, which conforms to $M3$.
At the top of the hierarchy, $M3$ is made to conform to itself, which is called meta-circularity.
This four-layered architecture is used by most (meta-)modelling tools, with only two levels accessible to users: the $M2$ and $M1$ levels.
$M3$ is fixed, and is closely related to the internals of the tool, as well as to the physical representation of models.
$M0$ cannot be modelled in the tool, as it represents the real-world instance.
This leaves only $M2$ and $M1$ open for modification.
Users can use $M2$ to define their own custom language, specific to the domain they are interested in.
$M1$ is used to model the actual model that is to be manipulated.
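The layer-by-layer conformance described above, including the meta-circularity at the top, can be sketched as a simple walk over an ordered list of models. This is a hypothetical illustration under a toy conformance function; the representation and all names are assumptions.

```python
def check_hierarchy(models, conforms):
    """models is ordered bottom-up, e.g. [m1, m2, m3].
    Each model must conform to the one above; the top model
    must conform to itself (meta-circularity)."""
    for lower, upper in zip(models, models[1:]):
        if not conforms(lower, upper):
            return False
    return conforms(models[-1], models[-1])

# Toy conformance: a model conforms if every type it uses is an
# element declared by the model one layer above.
def toy_conforms(model, metamodel):
    return all(t in metamodel["elements"] for t in model["types"])

m3 = {"elements": {"Class", "Association"}, "types": {"Class"}}
m2 = {"elements": {"Place", "Transition"}, "types": {"Class", "Association"}}
m1 = {"elements": set(), "types": {"Place", "Transition"}}
assert check_hierarchy([m1, m2, m3], toy_conforms)
```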
\begin{figure}
\centering
\includegraphics[width=0.3\columnwidth]{figures/four_layered_architecture.pdf}
\caption{Four-layered architecture, exemplified using Petri nets.}
\label{fig:four_layered_architecture}
\end{figure}
Having only two levels at one's disposal can be limiting for several applications.
Therefore, multi-level modelling has been introduced, where the number of user-accessible layers is unrestricted.
This raises the question of how to restrict elements several layers deep, which can be done through the use of deep characterization (\textit{e.g.}, through potency~\cite{potency,UnifyingApproach}).
These techniques influence instantiation and conformance semantics, and therefore require specialized tools.
For example, potency will prevent the instantiation of elements whose potency value has reached 0.
The exception is potency~$*$, which indicates unrestricted instantiation until a concrete potency is specified~\cite{SupportingConstructiveAndExploratoryModesOfModeling}.
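The potency semantics just described (copy and decrement on instantiation, refuse at zero, $*$ unrestricted) can be sketched as follows; the representation is a hypothetical minimal one, not tied to any particular tool.

```python
STAR = "*"  # unrestricted instantiation until a concrete potency is set

def instantiate_with_potency(element):
    """Refuse instantiation at potency 0; otherwise copy the potency
    value and decrement it by one. Potency * is passed on unchanged."""
    p = element["potency"]
    if p == STAR:
        new_p = STAR
    elif p == 0:
        raise ValueError("potency exhausted: element cannot be instantiated")
    else:
        new_p = p - 1
    return {"type": element, "potency": new_p}

clazz = {"potency": 2}
obj = instantiate_with_potency(clazz)   # potency becomes 1
leaf = instantiate_with_potency(obj)    # potency becomes 0
# instantiate_with_potency(leaf) would now raise ValueError
```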
\subsection{Hardcoded Type/Instance Relations}
From the previous examples, it becomes clear that instantiation and conformance are more complex than graph (or model) homomorphism.
Apart from determining whether the model structurally conforms to the provided metamodel, additional constraints are imposed on the representation.
Several instances in the graph even gain special semantics, deviating from their purely structural role.
For example, a \textit{cardinality} attribute is not merely structural, but further constrains the set of allowed instances.
Figure~\ref{fig:special_constructs} shows some more examples of additional constraints.
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth]{figures/special_constructs.pdf}
\caption{Constructs that further restrict the set of instances.}
\label{fig:special_constructs}
\end{figure}
Their semantics is as follows:
\begin{itemize}
\item \textit{Inheritance}.
A special kind of link between two elements.
It cannot be further instantiated.
The source of the link becomes a subtype of the target during the conformance check.
In the example, this means that any instance of $C$ will also be considered an instance of $B$, but not the other way around.
Additionally, when instantiating an instance of $C$, all attributes of $B$ also need to be added.
\item \textit{Potency}.
A special attribute which indicates how many levels deep this element can still be instantiated.
When instantiating an element, the potency value is copied and decremented by one.
When the value reaches zero, no further instances can be made.
In the example, this means that $A$, $B$, and $C$ can each only be instantiated once more.
This places restrictions on both instantiation (\textit{i.e.}, refuse to instantiate) and conformance checks (\textit{i.e.}, always find such instances to be non-conforming).
\item \textit{Cardinalities}.
A special attribute of an association which limits the number of instances of this association for a single element at the other side of the association.
Both a lower and an upper bound are possible.
In the example, this means that each instance of $A$ has either $1$ or $2$ connected instances of $B$ through this association.
For each $B$, there is exactly one connected instance of $A$.
Instantiating additional links when the upper limit is already reached should be disallowed.
Conversely, a conformance check should flag a model as non-conforming if the constraint is violated, even though the model is structurally fine.
\item \textit{Multiplicities}.
A special attribute that indicates how many instances of this element can be present at the level immediately below.
Both a lower and an upper bound are possible.
In the example, this means that there will be exactly $2$ instances of $A$.
Similarly to cardinalities, both the instantiation and conformance relations should be aware of these restrictions.
\item \textit{Constraints}.
The previous constructs limit the structure, whereas constraints restrict the instances based on their semantics.
Arbitrary executable models can be coupled to the metamodel, which are evaluated when determining whether an element conforms or not.
In the example, this means that the value of the attribute $b$ of $B$ will always be greater than zero.
The conformance check should, apart from checking the previous constraints, execute this piece of code to determine whether or not the model conforms.
Instantiation does not need to be aware of this, as there is no way to statically know which operations are allowed to satisfy this constraint.
\end{itemize}
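To make the list above concrete, the following sketch shows how cardinality and constraint checks could be layered on top of a structural conformance check. The data layout and all names are hypothetical; it illustrates the semantics described above, not any particular tool's implementation.

```python
def check_cardinality(instances, links, assoc):
    """Each instance of the source type must have between lower and
    upper outgoing links of this association."""
    for src in instances[assoc["from"]]:
        n = sum(1 for (s, t, a) in links
                if s == src and a == assoc["name"])
        if not (assoc["lower"] <= n <= assoc["upper"]):
            return False
    return True

def check_constraint(instance, constraint):
    """Evaluate an executable constraint coupled to the metamodel."""
    return constraint(instance)

# Example mirroring the figure: each A needs 1..2 links to B,
# and attribute b of B must be greater than zero.
instances = {"A": ["a1"], "B": ["b1", "b2"]}
links = [("a1", "b1", "AtoB"), ("a1", "b2", "AtoB")]
assoc = {"name": "AtoB", "from": "A", "lower": 1, "upper": 2}
assert check_cardinality(instances, links, assoc)
assert check_constraint({"b": 3}, lambda inst: inst["b"] > 0)
```

A full conformance check would run such checks after the structural (homomorphism) check, which is precisely where the special semantics of these otherwise ordinary attributes has to be hardcoded somewhere.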
When these constructs are only present structurally, as is the case in most modelling tools, their semantics is non-obvious.
It might even be non-obvious whether or not the attributes have any semantics (within the modelling tool) at all: why would an attribute with a specific name suddenly become part of the restrictions placed on instances?
Somewhere, semantics needs to be given to these constructs: a component of the tool needs to find the attribute, read it out, determine whether or not the model satisfies this requirement, and provide user feedback.
While we acknowledge that these powerful constructs aid in creating a tightly constrained set of possible instances, their inclusion often creates severe problems in the modelling hierarchy.
Because each of these carries its own semantics, informally described above, there needs to be a mechanism to enforce the semantics.
As described above, this semantics is part of the type/instance relation, which is, in most current tools, implicit and hardcoded.
More concretely, this results in the following problems:
\begin{enumerate}
\item \textit{Semantics}.
The exact semantics of these constructs is often unclear, and is only found out by reading documentation or through experimentation.
While for some constructs the semantics does not vary much between tools, for others it varies significantly.
And even if the semantics is clearly communicated between both parties, the question remains how these semantics are applied by the instantiation and conformance checking operations.
Take multiplicities, for example: what if a lower bound is not reached?
Does it become impossible to delete elements if doing so would cause this lower bound to be violated?
Or would a deletion be allowed, but subsequent conformance checks fail in case the bound is still violated at that point in time?
And is it possible to save a model which violates these constraints?
Similarly, is it possible to create additional elements if these would violate the constraints?
Or is it only possible within some kind of transaction?
Such variation even occurs in object-oriented programming languages, where the semantics of subtyping varies.
For example, C++ offers multiple inheritance, whereas Java only offers single inheritance.
On the completely opposite side of the spectrum, Haskell uses structural subtyping instead of nominal subtyping~\cite{Typing}.
While each of these has its advantages and disadvantages, it should be clear to users which semantics are used.
\item \textit{Static}.
The semantics, even if formally described in the documentation, still remains static.
While this is not a significant problem in general, as a general consensus exists for these attributes, sometimes a slightly different semantics is desired.
For example, users might want to temporarily violate a multiplicity constraint if the restriction is too strict.
Similarly, some users might prefer, or even require, different semantics than those implemented.
For example, Java limits inheritance to single inheritance.
Users who require multiple inheritance have to resort to tricks to implement models which naturally lend themselves to multiple inheritance.
Should the semantics be modifiable at run-time, users can alter the behaviour to their liking, or simply switch implementation.
Users who prefer to be constrained to single inheritance can then use the single inheritance semantics, whereas others can opt for multiple inheritance.
\item \textit{Special constructs at the implementation level}.
Because some constructs gain a special semantics, there needs to be a way of identifying these constructs.
For some this is easy (\textit{e.g.}, read out an attribute with a pre-defined name, such as \textit{potency}), but for others, this becomes more difficult.
In particular, the inheritance relation is a special case: it is a link, and one would expect it to be implemented as such.
Many frameworks~\cite{VMTS,ModelBasedDSLFrameworks,AToMPM}, however, rely on this (or similar relations) being a special kind of link, unrelated to a normal association.
And while their underlying model storage closely mimics existing structures, such as graphs, exceptions need to be made throughout to cope with these constructs.
Furthermore, this additional type causes further problems in the checking of conformance: how is it typed?
Resolving these elements should not be done through hardcoded types at the lowest level.
This prevents the reuse of existing libraries, as a wrapper needs to be written to cope with the special types.
\item \textit{Special constructs at the metamodel level}.
Even if special constructs at the implementation level are avoided, special constructs at the metamodel level are sometimes still used.
This hardcodes the identity of some parts of the metamodel in the instantiation and conformance checking functions.
The metamodel will therefore simply be a normal metamodel, though some associations gain a special importance which is not apparent from the metamodel alone.
It is only the type/instance relation which adds this additional semantics to the link.
Apart from the confusion this might cause to users, it prevents users from using a different metamodel, and even prevents multi-level modelling completely.
This is one of the problems that prevent AToMPM~\cite{AToMPM} from supporting multi-level metamodelling, or just more than two metametamodels: the inheritance semantics is hardcoded in the core, and only applies to the provided metametamodels.
Similarly, models cannot simply have attributes if their metamodel does not allow for them.
While this is not a problem in the traditional four-level architecture, multi-level modelling quickly runs into this problem: users can only specify a potency if their metamodel explicitly calls for it.
Instead of modifying the metamodel for this, it is possible to encode these \textit{special} constructs as ``explicitly allowed'' in the conformance relation, as was done in the previous (unrelated) version of our tool~\cite{MULTI_modelverse}.
The instantiation algorithm should also be aware of this, as it should, for whatever model, always add these attributes by default, and furthermore make them mandatory, such that they cannot be removed.
\item \textit{Inflexible type mapping}.
Type mappings store the types of elements.
While they have previously been identified theoretically, most current approaches hide this important piece of information away in the implementation.
Apart from reading out the type, and possibly altering it through some programming interface, no modifications are possible, as the mappings reside in the internal data structures of the tool.
By making these type mappings explicit, as ordinary models, they can be modified like any other model, and in particular through the use of model transformations.
This is one of the limitations of model transformations: the right-hand side cannot create instances of a metamodel that is only known at run-time.
This problem is currently solved by using model transformation templates~\cite{GenericModelTransformations}, which is still more constraining than our approach, as it does not allow for retyping operations.
\item \textit{Single type}.
Finally, as the conformance function and typing information are hardcoded, only a single such relation is possible for a given element.
Sometimes, however, an element can be typed by multiple, possibly unrelated, elements.
An example has already been given in~\cite{APosterioriTyping}, where a model is created with a single (constructive) type, but additional types can be found during execution.
This can, however, also be related to the use of multiple very similar metamodels.
For example, consider a Petri nets metamodel, and a separate, but otherwise identical, Petri nets metamodel with inhibitor arcs.
Every Petri net instance without inhibitor arcs also conforms to the Petri nets metamodel with inhibitor arcs.
Similarly, every Petri net without inhibitor arcs, even if it was constructed as an instance of the metamodel with inhibitor arcs, will conform to the original metamodel.
Even though these metamodels are unrelated, a model can easily be said to be typed by both of them, depending on the situation in which it is used.
This can have further repercussions in model evolution, where models frequently need to be retyped to slightly different metamodels.
\end{enumerate}
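As a final illustration of the \textit{Static} problem above, consider how explicitly modelled, swappable subtyping semantics could look. The sketch below is a hypothetical example, not our tool's implementation: single and multiple inheritance are expressed as interchangeable plain functions, so a user could select either at run-time.

```python
def supertypes_single(hierarchy, t):
    """Single inheritance: follow the unique parent chain;
    any parents beyond the first are simply ignored."""
    chain = [t]
    while hierarchy.get(t):
        t = hierarchy[t][0]
        chain.append(t)
    return set(chain)

def supertypes_multiple(hierarchy, t):
    """Multiple inheritance: collect all reachable ancestors."""
    seen, todo = set(), [t]
    while todo:
        cur = todo.pop()
        if cur not in seen:
            seen.add(cur)
            todo.extend(hierarchy.get(cur, []))
    return seen

# C inherits from both B and D; B inherits from A.
hierarchy = {"C": ["B", "D"], "B": ["A"], "D": []}
assert supertypes_single(hierarchy, "C") == {"C", "B", "A"}
assert supertypes_multiple(hierarchy, "C") == {"C", "B", "A", "D"}
```

A conformance check parameterized by such a function, instead of one with the subtyping rule hardcoded in its core, would let users switch semantics without resorting to the tricks described above.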