Model transformations

The next step in the Modelverse is the use of model transformations. Model transformations are implemented using RAMification and are therefore themselves modelled explicitly. As such, all previously introduced modelling concepts apply again.

Specification

To create a model transformation, users must specify a set of source metamodels and a set of target metamodels. These sets make up the signature of the transformation. To create a simple Petri Net simulation transformation, we use the operation transformation_add_MT as follows:

>>> transformation_add_MT({"PN": "formalisms/PetriNets"}, {"PN": "formalisms/PetriNets"}, "models/pn_simulate", open("pn_simulate.mvc", "r").read())

The first dictionary we pass is the input dictionary: it specifies the names the model elements will get in the LHS, together with their expected types. Similarly, the output dictionary specifies the names and types of the output elements. The metamodel in which the model transformation executes is the merger of the metamodels referenced in both the input and output dictionary.

Note that in the transformation, all types are renamed by prepending the tag of their signature. For example, all types in the previous model transformation will be renamed by prepending “PN/” (e.g., PN/Place, PN/P2T).
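As an illustration, this renaming is essentially a string operation on type names. The sketch below is plain Python, not the Modelverse API; the function name prepend_tags is hypothetical:

```python
def prepend_tags(signature):
    """Rename every type of every tagged metamodel by prepending its tag.

    `signature` maps tags to lists of type names; the result is the flat
    list of renamed types visible inside the transformation.
    """
    renamed = []
    for tag, type_names in signature.items():
        renamed.extend("%s/%s" % (tag, name) for name in type_names)
    return renamed

# A PetriNets metamodel bound under the tag "PN":
print(prepend_tags({"PN": ["Place", "Transition", "P2T", "T2P"]}))
# → ['PN/Place', 'PN/Transition', 'PN/P2T', 'PN/T2P']
```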

All types are renamed in order to make multiple inputs with the same type possible. For example, when combining two models of the same type, with one of them being the master, it is possible to do the following:

>>> transformation_add_MT({"master": "formalisms/PetriNets", "slave": "formalisms/PetriNets"}, {"result": "formalisms/PetriNets"}, "models/pn_merge", open("pn_merge.mvc", "r").read())

In this case, the LHS can match specifically for elements of the master (e.g., master/Place).

The output dictionary is interesting as well: multiple output models are possible, and these are also distinguished by their tags. When a model element matches a tag, it is put into that specific output model; when it matches no tag at all, an exception is raised. More information on tags can be found in the invocation subsection.
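This routing of elements over output models can be pictured as follows. The sketch is plain Python, not the Modelverse API, and route_outputs is a hypothetical name:

```python
def route_outputs(elements, output_tags):
    """Distribute model elements over output models based on their tag.

    `elements` is a list of fully qualified names (e.g. "result/Place");
    `output_tags` lists the tags declared in the output dictionary.
    An element whose tag matches no output model raises an exception.
    """
    outputs = {tag: [] for tag in output_tags}
    for element in elements:
        tag = element.split("/", 1)[0]
        if tag not in outputs:
            raise Exception("Element %s matches no output tag" % element)
        outputs[tag].append(element)
    return outputs

print(route_outputs(["result/Place", "result/Transition"], ["result"]))
# → {'result': ['result/Place', 'result/Transition']}
```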

We now continue with how to specify a model transformation through modelling.

RAMification

To support model transformation, the Modelverse makes use of RAMification. In RAMification, the original metamodel is Relaxed, Augmented, and Modified. As such, the new metamodel can be used to define model transformation rules.

This consists of the following three phases:

  1. The metamodel is Relaxed, such that lower cardinalities are no longer enforced. Similarly, constraints are removed, and abstract entities can be instantiated. This is done because a model transformation only uses a specific part of the metamodel.
  2. The metamodel is Augmented, such that new attributes and concepts are added. The new attributes are label and constraint in the LHS, and label and action in the RHS. The new concepts that are added are the LHS and RHS entities, which serve as the containers for all elements of the LHS and the RHS, respectively.
  3. The metamodel is Modified, such that existing attributes are renamed and their types are altered. All attributes become constraints on that specific attribute in the LHS (name prepended with constraint_), and actions computing the new value in the RHS (name prepended with value_). For example, the tokens attribute of a place becomes a constraint function (returning True or False), instead of an attribute of type Integer.
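As an illustration of the Augment and Modify phases, the sketch below renames a toy metamodel in plain Python; it is not the Modelverse API, the Relax phase is only noted in a comment, and attribute types are elided:

```python
def ramify(metamodel):
    """Sketch of RAMification on a toy metamodel.

    `metamodel` maps entity names to {attribute: type} dicts; the result
    maps the Pre_/Post_ copies of each entity to their attribute names,
    renamed with constraint_/value_ prefixes.
    """
    ramified = {}
    for entity, attributes in metamodel.items():
        # Relax: cardinalities and constraints are dropped (not shown here).
        # Augment + Modify for the LHS copy of the entity:
        ramified["Pre_" + entity] = ["label", "constraint"] + \
            ["constraint_" + attr for attr in attributes]
        # Augment + Modify for the RHS copy of the entity:
        ramified["Post_" + entity] = ["label", "action"] + \
            ["value_" + attr for attr in attributes]
    return ramified

print(ramify({"Place": {"tokens": "Integer"}}))
# → {'Pre_Place': ['label', 'constraint', 'constraint_tokens'],
#    'Post_Place': ['label', 'action', 'value_tokens']}
```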

RAMification happens in the background in the Modelverse. Users can, of course, open the RAMified metamodel just like any other metamodel.

As RAMification makes a distinction between the LHS and RHS, all entities in the metamodel are effectively duplicated: the LHS entities are prefixed with Pre_, and the RHS entities are prefixed with Post_. Similarly, the names of all attributes are prefixed with constraint_ and value_, respectively.

Implicit merge

As a model transformation considers multiple languages, both for its input and its output, it must be possible to join them somewhere. For example, a transformation from the PetriNets formalism to a ReachabilityGraph formalism makes use of entities from both metamodels in the same model. To allow for this, all used metamodels are implicitly merged before RAMification. The metamodel therefore becomes a combination of all originally specified metamodels, making it possible to use all of them in a single model.

Note, however, that the metamodels might use similar concepts: both a PetriNet and a ReachabilityGraph have the notion of a Place, but it means something different in each case. Therefore, the elements are prepended with the tag that was used to define them. As such, the model transformation has no notion of Place, but only of PetriNets/Place and ReachabilityGraph/Place. All operations in the transformation must use this notation.

As the same metamodel might be used multiple times, but in different contexts (i.e., with different tags), a metamodel is sometimes added multiple times. Each time, however, a different tag is prepended. This allows model transformations to combine or alter models of the same type, while still distinguishing between them. Recall the master/Place discussion from the beginning of this section.

Rule specification

Now we actually get to define a rule. Rules are themselves just models, conforming to the RAMified (and merged) metamodel. As such, they are created just like any other model. This is the code parameter of the transformation_add_MT operation, which takes a Modelverse model. An example specification is shown below, which copies the highest number of tokens between places that have the same name, assuming that only one of them has a non-zero number of tokens:

LHS {
    Pre_PetriNets/Place {
        label = "pn_place_master"
        constraint_tokens = $
                Boolean function constraint(value : Integer):
                    return value > 0!
            $
    }

    Pre_PetriNets/Place {
        label = "pn_place_slave"
        constraint_tokens = $
                Boolean function constraint(value : Integer):
                    return value == 0!
            $
    }

    constraint = $
        Boolean function constraint(host_model : Element, mapping : Element):
            return value_eq(read_attribute(host_model, mapping["pn_place_master"], "name"), read_attribute(host_model, mapping["pn_place_slave"], "name"))!
            $
}
RHS {
    Post_PetriNets/Place {
        label = "pn_place_master"
    }
    Post_PetriNets/Place {
        label = "pn_place_slave"
        value_tokens = $
            Integer function value(host_model : Element, name : Element, mapping : Element):
                return read_attribute(host_model, mapping["pn_place_master"], "tokens")!
            $
    }
}

Some remarks, specifically in relation to users of AToMPM:

  1. Unspecified constraint attributes in the LHS are always fulfilled (i.e., result = True in AToMPM notation).
  2. Unspecified value attributes in the RHS are always copied as-is (i.e., result = get_attr() in AToMPM notation).
  3. Just like in AToMPM, labels are strings and can be used as such.
  4. While mapping contains a mapping from the labels to their elements in the host model, all elements of the host model can technically be used, even those not occurring in the LHS.
  5. During rewriting, it is possible to access the values of all elements of the host model, including those matched before in the LHS. Newly created elements in the RHS can of course not be referenced. Elements removed in the RHS can no longer be referenced either, though this will likely be updated in future versions.

Schedule

The rule we defined previously does not do much in itself: it needs to be scheduled. Scheduling consists of defining in which order rules are executed, but also how each rule is to be executed: as a query (Query), for one match (Atomic), or for all matches (ForAll). For scheduling purposes, each rule has an onSuccess and an onFailure association. onSuccess associations are followed when the rule has been applied successfully (i.e., at least one match was found); the onFailure association is followed otherwise. Rules can also be composites, in which case they define a schedule themselves.

Each schedule, including the main schedule, has exactly one Initial element, and possibly Success and Failure elements. When a Success (Failure) node is reached, the composite rule is said to succeed (fail). On the topmost schedule, success indicates that the model transformation is to be applied. When the topmost schedule ends in a Failure node, the model transformation as a whole is deemed to fail, and all changes are reverted. As such, users are guaranteed that an intermediate model is never visible and cannot corrupt previous models.

An example schedule, which applies the previous rule for as long as it matches, is shown below:

Composite composite {
    ForAll copy_tokens {
        LHS {
            ...
        }
        RHS {
            ...
        }
    }
    Success success {}
}

Initial (composite, copy_tokens) {}
OnSuccess (copy_tokens, copy_tokens) {}
OnFailure (copy_tokens, success) {}
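The control flow of such a schedule can be sketched as a tiny interpreter. This is plain Python for illustration only, not how the Modelverse executes schedules; run_schedule and the sentinel names "success"/"failure" are assumptions:

```python
def run_schedule(initial, on_success, on_failure, apply_rule):
    """Minimal interpreter for a rule schedule.

    `initial` is the first rule; `on_success`/`on_failure` map each rule
    to its successor; `apply_rule` returns True iff the rule matched.
    The special targets "success" and "failure" terminate the schedule.
    """
    current = initial
    while current not in ("success", "failure"):
        matched = apply_rule(current)
        current = on_success[current] if matched else on_failure[current]
    return current == "success"

# copy_tokens loops on itself for as long as it matches, then succeeds:
matches = iter([True, True, False])
result = run_schedule("copy_tokens",
                      {"copy_tokens": "copy_tokens"},
                      {"copy_tokens": "success"},
                      lambda rule: next(matches))
print(result)  # → True
```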

Invocation

The actual binding of models to this signature only happens later, upon invocation, and goes as follows:

>>> transformation_execute_MT("models/pn_simulate", {"PN": "models/my_pn"}, {"PN": "models/my_pn"})

In this case, the model transformation takes the model my_pn as input for the pn_simulate transformation and writes the result back to my_pn. As the output model matches the input model, the model transformation is effectively in-place. For out-place transformations, it is possible to specify a different output model, in which case the model is implicitly copied as well:

>>> transformation_execute_MT("models/pn_simulate", {"PN": "models/my_pn"}, {"PN": "models/my_simulated_pn"})

In this case, model transformation invocation has no effect on the original model. In fact, model transformations always execute on a copy of the original model, so it is always possible to restore the model to its state before the transformation. When a model transformation fails (i.e., the schedule ends in a Failure node), no output models are written and the input models are left untouched.
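These copy-and-commit semantics can be sketched as follows. Again, this is plain Python for illustration, not the Modelverse implementation; execute_transformation, the model store, and the toy fire step are all hypothetical:

```python
import copy

def execute_transformation(models, inputs_map, outputs_map, transform):
    """Sketch of the copy semantics of transformation execution.

    `models` is the model store; `transform` mutates a working copy and
    returns True on success. Outputs are only written back on success,
    so the inputs are never corrupted by a failing transformation.
    """
    working = {tag: copy.deepcopy(models[name])
               for tag, name in inputs_map.items()}
    if not transform(working):
        return False          # schedule failed: nothing is written
    for tag, name in outputs_map.items():
        models[name] = working[tag]
    return True

store = {"models/my_pn": {"tokens": 1}}

def fire(working):
    working["PN"]["tokens"] += 1  # toy stand-in for a simulation step
    return True

execute_transformation(store, {"PN": "models/my_pn"},
                       {"PN": "models/my_simulated_pn"}, fire)
print(store)
# → {'models/my_pn': {'tokens': 1}, 'models/my_simulated_pn': {'tokens': 2}}
```

Note how the original model is untouched because the transformation only ever sees a deep copy.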

Signature

After a model transformation is defined, it is easy to forget exactly which parameters it takes and what the types of these parameters are. Therefore, the transformation_read_signature function can be used to read out the signature of model transformations:

>>> transformation_read_signature("models/pn_simulate")
({"PN": "formalisms/PetriNets"}, {"PN": "formalisms/PetriNets"})
>>> transformation_read_signature("models/pn_merge")
({"master": "formalisms/PetriNets", "slave": "formalisms/PetriNets"}, {"result": "formalisms/PetriNets"})

Querying

Similarly, it is easy to forget which transformations are supported between a source and target metamodel. Therefore, the transformation_between function can be used to query for all transformations that take a certain metamodel as input and generate another metamodel as output:

>>> transformation_between({"PN": "formalisms/PetriNets"}, {"PN": "formalisms/PetriNets"})
["models/pn_optimize", "models/pn_simulate", "models/pn_rename", "models/pn_combine"]

>>> transformation_between({"PN": "formalisms/PetriNets"}, {"Reachability": "formalisms/ReachabilityGraph"})
["models/pn_analyze"]

Note that this operation does not take into account any other input or output metamodels a transformation might have:

>>> transformation_between({"PN": "formalisms/PetriNets"}, {"result": "formalisms/Boolean"})
["models/analyze_query", "models/is_safe", "models/conforms"]
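One way to read this matching behaviour: a transformation matches as long as every queried tag appears in its signature with the right metamodel, while any additional tags are ignored. The sketch below illustrates that reading in plain Python; transformations_between, the catalog structure, and the subset semantics are assumptions, not the Modelverse implementation:

```python
def transformations_between(catalog, source, target):
    """Find transformations whose signature covers the queried tags.

    `catalog` maps transformation names to (inputs, outputs) signature
    pairs. A transformation matches when every queried source/target tag
    appears with the requested metamodel; extra tags are ignored.
    """
    def covers(signature, query):
        return all(signature.get(tag) == mm for tag, mm in query.items())
    return sorted(name for name, (inp, outp) in catalog.items()
                  if covers(inp, source) and covers(outp, target))

catalog = {
    "models/pn_analyze": ({"PN": "formalisms/PetriNets"},
                          {"Reachability": "formalisms/ReachabilityGraph"}),
    "models/is_safe": ({"PN": "formalisms/PetriNets"},
                       {"result": "formalisms/Boolean",
                        "Reachability": "formalisms/ReachabilityGraph"}),
}
# The extra Reachability output of is_safe does not prevent the match:
print(transformations_between(catalog,
                              {"PN": "formalisms/PetriNets"},
                              {"result": "formalisms/Boolean"}))
# → ['models/is_safe']
```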