Next: 2. The Proposal
Up: Higher Intelligence in Embedded
Previous: List of Tables
Models have found widespread application in systems engineering
as a (semi-)formal means to manage the complexity and heterogeneity
of large-scale systems and of their design teams and tools,
because they are amenable to analysis and synthesis tasks.
For example, models are used for
knowledge representation, requirements engineering [48,63],
structured analysis [24,61],
to manage complexity and achieve high quality of
engineered systems [62],
to handle the heterogeneous nature of
embedded systems [21],
as a high level programming method [23,22,55],
and to bridge conceptual differences between
domains [54].
With the advent of ubiquitous computing,
model-based applications, e.g., in control, diagnosis, and maintenance,
will become pervasive and
ultimately as widespread as
embedded computing power [35,12].
To avoid overspecification and attain optimal performance,
new design paradigms based on holistic views
(e.g., mechatronics [57,56])
are necessary to analyze the subtle
interactions between information processing components and the physical
environment, as well as between the different design tasks.
This requires tight integration of the separate individual design activities.
However, each of the engineering disciplines involved in system design and
operation has developed domain and problem specific
(often proprietary) formalisms
that match its needs optimally but complicate the integration process.
The goal of this research is to develop and prototype a core of next
generation multi-paradigm modeling
methods and technologies that address this incompatibility
and enable the development of novel applications.
Multi-paradigm modeling is a powerful approach that allows
the generation (instantiation) of domain and problem specific
methods, formalisms, and tools. Because of a common
meta language, these different instances can be integrated
by combination, layering, heterogeneous refinement,
and multiple views [18,60].
At present this is still very much a topic of on-going
research that breaks down into two types of activities:
(i) heterogeneous modeling [14,28,31,47]
as well as formalism [6] and tool [15,16] coupling, and
(ii) behavior generation [7,29,36,46,51,58].
The first is mainly concerned with the semiotics (symbols, syntax, and
static semantics) of modeling formalisms, whereas the second addresses
analysis and behavior generation using the dynamic semantics of
such heterogeneous models.
Important issues include but are not limited to the design of system
engineering ontologies [8],
integrated development environment design,
heterogeneous execution models [34],
code synthesis
(software and hardware description language) [49,25,32],
and formal methods [11].
Three orthogonal dimensions of
multi-paradigm modeling are
(i) multi-abstraction modeling, concerned with the
relationship between models at different levels of abstraction,
possibly described in different formalisms,
(ii) multi-formalism modeling, concerned with the
coupling of and transformation between models described in
different formalisms, and
(iii) meta-modeling, concerned with the description
of model representations and the instantiation of domain specific
formalisms [45].
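To make the meta-modeling dimension concrete, the following sketch (in Python, with purely illustrative names that are not taken from any actual tool) shows a common meta language, consisting of node kinds plus connection rules, from which domain specific formalisms are instantiated and against which models are checked:

```python
# Minimal meta-modeling sketch: one common meta language (node kinds plus
# connection rules) from which domain specific formalisms are instantiated.
# All names here are illustrative, not from any actual modeling tool.

class Formalism:
    """A formalism instantiated from the common meta language."""
    def __init__(self, name, node_kinds, allowed_edges):
        self.name = name
        self.node_kinds = set(node_kinds)
        self.allowed_edges = set(allowed_edges)  # (src_kind, dst_kind) pairs

class Model:
    """A model that must conform to its formalism's static semantics."""
    def __init__(self, formalism):
        self.formalism = formalism
        self.nodes = {}   # name -> kind
        self.edges = []   # (src_name, dst_name)

    def add_node(self, name, kind):
        if kind not in self.formalism.node_kinds:
            raise ValueError(f"{kind!r} not a kind of {self.formalism.name}")
        self.nodes[name] = kind

    def connect(self, src, dst):
        pair = (self.nodes[src], self.nodes[dst])
        if pair not in self.formalism.allowed_edges:
            raise ValueError(f"edge {pair} not allowed in {self.formalism.name}")
        self.edges.append((src, dst))

# Two formalisms instantiated from the same meta language:
fsm = Formalism("FiniteStateMachine",
                {"state"}, {("state", "state")})
cbd = Formalism("CausalBlockDiagram",
                {"block", "signal"}, {("block", "signal"), ("signal", "block")})

m = Model(fsm)
m.add_node("idle", "state")
m.add_node("busy", "state")
m.connect("idle", "busy")  # conforms to the FSM formalism
```

Because both formalisms share the meta language, generic tooling (editors, consistency checkers) can be written once against `Formalism` and `Model` and reused for every instantiated formalism.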
When extended with sophisticated model transformation facilities,
the multi-paradigm modeling
notions can be exploited to facilitate a suite of
technologies and applications that
manipulate a model into a different representation,
possibly changing the abstraction, partitioning, and hierarchical
structure to render it suitable for particular tasks,
i.e., they operate on the model rather than its
generated information.
Though some model transformation schemes
exist within [13,27,33,59]
and between formalisms [4,5,53,58],
a gap in this multi-paradigm modeling effort is the prevailing
need to manually design models in different representations
for analyses, consistency checks, and execution.
The model transformations that are available, as well as current
development efforts, tend to focus on system realization
from design (e.g., automatic code synthesis). Models, however,
embody knowledge, and as such they also form
the core of intelligent applications
(e.g., model-predictive control [1],
model-based diagnosis [10,26,37,39,40,43], and self-maintenance).
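As a concrete illustration of one such model-based application, the sketch below (with purely hypothetical component names and values) shows the core idea of model-based diagnosis: a model predicts each component's output, and comparing predictions against observations isolates the faulty component.

```python
# Minimal model-based diagnosis sketch: the model predicts each component's
# output; deviations between prediction and observation implicate a
# component. Component names and values are illustrative only.

def diagnose(model, inputs, observed):
    """Return names of components whose observed output deviates
    from the model's prediction (single-fault style reasoning)."""
    suspects = []
    value = inputs
    for name, transfer in model:
        predicted = transfer(value)
        if abs(observed[name] - predicted) > 1e-6:
            suspects.append(name)
            value = observed[name]  # continue from the measured value
        else:
            value = predicted
    return suspects

# A chain of components: amplifier, offset stage, limiter.
model = [
    ("amp",    lambda x: 2.0 * x),
    ("offset", lambda x: x + 1.0),
    ("limit",  lambda x: min(x, 10.0)),
]

# With input 3.0 the healthy chain yields 6.0, 7.0, 7.0.
# An observed 5.0 at "offset" (instead of 7.0) implicates that stage:
obs = {"amp": 6.0, "offset": 5.0, "limit": 5.0}
print(diagnose(model, 3.0, obs))  # ['offset']
```

The essential point is that the same model serves both prediction and fault isolation; no diagnosis-specific representation had to be hand-built.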
When extended with sophisticated model transformation facilities,
the multi-paradigm modeling
notions can thus be exploited further to facilitate a suite of
technologies and applications that implement
a form of higher intelligence:
where present intelligent applications utilize a formal representation of
some form of a process or system to derive information about
its state and predict future behavior, higher intelligence
manipulates this model into a different representation,
possibly changing the abstraction, partitioning, and hierarchical
structure to render it suitable for required tasks,
i.e., it operates on the model rather than its
generated information.
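The manipulation described above can be sketched as a model transformation that changes a model's hierarchical structure, here flattening a hypothetical hierarchical state machine into an equivalent flat one (the model representation is illustrative only):

```python
# Sketch of an abstraction-changing model transformation: flattening a
# hierarchical state machine into an equivalent flat one. Only the state
# hierarchy is transformed here; transitions are omitted for brevity.

def flatten(hsm, prefix=""):
    """Recursively replace composite states by their qualified substates."""
    flat = {}
    for state, substates in hsm.items():
        qualified = prefix + state
        if substates:                # composite state: descend
            flat.update(flatten(substates, qualified + "."))
        else:                        # leaf state: keep
            flat[qualified] = {}
    return flat

# A hierarchical model: "active" contains two substates.
hsm = {"idle": {}, "active": {"heating": {}, "cooling": {}}}
print(sorted(flatten(hsm)))
# ['active.cooling', 'active.heating', 'idle']
```

The transformation operates on the model itself, not on any information generated from it, so the flat result remains a model to which further analyses and transformations can be applied.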
Pieter Mosterman ER
2001-06-19