- %\documentclass[10pt,a4,twocolumn]{article}
- \documentclass[10pt,letter,twocolumn]{article}
- %\input artSCSA4
- \input /home/hv/src/sty/artLetter
- \begin{document}
- \title{\textsf{The Cellular Automata Formalism\\
- and its Relationship to DEVS}}
- \author{Hans L.M. Vangheluwe$^{\dag,\ddag}$
- and Ghislain C. Vansteenkiste$^\ddag$\\ \\
- {\small
- \hfill
- \begin{minipage}{8cm}
- \begin{center}
- $^\dag$ School of Computer Science\\
- McGill University, Montr\'eal, Qu\'ebec, Canada\\
- e-mail: hv@cs.mcgill.ca
- \end{center}
- \end{minipage}
- \hfill
- \begin{minipage}{8cm}
- \begin{center}
- $^\ddag$ Department of Applied Mathematics,\\
- Biometrics, and Process Control (BIOMATH)\\
- Ghent University (RUG), Gent, Belgium
- \end{center}
- \end{minipage}
- \hfill
- }\\ \\
- Keywords: multi-formalism modelling, simulation,
- Cellular Automata, DEVS
- }
- \date{2000\thanks{{\underline{\textbf{Appeared as:}}}
- Hans~L. Vangheluwe and Ghislain~C. Vansteenkiste.
- The cellular automata formalism and its relationship to {DEVS}.
- In Rik Van~Landeghem, editor, {\em $14^{th}$ European Simulation
- Multi-conference (ESM)}, pages 800--810. Society for Computer
- Simulation International (SCS), May 2000. Gent, Belgium.}}
- \maketitle
- % \thispagestyle{empty}
- \section*{\centerline{Abstract}}
-
- %\parindent 0.cm
- %\parskip 0.1cm
- Cellular automata (CA) were originally conceived by Ulam and von
- Neumann in the 1940s to provide a formal framework for investigating
- the behaviour of complex, spatially distributed systems.
- Cellular Automata constitute a {\em dynamic, discrete space,
- discrete time} formalism where space is usually discretized into a
- grid of cells. Cellular automata are still used to study, from first
- principles, the behaviour of a plethora of systems.
- The Discrete EVent system Specification (DEVS) was first introduced by
- Zeigler in 1976 as a rigorous basis for discrete-event modelling.
- At the discrete-event level of abstraction, state variables are
- considered to change only at event-times. Furthermore, the number of
- events occurring in a bounded time-interval must be finite.
- The semantics of well known discrete-event formalisms such as
- Event Scheduling, Activity Scanning, and Process Interaction can
- be expressed in terms of DEVS, making it a common denominator for
- the representation of discrete-event models.
- Due to its roots in traditional sequential formalisms, DEVS offers
- little potential for parallel implementation. Furthermore,
- conflicts between simultaneous internal state transitions and
- external events are resolved by ignoring the internal transitions.
- In 1996, Chow introduced the parallel DEVS (P-DEVS) formalism which
- alleviates these drawbacks.
- Since the end of 1993, the European Commission's ESPRIT Basic
- Research Working Group 8467 (SiE) has identified key areas for
- future research in modelling and simulation. The most urgent need is
- to support multi-formalism modelling to correctly model complex
- systems with components described in diverse formalisms.
- To achieve this goal, formalisms need to be related.
- The Formalism Transformation Graph (FTG) shown in Figure~\ref{fig:ftl}
- \begin{figure*}
- \postscript{figs/ftl.bold.eps}{0.8}
- \caption{Formalism Transformations}
- \label{fig:ftl}
- \end{figure*}
- depicts behaviour-conserving transformations between formalisms.
- In this article, both the Cellular Automata and the DEVS and parallel
- DEVS formalisms are introduced. Then, a mapping between Cellular Automata
- and parallel DEVS is elaborated. This fills in the $CA \rightarrow DEVS$
- edge in the FTG. The mapping describes CA semantics in terms of parallel
- DEVS semantics. As such, it is a specification for automated transformation
- of CA models into parallel DEVS models, thus enabling efficient, parallel
- simulation as well as meaningful coupling with models in other
- formalisms.
- %\end{abstract}
- \section{The Cellular Automata Formalism} \label{sec:CA}
-
- Cellular automata (CA) were originally conceived by Ulam and von
- Neumann in the 1940s to provide a formal framework for investigating
- the behaviour of complex, spatially distributed systems
- \cite{vonNeumann:automata}.
- Cellular Automata constitute a {\em dynamic, discrete space,
- discrete time} formalism.
- Space in Cellular Automata is partitioned into discrete volume
- elements called {\em cells} and time progresses in discrete steps.
- Each cell can be in one of a finite number of states at any given time.
- The ``physics'' of this logical universe is {\em deterministic} and
- {\em local}.
- {\em Deterministic} means that once a local physics and an initial state of
- a Cellular Automaton have been chosen, its future evolution is
- {\em uniquely} determined. {\em Local} means that the state of a
- cell at time $t+1$ is determined only by its own state and the states of
- neighbouring cells at the previous time $t$.
- The {\em operational semantics} of a CA as prescribed in a
- {\em simulation procedure} and implemented in a CA solver dictates
- that values are updated {\em synchronously}: all new values are calculated
- simultaneously.
-
- The local physics is typically determined by an {\em explicit mapping}
- from all possible local states of a predefined neighbourhood template (\eg
- the cells bordering on a cell, including the cell itself), to the state
- of that cell after the next time-step.
- For example, for a 2-state ($\{0,1\}$), 1-D Cellular Automaton
- with a neighbourhood template that includes a cell and its
- immediate neighbours to the left and right, there will be
- $2^3$ possible neighbourhood states \{$000, \ldots, 111$\}.
- For each of these, we must prescribe whether
- a transition of the center cell state to a $1$ or to a $0$ will occur.
- For an 8-state nearest neighbour 2-D Cellular Automaton, there will be
- $8^5$ possible neighbourhood states, and a choice of 8 states to map to
- for each of those.
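-
- These counts can be checked with a short Python sketch (illustrative
- names only; the all-zero rule below is merely a placeholder for an
- explicit $\delta_l$ table):
- \begin{verbatim}
- from itertools import product
-
- # elementary 1-D CA: 2 states, neighbourhood of size 3
- states = (0, 1)
- configs = list(product(states, repeat=3))
- assert len(configs) == 2**3   # 8 local configurations
-
- # an explicit delta_l is one choice of new value per
- # configuration; the all-zero rule is just a placeholder
- rule = {c: 0 for c in configs}
-
- # 2-D, 8-state CA, 5-cell von Neumann neighbourhood (r=1):
- print(8**5)                   # 32768 configurations
- \end{verbatim}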
- The Cellular Automata formalism fits the
- general structure of deterministic, causal systems in classical
- systems theory \cite{Wymore:systems,Zeigler:theory_integr}:
-
- \[ CA \equiv \langle T, X, \Omega, S, \delta, Y, \lambda \rangle.\]
-
- The formalism is specified by elaboration of the elements of the
- $CA$ 7-tuple, as described below:
- \par
- The {\em discrete} time base $T$:
- \[
- T = \bb{N}\ \mbox{(or isomorphic with $\bb{N}$)}.
- \]
- The input set $X$:
- \[
- X = \{TIME\_TICK\}.
- \]
- The Cellular Automata formalism can
- easily be extended to non-trivial inputs (see further comments
- on extensions).
-
- The set $\Omega$ contains all admissible input segments $\omega$.
- An input segment $\omega$ may be restricted to a domain
- ($\subseteq T$) such as $]n,n+1]$:
- \begin{eqnarray*}
- \omega &: &T \rightarrow X,\\
- \omega_{]n,n+1]} &:& ]n,n+1] \rightarrow X.
- \end{eqnarray*}
- The state set $S$ is the product of all the {\em finite} state sets
- $V_i$ (also called {\em cell value sets}) of the individual cells.
- $C$ is the {\em cell index set}.
- \[
- S = \times_{i \in C} V_i.
- \]
- In the usual case of Cellular Automata realized on a $D$-dimensional
- grid, $C$ consists of $D$-tuples of indices from a coordinate set $I$:
- \[
- C = I^D.
- \]
- In the standard Cellular Automata formalism, the cell space is
- assumed {\em homogeneous}: all cell value sets are identical:
- \[
- \forall i \in C, V_i = V.
- \]
- The cell value function $v$ maps a cell index $i$ onto its value
- $v(i)$:
- \[
- v: C \rightarrow V.
- \]
- The total transition function $\delta$ is constructed
- from the transition functions $\delta_i$ for each of the cells.
- \[
- \begin{array}{lcccc}
- \delta & : & \Omega \times S & \rightarrow & S,\\
- & & (\omega_{]n,n+1]}, \times_{i \in C} v(i))
- & \rightarrow &
- \times_{i \in C} \delta_i(i).
- \end{array}
- \]
- The {\em uniformity} of Cellular Automata requires
- the $\delta_i$ to be based on a {\em single} local transition
- function $\delta_l$ for all cells $i$:
- \[
- \forall i \in C, \delta_i(i) = \delta_l(v(\sigma_{NT}(i))),
- \]
- where the various operators and quantities are explained below.
-
- A neighbourhood template $NT$, a vector of {\em finite}
- size $\xi$ containing {\em offsets} from an ``origin'' $0$,
- encodes the relative positions of neighbouring cells influencing
- the future state of a cell.
- Usually, the {\em nearest (adjacent)} neighbours (including the cell itself)
- are used.
- For one-dimensional Cellular Automata, a cell is connected to $r$ local
- neighbours (cells) on either side, where $r$ is a parameter referred to as
- the {\em radius} (thus, there are $2r+1$ cells in each neighbourhood).
- For two-dimensional Cellular Automata, two types of neighbourhoods
- are usually considered:
- \begin{itemize}
- \item The {\em von Neumann} neighbourhood of radius $r$
- \[ NT = \{(k,l) \in C |\ |k| + |l| \leq r\}. \]
- For $r=1$, this yields a 5-cell neighbourhood, consisting of a cell
- along with its four immediate non-diagonal neighbours.
- \item The {\em Moore} neighbourhood of radius $r$
- \[ NT = \{(k,l) \in C |\ |k| \leq r \ \textsf{and}\ |l| \leq r\}. \]
- For $r=1$, this yields a 9-cell neighbourhood, consisting of the cell
- along with its eight surrounding neighbours.
- \end{itemize}
-
- The {\em local} nature of Cellular Automata models
- lies in the fact that {\em only} near neighbours influence the
- future state of a cell, {\em not all} cells as the general form of
- $\delta$ would allow. From a modelling point of view, physical arguments
- in disciplines ranging from physics to biology and artificial life
- support this assumption \cite{Wolfram:CA}. From a simulation point of view,
- efficient solvers for the Cellular Automata formalism can be constructed
- which take advantage of locality.
- Note how the $NT[j]$ are offsets between $C$ elements which are
- again elements of $C$ (with an appropriate $C$ such as $\bb{Z}^D$).
- \[
- NT \in C^\xi.
- \]
- The function $\sigma_{NT}$ {\em shifts} the neighbourhood template
- $NT$ to be centered over $i$:
- \[
- \begin{array}{lcccl}
- \sigma_{NT} &:& C &\rightarrow& C^\xi,\\
- & & i &\rightarrow& \tau \\
- & & & &
- \mbox{where}\ \forall j \in \{1, \ldots, \xi\}:
- \tau[j] = i + NT[j].
- \end{array}
- \]
- For all possible combinations of size $\xi$ of cell values,
- the local transition function $\delta_l$ prescribes the transition
- to a new value:
- \[
- \begin{array}{lcccc}
- \delta_l &:& V^\xi & \rightarrow & V,\\
- & & [v_1, \ldots, v_\xi] & \rightarrow & v_{new}.
- \end{array}
- \]
- Thanks to the uniformity of Cellular Automata,
- the same $\delta_l$ is used for each element of the cell space.
- The number of possible combinations of cell values is
- $(\#V)^\xi$ and the number of distinct results of $\delta_l$ is $\#V$
- ($\#$ is the cardinality function).
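-
- As an illustrative sketch (assuming a 1-D cell space with periodic
- wrap-around and the hypothetical names below), the composition
- $\delta_i(i) = \delta_l(v(\sigma_{NT}(i)))$ can be written directly
- in Python:
- \begin{verbatim}
- L = 8                          # cells indexed 0 .. L-1
- NT = (-1, 0, 1)                # offsets: left, self, right
- v = [0, 1, 0, 1, 1, 0, 0, 1]   # value function as an array
-
- def shift(i):
-     # sigma_NT: centre the template over cell i
-     # (periodic wrap-around for illustration)
-     return tuple((i + o) % L for o in NT)
-
- def delta_l(values):
-     # illustrative local transition function
-     return 1 if sum(values) == 2 else 0
-
- def delta_i(i):
-     # delta_i(i) = delta_l(v(sigma_NT(i)))
-     return delta_l(tuple(v[j] for j in shift(i)))
-
- v_new = [delta_i(i) for i in range(L)]   # synchronous
- \end{verbatim}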
-
- In the above, $\delta$ was constructed for an elementary time advance
- $n \rightarrow n+1$. The composition (or semigroup) requirement
- for deterministic system models:
- \[\forall t_x \in [t_i, t_f]; s_i \in S,\]
- \[\delta(\omega_{[t_i, t_f]}, s_i) =
- \delta(\omega_{[t_x, t_f]},
- \delta(\omega_{[t_i, t_x]}, s_i))
- \]
- is satisfied by {\em construction} (\ie the definition, combined with
- iteration over $n$).
-
- It is possible to ``observe'' the Cellular Automaton by projecting
- the total state $S$ onto an output set $Y$ by means of an
- output function $\lambda$
- \[\lambda: S \rightarrow Y,\]
- where $Y$ commonly has a structure similar to that of $S$.
-
- We shall now discuss the {\em structure} of the cell space.
- Usually, a $D$-dimensional {\em grid}, centered around the origin,
- is used. Most common choices are $D \in \{1,2,3\}$.
- In one dimension, a linear array of cells is the only possible geometry
- for the grid. In two dimensions, there are three choices:
- \begin{itemize}
- \item \textbf{triangular grid}: has a small number of nearest neighbours
- but is hard to visualize on square grid oriented computers.
- \item \textbf{square grid}: is easy to visualize, but (computationally)
- {\em anisotropic} (\ie a wave propagates faster along the primary
- axes than along the diagonals).
- \item \textbf{hexagonal grid}: has the lowest anisotropy of all grids but
- computer visualization is hard to implement.
- \end{itemize}
- Arbitrary cell space structures are possible (and corresponding cell
- shapes when visualizing), though not practical.
-
- Although the above mathematical formalism is perfectly valid, it
- cannot be simulated {\em in practice}. For simulation to become possible, the
- cell space needs to be {\em finite}. In particular, the cell index set
- $C$ must be finite. With a finite grid length $L$ along each dimension,
- this leads to a cell space of $L^D$ cells.
-
- When a finite cell space is used, the application of the transition
- function at the edges poses a problem as values are needed outside the
- cell space. As shown in Figure~\ref{fig:boundary} for a 2-D cell space,
- \begin{figure}
- \postscript{figs/boundary.eps}{0.9}
- \caption{Boundary Conditions for Finite Cell Space}
- \label{fig:boundary}
- \end{figure}
- {\em boundary conditions} need to be specified in the form of
- cell values {\em outside} the cell space.
- Two common approaches --also used in the specification and
- solution of partial differential equations \cite{Farlow:PDE}-- are
- \begin{itemize}
- \item \textbf{explicit} boundary conditions. Extra cells
- outside the perimeter of the cell space hold boundary values
- (\ie $C$ and $v$ are extended).
- The number of extra border cells is determined by the
- size of (the perimeter of) the cell space as well as by the size
- (and shape) of
- the neighbourhood template.
- Boundary conditions may be time varying.
- \item \textbf{periodic} boundary conditions. The cell space is
- assumed to ``wrap around'': the cells at opposite ends
- of the grid act as each other's neighbours. In 1-D, this results
- in a (2-D) circle, in 2-D, in a (3-D) torus. By construction, the
- boundary conditions are time-varying.
- \end{itemize}
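-
- A minimal Python sketch, assuming a 1-D cell space indexed
- $0 \ldots L-1$ and purely illustrative values, contrasts the two
- boundary-condition approaches above when a neighbour index falls
- outside the cell space:
- \begin{verbatim}
- L = 6
- v = [1, 0, 0, 1, 1, 0]     # interior cells, indices 0 .. L-1
-
- def value_explicit(i, boundary=0):
-     # explicit boundary conditions: indices outside the
-     # cell space read a (possibly time-varying) boundary
-     # value held in extra border cells
-     return v[i] if 0 <= i < L else boundary
-
- def value_periodic(i):
-     # periodic boundary conditions: the index wraps, so
-     # cells at opposite ends act as each other's neighbours
-     return v[i % L]
- \end{verbatim}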
- \subsection{Implementation of a CA Solver}
- \subsubsection{The Solver Structure}
- In Figure~\ref{fig:simproc}
- \begin{figure}[h]
- \hrule
- \vspace{2mm}
- {\sf
- \begin{algorithmic}%[1]
- \STATE $\forall i \in C$: initialize $v(i)$
- \COMMENT {Initialize Cell Space}
- \IF {explicit boundary conditions}
- \STATE $\forall i \in$ boundary($C$): initialize $v(i)$
- \COMMENT {Boundary extension of $v()$}
- \ENDIF
- \IF {periodic boundary conditions}
- \STATE $\forall i \in C \cup$ boundary($C$):
- $v(i) \leftarrow v(i\ \textsf{mod}\ L)$
- \COMMENT {Modulo extension of $v()$; assume $0 \ldots L-1$ indexing}
- \ENDIF
- \FOR {$n := n_s$ to $n_f$}
- %\COMMENT {from start ``time'' $n_s$ to end ``time'' $n_f$}
- \STATE $\forall i \in C$: $v_{new}(i)
- \leftarrow
- \delta_l(v(\sigma_{NT}(i)))$
- \COMMENT {One-step state transition computation}
- \STATE $v \leftarrow v_{new}$
- \COMMENT {Switch value buffers}
- \STATE $n \leftarrow n+1$
- \COMMENT {Time Advancement}
- \ENDFOR
- \end{algorithmic}
- }
- \vspace{2mm}
- \hrule
- \caption{CA Simulation Procedure}
- \label{fig:simproc}
- \end{figure}
- the backbone algorithm of any Cellular Automata solver is given.
- As the definition requires synchronous calculation,
- whereby new values depend only on old values (and not on
- new values) of neighbouring cells, a second value function
- $v_{new}$ is needed so that the old values remain available
- until all new values have been computed.
- Note how a value function is usually efficiently implemented as
- a lookup in a value {\em array}.
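-
- A minimal Python sketch of this backbone (assuming a 1-D cell space
- with periodic boundary conditions and illustrative parameter names)
- shows the double buffering explicitly:
- \begin{verbatim}
- def simulate(v, delta_l, NT, n_steps):
-     # minimal 1-D CA solver following the procedure above
-     # (periodic boundary conditions assumed)
-     L = len(v)
-     for n in range(n_steps):
-         # one-step synchronous transition:
-         # v_new depends on old values only
-         v_new = [delta_l(tuple(v[(i + o) % L] for o in NT))
-                  for i in range(L)]
-         v = v_new              # switch value buffers
-     return v
- \end{verbatim}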
-
- % \subsubsection{Improving Solver Performance}
- %
- % % Contributions by:
- % % Rudy Rucker <rucker@sjsumcs.SJSU.EDU>
- % % Richard Ottolini <stgprao@st.unocal.COM>
- % % J Dana Eckart <dana@rucs.faculty.cs.runet.edu>
- %
- % The performance of a Cellular Automaton solver is obviously
- % related to an appropriate choice of data structures and algorithms.
- % There are four main techniques \cite{Alife:FAQ} for
- % improving solver performance:
- % \begin{enumerate}
- %
- % \item Lookup table:
- %
- % generally, a cell takes on a value $v_{new}$ which is computed
- % on the basis of information in the cell's neighbourhood.
- % One may attempt to pack the neighbourhood value information bitwise
- % into an integer $neigh$ which can subsequently be used as an {\em index}
- % into a {\em lookup table}. The {\em lookup} table {\em encodes} the
- % local transition function $\delta_l$:
- % \[
- % v_{new}(i) = lookup[neigh].
- % \]
- % The {\em lookup} values are pre-computed from a $\delta_l$
- % specification before simulating the Cellular Automaton.
- % The {\em lookup} vector may also serve as an efficient means
- % of model storage.
- %
- % \item Neighbourhood shifting:
- %
- % in stepping through the cells, one repeatedly computes a cell's
- % $neigh$, then computes the $neigh$ of the next cell, and so on.
- % Because the neighbourhoods overlap, a lot of the
- % information in the next cell's $neigh$ is the same as in the old
- % cell's. With an appropriate representation, it is possible
- % to {\em left shift} out the old info and {\em OR} in the new info.
- %
- % \item Pointer swap:
- %
- % to run a CA, one needs two buffers, one for the current cell space
- % ($v$ at $t$), and one for the updated cell space ($v_{new}$
- % at $t+1$). After the update, the updated cell space should
- % {\em not} be {\em copied} into the current one
- % (though a na\"{\i}ve implementation of line 10 in
- % Figure~\ref{fig:simproc} would do so).
- % Assuming the value functions $v$ and $v_{new}$ are encoded
- % as arrays, swapping pointers is ${\cal O}(L^D)$ faster.
- %
- % \item Assembly language ``inner loop'':
- %
- % when processing a 2-D Cellular Automaton of \textsf{VGA}
- % size, the cell space has approximately $300,000$ cells.
- % This implies $neigh$ will be assembled and {\em lookup}
- % applied about $300,000$ times per time-step (one screen).
- % This means the {\em inner loop} must be as efficient as possible.
- % In particular, coding this in assembly language, reducing the
- % total number of clock cycles of the instructions in the inner
- % loop, can lead to a significant performance gain.
- %
- % \item Sparse \vs dense configurations:
- %
- % if the rule set is known to lead to {\em sparse} configurations,
- % as in the case of the Game of Life with a small initial pattern,
- % one can use {\em sparse matrix techniques}. That is, one can just compute
- % in the vicinity of occupied cells.
- % Generally, these techniques are not compiles as efficiently as a full
- % matrix method, because there is more indirect addressing and
- % branching. However, one can include both a sparse and full matrix
- % method in the same program, and switch when the {\em cross-over
- % density} is reached.
- %
- % \item Periodic boundary conditions:
- %
- % there are two basic methods for handling periodic boundary
- % conditions efficiently:
- % \begin{enumerate}
- % \item Coding for fast modulo arithmetic.
- %
- % The brute force method of doing modulo arithmetic on index variable
- % \verb|i| for a range of $0 \dots R-1$ in \verb|C|\ is
- % \begin{verbatim}
- % (i + offset) % R
- % \end{verbatim}
- % \vspace{-4mm}
- % On some architectures (\eg some Sun SPARC stations)
- % it is actually faster to do
- % \begin{verbatim}
- % register int tmp = i + offset;
- % (tmp >= R) ? tmp - R : tmp
- % \end{verbatim}
- % \vspace{-4mm}
- % if \verb|offset| is positive and similarly if it is negative. \\
- % If \verb|R| is a power of 2, better performance can be obtained by
- % means of
- % \begin{verbatim}
- % (i + offset) & R
- % \end{verbatim}
- % \vspace{-4mm}
- % when offset is positive and
- % \begin{verbatim}
- % (i + offset + R) & R
- % \end{verbatim}
- % \vspace{-4mm}
- % when offset is negative.
- % \item Using a larger array and copying the boundary cells
- % between iterations
- % \end{enumerate}
- % \end{enumerate}
- \subsection{Examples}
- The \href{http://www.santafe.edu/}{Santa Fe Institute} \cite{CAweb}
- hosts a plethora of Cellular Automata examples.
- The site is mainly devoted to the study of Artificial Life,
- one of the prominent uses of Cellular Automata.
- Artificial Life research tries to explain and reproduce, {\em ab initio},
- many physical and biological phenomena.\\
- \subsubsection{Simple 2-state, 1-D Cellular Automaton}
- Figure~\ref{fig:simplecell} demonstrates the simulation procedure
- \begin{figure}
- \postscript{figs/simplecell.eps}{1.0}
- \caption{2-state, 1-D Cellular Automaton of length 4}
- \label{fig:simplecell}
- \end{figure}
- for a simple 2-state ($\{0,1\}$), 1-D Cellular Automaton of length 4
- with periodic Boundary Conditions and initial condition $1011$.
- The local transition function (a 1-D version
- of the Game of Life) is \\
- \begin{tabular}{lll}
- $\delta_l$\ :\ & $000 \rightarrow 0$ & $100 \rightarrow 0$ \\
- & $001 \rightarrow 0$ & $101 \rightarrow 1$ \\
- & $010 \rightarrow 0$ & $110 \rightarrow 1$ \\
- & $011 \rightarrow 1$ & $111 \rightarrow 0$ \\
- \end{tabular}\\
- For a 1-D Cellular Automaton, ``animation'' of the cell space can
- be visualized by colour coding the cell values
- (here: $0$ by white and $1$ by black) and by mapping $t$
- onto the vertical axis which leads to the 2-D image in
- Figure~\ref{fig:simplecell}.\\
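-
- The first steps of this example can be reproduced with a short Python
- sketch; the dictionary encodes the $\delta_l$ table above and the list
- \verb|rows| collects one row of cell values per time step (the
- visualization itself is omitted):
- \begin{verbatim}
- # delta_l from the table above, as a dictionary
- delta_l = {
-     (0,0,0): 0, (0,0,1): 0, (0,1,0): 0, (0,1,1): 1,
-     (1,0,0): 0, (1,0,1): 1, (1,1,0): 1, (1,1,1): 0,
- }
- v = [1, 0, 1, 1]           # initial condition 1011
- L = len(v)
- NT = (-1, 0, 1)            # left neighbour, self, right
-
- rows = [v]                 # one row per time step
- for n in range(3):
-     v = [delta_l[tuple(v[(i + o) % L] for o in NT)]
-          for i in range(L)]    # periodic boundaries
-     rows.append(v)
- # first step: 1011 -> 1110
- \end{verbatim}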
- \subsubsection{The Game of Life}
-
- Developed by Cambridge mathematician John Conway and
- popularized by Martin Gardner in his Mathematical Games
- column in Scientific American in 1970 \cite{Gardner:Life}, the
- Game of Life is one of the simplest examples of a Cellular Automaton.
- Each cell is either alive ($1$) or dead ($0$). To determine its status
- for the next time step, each cell counts the number of neighbouring cells
- which are alive. If the cell is alive and has 2 or 3 alive
- neighbours, then the cell remains alive during the next time step.
- With fewer alive neighbours, a living cell dies of loneliness;
- with more, it dies of overcrowding.
- A dead cell with exactly 3 alive neighbours becomes alive.
- Many interesting patterns and behaviours have been investigated
- over the years.
- An example of a high-level {\em cellang}
- \cite{Eckart:cellular} specification of this Cellular Automata model
- (\ie one in which $\delta_l$ is specified implicitly)
- is given in Figure~\ref{fig:life}.
- \begin{figure}
- \hrule
- \vspace{2mm}
- \begin{verbatim}
- # 2 dimensional game of life
-
- 2 dimensions of 0..1
-
- sum := [ 0, 1] + [ 1, 1] + [ 1, 0] +
- [-1, 1] + [-1, 0] + [-1, -1] +
- [ 0, -1] + [ 1, -1]
-
- cell := 1 when (sum = 2 & cell = 1) | sum = 3
- := 0 otherwise
- \end{verbatim}
- \vspace{-0.7cm}
- \hrule
- \caption{``cellang'' specification of Conway's game of Life}
- \label{fig:life}
- \end{figure}
- In combination with boundary conditions and an initial condition,
- this specification allows for model solving.
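-
- For comparison, a Python sketch equivalent to this specification
- (assuming, for simplicity, periodic boundary conditions and a
- list-of-lists grid representation) is:
- \begin{verbatim}
- # offsets of the 8 neighbours, as in the cellang sum above
- MOORE = [( 0, 1), ( 1, 1), ( 1, 0), (-1, 1),
-          (-1, 0), (-1,-1), ( 0,-1), ( 1,-1)]
-
- def life_step(grid):
-     # one synchronous step, periodic boundaries assumed
-     R, C = len(grid), len(grid[0])
-     new = [[0] * C for _ in range(R)]
-     for r in range(R):
-         for c in range(C):
-             s = sum(grid[(r + dr) % R][(c + dc) % C]
-                     for dr, dc in MOORE)
-             alive = (s == 2 and grid[r][c] == 1) or s == 3
-             new[r][c] = 1 if alive else 0
-     return new
- \end{verbatim}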
- \subsection{Formalism Extensions}
-
- The Cellular Automata formalism can easily be extended
- in different ways:
- \begin{enumerate}
- \item Addition of {\em inputs}. The formalism as presented above
- is {\em autonomous}: there are no (non-trivial)
- inputs into the system.
- An intuitively appealing way of adding inputs is to associate
- an input with each cell. The input set $X$ will thus have
- a structure similar to that of the state set $S$.
- \item The requirement of having the same cell value set for each
- cell can be relaxed to obtain {\em heterogeneous} Cellular
- Automata whereby not necessarily $\forall i \in C: V_i = V$.
- The homogeneous case can always be emulated by constructing
- $V$ as the union of all individual cell value sets:
- \[
- V = \bigcup_{i \in C} V_i.
- \]
- \item The requirement of having the same local transition function
- $\delta_l$ for each cell can be relaxed to obtain {\em non-uniform}
- Cellular Automata.
- % Obviously, in that case, it is no longer possible
- % to use the performance-enhancing techniques described above.
- \item As is demonstrated in Figure~\ref{fig:life}, a modelling
- language may allow for high-level representations.
- Agents %\cite{CAagents}
- are a typical example of such a high-level
- construct. Here, $\delta_l$ is no longer specified explicitly.
- \end{enumerate}
- The ``grid of cells'' discretized representation of space can also
- be used for continuous models.
- In particular, the local dynamics of cells can be described by
- System Dynamics models rather than finite state automata.
- Simulation will be carried out by transforming
- the model to a flat DAE model and subsequently (numerically)
- solving that continuous model, given initial conditions.
- % implementation is however synchronous
- \section{The DEVS Formalism} \label{sec:DEVS}
- The DEVS formalism was conceived by Zeigler
- \cite{Zeigler:DEVS,zeigler:theory} to provide a rigorous common basis for
- discrete-event modelling and simulation.
- For the class of formalisms denoted as {\em discrete-event}
- \cite{Nance:TimeState}, system models are described at an abstraction
- level where the time base is continuous ($\mathbb{R}$), but during a bounded
- time-span, only a {\em finite number} of relevant {\em events} occur.
- These events can cause the state of the system to change.
- In between events, the state of the system does {\em not} change.
- This is unlike {\em continuous} models in which the state of the
- system may change continuously over time.
- Just as Cellular Automata, the DEVS formalism fits the
- general structure of deterministic, causal systems in classical
- systems theory mentioned in section~\ref{sec:CA}.
- DEVS allows for the description of system behaviour at two levels.
- At the lowest level, an {\em atomic DEVS} describes the autonomous
- behaviour of a discrete-event system as a sequence of deterministic
- transitions between sequential states as well as how it reacts to
- external input (events) and how it generates output (events).
- At the higher level, a {\em coupled DEVS} describes a system as
- a {\em network} of coupled components. The components can be atomic
- DEVS models or coupled DEVS in their own right. The connections denote
- how components influence each other. In particular, output events
- of one component can become, via a network connection, input events
- of another component. It is shown in \cite{Zeigler:DEVS} how the
- DEVS formalism is {\em closed under coupling}: for each coupled DEVS,
- a {\em resultant} atomic DEVS can be constructed. As such, any DEVS
- model, be it atomic or coupled, can be replaced by an equivalent
- atomic DEVS. The construction procedure of a resultant atomic DEVS is
- also the basis for the implementation of an {\em abstract simulator}
- or solver capable of simulating any DEVS model.
- As a coupled DEVS may have coupled DEVS components,
- {\em hierarchical} modelling is supported.
- In the following, the different aspects of the DEVS formalism are
- explained in more detail.
- \subsection{The atomic DEVS Formalism}
- The atomic DEVS formalism is a structure describing the different
- aspects of the discrete-event behaviour of a system:
- \[
- atomic\ DEVS \equiv
- \langle
- S, ta, \delta_{int}, X, \delta_{ext}, Y, \lambda
- \rangle.
- \]
- The {\em time base} $T$ is continuous and is not mentioned
- explicitly
- \[
- T = \mathbb{R}.
- \]
- The {\em state set} $S$ is the set of admissible {\em sequential
- states}: the DEVS dynamics consists of an ordered sequence
- of states from $S$.
- Typically, $S$ will be a {\em structured} set (a product set)
- \[
- S = \times_{i=1}^n S_i.
- \]
- This formalizes multiple ($n$) {\em concurrent} parts of a system.
- It is noted how a structured state set is often synthesized from
- the state sets of concurrent components in a coupled DEVS model.
-
- The time the system {\em remains} in a sequential state
- before making a transition to the next sequential state
- is modelled by the {\em time advance function}
- \[
- ta: S \rightarrow \mathbb{R}^{+}_{\mbox{$0,+\infty$}}.
- \]
- As time in the real world always advances, the image of $ta$ must be
- non-negative numbers. $ta = 0$ allows for the representation of
- {\em instantaneous} transitions: no time elapses before transition
- to a new state. Obviously, this is an abstraction of reality which
- may lead to simulation {\em artifacts} such as infinite instantaneous
- loops which do not correspond to real physical behaviour.
- If the system is to stay in an end-state $s$ {\em forever},
- this is modelled by means of $ta(s) = +\infty$.
- The internal transition function
- \[
- \delta_{int}: S \rightarrow S
- \]
- models the transition from one state to the next sequential state.
- $\delta_{int}$ describes the behaviour of a Finite State Automaton;
- $ta$ adds the progression of time.
- It is possible to {\em observe} the system output. The output set $Y$
- denotes the set of admissible {\em outputs}.
- Typically, $Y$ will be a {\em structured} set (a product set)
- \[
- Y = \times_{i=1}^l Y_i.
- \]
- This formalizes multiple ($l$) output ports. Each port is identified
- by its unique index $i$.
- In a user-oriented modelling language, the indices would be derived
- from unique port {\em names}.
- The output function
- \[
- \lambda: S \rightarrow Y \cup \{\phi\}
- \]
- maps the internal state onto the output set. Output events are {\em
- only} generated by a DEVS model at the time of an {\em internal}
- transition. At that time, the state {\em before} the transition
- is used as input to $\lambda$. At all other times, the non-event
- $\phi$ is output.
- To describe the {\em total state} of the system at each point in
- time, the sequential state $s \in S$ is not sufficient. The
- {\em elapsed} time $e$ since the system made a transition to the
- current state $s$ needs also to be taken into account to construct the
- total state set
- \[
- Q = \{(s,e)| s \in S, 0 \leq e \leq ta(s)\}
- \]
- The elapsed time $e$ takes on values ranging from $0$ (transition
- just made) to $ta(s)$ (about to make transition to the next
- sequential state). Often, the {\em time left} $\sigma$ in a state
- is used:
- \[
- \sigma = ta(s) - e
- \]
- Up to now, only an {\em autonomous} system was described: the system
- receives no external inputs. To describe external inputs, the
- {\em input set} $X$ denoting all admissible input values is defined.
- Typically, $X$ will be a {\em structured} set (a product set)
- \[
- X = \times_{i=1}^m X_i
- \]
- This formalizes multiple ($m$) input ports. Each port is identified
- by its unique index $i$. As with the output set, port indices may
- denote {\em names}.
- The set $\Omega$ contains all admissible input segments $\omega$
- \[
- \omega: T \rightarrow X \cup \{\phi\}
- \]
- In discrete-event system models, an input segment generates an input
- {\em event} different from the {\em non-event} $\phi$ only at a finite
- number of instants in a bounded time-interval.
- These {\em external events}, inputs $x$ from $X$, cause the system to
- interrupt its autonomous behaviour and react in a way prescribed
- by the external transition function
- \[
- \delta_{ext}: Q \times X \rightarrow S
- \]
- The reaction of the system to an external event depends on the
- sequential state the system is in, the particular input {\em and}
- the elapsed time. Thus, $\delta_{ext}$ allows for the description of
- a large class of behaviours typically found in discrete-event
- models (including synchronization, preemption, suspension
- and re-activation).
- When an input event $x$ to an atomic model is not listed in the
- $\delta_{ext}$ specification, the event is {\em ignored}.
-
- In Figure~\ref{fig:DEVStraj},
- \begin{figure}
- \postscript{figs/devs.external.eps}{1}
- \caption{State Trajectory of a DEVS-specified Model}
- \label{fig:DEVStraj}
- \end{figure}
- an example state trajectory is given for an atomic DEVS model.
- In the figure, the system made an internal transition to
- state $s2$. In the absence of external input events, the
- system stays in state $s2$ for a duration $ta(s2)$. During
- this period, the elapsed time $e$ increases from $0$ to
- $ta(s2)$, with the total state $= (s2,e)$. When the elapsed time
- reaches $ta(s2)$, first an output is generated: $y2=\lambda(s2)$,
- then the system transitions instantaneously to the new state
- $s4=\delta_{int}(s2)$. In autonomous mode, the system would stay
- in state $s4$ for $ta(s4)$ and then transition (after generating
- output) to $s1=\delta_{int}(s4)$. Before $e$ reaches $ta(s4)$
- however, an external input event $x$ arrives. At that time, the
- system forgets about the scheduled internal transition and
- transitions to $s3=\delta_{ext}((s4,e),x)$. Note how an external
- transition does not give rise to an output. Once in state $s3$,
- the system continues in autonomous mode.
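-
- As an illustrative sketch, the elements of an atomic DEVS can be
- gathered in a small Python class; the pulse-generator model below and
- all of its names are hypothetical and not part of the formalism:
- \begin{verbatim}
- import math
-
- class Pulser:
-     # hypothetical atomic DEVS: emits a "pulse" every
-     # `period` time units until a "stop" input arrives
-
-     def __init__(self, period):
-         self.period = period
-
-     def ta(self, s):              # time advance function
-         return self.period if s == "active" else math.inf
-
-     def delta_int(self, s):       # internal transition
-         return "active"           # keep pulsing
-
-     def delta_ext(self, q, x):    # external, q = (s, e)
-         s, e = q
-         return "idle" if x == "stop" else s
-
-     def output(self, s):          # lambda, applied to the
-         return "pulse"            # state *before* delta_int
- \end{verbatim}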
- \subsection{The coupled DEVS Formalism}
- The coupled DEVS formalism describes a discrete-event
- system in terms of a network of coupled components.
- \[
- coupled\ DEVS \equiv
- \langle
- X_{self}, Y_{self}, D, \{M_i\}, \{I_i\}, \{Z_{i,j}\}, select
- \rangle
- \]
- The component $self$ denotes the coupled model itself.
- $X_{self}$ is the (possibly structured) set of allowed external
- inputs to the coupled model.
- $Y_{self}$ is the (possibly structured) set of allowed (external)
- outputs of the coupled model. $D$ is a set of unique component
- references (names). The coupled model itself is referred to by
- means of $self$, a unique reference not in $D$.
- The set of {\em components} is
- \[
- \{M_i | i \in D\}.
- \]
- Each of the components must be an atomic DEVS
- \[
- M_i = \langle
- S_i, ta_i, \delta_{int,i},
- X_i, \delta_{ext,i}, Y_i, \lambda_i
- \rangle, \forall i \in D.
- \]
- The set of {\em influencees} of a component,
- the components influenced by $i \in D \cup \{self\}$,
- is $I_i$. The set of all influencees describes the coupling
- network structure
- \[
- \{I_i | i \in D \cup \{self\}\}.
- \]
- For modularity reasons, a component (including $self$) may not
- influence components outside its scope --the coupled model--;
- it may only influence other components of the coupled model,
- or the coupled model $self$:
- \[
- \forall i \in D \cup \{self\}: I_i \subseteq D \cup \{self\}.
- \]
- This is further restricted by the requirement that none of the
- components (including $self$) may influence themselves directly as this
- could cause an instantaneous dependency cycle (in case of a $0$
- time advance inside such a component) akin to an algebraic loop
- in continuous models:
- \[
- \forall i \in D \cup \{self\}: i \notin I_i.
- \]
- Note how one can always encode a self-loop ($i \in I_i$)
- in the internal transition function.
- To translate an output event of one component (such as a departure of
- a customer) to a corresponding input event (such as the arrival of
- a customer) in influencees of that component,
- {\em output-to-input translation functions} $Z_{i,j}$ are defined:
- \[
- \{Z_{i,j} | i \in D \cup \{self\}, j \in I_i\},
- \]
- \[
- \begin{array}{rcll}
- Z_{self,j} &: &X_{self} \rightarrow X_j &, \forall j \in D,\\
- Z_{i,self} &: &Y_i \rightarrow Y_{self} &, \forall i \in D,\\
- Z_{i,j} &: &Y_i \rightarrow X_j &, \forall i,j \in D.
- \end{array}
- \]
- Together, $I_i$ and $Z_{i,j}$ completely specify the coupling
- (structure and behaviour).
- As a result of coupling of concurrent components, multiple state
- transitions may occur at the same simulation time.
- This is an artifact of the discrete-event abstraction and may lead to
- behaviour not related to real-life phenomena.
- A logic-based foundation to study the {\em semantics} of these
- artifacts was introduced by Radiya and Sargent
- \cite{Radiya:logicFound}.
- In sequential simulation systems, such transition {\em collisions}
- are resolved by means of some form of {\em selection} of which of
- the components' transitions should be handled first. This corresponds
- to the introduction of priorities in some simulation languages.
- The coupled DEVS formalism explicitly represents a $select$ function
- for {\em tie-breaking} between simultaneous events:
- \[
- select: 2^D \rightarrow D
- \]
- $select$ chooses a unique component from any non-empty
- subset $E$ of $D$:
- \[
- select(E) \in E.
- \]
- The subset $E$ corresponds to the set of all components having
- a state transition simultaneously.
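-
- As an illustrative sketch, the coupled DEVS structure can be
- represented as plain data; the example below reuses the hypothetical
- {\sf Pulser} atomic model from the previous sketch and chooses an
- arbitrary alphabetic tie-breaking $select$:
- \begin{verbatim}
- # a coupled DEVS as plain data (illustrative names only)
- coupled = {
-     "D": {"gen", "proc"},            # component references
-     "M": {"gen":  Pulser(period=2.0),
-           "proc": Pulser(period=5.0)},
-     "I": {"gen":  {"proc"},          # influencees
-           "proc": {"self"},
-           "self": {"gen"}},
-     "Z": {("gen", "proc"):  lambda y: "stop",
-           ("proc", "self"): lambda y: y,
-           ("self", "gen"):  lambda x: x},
-     "select": lambda imm: sorted(imm)[0],   # tie-breaking
- }
- \end{verbatim}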
-
- % \subsection{Closure of DEVS under coupling} \label{sec:DEVSclosure}
- %
- % As mentioned at the start of section~\ref{sec:DEVS},
- % it is possible to construct a {\em resultant}
- % atomic DEVS model for each coupled DEVS. This
- % {\em closure under coupling} of atomic DEVS models makes {\em any}
- % coupled DEVS equivalent to an atomic DEVS. By induction,
- % any {\em hierarchically} coupled DEVS can thus be flattened to an
- % atomic DEVS. As a result, the requirement that each of the components
- % of a coupled DEVS be an atomic DEVS can be relaxed to be atomic {\em
- % or} coupled as the latter can always be replaced by an equivalent
- % atomic DEVS.
- %
- % The core of the closure procedure is the selection of the most
- % {\em imminent} (\ie soonest to occur) event from all the components'
- % scheduled events \cite{Zeigler:DEVS}. In case of simultaneous
- % events, the $select$ function is used. In the sequel,
- % the resultant construction is described.
- %
- % From the coupled DEVS
- % \[
- % \langle
- % X_{self}, Y_{self}, D, \{M_i\}, \{I_i\}, \{Z_{i,j}\}, select
- % \rangle,
- % \]
- % with all components $M_i$ atomic DEVS models
- % \[
- % M_i = \langle
- % S_i, ta_i, \delta_{int,i}, X_i, \delta_{ext,i},
- % Y_i, \lambda_i
- % \rangle, \forall i \in D
- % \]
- % the atomic DEVS
- % \[
- % \langle
- % S, ta, \delta_{int}, X, \delta_{ext}, Y, \lambda
- % \rangle
- % \]
- % is constructed.
- %
- % The resultant set of sequential states
- % is the product of the total state sets of all the components
- % \[
- % S = \times_{i \in D} Q_i,
- % \]
- % where
- % \[
- % Q_i = \{(s_i, e_i)| s \in S_i, 0 \leq e_i \leq ta_i(s_i)\},
- % \forall i \in D.
- % \]
- % The time advance function $ta$
- % \[
- % ta: S \rightarrow \mathbb{R}^{+}_{\mbox{$0,+\infty$}}
- % \]
- % is constructed by selecting the {\em most imminent} event time, of all
- % the components. This means, finding the smallest time {\em remaining}
- % until internal transition, of all the components
- % \[
- % ta(s) = min \{\sigma_i = ta_i(s_i) - e_i|i \in D\}.
- % \]
- % A number of {\em imminent} components may be scheduled for a
- % {\em simultaneous} internal transition. These components are
- % collected in a set
- % \[
- % IMM(s) = \{i \in D| \sigma_i=ta(s)\}.
- % \]
- % From $IMM$, a set of elements of $D$, {\em one} component $i^*$
- % is chosen by means of the $select$ {\em tie-breaking} function of
- % the coupled model
- % \[
- % \begin{array}{lcccc}
- % select &:& 2^D & \rightarrow & D\\
- % & & IMM(s) & \rightarrow & i^*
- % \end{array}
- % \]
- % %
- % % in parallel DEVS: no select, rather process a bag of events
- % %
- % Output of the selected component is generated {\em before}
- % it makes its internal transition. Note also how, as in a
- % Moore machine %\cite{FSA}
- % , input does not directly influence output.
- % In DEVS models, {\em only} an internal transition produces output.
- % An input can only influence/generate output via an internal
- % transition similar to the presence of
- % {\em memory} in the form of integrating elements in continuous models.
- % Allowing an external transition to produce output could lead to
- % infinite instantaneous loops. This is equivalent to algebraic loops
- % in continuous systems.
- % The output of the component is translated into coupled model output
- % by means of the coupling information
- % \[
- % \lambda(s) = Z_{i^*,self}(\lambda_{i^*}(s_{i^*})),
- % \textrm{if}\ self \in I_{i^*}.
- % \]
- % If the output of $i^*$ is not connected to the output of the coupled
- % model, the non-event $\phi$ can be generated as output of the coupled
- % model. As $\phi$ literally stands for no event, the output can also
- % be ignored without changing the meaning (but increasing performance
- % of simulator implementations).
- %
- % The internal transition function transforms the different
- % parts of the total state as follows:
- % \[
- % \delta_{int}(s) = ( \ldots, (s'_j,e'_j), \ldots),\ \textrm{where}
- % \]
- % \[
- % \begin{array}{lcll}
- % (s'_j,e'_j)
- % & = & (\delta_{int,j}(s_j),0)
- % & , \textrm{for}\ j=i^*,\\
- % & = & (\delta_{ext,j}(s_j,e_j+ta(s),Z_{i^*,j}(\lambda_{i^*}(s_{i^*}))),0)
- % & , \textrm{for}\ j \in I_{i^*},\\
- % & = & (s_j, e_j+ta(s))
- % & , \textrm{otherwise}.
- % \end{array}
- %% \begin{array}{lcl}
- %% (s'_j,e'_j)
- %% &=&(\delta_{int,j}(s_j),0)
- %% ,\textrm{for}\ j=i^*,\\
- %% &=&(\delta_{ext,j}(s_j,e_j+ta(s),Z_{i^*,j}(\lambda_{i^*}(s_{i^*}))),0)
- %% ,\textrm{for}\ j \in I_{i^*},\\
- %% &=&(s_j, e_j+ta(s))
- %% ,\textrm{otherwise}.
- %% \end{array}
- % \]
- % The selected imminent component $i^*$ makes an internal transition
- % to sequential state $\delta_{int,i^*}(s_{i^*})$.
- % Its elapsed time is reset to $0$.
- % All the influencees of $i^*$ change their state due to an external
- % transition prompted by an input which is the output-to-input
- % translated output of $i^*$, with an elapsed time adjusted
- % for the time advance $ta(s)$. The influencees' elapsed time is reset to $0$.
- % Note how $i^*$ is not allowed to be an influencee of $i^*$ in DEVS.
- % The state of all other components is not affected and their elapsed time
- % is merely adjusted for the time advance $ta(s)$.
- %
- % The external transition function transforms the different
- % parts of the total state as follows:
- % \[
- % \delta_{ext}(s,e,x) = ( \ldots, (s'_i,e'_i), \ldots),\ \textrm{where}
- % \]
- % \[
- % \begin{array}{lcll}
- % (s'_i,e'_i)
- % & = & (\delta_{ext,i}(s_i,e_j+e,Z_{self,i}(x)),0)
- % & , \textrm{for}\ i \in I_{self},\\
- % & = & (s_i, e_j+e)
- % & , \textrm{otherwise}.
- % \end{array}
- % \]
- % An incoming external event is routed, with an adjustment for elapsed time,
- % to each of the components connected to the coupled model input (after
- % the appropriate input-to-input translation).
- % For all those components, the elapsed time is reset to $0$.
- % All other components are not affected and only the elapsed time
- % is adjusted.
- %
- % Some limitations of DEVS are that
- % \begin{itemize}
- % \item a conflict due to simultaneous internal and external events
- % is resolved by ignoring the internal event. It should be
- % possible to explicitly specify behaviour in case of conflicts;
- % \item there is limited potential for parallel implementation;
- % \item the $select$ function is an artificial legacy of the semantics
- % of traditional sequential simulators based on an event list;
- % \item it is not possible to describe variable structure.
- % \end{itemize}
- % Some of these are compensated for in parallel DEVS (see further).
- \subsection{Implementation of a DEVS Solver}
-
- The algorithm in Figure~\ref{fig:DEVSsolver} is based on
- the closure under coupling construction
- % of section~\ref{sec:DEVSclosure}
- and can be used as a specification of a --possibly parallel--
- implementation of a DEVS solver or ``abstract simulator''
- \cite{Zeigler:DEVS, Kim:distrDEVS}.
- \begin{figure*}
- {\sf
- \begin{center}
- \begin{tabular}{lll}\toprule
- \textbf{message} $m$&
- \multicolumn{1}{c}{\textbf{simulator}} &
- \multicolumn{1}{c}{\textbf{coordinator}}\\
- \midrule
- \addlinespace
- $(*, from, t)$
- & \multicolumn{2}{c}{simulator correct only if $t = t_N$}\\
- \addlinespace
- & $y \leftarrow \lambda(s)$
- & \textbf{send} $(*, self, t)$ to $i^*$, where\\
- & \textbf{if} $y \neq \phi:$
- & \ \ \ \ $i^* = select(imm\_children)$\\
- & \ \ \ \ \textbf{send} $(\lambda(s), self, t)$ to $parent$
- & \ \ \ \ $imm\_children = \{i \in D| M_i.t_N = t\}$\\
- % $t_N$ should be $= t^* = min \{m.t_N| m \in M\}
- & $s \leftarrow \delta_{int}(s)$
- & $active\_children \leftarrow active\_children \cup \{i^*\}$\\
- & $t_L \leftarrow t$
- & \\
- & $t_N \leftarrow t_L + ta(s)$
- & \\
- & \textbf{send} $(done, self, t_N)$ to $parent$
- & \\
- \addlinespace
- \midrule
- \addlinespace
- $(x, from, t)$
- & \multicolumn{2}{c}{simulator correct only if $t_L \leq t \leq t_N$
- (ignore $\delta_{int}$ to resolve a $t = t_N$ {\em conflict})}\\
- \addlinespace
- & $e \leftarrow t - t_L$
- & $\forall i \in I_{self}:$\\
- & $s \leftarrow \delta_{ext}(s,e,x)$
- & \ \ \ \ \textbf{send} $(Z_{self,i}(x), self, t)$ to $i$\\
- & $t_L \leftarrow t$
- & \ \ \ \ $active\_children \leftarrow active\_children \cup \{i\}$\\
- & $t_N \leftarrow t_L + ta(s)$
- & \\
- & \textbf{send} $(done, self, t_N)$ to $parent$
- & \\
- \addlinespace
- \midrule
- \addlinespace
- $(y, from, t)$
- & & $\forall i \in I_{from} \setminus \{self\}:$\\ % $from = i^*$
- & & \ \ \ \ \textbf{send} $(Z_{from,i}(y), from, t)$ to $i$\\
- & & \ \ \ \ $active\_children \leftarrow active\_children \cup \{i\}$\\
- & & \textbf{if} $self \in I_{from}:$\\
- & & \ \ \ \ \textbf{send} $(Z_{from, self}(y), self, t)$ to $parent$\\
- \addlinespace
- \midrule
- \addlinespace
- $(done, from, t)$
- & & $active\_children \leftarrow active\_children \setminus \{from\}$\\
- & & \textbf{if} $active\_children = \emptyset$:\\
- & & \ \ \ \ $t_L \leftarrow t$\\
- & & \ \ \ \ $t_N \leftarrow min \{M_i.t_N| i \in D\}$\\
- % & & \ \ \ \ put children with minimum $t_N$ in $imm\_children$\\ ????
- & & \ \ \ \ \textbf{send} $(done, self, t_N)$ to $parent$\\
- \addlinespace
- \bottomrule
- \end{tabular}
- \end{center}
- }
- \caption{DEVS Simulation Procedure}
- \label{fig:DEVSsolver}
- \end{figure*}
- In an atomic DEVS solver, the last event time $t_L$ as well as the
- local state $s$ are kept. In a coordinator, only the last event time $t_L$
- is kept. The next-event-time $t_N$ is sent as output of either
- solver. It is possible to also keep $t_N$ in the solvers. This
- requires consistent (recursive) initialization of the $t_N$s.
- If kept, the $t_N$ allows one to check whether the solvers are
- appropriately synchronized.
- The operation of an abstract simulator involves handling four
- types of messages.
- The $(x, from, t)$ message carries external input information.
- The $(y, from, t)$ message carries external output information.
- The $(*, from, t)$ and $(done, from, t_N)$ messages are used for
- scheduling (synchronizing) the abstract simulators.
- In these messages, $t$ is the simulation time
- and $t_N$ is the next-event-time.
- The $(*, from, t)$ message indicates an internal event $*$ is due.
- When a coordinator receives a $(*, from, t)$ message, it selects
- an imminent component $i^*$ by means of the tie-breaking function
- $select$ specified for the coupled model and routes the message to
- $i^*$. Selection is necessary as there may be more than one imminent
- component (with minimal remaining time until its next internal event).\\
- When an atomic simulator
- receives a $(*, from, t)$ message, it generates an output message
- $(y, from, t)$ based on the old state $s$.
- It then computes the new state by means of the internal
- transition function. Note how in DEVS, output messages
- are only produced while executing internal events.
- When a simulator outputs a $(y, from, t)$ message, it is sent
- to its parent coordinator.
- The coordinator sends the output, after appropriate
- output-to-input translation, to each of the influencees of $i^*$
- (if any). If the coupled model itself is an influencee of $i^*$,
- the output, after appropriate output-to-output translation, is sent
- to the coupled model's parent coordinator.
- When a coordinator receives an $(x, from, t)$ message from its parent
- coordinator, it routes the message, after appropriate input-to-input
- translation, to each of the affected components. \\
- When an atomic simulator receives an $(x, from, t)$ message, it executes
- the external transition function of its associated atomic model.\\
- After executing an $(x, from, t)$ or $(y, from, t)$ message, a
- simulator sends a $(done, from, t_N)$ message to its parent
- coordinator to prepare a new schedule. When a coordinator has
- received $(done, from, t_N)$ messages from all its components, it sets
- its next-event-time $t_N$ to the minimum $t_N$ of all its components and
- sends a $(done, from, t_N)$ message to its parent coordinator.
- This process is recursively applied until the top-level coordinator or
- {\em root coordinator} receives a $(done, from, t_N)$ message.
- As the simulation procedure is synchronous, it does not support
- asynchronously arriving (real-time) external input.
- Rather, the environment or Experimental Frame should also be
- modelled as a DEVS component.
- To run a simulation experiment, the {\em initial conditions} $t_L$
- and $s$ must first be set in {\em all} simulators of the hierarchy.
- If $t_N$ is kept in the simulators, it must be recursively set too.
- Once the initial conditions are set, the main loop described in
- Figure~\ref{fig:mainDEVSloop} is executed.
- \begin{figure}
- {\sf
- \begin{center}
- \begin{tabular}{l}\toprule
- \addlinespace
- $t \leftarrow t_N$ of topmost coordinator\\
- \textbf{repeat until} $t = t_{end}$\\
- \ \ \ \ \textbf{send} $(*, meta, t)$ to topmost coupled model $top$\\
- \ \ \ \ \textbf{wait} for $(done, top, t_N)$\\
- \ \ \ \ $t \leftarrow t_N$\\
- \addlinespace
- \bottomrule
- \end{tabular}
- \end{center}
- }
- \caption{DEVS Simulation Procedure Main Loop}
- \label{fig:mainDEVSloop}
- \end{figure}
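-
- A Python sketch of this main loop, assuming a hypothetical root-level
- object {\sf top} that answers $(*, meta, t)$ messages with
- $(done, top, t_N)$ replies as in Figure~\ref{fig:DEVSsolver}, could
- look as follows:
- \begin{verbatim}
- def root_coordinate(top, t_end):
-     # `top` is a hypothetical coordinator object whose
-     # receive() answers a (*, meta, t) message with a
-     # (done, top, t_N) reply, as in the table above
-     t = top.t_N        # after recursive initialization
-     while t < t_end:
-         _, _, t_N = top.receive(("*", "meta", t))
-         t = t_N
-     return t
- \end{verbatim}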
- % The above algorithm can be simplified for sequential implementation.
- % Figure~\ref{fig:seqDEVSsolver} is a specification for an
- % object-oriented sequential implementation.
- % Atomic and coupled DEVS simulators are implemented as {\em classes}.
- % Subclassing (inheritance) is used to add model information.
- % Instantiated {\em objects} correspond to each of the (sub-)model's
- % and their abstract simulators.
- % In a sequential implementation, sending a message to an object
- % is implemented by calling the object's $receive()$ method.
- % This method returns the response message
- % after sequential processing of the message.
- % Messages are of the general form $(type, from, t)$.
- % \begin{figure*}
- % {\sf
- % \begin{center}
- % \begin{tabular}{lll}\toprule
- % \textbf{receive}$(m)$&
- % \multicolumn{1}{c}{\textbf{simulator}} &
- % \multicolumn{1}{c}{\textbf{coordinator}}\\
- % \midrule
- % \addlinespace
- % $(*, from, t)$
- % & \multicolumn{2}{c}{simulator correct only if $t = t_N$}\\
- % \addlinespace
- % & $y \leftarrow \lambda(s)$
- % & $(y, i^*, t_N) \leftarrow i^*.receive((*, self, t))$, where\\
- % & $s \leftarrow \delta_{int}(s)$
- % & \ \ \ \ $i^* = select(imm\_children),$\\
- % & $t_L \leftarrow t$
- % & \ \ \ \ $imm\_children = \{i \in D| M_i.t_N = t\}$\\
- % % $t_N$ should be $= t^* = min \{m.t_N| m \in M\}
- % & $t_N \leftarrow t_L + ta(s)$
- % & $t_L \leftarrow t$\\
- % & \textbf{return} $(y, self, t_N)$
- % & \textbf{if} $y = \phi$:\\
- % & & \ \ \ \ \textbf{return} $(done, self, t_N)$ \\
- % & & \textbf{else}\\
- % & & \ \ \ \ $\forall i \in I_{i^*} \setminus \{self\}:$\\
- % & & \ \ \ \ \ \ \ \ $(done, i, i.t_N) \leftarrow
- % i.receive((Z_{from,i}(y), i^*, t))$\\
- % & & \ \ \ \ $t_N \leftarrow min \{i.t_N| i \in D\}$\\
- % & & \ \ \ \ \textbf{if} $self \in I_{from}:$ \\
- % & & \ \ \ \ \ \ \ \ \textbf{return} $(Z_{i^*, self}(y), self, t_N)$\\
- % & & \ \ \ \ \textbf{else}\\
- % & & \ \ \ \ \ \ \ \ \textbf{return} $(done, self, t_N)$ \\
- % \addlinespace
- % \midrule
- % \addlinespace
- % $(x, from, t)$
- % & \multicolumn{2}{c}{simulator correct only if $t_L \leq t \leq t_N$
- % (ignore $\delta_{int}$ to resolve a $t = t_N$ {\em conflict})}\\
- % \addlinespace
- % & $e \leftarrow t - t_L$
- % & $\forall i \in I_{self}:$\\
- % & $s \leftarrow \delta_{ext}(s,e,x)$
- % & \ \ \ \ $(done, n, t_N)
- % \leftarrow i.receive((Z_{self,i}(x), self, t))$\\
- % & $t_L \leftarrow t$
- % & $t_L \leftarrow t$\\
- % & $t_N \leftarrow t_L + ta(s)$
- % & $t_N \leftarrow min \{i.t_N| i \in D\}$\\
- % & \textbf{return} $(done, self, t_N)$
- % & \textbf{return} $(done, self, t_N)$\\
- % \addlinespace
- % \bottomrule
- % \end{tabular}
- % \end{center}
- % }
- % \caption{Sequential DEVS Simulation Procedure}
- % \label{fig:seqDEVSsolver}
- % \end{figure*}
- % Simulation is achieved by repeatedly sending $(*, meta, t_N)$ messages
- % to the root coordinator as in Figure~\ref{fig:mainDEVSloop}.
- \section{The Parallel DEVS Formalism}
- As DEVS is a formalization and generalization of sequential
- discrete-event simulator semantics, it leaves little room for
- parallel execution. In particular, simultaneously occurring internal
- transitions are serialized by means of a tie-breaking $select$
- function. Also, in case of a {\em collision} between a simultaneously
- occurring internal transition and external input, DEVS
- ignores the internal transition and applies the external transition
- function. Chow \cite{Chow:parallelDEVS} proposed the parallel DEVS
- (P-DEVS) formalism, which alleviates some of these drawbacks.
- In an atomic P-DEVS
- \[
- atomic\ P-DEVS \equiv
- \langle
- S, ta, \delta_{int}, X, \delta_{ext}, \delta_{conf}, Y, \lambda
- \rangle,
- \]
- the model can explicitly define collision behaviour by using
- a so-called {\em confluent} transition function $\delta_{conf}$.
- Compared to DEVS, $\delta_{ext}$ and $\lambda$ have modified
- signatures, and $\delta_{conf}$ is new.
- The external transition function
- \[
- \delta_{ext}: Q \times X^b \rightarrow S
- \]
- now accepts a {\em bag} of simultaneous inputs, elements of
- the input set $X$, rather than a single input.
-
- The confluent transition function
- \[
- \delta_{conf}: S \times X^b \rightarrow S
- \]
- describes the state transition when a scheduled internal state
- transition and simultaneous external inputs collide.
- An atomic P-DEVS model can generate multiple simultaneous outputs
- \[
- \lambda: S \rightarrow Y^b
- \]
- in the form of a bag of output values from the output set $Y$.
- As conflicts are handled explicitly in the confluent transition
- function, the $select$ tie-breaking function can be eliminated
- from the coupled DEVS structure:
- \[
- coupled\ P-DEVS \equiv
- \langle
- X_{self}, Y_{self}, D, \{M_i\}, \{I_i\}, \{Z_{i,j}\}
- \rangle.
- \]
- In this structure, all components $M_i$ are atomic P-DEVS
- \[
- M_i =
- \langle
- S_i, ta_i, \delta_{int,i}, X_i, \delta_{ext,i},
- \delta_{conf,i}, Y_i, \lambda_i
- \rangle, \forall i \in D.
- \]
- For the proof of closure under coupling of P-DEVS as well as
- for the description of an efficient parallel abstract simulator,
- see \cite{Chow:parallelDEVS}.
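- To make the atomic P-DEVS interface concrete, the following Python
- skeleton lists the operations a simulator expects from an atomic
- P-DEVS model. It is a minimal sketch under assumed names (it is not
- the interface of a specific tool); bags are represented as plain lists.
- \begin{verbatim}
class AtomicPDEVS:
    """Skeleton of an atomic P-DEVS
    <S, ta, delta_int, X, delta_ext, delta_conf, Y, lambda>."""

    def __init__(self, initial_state):
        self.state = initial_state          # s in S

    def time_advance(self):                 # ta : S -> R0+
        raise NotImplementedError

    def output(self):                       # lambda : S -> Y^b (a bag/list)
        raise NotImplementedError

    def int_transition(self):               # delta_int : S -> S
        raise NotImplementedError

    def ext_transition(self, elapsed, inputs):  # delta_ext : Q x X^b -> S
        raise NotImplementedError

    def conf_transition(self, inputs):      # delta_conf : S x X^b -> S
        raise NotImplementedError
- \end{verbatim}
- Each transition method returns the new state; the simulator is
- responsible for assigning it and for the bookkeeping of $t_L$ and $t_N$.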
-
- \section{Formalism Transformation}
- Cellular Automata are a simple form of discrete-event model and can,
- in particular, be regarded as a special case of DEVS.
- Describing a Cellular Automaton as a simple atomic DEVS is thus
- straightforward. Thanks to the general nature of DEVS,
- all extensions of the Cellular Automata formalism mentioned before
- can also be mapped onto DEVS.
- It is more rewarding, however, to map a CA onto a coupled model,
- whereby every CA cell's dynamics is represented as an
- atomic model and the dependency between one cell and
- its neighbourhood is represented by the coupled model's
- coupling information.
- In what follows, a CA is mapped onto a coupled P-DEVS as this
- mapping is more elegant than that onto the original DEVS.
- In addition, the resultant parallel DEVS holds more
- potential for parallel implementation.
- The coupled parallel DEVS representation presented
- here corresponds to the Cellular Automaton specification
- in section~\ref{sec:CA}:
- \[
- P-DEVS-CA = \langle
- X_{self}, Y_{self}, D, \{M_i\}, \{I_i\}, \{Z_{i,j}\}
- \rangle.
- \]
- As the Cellular Automaton in section~\ref{sec:CA} did not incorporate
- external input,
- \[
- X_{self} = \emptyset.
- \]
- Typical output of the Cellular Automaton consists of all the
- cell values
- \[
- Y_{self} = \times_{i \in D} V.
- \]
- Components in the coupled parallel DEVS model correspond to
- CA cells. Indexing uses the CA index set:
- \[
- D = C.
- \]
- The $\{M_i\}$ are atomic P-DEVS components, described in more detail later.
- The set of influencees of a component/cell $i$ is constructed
- by means of the CA's neighbourhood template $NT$.
- As $NT$ contains influencer rather than influencee information,
- the offsets need to be mirrored with respect to the origin
- (\ie their additive inverses need to be taken) to obtain the
- influencees of component $i$:
- \[
- I_i = \{j \in C |j=i-offset,\ offset \in
- \{NT[k] |k = 1, \ldots, \xi\}
- \setminus \{0\}
- \}
- \]
- As DEVS does not allow $i \in I_i$, the offset $0$, if
- present in $NT$, is not included.
- A state transition of a CA cell usually does depend on the
- old state of the cell itself. This is encoded in the atomic
- P-DEVS's (confluent) transition function rather than by means of
- an external self-coupling.
- Note that the constraint $j = i - offset \in C$ may need to be relaxed
- depending on the particular boundary conditions: cells outside $C$ may
- then need to be considered (and initialized).
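- For concreteness, the construction of $I_i$ is sketched below in Python,
- assuming (as an illustration only) that cell indices and offsets are
- represented as tuples and that $C$ is given as a set of such tuples.
- \begin{verbatim}
def influencees(i, NT, C):
    """Influencees I_i of cell i: mirror the influencer offsets of NT
    and drop the 0 offset, since DEVS does not allow i in I_i."""
    result = set()
    for offset in NT:
        if all(o == 0 for o in offset):
            continue                      # skip the 0 offset (the cell itself)
        j = tuple(a - b for a, b in zip(i, offset))   # j = i - offset
        if j in C:                        # simple boundary handling
            result.add(j)
    return result
- \end{verbatim}
- For example, with the von Neumann template
- $NT = \{(0,0), (0,1), (0,-1), (1,0), (-1,0)\}$, the influencees of a
- non-boundary cell are its four orthogonal neighbours.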
- As there is no external input, the input-to-input translation
- $Z_{self,i}$ is not needed.
- The $i,j$ output-to-input translation converts the output of a cell
- (\ie the cell's value) into a tuple containing that value and
- the offset between the two cells:
- \[
- \begin{array}{lcccc}
- Z_{i,j} &:& Y_i &\rightarrow& X_j\\
- & & s_i &\rightarrow& (s_i, i-j).
- \end{array}
- \]
- The output-to-output translation translates the output of each
- cell into a tuple with the output in the position corresponding
- to the cell's index:
- \[
- \begin{array}{lcccc}
- Z_{i,self} &:& Y_i &\rightarrow& Y_{self}\\
- & & s_i &\rightarrow& (\ldots, s_i, \ldots).
- \end{array}
- \]
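- Both translation functions are equally small; the sketch below again
- uses tuple indices, and a dictionary (an assumed stand-in for the
- positional tuple $(\ldots, s_i, \ldots)$) for the global output.
- \begin{verbatim}
def Z_cell_to_cell(i, j, y_i):
    # Z_{i,j}: pair the value of cell i with the offset i - j,
    # i.e. the position of the sender relative to the receiver j.
    offset = tuple(a - b for a, b in zip(i, j))
    return (y_i, offset)

def Z_cell_to_self(i, y_i, global_output):
    # Z_{i,self}: place the value of cell i at position i of the
    # global output (here a dict mapping cell index to value).
    global_output[i] = y_i
    return global_output
- \end{verbatim}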
- Individual cells are mapped onto atomic P-DEVS components:
- \[
- M_i = \langle
- S_i, ta_i, \delta_{int,i},
- X_i, \delta_{ext,i}, \delta_{conf,i}, Y_i, \lambda_i
- \rangle, \forall i \in D.
- \]
- Values of the components/cells are those from the cell value set
- \[
- S_i = V.
- \]
- The time advance is set to the same arbitrary non-zero value $\Delta$
- for all cells to allow for synchronous operation:
- \[
- ta_i(s_i) = \Delta.
- \]
- The internal transition function does not modify the component's
- state:
- \[
- \delta_{int}(s_i) = s_i.
- \]
- The output function produces a bag containing only the cell value:
- \[
- \lambda(s_i) = \{s_i\},\ \textrm{where}\ s_i \in Y_i = V.
- \]
- The external transition function is never triggered on its own:
- there is no external input into the CA from outside
- ($X_{self} = \emptyset$) and, due to the synchronous operation,
- external inputs from neighbouring cells always collide with an
- internal transition. Hence, {\em only} the confluent transition
- function is used:
- \[
- \delta_{conf,i}(s_i, x_i^b) = \delta_l(v_i),
- \]
- where $v_i$ is a vector with the same dimensions as the
- neighbourhood template $NT$, with values
- \[
- \begin{array}{rcl}
- v_i[\eta] &=& s_i,\
- \textrm{for the}\ \eta\ \textrm{for which}\ NT[\eta]=0; \\
- v_i[proj_{offset}(x)] &=& proj_{value}(x),\
- \forall x \in x_i^b.
- \end{array}
- \]
- Messages communicate values of neighbouring cells, together with
- the offsets of those cells relative to the receiving cell:
- \[
- X_i = \{(v, offset)\,|\, v \in V,\ offset \in
- \{NT[k]\,|\, k = 1, \ldots, \xi\} \setminus \{0\}\}.
- \]
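- Putting the pieces together, a single CA cell then maps onto an atomic
- P-DEVS as sketched below. The class builds on the illustrative
- \verb|AtomicPDEVS| skeleton given earlier; the local transition
- function $\delta_l$ and the neighbourhood template $NT$ are passed in
- explicitly, and $v_i$ is represented as a dictionary indexed by offset
- (all of these are assumptions of the sketch, not part of the formalism).
- \begin{verbatim}
class CACell(AtomicPDEVS):
    """One CA cell viewed as an atomic P-DEVS (illustrative sketch)."""

    def __init__(self, value, NT, delta_l, delta=1.0):
        super().__init__(value)    # S_i = V: the state is the cell value
        self.NT = NT               # neighbourhood template (list of offsets)
        self.delta_l = delta_l     # local transition function of the CA
        self.delta = delta         # common time advance Delta

    def time_advance(self):
        return self.delta          # ta_i(s_i) = Delta for every cell

    def int_transition(self):
        return self.state          # delta_int(s_i) = s_i

    def output(self):
        return [self.state]        # lambda(s_i) = {s_i}, a singleton bag

    def conf_transition(self, inputs):
        # Assemble the neighbourhood vector v_i, indexed here by offset:
        # the 0 offset holds the cell's own value, the other entries hold
        # the values received from the neighbouring cells.
        v = {}
        for offset in self.NT:
            if all(o == 0 for o in offset):
                v[offset] = self.state
        for value, offset in inputs:       # each input is (value, offset)
            v[offset] = value
        return self.delta_l(v)             # delta_conf,i = delta_l(v_i)
- \end{verbatim}
- Note that $\delta_{ext}$ is deliberately left unimplemented: with the
- common time advance $\Delta$ it is never called.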
- As an example of the above mapping, Nutaro coded the Game of Life
- CA in his \verb|a-DEVS-0.2| \cite{Nutaro:aDEVS} parallel DEVS
- implementation.
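- With the dictionary representation of $v_i$ used in the sketches above
- (an assumption of these sketches, not of \cite{Nutaro:aDEVS}), the
- Game of Life local transition function $\delta_l$ for the Moore
- neighbourhood could be written as:
- \begin{verbatim}
def game_of_life_rule(v):
    # v maps offsets to cell values (1 = alive, 0 = dead);
    # offset (0, 0) is the cell itself, the other offsets its
    # eight Moore neighbours.
    me = v[(0, 0)]
    alive = sum(val for off, val in v.items() if off != (0, 0))
    if me == 1:
        return 1 if alive in (2, 3) else 0    # survival
    return 1 if alive == 3 else 0             # birth
- \end{verbatim}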
- \section{Conclusions}
- Since the end of 1993, the European Commission's ESPRIT Basic
- Research Working Group 8467 (SiE) \cite{SiE:simulation}
- has identified key areas for future research in modelling and simulation.
- The most urgent need was deemed to be multi-formalism modelling:
- correctly modelling complex systems with components described in
- diverse formalisms.
- To do so, the formalisms involved need to be related.
- The Formalism Transformation Graph (FTG) shown in Figure~\ref{fig:ftl}
- depicts meaningful mappings between formalisms.
- In this article, a mapping between Cellular Automata and parallel
- DEVS was elaborated. This fills in the $CA \rightarrow DEVS$ edge
- in the FTG. The mapping describes CA semantics in terms of parallel
- DEVS semantics and is a basis for automated transformation of CA
- models into parallel DEVS models, thus enabling efficient, parallel
- simulation as well as meaningful coupling with models in other
- formalisms.
- \bibliography{bib/abbrev,bib/simulation,bib/math,bib/hv.journals}
- \bibliographystyle{plain}
- \end{document}