x_initial = rand(0:k, n) = [0, 4, 0, 4]
output(dtr) = [0, 1, 1, 1]
(update!(dtr), output(dtr)) = ([0, 0, 4, 0], [1, 0, 1, 1])
(update!(dtr), output(dtr)) = ([1, 0, 0, 4], [0, 1, 0, 1])
(update!(dtr), output(dtr)) = ([1, 1, 0, 0], [0, 0, 1, 0])
(update!(dtr), output(dtr)) = ([1, 1, 1, 0], [0, 0, 0, 1])
(update!(dtr), output(dtr)) = ([1, 1, 1, 1], [1, 0, 0, 0])
([1, 1, 1, 1], [1, 0, 0, 0])
We can see that although there can initially be more tokens, after a few iterations the algorithm achieves the goal of having just one token in the ring.
Max-plus algebra, also written as (max,+) algebra (and also known as tropical algebra/geometry and dioid algebra), is an algebraic framework in which we can model and analyze a class of discrete-event systems, namely event graphs, which we have previously introduced as a subset of Petri nets. The framework is appealing in that the models then look like the state equations \bm x_{k+1} = \mathbf A \bm x_k + \mathbf B \bm u_k for classical linear dynamical systems. We call these max-plus linear systems, or just MPL systems. Concepts such as poles, stability and observability can be defined, following closely the standard definitions. In fact, we can even formulate control problems for these models in a way that mimics the conventional control theory for LTI systems, including MPC control.
But before we get to these applications, we first need to introduce the (max,+) algebra itself. And before we do that, we recapitulate the definition of a standard algebra.
Confusingly enough, algebra is used both as the name of a branch of mathematics and as the name of a special mathematical structure. In what follows, we use the term algebra to refer to the latter.
An algebra is a set of elements equipped with the operations of addition and multiplication, together with their neutral elements.
Inverse elements can also be defined, namely the inverses with respect to addition and with respect to multiplication.
If the inverse wrt multiplication exists for every (nonzero) element, the algebra is called a field, otherwise it is called a ring.
Prominent examples of a ring are the integers and the polynomials. For integers, it is only the numbers 1 and -1 that have integer inverses. For polynomials, it is only the zero-degree polynomials that have inverses qualifying as polynomials too. An example from control theory is the ring of proper stable transfer functions, in which only the minimum-phase transfer functions with zero relative degree have inverses that are again proper and stable, and thus qualify as units.
A prominent example of a field is the set of real numbers.
Elements of the (max,+) algebra are real numbers, but it is still a ring and not a field, since the two operations are defined differently.
The new operation of addition, which we denote by \oplus to distinguish it from the standard addition, is defined as \boxed{x\oplus y \equiv \max(x,y).}
The new operation of multiplication, which we denote by \otimes to distinguish it from the standard multiplication, is defined as \boxed{x\otimes y \equiv x+y.}
Indeed, there is no typo here: the standard addition is replaced by \otimes and not \oplus.
Indeed, we can also define the (min,+) algebra. But for our later purposes in modelling we prefer the (max,+) algebra.
Strictly speaking, the (max,+) algebra is defined on a broader set than just \mathbb R. We need to extend the reals with minus infinity. We denote the extended set by \mathbb R_\varepsilon: \boxed{\mathbb R_\varepsilon \coloneqq \mathbb R \cup \{-\infty\}.}
The reason for the notation is that a dedicated symbol \varepsilon is assigned to this minus infinity, that is, \boxed{\varepsilon \coloneqq -\infty.}
It may make some later expressions less cluttered, of course at the cost of introducing one more symbol.
We are now going to see the reason for this extension.
The neutral element with respect to \oplus, the zero, is -\infty. Indeed, for x \in \mathbb R_\varepsilon, x \oplus \varepsilon = x, because \max(x,-\infty) = x.
The neutral element with respect to \otimes, the one, is 0. Indeed, for x \in \mathbb R_\varepsilon, x \otimes 0 = x, because x+0=x.
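These two neutral elements are easily checked numerically. Below is a minimal sketch in Julia; the infix names ⊕ and ⊗ and the use of -Inf to represent \varepsilon are our own choices for this illustration, not a standard library.

```julia
# A minimal numerical sketch of the (max,+) operations; the operator names are ours.
⊕(x, y) = max(x, y)   # (max,+) addition
⊗(x, y) = x + y       # (max,+) multiplication
ε = -Inf              # our representation of the (max,+) zero

x = 3.0
@show x ⊕ ε == x      # max(3.0, -Inf) == 3.0, so ε is neutral wrt ⊕
@show x ⊗ 0 == x      # 3.0 + 0 == 3.0, so 0 is neutral wrt ⊗
```

Floating-point -Inf behaves exactly as \varepsilon should: it is absorbed by max and propagates through +.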
The notation is rather nonsymmetric here. We now have a dedicated symbol \varepsilon for the zero element in the new algebra, but no dedicated symbol for the one element. It may be a bit confusing, as “the old 0 is the new 1”. Perhaps, similarly as we introduced dedicated symbols for the new operations of addition and multiplication, we should have introduced dedicated symbols such as ⓪ and ①, which would lead to expressions such as x⊕⓪=x and x ⊗ ① = x. In fact, some software packages do define something like mp-zero and mp-one to represent the two special elements. But this is not what we will mostly encounter in the literature. Perhaps the best attitude is to come to terms with this notational asymmetry… After all, I myself was apparently not even able to figure out how to encircle numbers in LaTeX…
The inverse element with respect to \oplus in general does not exist! Think about it for a few moments; this is not necessarily intuitive. For which element(s) does it exist? Only for \varepsilon.
This has major consequences, for example, x\oplus x=x.
Can you verify this statement? How is it related to the fact that the inverse element with respect to \oplus does not exist in general?
This is the key difference with respect to a conventional algebra, wherein the inverse element of a wrt the conventional addition is -a, while here we do not even define \ominus.
Formally speaking, the (max,+) algebra is only a semi-ring.
The inverse element with respect to \otimes does not exist for all elements: the \varepsilon element has no inverse with respect to \otimes. But in this aspect the (max,+) algebra just follows the conventional algebra, because 0 has no inverse there either.
Having defined the fundamental operations and the fundamental elements, we can proceed with other operations. Namely, we consider powers. For an integer r\in\mathbb Z, the rth power of x, denoted by x^{\otimes^r}, is defined, unsurprisingly, as x^{\otimes^r} \coloneqq \underbrace{x\otimes x \otimes \ldots \otimes x}_{r\text{ times}}.
Observe that it corresponds to rx in the conventional algebra: x^{\otimes^r} = rx.
But then the inverse element with respect to \otimes can also be determined using the (-1)st power as x^{\otimes^{-1}} = -x.
This is not actually surprising, is it?
There are a few more implications. For example,
x^{\otimes^0} = 0.
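Since x^{\otimes^r} = rx, powers are trivial to compute. A small sketch; the function name mp_pow is ours, introduced only for this demonstration.

```julia
# (max,+) power of a scalar reduces to conventional multiplication: x^{⊗r} = r*x.
# mp_pow is a name we introduce for this sketch.
mp_pow(x, r) = r * x

mp_pow(4.0, 3)    # 4 ⊗ 4 ⊗ 4 = 4 + 4 + 4 = 12
mp_pow(4.0, -1)   # the ⊗-inverse of 4, namely -4
mp_pow(4.0, 0)    # the (max,+) one, namely 0
```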
There is also no inverse element with respect to \otimes for \varepsilon, but this is expected, as \varepsilon is the zero wrt \oplus. Furthermore, for r\neq 0, if r>0, then \varepsilon^{\otimes^r} = \varepsilon, and if r<0, then \varepsilon^{\otimes^r} is undefined, both of which are expected. Finally, \varepsilon^{\otimes^0} = 0 by convention.
The order of precedence of the operations is the same as that for the conventional algebra.
Having covered addition, multiplication and powers, we can now define (max,+) polynomials. In order to get started, consider the univariate polynomial p(x) = a_{n}\otimes x^{\otimes^{n}} \oplus a_{n-1}\otimes x^{\otimes^{n-1}} \oplus \ldots \oplus a_{1}\otimes x \oplus a_{0}, where a_i\in \mathbb R_\varepsilon and n\in \mathbb N.
By interpreting the operations, this translates to the following function: \boxed{p(x) = \max\{nx + a_n, (n-1)x + a_{n-1}, \ldots, x+a_1, a_0\}.}
Example 1 (1D polynomial) Consider the following (max,+) polynomial p(x) = 2\otimes x^{\otimes^{2}} \oplus 3\otimes x \oplus 1.
We can interpret it in the conventional algebra as p(x) = \max\{2x+2,x+3,1\}, which is a piecewise linear (actually affine) function.
using Plots
x = -5:3
f(x) = max(2*x+2, x+3, 1)
plot(x, f.(x), label="", thickness_scaling=2)
xc = [-2, 1]
yc = f.(xc)
scatter!(xc, yc, markercolor=[:red, :red], label="", thickness_scaling=2)
Example 2 (Example of a 2D polynomial) Nothing prevents us from defining a polynomial in two (and more) variables. For example, consider the following (max,+) polynomial p(x,y) = 0 \oplus x \oplus y.
using Plots
x = -2:0.1:2;
y = -2:0.1:2;
f(x,y) = max(0, x, y)
z = f.(x', y);
wireframe(x, y, z, legend=false, camera=(5,30))
xlabel!("x")
ylabel!("y")
zlabel!("f(x,y)")
Example 3 (Another 2D polynomial) Consider another 2D (max,+) polynomial p(x,y) = 0 \oplus x \oplus y \oplus (-1)\otimes x^{\otimes^2} \oplus 1\otimes x\otimes y \oplus (-1)\otimes y^{\otimes^2}.
using Plots
x = -2:0.1:2;
y = -2:0.1:2;
f(x,y) = max(0, x, y, 2*x-1, x+y+1, 2*y-1)
z = f.(x', y);
wireframe(x, y, z, legend=false, camera=(15,30))
xlabel!("x")
ylabel!("y")
zlabel!("p(x,y)")
Piecewise affine (PWA) functions will turn out to be a frequent buddy in our course.
…
What is attractive about the whole (max,+) framework is that it also extends nicely to matrices. For matrices whose elements are in \mathbb R_\varepsilon, we define the operations of addition and multiplication identically as in the conventional case; we just use the new definitions of the two basic scalar operations:
(A\oplus B)_{ij} = a_{ij}\oplus b_{ij} = \max(a_{ij},b_{ij}),
\begin{aligned}
(A\otimes B)_{ij} &= \bigoplus_{k=1}^n a_{ik}\otimes b_{kj}\\
&= \max_{k=1,\ldots, n}(a_{ik}+b_{kj}).
\end{aligned}
The (max,+) zero matrix \mathcal{E}_{m\times n} has all its elements equal to \varepsilon, that is,
\mathcal{E}_{m\times n} =
\begin{bmatrix}
\varepsilon & \varepsilon & \ldots & \varepsilon\\
\varepsilon & \varepsilon & \ldots & \varepsilon\\
\vdots & \vdots & \ddots & \vdots\\
\varepsilon & \varepsilon & \ldots & \varepsilon
\end{bmatrix}.
The (max,+) identity matrix I_n has 0 on the diagonal and \varepsilon elsewhere, that is,
I_{n} =
\begin{bmatrix}
0 & \varepsilon & \ldots & \varepsilon\\
\varepsilon & 0 & \ldots & \varepsilon\\
\vdots & \vdots & \ddots & \vdots\\
\varepsilon & \varepsilon & \ldots & 0
\end{bmatrix}.
The zeroth power of a matrix is – unsurprisingly – the identity matrix, that is, A^{\otimes^0} = I_n.
The kth power of a matrix, for k\in \mathbb N\setminus\{0\}, is then defined using A^{\otimes^k} = A\otimes A^{\otimes^{k-1}}.
Consider A\in \mathbb R_\varepsilon^{n\times n}. For this matrix, we can define the precedence graph \mathcal{G}(A) as a weighted directed graph with the vertices 1, 2, …, n, and with the arcs (j,i) with the associated weights a_{ij} for all a_{ij}\neq \varepsilon. The kth power of the matrix is then
(A^{\otimes^k})_{ij} = \max_{i_1,\ldots,i_{k-1}\in \{1,2,\ldots,n\}} \{a_{ii_1} + a_{i_1i_2} + \ldots + a_{i_{k-1}j}\}
for all i,j and k\in \mathbb N\setminus\{0\}.
Example 4 (Matrix power)
A =
\begin{bmatrix}
2 & 3 & \varepsilon\\
1 & \varepsilon & 0\\
2 & -1 & 3
\end{bmatrix}
\qquad
A^{\otimes^2} =
\begin{bmatrix}
4 & 5 & 3\\
3 & 4 & 3\\
5 & 5 & 6
\end{bmatrix}
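The computation can be replicated numerically. Below is a sketch of the (max,+) matrix product in Julia, with \varepsilon represented by -Inf; the function name mp_mul is our own, not a library routine.

```julia
# (max,+) matrix product: (A ⊗ B)[i,j] = max_k (A[i,k] + B[k,j]).
# mp_mul is a name we introduce for this sketch; ε is represented by -Inf.
mp_mul(A, B) = [maximum(A[i, :] .+ B[:, j]) for i in 1:size(A, 1), j in 1:size(B, 2)]

ε = -Inf
A = [2.0  3.0  ε;
     1.0  ε    0.0;
     2.0 -1.0  3.0]

A2 = mp_mul(A, A)   # the second (max,+) power of A
```

The result agrees with the matrix A^{\otimes^2} shown above.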
Eigenvalues and eigenvectors constitute another instance of a straightforward import of concepts from the conventional algebra into the (max,+) algebra – just take the standard definition of an eigenvalue-eigenvector pair and replace the conventional operations with the (max,+) alternatives:
A\otimes v = \lambda \otimes v.
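For the matrix A from Example 4, one eigenpair turns out to be \lambda = 3 with v = [-3, -3, 0]^\top; we state this pair without derivation (it can be found, e.g., from the maximum cycle mean in the precedence graph) and only verify it numerically below. The helper name mp_mul is ours.

```julia
# Numerically check A ⊗ v = λ ⊗ v for the matrix of Example 4 (ε represented by -Inf).
mp_mul(A, B) = [maximum(A[i, :] .+ B[:, j]) for i in 1:size(A, 1), j in 1:size(B, 2)]

ε = -Inf
A = [2.0  3.0  ε;
     1.0  ε    0.0;
     2.0 -1.0  3.0]

λ = 3.0
v = [-3.0, -3.0, 0.0]

lhs = vec(mp_mul(A, reshape(v, :, 1)))   # A ⊗ v
rhs = λ .+ v                             # λ ⊗ v, i.e., λ added elementwise
```

Both sides evaluate to the same vector, confirming the eigenpair.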
A few comments:
We can also define and solve linear equations within the (max,+) algebra. Considering A\in \mathbb R_\varepsilon^{n\times n} and b\in \mathbb R_\varepsilon^n, we can formulate the equation
A\otimes x = b.
In general it has no solution, even if A is square. However, often we can find some use for a subsolution, defined as a vector x satisfying
A\otimes x \leq b.
Typically we search for the greatest (maximal) subsolution, or for subsolutions optimal in some other sense.
Example 5 (Greatest subsolution)
A =
\begin{bmatrix}
2 & 3 & \varepsilon\\
1 & \varepsilon & 0\\
2 & -1 & 3
\end{bmatrix},
\qquad
b =
\begin{bmatrix}
1 \\ 2 \\ 3
\end{bmatrix}
The greatest subsolution is
x =
\begin{bmatrix}
-1\\ -2 \\ 0
\end{bmatrix},
for which
A \otimes x =
\begin{bmatrix}
1\\ 0 \\ 3
\end{bmatrix}
\leq b.
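The greatest subsolution can be computed elementwise by residuation, x_j = \min_i (b_i - a_{ij}), a standard formula in the (max,+) literature. A sketch in Julia, with helper names ours and \varepsilon represented by -Inf:

```julia
# Greatest subsolution of A ⊗ x ≤ b via residuation: x[j] = min_i (b[i] - A[i,j]).
# With ε = -Inf, the term b[i] - ε = +Inf never attains the minimum, as it should be.
mp_mul(A, B) = [maximum(A[i, :] .+ B[:, j]) for i in 1:size(A, 1), j in 1:size(B, 2)]

ε = -Inf
A = [2.0  3.0  ε;
     1.0  ε    0.0;
     2.0 -1.0  3.0]
b = [1.0, 2.0, 3.0]

x = [minimum(b .- A[:, j]) for j in 1:size(A, 2)]   # greatest subsolution
Ax = vec(mp_mul(A, reshape(x, :, 1)))               # check that A ⊗ x ≤ b
```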
With this introduction to the (max,+) algebra, we are now ready to move on to the modeling of discrete-event systems using the max-plus linear (MPL) systems.
Gurobi does provide such support for indicator constraints; the increasingly popular free & open-source HiGHS does not, …
model.addConstr((delta == 1) >> (x + y - 1.0 <= 0.0))
The other direction of implication is not supported by optimization solvers. Therefore it has no support in optimization modellers such as JuMP either. In order to handle it, besides resorting to the Big-M method, we can also use the equivalent formulation
[\delta = 0] \rightarrow [f(\bm x) \geq \epsilon],
where some small \epsilon > 0 had to be added to turn the strict inequality into a non-strict one. Note, however, that it is not universally recommendable to rely on the support of the mixed-integer solver for the indicator constraints. The reason is that sometimes the solvers may decide to reformulate the indicator constraints using the Big-M method by themselves, in which case we have no direct control over the choice of M.
In this chapter we introduce another formalism for modelling discrete-event systems (DES) – Petri nets. Petri nets offer an alternative perspective on discrete-event systems compared to automata. And it is good to have alternatives, isn’t it? For some purposes, one framework can be more appropriate than the other. Furthermore, the ideas behind Petri nets even made it into international standards, either directly or through the derived GRAFCET language, which in turn served as the basis for the Sequential Function Chart (SFC) language for PLC programming. See the references. Last but not least, an elegant algebraic framework based on the so-called (max,+) algebra has been developed for a subset of Petri nets (so-called event graphs) and it would be a shame not to mention it in our course (in the next chapter).

Similarly as in the case of automata, a Petri net (PN) can be defined as a tuple of sets and functions: \boxed{PN = \{\mathcal{P}, \mathcal{T}, \mathcal{A}, w\},} where

Similarly as in the case of automata, Petri nets can be visualized using graphs. But this time, we need to invoke the concept of a weighted bipartite graph, that is, a graph with two types of nodes: places (drawn as circles) and transitions (drawn as bars). The nodes of different kinds are connected by arcs (arrowed curves). Integer weights are associated with the arcs. Alternatively, for a small weight (2, 3, 4), the weight can be graphically encoded by drawing multiple arcs.

Example 1 (Simple Petri net) We consider just two places, that is, \mathcal{P} = \{p_1, p_2\}, and one transition, that is, \mathcal{T} = \{t\}. The set of arcs is \mathcal{A} = \{\underbrace{(p_1, t)}_{a_1}, \underbrace{(t, p_2)}_{a_2}\}, and the associated weights are w(a_1) = w((p_1, t)) = 2 and w(a_2) = w((t, p_2)) = 1. The Petri net is depicted in Fig. 1.

Example 2 (More complex Petri net)

An important concept that we must introduce now is that of marking. It is a function that assigns an integer to each place, x: \mathcal{P} \rightarrow \mathbb{N}.

The vector composed of the values of the marking function for all places, \bm x = \begin{bmatrix}x(p_1)\\ x(p_2)\\ \vdots \\ x(p_n) \end{bmatrix}, can be viewed as the state vector (although the Petri nets community perhaps would not use this terminology and would stick to just marking). A marked Petri net is then a Petri net augmented with the marking, MPN = \{\mathcal{P}, \mathcal{T}, \mathcal{A}, w, x\}.

A marked Petri net can also be visualized by placing tokens (dots) into the places. The number of tokens in a place corresponds to the value of the marking function for that place.

Example 3 (Marked Petri net) Consider the Petri net from Example 1. The marking function is x(p_1) = 1 and x(p_2) = 0, which assembled into a vector gives \bm x = \begin{bmatrix}1\\ 0 \end{bmatrix}. The marked Petri net is depicted in Fig. 3. For another marking, namely \bm x = \begin{bmatrix}2\\ 1 \end{bmatrix}, the marked Petri net is depicted in Fig. 4.

Finally, here comes the enabling (pun intended) component of the definition of a Petri net – the enabled transition. A transition t_j does not just happen – we say fire – whenever it wants; it can only fire if it is enabled, and the marking is used to determine if it is enabled. Namely, the transition is enabled if the value of the marking function for each input place is greater than or equal to the weight of the arc from that place to the transition. That is, the transition t_j is enabled if
x(p_i) \geq w(p_i,t_j)\quad \forall p_i \in \mathcal{I}(t_j).
An enabled transition can fire, but it does not have to. We will exploit this in timed PN.

Example 4 (Enabled transition) See the PN in Example 3: in the first marked PN the transition cannot fire, in the second it can.

We now have a Petri net as a conceptual model with a graphical representation. But in order to use it for some quantitative analysis, it is useful to turn it into some computational form, preferably a familiar one. This is done by defining a state transition function. For a Petri net with n places, the state transition function is
f: \mathbb N^n \times \mathcal{T} \rightarrow \mathbb N^n,
which reads that the state transition function assigns a new marking (state) to the Petri net after a transition is fired at some given marking (state). The function is only defined for a transition t_j if the transition is enabled. If the transition t_j is enabled and fired, the state evolves as
\bm x^+ = f(\bm x, t_j),
where the individual components of \bm x evolve according to
\boxed{x^+(p_i) = x(p_i) - w(p_i,t_j) + w(t_j,p_i), \; i = 1,\ldots,n.}
This has a visual interpretation – a fired transition moves tokens from the input to the output places.

Example 5 (Moving tokens around) Consider the PN with the initial marking (state) \bm x_0 = \begin{bmatrix}2\\ 0\\ 0\\ 1 \end{bmatrix} (at discrete time 0), and the transition t_1 enabled.

We admit the notation here is confusing, because we use the lower index 0 in \bm x_0 to denote the discrete time, while the lower index 1 in t_1 denotes the transition, and the lower indices 1 and 2 in p_1 and p_2 just number the transitions and places, respectively. We could have chosen something like \bm x(0) or \bm x[0], but we dare to hope that the context will make it clear.

Now we assume that t_1 is fired. The state vector changes to \bm x_1 = [1, 1, 1, 1]^\top; the discrete time is 1 now. As a result of this transition, note that t_1, t_2, t_3 are now enabled. In this example we can see for the first time that the number of tokens need not be preserved.

Now fire the t_2 transition. The state vector changes to \bm x_2 = [1, 1, 0, 2]^\top; the discrete time is 2 now.

Good, we can see the idea. But now we go back to time 1 (as in Fig. 5) to explore an alternative evolution. With the state vector \bm x_1 = [1, 1, 1, 1]^\top and the transitions t_1, t_2, t_3 enabled, we fire t_3 this time. The state changes to \bm x_2 = [0, 1, 0, 0]^\top; the discrete time is 2. Apparently the PN evolved into a different state.

The lesson learnt with this example is that the order of firing of enabled transitions matters. The dependence of the state evolution upon the order of firing of the transitions is not surprising. We have already encountered it in automata when the active event set for a given state contains more than a single element.

We have started talking about states and state transitions in Petri nets, which are all concepts that we are familiar with from dynamical systems. Another such concept is reachability. We explain it through an example.
Example 6 (Not all states are reachable) The Petri net is initially in the state [2,1]^\top. The only reachable state is [0,2]^\top. By the way, note that the weight of the arc from the place p_1 to the transition t is 2, so both tokens are removed from the place p_1 when the transition t fires. But then the arc to the place p_2 has weight 1, so only one token is added to the place p_2. The other token is “lost”.

Here we introduce two tools for analysis of reachability of a Petri net.

We have already commented on this before, but we emphasize it here: the number of tokens need not be preserved. Indeed, it can be that
\sum_{p_i\in\mathcal{O}(t_j)}w(t_j,p_i) < \sum_{p_i\in\mathcal{I}(t_j)} w(p_i,t_j)
or
\sum_{p_i\in\mathcal{O}(t_j)}w(t_j,p_i) > \sum_{p_i\in\mathcal{I}(t_j)} w(p_i,t_j).
With this reminder, we can now highlight several patterns that can be observed in Petri nets. Of the four patterns just enumerated, the last one – the OR-divergence – is not deterministic. Indeed, consider the following example.

Example 8

In other words, we can incorporate nondeterminism in a model. Recall that something similar can be encountered in automata, if the active event set for a given state contains more than one element (event/transition).

We can identify two subclasses of Petri nets:

Example 9 (Event graph)

Example 10 (State machine)

We consider a Petri net with n places and m transitions. The incidence matrix is defined as
\bm A \in \mathbb{Z}^{n\times m},
where
a_{ij} = w(t_j,p_i) - w(p_i,t_j).
Some define the incidence matrix as the transpose of our definition. With the incidence matrix defined above, the state equation for a Petri net can be written as
\bm x^+ = \bm x + \bm A \bm u,
where \bm u is the firing vector for the enabled j-th transition,
\bm u = \bm e_j = \begin{bmatrix}0 \\ \vdots \\ 0 \\ 1\\ 0\\ \vdots\\ 0\end{bmatrix},
with the 1 at the j-th position. Note that in [1] they define everything in terms of the transposed quantities, but we prefer sticking to the notion of a state vector as a column.

Example 11 (State equation for a Petri net) Consider the Petri net from Example 5, which we show again below in Fig. 8. The initial state is given by the vector
\bm x_0
=
\begin{bmatrix}
2\\ 0\\ 0\\ 1
\end{bmatrix}.
The incidence matrix is
\bm A = \begin{bmatrix}
-1 & 0 & -1\\
1 & 0 & 0\\
1 & -1 & -1\\
0 & 1 & -1
\end{bmatrix},
and the state vector evolves according to
\begin{aligned}
\bm x_1 &= \bm x_0 + \bm A \bm u_1\\
\bm x_2 &= \bm x_1 + \bm A \bm u_2\\
\vdots &
\end{aligned}
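The evolution above is easy to replay numerically from the incidence matrix; a short sketch (the helper e is our own name for the firing vector):

```julia
# Replaying the state equation x⁺ = x + A*u for the Petri net of Example 11.
A = [-1  0 -1;
      1  0  0;
      1 -1 -1;
      0  1 -1]
x0 = [2, 0, 0, 1]

e(j) = [Int(i == j) for i in 1:3]   # firing vector e_j for transition t_j

x1 = x0 + A * e(1)   # fire t1
x2 = x1 + A * e(2)   # then fire t2
```

The intermediate states reproduce those of Example 5.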
We repeat once again just to make sure: the lower index corresponds to the discrete time.

Although Petri nets can be used to model a vast variety of systems, below we single out one particular class of systems that can be modelled by Petri nets – queueing systems. The general symbol is shown in Fig. 9. We can associate the transitions with the events in the queueing system. We can now start drawing the Petri net by drawing the bars corresponding to the transitions. Then in between every two bars, we draw a circle for a place. The places can be associated with the three bold-face letters above, namely \mathcal{P} = \{Q, I, B\}, that is, queue, idle, busy. The transition a is an input transition – the tokens are added to the system through this transition.

Note how we consider the token in the I place. This is not only to express that the server is initially idle, ready to serve as soon as a customer arrives to the queue; it also ensures that no serving of a new customer can start before the serving of the current customer is completed.

The initial state: [0,1,0]^\top. Consider now a particular trace (of transitions/events) \{a,s,a,a,c,s,a\}. Verify that this leads to the final state [2,0,1]^\top.

We can keep adding features to the model of a queueing system. These are incorporated into the Petri net in Fig. 10. In the Petri net, d is an output transition – the tokens are removed from the system.

Example 12 (Beverage vending machine) Below we show a Petri net for a beverage vending machine. While building it, we find it useful to identify the events/transitions that can happen in the system.

We do not cover these extensions in our course. But there is one particular extension that we do want to cover, and this amounts to introducing time into Petri nets, leading to timed Petri nets, which we will discuss in the next chapter.
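The trace \{a,s,a,a,c,s,a\} can be verified with the state equation as well. Below we encode the basic three-place net (Q, I, B) with transitions a, s, c as an incidence matrix; the encoding is our reading of the figure, and the helper names are ours.

```julia
# Simulating the basic queueing Petri net with places (Q, I, B) and transitions (a, s, c):
# a (arrival) adds a token to Q; s (start of service) consumes one token each from Q
# and I and puts one into B; c (completion) moves the token from B back to I.
A = [1 -1  0;    # Q
     0 -1  1;    # I
     0  1 -1]    # B
x = [0, 1, 0]                     # initial state: empty queue, idle server
trace = [1, 2, 1, 1, 3, 2, 1]     # the trace {a, s, a, a, c, s, a}

for t in trace
    u = [Int(j == t) for j in 1:3]   # firing vector for this event
    global x = x + A * u
end
x   # final state, which indeed comes out as [2, 0, 1]
```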