Commit fd8578a

Remove unwanted comments
1 parent bb6b5d4 commit fd8578a

File tree: 1 file changed (+52, -52 lines)

lectures/markov_chains_I.md

Lines changed: 52 additions & 52 deletions
@@ -11,7 +11,7 @@ kernelspec:
   name: python3
 ---
 
-+++ {"user_expressions": []}
+
 
 # Markov Chains: Basic Concepts and Stationarity
 
@@ -36,10 +36,10 @@ In addition to what's in Anaconda, this lecture will need the following librarie
 :class: warning
 If you are running this lecture locally it requires [graphviz](https://www.graphviz.org)
 to be installed on your computer. Installation instructions for graphviz can be found
-[here](https://www.graphviz.org/download/) 
+[here](https://www.graphviz.org/download/)
 ```
 
-+++ {"user_expressions": []}
+
 
 ## Overview
 
@@ -74,7 +74,7 @@ import matplotlib as mpl
 from itertools import cycle
 ```
 
-+++ {"user_expressions": []}
+
 
 ## Definitions and examples
 
@@ -137,7 +137,7 @@ dot.edge("sr", "sr", label="0.492")
 dot
 ```
 
-+++ {"user_expressions": []}
+
 
 Here there are three **states**
 
@@ -180,11 +180,11 @@ We can collect all of these conditional probabilities into a matrix, as follows
 
 $$
 P =
-\begin{bmatrix} 
+\begin{bmatrix}
 0.971 & 0.029 & 0 \\
 0.145 & 0.778 & 0.077 \\
 0 & 0.508 & 0.492
-\end{bmatrix} 
+\end{bmatrix}
 $$
 
 Notice that $P$ is a stochastic matrix.
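Aside (not part of this commit): the claim that $P$ is a stochastic matrix amounts to nonnegative entries and unit row sums, which is a one-line check in NumPy.

```python
import numpy as np

# Hamilton's transition matrix, as displayed in the hunk above
P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])

# Stochastic matrix: nonnegative entries, each row sums to one
assert (P >= 0).all() and np.allclose(P.sum(axis=1), 1.0)
```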
@@ -220,11 +220,11 @@ Given the above information, we can write out the transition probabilities in ma
 ```{math}
 :label: p_unempemp
 
-P = 
-\begin{bmatrix} 
+P =
+\begin{bmatrix}
 1 - \alpha & \alpha \\
 \beta & 1 - \beta
-\end{bmatrix} 
+\end{bmatrix}
 ```
 
 For example,
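As an aside, outside the diff: the labeled matrix above can be built for any job-finding rate $\alpha$ and separation rate $\beta$; the parameter values below are illustrative only, not taken from the lecture.

```python
import numpy as np

α, β = 0.3, 0.05           # illustrative values only
P = np.array([[1 - α, α],  # state 0: unemployed
              [β, 1 - β]]) # state 1: employed
print(P.sum(axis=1))       # rows sum to one for any α, β in [0, 1]
```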
@@ -255,26 +255,26 @@ We'll cover some of these applications below.
 (mc_eg3)=
 #### Example 3
 
-Imam and Temple {cite}`imampolitical` categorize political institutions into three types: democracy (D), autocracy (A), and an intermediate state called anocracy (N). 
+Imam and Temple {cite}`imampolitical` categorize political institutions into three types: democracy (D), autocracy (A), and an intermediate state called anocracy (N).
 
-Each institution can have two potential development regimes: collapse (C) and growth (G). This results in six possible states: DG, DC, NG, NC, AG, and AC. 
+Each institution can have two potential development regimes: collapse (C) and growth (G). This results in six possible states: DG, DC, NG, NC, AG, and AC.
 
-The lower probability of transitioning from NC to itself indicates that collapses in anocracies quickly evolve into changes in the political institution. 
+The lower probability of transitioning from NC to itself indicates that collapses in anocracies quickly evolve into changes in the political institution.
 
 Democracies tend to have longer-lasting growth regimes compared to autocracies as indicated by the lower probability of transitioning from growth to growth in autocracies.
 
 We can also find a higher probability from collapse to growth in democratic regimes
 
 $$
 P :=
-\begin{bmatrix} 
+\begin{bmatrix}
 0.86 & 0.11 & 0.03 & 0.00 & 0.00 & 0.00 \\
 0.52 & 0.33 & 0.13 & 0.02 & 0.00 & 0.00 \\
 0.12 & 0.03 & 0.70 & 0.11 & 0.03 & 0.01 \\
 0.13 & 0.02 & 0.35 & 0.36 & 0.10 & 0.04 \\
 0.00 & 0.00 & 0.09 & 0.11 & 0.55 & 0.25 \\
 0.00 & 0.00 & 0.09 & 0.15 & 0.26 & 0.50
-\end{bmatrix} 
+\end{bmatrix}
 $$
 
 ```{code-cell} ipython3
@@ -297,7 +297,7 @@ for start_idx, node_start in enumerate(nodes):
         value = P[start_idx][end_idx]
         if value != 0:
             G.add_edge(node_start,node_end, weight=value, len=100)
-        
+
 pos = nx.spring_layout(G, seed=10)
 fig, ax = plt.subplots()
 nx.draw_networkx_nodes(G, pos, node_size=600, edgecolors='black', node_color='white')
@@ -327,7 +327,7 @@ The set $S$ is called the **state space** and $x_1, \ldots, x_n$ are the **state
 
 A **distribution** $\psi$ on $S$ is a probability mass function of length $n$, where $\psi(i)$ is the amount of probability allocated to state $x_i$.
 
-A **Markov chain** $\{X_t\}$ on $S$ is a sequence of random variables taking values in $S$ 
+A **Markov chain** $\{X_t\}$ on $S$ is a sequence of random variables taking values in $S$
 that have the **Markov property**.
 
 This means that, for any date $t$ and any state $y \in S$,
@@ -390,9 +390,9 @@ In these exercises, we'll take the state space to be $S = 0,\ldots, n-1$.
 
 ### Writing our own simulation code
 
-To simulate a Markov chain, we need 
+To simulate a Markov chain, we need
 
-1. a stochastic matrix $P$ and 
+1. a stochastic matrix $P$ and
 1. a probability mass function $\psi_0$ of length $n$ from which to draw a initial realization of $X_0$.
 
 The Markov chain is then constructed as follows:
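Aside (not part of this commit): the lecture goes on to implement this construction as `mc_sample_path`. A rough, self-contained sketch of the same two ingredients listed above, with a hypothetical helper name and plain NumPy sampling:

```python
import numpy as np

def simulate_chain_sketch(P, ψ_0, ts_length, seed=0):
    """Draw X_0 from ψ_0, then X_{t+1} from row X_t of P (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    P = np.asarray(P)
    X = np.empty(ts_length, dtype=int)
    X[0] = rng.choice(len(P), p=ψ_0)              # initial draw from ψ_0
    for t in range(ts_length - 1):
        X[t + 1] = rng.choice(len(P), p=P[X[t]])  # next state drawn from row X_t of P
    return X

print(simulate_chain_sketch([[0.4, 0.6], [0.2, 0.8]], [1.0, 0.0], 10))
```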
@@ -416,7 +416,7 @@ cdf = np.cumsum(ψ_0) # convert into cumulative distribution
 qe.random.draw(cdf, 5) # generate 5 independent draws from ψ
 ```
 
-+++ {"user_expressions": []}
+
 
 We'll write our code as a function that accepts the following three arguments
 
@@ -449,7 +449,7 @@ def mc_sample_path(P, ψ_0=None, ts_length=1_000):
     return X
 ```
 
-+++ {"user_expressions": []}
+
 
 Let's see how it works using the small matrix
 
@@ -458,15 +458,15 @@ P = [[0.4, 0.6],
      [0.2, 0.8]]
 ```
 
-+++ {"user_expressions": []}
+
 
 Here's a short time series.
 
 ```{code-cell} ipython3
 mc_sample_path(P, ψ_0=[1.0, 0.0], ts_length=10)
 ```
 
-+++ {"user_expressions": []}
+
 
 It can be shown that for a long series drawn from `P`, the fraction of the
 sample that takes value 0 will be about 0.25.
@@ -483,7 +483,7 @@ X = mc_sample_path(P, ψ_0=[0.1, 0.9], ts_length=1_000_000)
 np.mean(X == 0)
 ```
 
-+++ {"user_expressions": []}
+
 
 You can try changing the initial distribution to confirm that the output is
 always close to 0.25 (for the `P` matrix above).
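Aside (not part of this commit): the 0.25 quoted above is the stationary probability of state 0 for this `P`. One way to verify it is to solve the stationarity condition $\psi P = \psi$ together with the normalization $\sum_i \psi(i) = 1$; the lecture's own tooling (`quantecon`) can do this as well.

```python
import numpy as np

P = np.array([[0.4, 0.6],
              [0.2, 0.8]])
n = P.shape[0]

# Stack (P' - I)ψ' = 0 with the normalization 1'ψ' = 1 and solve by least squares
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
ψ_star, *_ = np.linalg.lstsq(A, b, rcond=None)
print(ψ_star)   # ≈ [0.25, 0.75]
```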
@@ -501,7 +501,7 @@ X = mc.simulate(ts_length=1_000_000)
 np.mean(X == 0)
 ```
 
-+++ {"user_expressions": []}
+
 
 The `simulate` routine is faster (because it is [JIT compiled](https://python-programming.quantecon.org/numba.html#numba-link)).
 
@@ -513,7 +513,7 @@ The `simulate` routine is faster (because it is [JIT compiled](https://python-pr
 %time mc.simulate(ts_length=1_000_000) # qe code version
 ```
 
-+++ {"user_expressions": []}
+
 
 #### Adding state values and initial conditions
 
@@ -536,15 +536,15 @@ mc.simulate(ts_length=4, init='unemployed')
 mc.simulate(ts_length=4) # Start at randomly chosen initial state
 ```
 
-+++ {"user_expressions": []}
+
 
 If we want to see indices rather than state values as outputs as we can use
 
 ```{code-cell} ipython3
 mc.simulate_indices(ts_length=4)
 ```
 
-+++ {"user_expressions": []}
+
 
 (mc_md)=
 ## Distributions over time
@@ -621,7 +621,7 @@ Hence the following is also valid.
 X_t \sim \psi_t \quad \implies \quad X_{t+m} \sim \psi_t P^m
 ```
 
-+++ {"user_expressions": []}
+
 
 (finite_mc_mstp)=
 ### Multiple step transition probabilities
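Aside (not part of this commit): the rule at the top of the hunk above, $\psi_{t+m} = \psi_t P^m$, is a single matrix product in NumPy. The current distribution below is an arbitrary illustration.

```python
import numpy as np

P = np.array([[0.4, 0.6],
              [0.2, 0.8]])
ψ_t = np.array([0.7, 0.3])                       # illustrative current distribution
m = 3
ψ_t_plus_m = ψ_t @ np.linalg.matrix_power(P, m)  # ψ_{t+m} = ψ_t P^m
print(ψ_t_plus_m, ψ_t_plus_m.sum())              # still a distribution: sums to one
```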
@@ -657,20 +657,20 @@ Suppose that the current state is unknown --- perhaps statistics are available o
 
 We guess that the probability that the economy is in state $x$ is $\psi_t(x)$ at time t.
 
-The probability of being in recession (either mild or severe) in 6 months time is given by 
+The probability of being in recession (either mild or severe) in 6 months time is given by
 
 $$
 (\psi_t P^6)(1) + (\psi_t P^6)(2)
 $$
 
-+++ {"user_expressions": []}
+
 
 (mc_eg1-1)=
 ### Example 2: Cross-sectional distributions
 
-The distributions we have been studying can be viewed either 
+The distributions we have been studying can be viewed either
 
-1. as probabilities or 
+1. as probabilities or
 1. as cross-sectional frequencies that a Law of Large Numbers leads us to anticipate for large samples.
 
 To illustrate, recall our model of employment/unemployment dynamics for a given worker {ref}`discussed above <mc_eg1>`.
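Aside (not part of this commit): evaluating the recession-probability formula in the hunk above with Hamilton's matrix and a purely illustrative belief vector `ψ_t`:

```python
import numpy as np

P = np.array([[0.971, 0.029, 0.000],     # states: ng, mr, sr
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])
ψ_t = np.array([0.8, 0.15, 0.05])        # illustrative beliefs about today's state
ψ_6 = ψ_t @ np.linalg.matrix_power(P, 6)
print(ψ_6[1] + ψ_6[2])                   # (ψ_t P^6)(1) + (ψ_t P^6)(2)
```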
@@ -720,11 +720,11 @@ P = np.array([[0.4, 0.6],
 ψ @ P
 ```
 
-+++ {"user_expressions": []}
+
 
 Notice that `ψ @ P` is the same as `ψ`
 
-+++ {"user_expressions": []}
+
 
 Such distributions are called **stationary** or **invariant**.
 
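Aside (not part of this commit): the stationarity observation in the hunk above is a one-line check, assuming the `ψ = (0.25, 0.75)` used in the surrounding lecture code:

```python
import numpy as np

P = np.array([[0.4, 0.6],
              [0.2, 0.8]])
ψ = np.array([0.25, 0.75])
print(np.allclose(ψ @ P, ψ))   # True: ψ P = ψ, so ψ is stationary for P
```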

@@ -765,7 +765,7 @@ distribution.
 
 We will come back to this when we introduce irreducibility in the next lecture
 
-+++ {"user_expressions": []}
+
 
 ### Example
 
@@ -822,7 +822,7 @@ $$
 
 See, for example, {cite}`sargent2023economic` Chapter 4.
 
-+++ {"user_expressions": []}
+
 
 (hamilton)=
 #### Example: Hamilton's chain
@@ -836,7 +836,7 @@ P = np.array([[0.971, 0.029, 0.000],
 P @ P
 ```
 
-+++ {"user_expressions": []}
+
 
 Let's pick an initial distribution $\psi_0$ and trace out the sequence of distributions $\psi_0 P^t$ for $t = 0, 1, 2, \ldots$
 
@@ -900,13 +900,13 @@ First, we write a function to draw initial distributions $\psi_0$ of size `num_d
 def generate_initial_values(num_distributions):
     n = len(P)
     ψ_0s = np.empty((num_distributions, n))
-    
+
     for i in range(num_distributions):
         draws = np.random.randint(1, 10_000_000, size=n)
 
         # Scale them so that they add up into 1
         ψ_0s[i,:] = np.array(draws/sum(draws))
-    
+
     return ψ_0s
 ```
 

@@ -929,14 +929,14 @@ def plot_distribution(P, ts_length, num_distributions):
     # Get the path for each starting value
     for ψ_0 in ψ_0s:
         ψ_t = iterate_ψ(ψ_0, P, ts_length)
-        
+
         # Obtain and plot distributions at each state
         for i in range(n):
             axes[i].plot(range(0, ts_length), ψ_t[:,i], alpha=0.3)
 
     # Add labels
     for i in range(n):
-        axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black', 
+        axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
                         label = fr'$\psi^*({i})$')
         axes[i].set_xlabel('t')
         axes[i].set_ylabel(fr'$\psi_t({i})$')
@@ -948,7 +948,7 @@ def plot_distribution(P, ts_length, num_distributions):
 The following figure shows
 
 ```{code-cell} ipython3
-# Define the number of iterations 
+# Define the number of iterations
 # and initial distributions
 ts_length = 50
 num_distributions = 25
@@ -962,7 +962,7 @@ plot_distribution(P, ts_length, num_distributions)
 
 The convergence to $\psi^*$ holds for different initial distributions.
 
-+++ {"user_expressions": []}
+
 
 #### Example: Failure of convergence
 
@@ -979,7 +979,7 @@ num_distributions = 30
 plot_distribution(P, ts_length, num_distributions)
 ```
 
-+++ {"user_expressions": []}
+
 
 (finite_mc_expec)=
 ## Computing expectations
## Computing expectations
@@ -1010,8 +1010,8 @@ where
10101010
algebra, we'll think of as the column vector
10111011

10121012
$$
1013-
h =
1014-
\begin{bmatrix}
1013+
h =
1014+
\begin{bmatrix}
10151015
h(x_1) \\
10161016
\vdots \\
10171017
h(x_n)
@@ -1058,15 +1058,15 @@ $\sum_t \beta^t h(X_t)$.
 In view of the preceding discussion, this is
 
 $$
-\mathbb{E} 
+\mathbb{E}
 \left[
-\sum_{j=0}^\infty \beta^j h(X_{t+j}) \mid X_t 
+\sum_{j=0}^\infty \beta^j h(X_{t+j}) \mid X_t
 = x
 \right]
 = x + \beta (Ph)(x) + \beta^2 (P^2 h)(x) + \cdots
 $$
 
-By the {ref}`Neumann series lemma <la_neumann>`, this sum can be calculated using 
+By the {ref}`Neumann series lemma <la_neumann>`, this sum can be calculated using
 
 $$
 I + \beta P + \beta^2 P^2 + \cdots = (I - \beta P)^{-1}
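Aside (not part of this commit): in code, the Neumann-series identity above means the expected discounted sum solves the linear system $(I - \beta P)v = h$. The `h` and `β` below are illustrative choices, not the lecture's.

```python
import numpy as np

P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])
h = np.array([1.0, 0.0, -1.0])   # illustrative payoff vector h(x)
β = 0.96                         # discount factor; the identity needs β < 1
v = np.linalg.solve(np.eye(len(P)) - β * P, h)   # v = (I - βP)^{-1} h
print(v)
```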
@@ -1082,11 +1082,11 @@ Imam and Temple {cite}`imampolitical` used a three-state transition matrix to de
 
 $$
 P :=
-\begin{bmatrix} 
+\begin{bmatrix}
 0.68 & 0.12 & 0.20 \\
 0.50 & 0.24 & 0.26 \\
 0.36 & 0.18 & 0.46
-\end{bmatrix} 
+\end{bmatrix}
 $$
 
 where rows, from top to down, correspond to growth, stagnation, and collapse.
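Aside (not part of this commit): one illustrative computation with this matrix, using the `quantecon` package the lecture already imports -- its stationary distribution, i.e. the long-run fraction of time spent in each regime.

```python
import numpy as np
import quantecon as qe

P = np.array([[0.68, 0.12, 0.20],
              [0.50, 0.24, 0.26],
              [0.36, 0.18, 0.46]])
mc = qe.MarkovChain(P, state_values=('growth', 'stagnation', 'collapse'))
print(mc.stationary_distributions[0])   # long-run time shares across regimes
```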
