Commit a5bcf87

fix_warning
1 parent 4516a82 commit a5bcf87

1 file changed (+24, -19 lines)

lectures/heavy_tails.md

Lines changed: 24 additions & 19 deletions
@@ -280,7 +280,7 @@ The bottom subfigure shows 120 independent draws from [the Cauchy
 distribution](https://en.wikipedia.org/wiki/Cauchy_distribution), which is
 heavy-tailed.
 
-```{code-cell} python3
+```{code-cell} ipython3
 n = 120
 np.random.seed(11)
@@ -333,7 +333,7 @@ The exponential distribution is a light-tailed distribution.
 
 Here are some draws from the exponential distribution.
 
-```{code-cell} python3
+```{code-cell} ipython3
 n = 120
 np.random.seed(11)
@@ -382,7 +382,7 @@ is Pareto-distributed with minimum $\bar x$ and tail index $\alpha$.
 Here are some draws from the Pareto distribution with tail index $1$ and minimum
 $1$.
 
-```{code-cell} python3
+```{code-cell} ipython3
 n = 120
 np.random.seed(11)
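
The cell retagged in this hunk draws from the Pareto distribution with tail index $1$ and minimum $1$. As a side note, such draws can be generated by inverse-transform sampling, a standard fact though not necessarily the construction the lecture itself uses. A minimal sketch, assuming NumPy is imported as `np`; the seed and sample size mirror the cell, the rest is illustrative:

```python
import numpy as np

np.random.seed(11)
n, xbar, alpha = 120, 1.0, 1.0

# If U is uniform on (0, 1), then xbar * U**(-1/alpha) has CCDF (xbar/x)**alpha
# for x >= xbar, i.e. it is Pareto with minimum xbar and tail index alpha.
u = np.random.uniform(size=n)
pareto_draws = xbar * u**(-1 / alpha)
```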
@@ -426,7 +426,7 @@ This function goes to zero as $x \to \infty$, but much slower than $G_E$.
 
 Here's a plot that illustrates how $G_E$ goes to zero faster that $G_P$.
 
-```{code-cell} python3
+```{code-cell} ipython3
 x = np.linspace(1.5, 100, 1000)
 fig, ax = plt.subplots()
 alpha = 1.0
@@ -435,11 +435,11 @@ ax.plot(x, x**(- alpha), label='Pareto', alpha=0.8)
 ax.legend()
 plt.show()
 ```
+
 Here's a log-log plot of the same functions, which makes visual comparison a
 bit easier.
 
-
-```{code-cell} python3
+```{code-cell} ipython3
 fig, ax = plt.subplots()
 alpha = 1.0
 ax.loglog(x, np.exp(- alpha * x), label='exponential', alpha=0.8)
@@ -461,13 +461,12 @@ The sample countpart of the CCDF function is the **empirical CCDF**.
 
 Given a sample $x_1, \ldots, x_n$, the empirical CCDF is given by
 
-$$ \hat G(x) = \frac{1}{n} \sum_{i=1}^n \1\{x_i > x\} $$
+$$ \hat G(x) = \frac{1}{n} \sum_{i=1}^n \mathbb 1\{x_i > x\} $$
 
 Thus, $\hat G(x)$ shows the fraction of the sample that exceeds $x$.
 
 Here's a figure containing some empirical CCDFs from simulated data.
 
-
 ```{code-cell} ipython3
 def eccdf(x, data):
     "Simple empirical CCDF function."
@@ -559,6 +558,7 @@ readers are of course welcome to explore the code (perhaps after examining the f
 
 ```{code-cell} ipython3
 :tags: [hide-input]
+
 def empirical_ccdf(data,
                    ax,
                    aw=None, # weights
@@ -616,35 +616,35 @@ def empirical_ccdf(data,
     return np.log(data), y_vals, p_vals
 ```
 
-
 ```{code-cell} ipython3
 :tags: [hide-input]
+
 def extract_wb(varlist=['NY.GDP.MKTP.CD'],
-               c='all',
+               c='all_countries',
                s=1900,
                e=2021,
                varnames=None):
     if c == "all_countries":
-        # keep countries only (no aggregated regions)
+        # Keep countries only (no aggregated regions)
         countries = wb.get_countries()
-        countries_code = countries[countries['region'] != 'Aggregates']['iso3c'].values
-
-    df = wb.download(indicator=varlist, country=countries_code, start=s, end=e).stack().unstack(0).reset_index()
-    df = df.drop(['level_1'], axis=1).transpose() # set_index(['year'])
-    if varnames != None:
+        countries_name = countries[countries['region'] != 'Aggregates']['name'].values
+        c = "all"
+
+    df = wb.download(indicator=varlist, country=c, start=s, end=e).stack().unstack(0).reset_index()
+    df = df.drop(['level_1'], axis=1).transpose()
+    if varnames is not None:
         df.columns = varnames
     df = df[1:]
     return df
 ```
 
-
-
 ### Firm size
 
 Here is a plot of the firm size distribution taken from Forbes Global 2000.
 
 ```{code-cell} ipython3
 :tags: [hide-input]
+
 df_fs = pd.read_csv('https://media.githubusercontent.com/media/QuantEcon/high_dim_data/update_csdata/cross_section/forbes-global2000.csv')
 df_fs = df_fs[['Country', 'Sales', 'Profits', 'Assets', 'Market Value']]
 fig, ax = plt.subplots(figsize=(6.4, 3.5))
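
Reading the `extract_wb` change above: the revised helper now treats `c='all_countries'` as a label, collects the names of non-aggregate countries from `wb.get_countries()`, and falls back to `country="all"` in `wb.download`, rather than passing the full list of ISO3 codes. A hedged usage sketch, mirroring the per capita GDP call later in this diff; it assumes `from pandas_datareader import wb` plus the revised function above, and `variable_names` is an illustrative guess rather than a value taken from the file:

```python
variable_code = ['NY.GDP.MKTP.CD', 'NY.GDP.PCAP.CD']
variable_names = ['GDP', 'GDP per capita']   # illustrative labels, not from the lecture

df_gdp1 = extract_wb(varlist=variable_code,
                     c='all_countries',      # default in the revised signature
                     s="2021",
                     e="2021",
                     varnames=variable_names)
print(df_gdp1.head())
```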
@@ -663,6 +663,7 @@ measured by population.
 
 ```{code-cell} ipython3
 :tags: [hide-input]
+
 df_cs_us = pd.read_csv('https://raw.githubusercontent.com/QuantEcon/high_dim_data/update_csdata/cross_section/cities_us.txt', delimiter="\t", header=None)
 df_cs_us = df_cs_us[[0, 3]]
 df_cs_us.columns = 'rank', 'pop'
@@ -692,6 +693,7 @@ The data is from the Forbes billionaires list.
 
 ```{code-cell} ipython3
 :tags: [hide-input]
+
 df_w = pd.read_csv('https://media.githubusercontent.com/media/QuantEcon/high_dim_data/update_csdata/cross_section/forbes-billionaires.csv')
 df_w = df_w[['country', 'realTimeWorth', 'realTimeRank']].dropna()
 df_w = df_w.astype({'realTimeRank': int})
@@ -724,6 +726,7 @@ Here we show cross-country per capita GDP.
 
 ```{code-cell} ipython3
 :tags: [hide-input]
+
 # get gdp and gdp per capita for all regions and countries in 2021
 
 variable_code = ['NY.GDP.MKTP.CD', 'NY.GDP.PCAP.CD']
@@ -734,7 +737,9 @@ df_gdp1 = extract_wb(varlist=variable_code,
                      s="2021",
                      e="2021",
                      varnames=variable_names)
+```
 
+```{code-cell} ipython3
 fig, axes = plt.subplots(1, 2, figsize=(8.8, 3.6))
 
 for name, ax in zip(variable_names, axes):
@@ -778,7 +783,7 @@ For example, it fails for the Cauchy distribution.
 Let's have a look at the behavior of the sample mean in this case, and see
 whether or not the LLN is still valid.
 
-```{code-cell} python3
+```{code-cell} ipython3
 from scipy.stats import cauchy
 
 np.random.seed(1234)
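
Context for this final hunk: the retagged cell examines the sample mean of Cauchy draws, which fails to settle down because the Cauchy distribution has no finite mean. A minimal sketch of that behavior, separate from the lecture's own cell (which continues beyond the lines shown here):

```python
import numpy as np
from scipy.stats import cauchy

np.random.seed(1234)
n = 1000
draws = cauchy.rvs(size=n)                              # heavy-tailed draws
running_mean = np.cumsum(draws) / np.arange(1, n + 1)   # sample mean after each draw
print(running_mean[[99, 299, 599, 999]])                # keeps jumping rather than converging
```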
