---
title: 'Tutorial 3: Frequency analysis'
author: "Andreas Niekler, Gregor Wiedemann"
date: "`r format(Sys.time(), '%Y-%m-%d')`"
output:
  pdf_document:
    toc: yes
  html_document:
    number_sections: yes
    theme: united
    toc: yes
    toc_float: yes
    highlight: tango
csl: springer.csl
bibliography: references.bib
---
```{r klippy, echo=FALSE, include=TRUE}
klippy::klippy()
options(width = 98)
```
This tutorial introduces frequency analysis with basic R functions. We further introduce some text preprocessing functionality provided by the `quanteda` package.
1. Text preprocessing
2. Time series
3. Grouping of semantic categories
4. Heatmaps
```{r init, results='hide', eval=FALSE, message=FALSE, warning=FALSE}
options(stringsAsFactors = FALSE)
require(quanteda)
library(dplyr)
```
```{r init2, message=F, echo=F, warning=FALSE}
options(stringsAsFactors = FALSE, digits = 3)
library(quanteda)
library(dplyr)
```
# Text preprocessing
As in the previous tutorial, we read the CSV data file containing the State of the Union addresses. This time, we add two more columns for the year and the decade. For the year, we select a substring of the first four characters from the `date` column of the data frame (e.g. extracting "1990" from "1990-02-12"). For the decade, we select a substring of the first three characters and paste a "0" to it. In later parts of the exercise, we can use these columns for grouping the data.
```{r readata, results='hide', message=FALSE, warning=FALSE}
textdata <- read.csv("data/sotu.csv", sep = ";", encoding = "UTF-8")
# we add some more metadata columns to the data frame
textdata$year <- substr(textdata$date, 1, 4)
textdata$decade <- paste0(substr(textdata$date, 1, 3), "0")
```
Then, we create a corpus object again.
This time, we also apply **lemmatization** to the corpus. Lemmatization reduces (potentially) inflected word forms to their lexical base form, unifying similar semantic forms into the same text representation. For instance, 'presidents' becomes 'president' and 'singing' becomes 'sing'.
Moreover, we apply several preprocessing steps to the corpus text. `remove_punct` removes punctuation characters, `remove_numbers` removes numeric characters, and `remove_symbols` removes all characters in the Unicode "Symbol" [S] class. A lowercase transformation is performed (it needs to be applied before the lemma replacement!), and finally a set of English stop words is removed.
```{r preprocesscorpus, cache=TRUE}
sotu_corpus <- corpus(textdata$text, docnames = textdata$doc_id)
# Build a dictionary of lemmas
lemma_data <- read.csv("resources/baseform_en.tsv", encoding = "UTF-8")
# Tokenize and preprocess the corpus
corpus_tokens <- sotu_corpus %>%
tokens(remove_punct = TRUE, remove_numbers = TRUE, remove_symbols = TRUE) %>%
tokens_tolower() %>%
tokens_replace(lemma_data$inflected_form, lemma_data$lemma, valuetype = "fixed") %>%
tokens_remove(pattern = stopwords())
```
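To see what the lemma replacement does in isolation, here is a minimal toy sketch (the tokens and lemma pairs below are invented for illustration and are not taken from `baseform_en.tsv`):
```{r lemmaToy, eval=FALSE}
# toy dictionary, analogous to lemma_data$inflected_form / lemma_data$lemma
toy_tokens <- tokens("Presidents were singing") %>% tokens_tolower()
tokens_replace(toy_tokens,
               pattern = c("presidents", "singing"),
               replacement = c("president", "sing"),
               valuetype = "fixed")
# expected result: "president" "were" "sing"
```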
The effects of this preprocessing can be inspected by printing the first document to the console.
```{r results='hide'}
print(paste0("1: ", substr(paste(corpus_tokens[1], collapse = " "), 1, 400), '...'))
```
We see that the text is now a sequence of features corresponding to the selected preprocessing steps (including lemmatization).
From the preprocessed corpus, we create a new DTM.
```{r generateDTM}
DTM <- corpus_tokens %>%
dfm()
```
The resulting DTM should have `r nrow(DTM)` rows and `r ncol(DTM)` columns.
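If you want to check these numbers yourself, a quick (optional) way to inspect the DTM is:
```{r inspectDTM, eval=FALSE}
dim(DTM)              # number of documents and number of features
topfeatures(DTM, 10)  # the ten most frequent features in the corpus
```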
# Time series
We now want to measure frequencies of certain terms over time. Frequencies in single decades are plotted as line graphs to follow their trends over time. First, we determine which terms to analyze and reduce our DTM to these terms.
```{r reduceDTM1, results='hide', message=FALSE, warning=FALSE}
terms_to_observe <- c("nation", "war", "god", "terror", "security")
DTM_reduced <- as.matrix(DTM[, terms_to_observe])
```
The reduced DTM contains counts for each of our `r length(terms_to_observe)` terms in each of the `r nrow(DTM_reduced)` documents (rows of the reduced DTM). Since our corpus covers a time span of more than 200 years, we aggregate the frequencies of single documents per decade to get a meaningful representation of term frequencies over time.
We added the decade information for each document to the `textdata` variable at the beginning. There are mostly ten speeches per decade (you can check with `table(textdata$decade)`). We use `textdata$decade` as a grouping parameter for the `aggregate` function. This function sub-selects rows from the input data (`DTM_reduced`) for each distinct decade value given in the `by` parameter. Each sub-selection is then processed column-wise using the function provided as the third parameter (`sum`).
```{r reduceDTM2, results='hide', message=FALSE, warning=FALSE}
counts_per_decade <- aggregate(DTM_reduced, by = list(decade = textdata$decade), sum)
```
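If the way `aggregate` groups and sums rows is unclear, this minimal toy example (with invented counts) illustrates the mechanics:
```{r aggregateToy, eval=FALSE}
toy_counts <- matrix(c(1, 2,
                       3, 4,
                       5, 6),
                     ncol = 2, byrow = TRUE,
                     dimnames = list(NULL, c("war", "peace")))
toy_decades <- c("1900", "1900", "1910")
aggregate(toy_counts, by = list(decade = toy_decades), sum)
# expected result: decade 1900 -> war 4, peace 6; decade 1910 -> war 5, peace 6
```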
`counts_per_decade` now contains the sums of term frequencies per decade. Time series for single terms can be plotted with the simple `plot` function; additional time series can then be added with the `lines` function (see Tutorial 2). A simpler way is to use the `matplot` function, which can draw multiple lines in one command.
```{r plotMat, results='hide', message=FALSE, warning=FALSE}
# give x and y values beautiful names
decades <- counts_per_decade$decade
frequencies <- counts_per_decade[, terms_to_observe]
# plot multiple frequencies
matplot(decades, frequencies, type = "l")
# add legend to the plot
l <- length(terms_to_observe)
legend('topleft', legend = terms_to_observe, col=1:l, text.col = 1:l, lty = 1:l)
```
Among other things, we can observe peaks in references to `war` around the US Civil War, around 1900, and around WWII. The term `nation` also peaks around 1900, giving hints for further investigation into a possible relatedness of both terms during that time. References to `security`, `god` or `terror` appear to be more 'modern' phenomena.
# Grouping of sentiments
Frequencies cannot only be aggregated over time for time series analysis, but also across categories of terms for comparison. For instance, we can model a very simple **sentiment analysis** approach using lists of positive and negative words. Counts of these words can then be aggregated with respect to any metadata. For instance, if we count sentiment words per president, we get an impression of who utilized emotional language to what extent.
We provide a selection of very positive / very negative English words extracted from an English opinion word lexicon compiled by @Hu.2004. Have a look at the text files to see what they consist of.
As an alternative, we can also use the AFINN sentiment lexicon from @Nielsen.2011.
```{r buildTS1, warning=F}
# English Opinion Word Lexicon by Hu et al. 2004
positive_terms_all <- readLines("data/positive-words.txt")
negative_terms_all <- readLines("data/negative-words.txt")
# AFINN sentiment lexicon by Nielsen 2011
# (note: these assignments overwrite the Hu & Liu lists loaded above,
# so the analysis below uses the AFINN-based lists)
afinn_terms <- read.csv("data/AFINN-111.txt", header = F, sep = "\t")
positive_terms_all <- afinn_terms$V1[afinn_terms$V2 > 0]
negative_terms_all <- afinn_terms$V1[afinn_terms$V2 < 0]
```
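To get a feeling for what these word lists contain, you can (optionally) print a few entries:
```{r lexiconPeek, eval=FALSE}
head(positive_terms_all, 10)  # a few positive terms
head(afinn_terms)             # AFINN terms (V1) with their valence scores (V2)
```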
To count occurrences of these terms in our speeches, we first need to restrict the lists to those words actually occurring in our speeches. These terms can then be aggregated per speech with a simple `rowSums` command.
```{r buildTS2, warning=F, message=F}
positive_terms_in_sotu <- intersect(colnames(DTM), positive_terms_all)
counts_positive <- rowSums(DTM[, positive_terms_in_sotu])
negative_terms_in_sotu <- intersect(colnames(DTM), negative_terms_all)
counts_negative <- rowSums(DTM[, negative_terms_in_sotu])
```
Since the lengths of speeches vary greatly, we want to measure relative frequencies of sentiment terms. This can be achieved by dividing the counts of sentiment terms by the number of all terms in each document. We store the relative frequencies in a data frame for subsequent aggregation and visualization.
```{r buildTS3, warning=F}
counts_all_terms <- rowSums(DTM)
relative_sentiment_frequencies <- data.frame(
positive = counts_positive / counts_all_terms,
negative = counts_negative / counts_all_terms
)
```
Now we aggregate not per decade, but per president. Furthermore, we do not take the sum (not all presidents gave the same number of speeches) but the mean. A sample output shows the computed mean sentiment scores per president.
```{r buildTS4, warning=F}
sentiments_per_president <- aggregate(relative_sentiment_frequencies, by = list(president = textdata$president), mean)
head(sentiments_per_president)
```
Scores per president can be visualized as a bar plot. The package `ggplot2` offers a great variety of plots. The package `reshape2` offers functions to convert data into the right format for `ggplot2`. For more information on `ggplot2` see: https://ggplot2.tidyverse.org
```{r buildTS5, warning=F, message=F}
require(reshape2)
df <- melt(sentiments_per_president, id.vars = "president")
require(ggplot2)
ggplot(data = df, aes(x = president, y = value, fill = variable)) +
geom_bar(stat="identity", position=position_dodge()) + coord_flip()
```
The standard output is sorted alphabetically by the presidents' names. We can make use of the `reorder` command to sort by the positive / negative sentiment score:
```{r buildTS6, warning=F}
# order by positive sentiments
ggplot(data = df, aes(x = reorder(president, value, head, 1), y = value, fill = variable)) +
  geom_bar(stat = "identity", position = position_dodge()) +
  coord_flip()
# order by negative sentiments
ggplot(data = df, aes(x = reorder(president, value, tail, 1), y = value, fill = variable)) +
  geom_bar(stat = "identity", position = position_dodge()) +
  coord_flip()
```
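A brief note on the somewhat cryptic `reorder` calls: `reorder(president, value, head, 1)` reorders the factor levels of `president` by applying `head(value, 1)` to the values within each president's group, i.e. by the first value per group, which after `melt` is the positive score; `tail, 1` analogously uses the last (negative) value. A small sketch of the same idea using `tapply`:
```{r reorderSketch, eval=FALSE}
# the per-president score that reorder(..., head, 1) sorts by
ordering_scores <- tapply(df$value, df$president, head, 1)
head(sort(ordering_scores, decreasing = TRUE))
```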
# Heatmaps
Overlapping several time series in one plot can become very confusing. Heatmaps provide an alternative for visualizing multiple frequencies over time. In this visualization, each time series is mapped to a row of a matrix grid. Each cell of the grid is filled with a color corresponding to the value of the time series at that point. Thus, several time series can be displayed in parallel.
In addition, the time series can be sorted by similarity in a heatmap. This way, similar frequency sequences with parallel shapes (heat-activated cells) can be detected more quickly. Dendrograms can be plotted alongside to visualize the degree of similarity.
```{r HM1, fig.width=9, fig.height=8, warning=F}
terms_to_observe <- c("war", "peace", "health", "terror", "islam",
"threat", "security", "conflict", "job",
"economy", "indian", "afghanistan", "muslim",
"god", "world", "territory", "frontier", "north",
"south", "black", "racism", "slavery", "iran")
DTM_reduced <- as.matrix(DTM[, terms_to_observe])
rownames(DTM_reduced) <- ifelse(as.integer(textdata$year) %% 2 == 0, textdata$year, "")
heatmap(t(DTM_reduced), Colv = NA, col = rev(heat.colors(256)),
        keep.dendro = FALSE, margins = c(5, 10))
```
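Note that the heatmap above uses absolute counts, so long speeches tend to appear hotter. As an optional variant (a sketch, analogous to the relative counts used in optional exercise 4 below), you can divide the counts by the document lengths first:
```{r heatmapRelative, eval=FALSE}
# relative frequencies: divide each row by the total number of tokens in that speech
DTM_relative <- DTM_reduced / rowSums(DTM)
heatmap(t(DTM_relative), Colv = NA, col = rev(heat.colors(256)),
        keep.dendro = FALSE, margins = c(5, 10))
```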
# Optional exercises
1. Create the time series plot with the terms "environment", "climate", "planet", "space" as shown above. Then, try to use the ggplot2 library for the line plot (e.g. the function `geom_line()`).
```{r ex1, echo=F, results='hide', message=FALSE, warning=FALSE}
# code from above
terms_to_observe <- c("environment", "climate", "planet", "space")
DTM_reduced <- as.matrix(DTM[, terms_to_observe])
counts_per_decade <- aggregate(DTM_reduced, by = list(decade = textdata$decade), sum)
# ggplot2 version
df <- melt(counts_per_decade, id.vars = "decade")
ggplot(data = df, aes(x = decade, y = value, group=variable, color = variable)) +
geom_line()
```
2. Use a different relative measure for the sentiment analysis: instead of computing the proportion of positive/negative terms relative to all terms, compute the share of positive/negative terms relative to all sentiment terms only.
```{r ex2, echo=F, results='hide', message=FALSE, warning=FALSE}
relative_sentiment_frequencies <- data.frame(
positive = counts_positive / (counts_positive + counts_negative),
negative = counts_negative / (counts_positive + counts_negative)
)
sentiments_per_president <- aggregate(relative_sentiment_frequencies, by = list(president = textdata$president), mean)
df <- melt(sentiments_per_president, id.vars = "president")
ggplot(data = df, aes(x = reorder(president, value, head, 1), y = value, fill = variable)) + geom_bar(stat="identity", position="stack") + coord_flip()
```
3. The AFINN sentiment lexicon provides not only negative/positive terms, but also a valence value for each term in the range [-5, +5]. Instead of counting sentiment terms only, use the valence values for sentiment scoring.
```{r ex3, echo=F, results='hide', message=FALSE, warning=FALSE}
corpus_afinn <- sotu_corpus %>%
  tokens(remove_punct = TRUE, remove_numbers = TRUE, remove_symbols = TRUE) %>%
  tokens_tolower() %>%
  tokens_remove(pattern = stopwords())
# AFINN sentiment lexicon by Nielsen 2011
afinn_terms <- read.csv("data/AFINN-111.txt", header = F, sep = "\t")
pos_idx <- afinn_terms$V2 > 0
positive_terms_score <- afinn_terms$V2[pos_idx]
names(positive_terms_score) <- afinn_terms$V1[pos_idx]
neg_idx <- afinn_terms$V2 < 0
negative_terms_score <- afinn_terms$V2[neg_idx] * -1
names(negative_terms_score) <- afinn_terms$V1[neg_idx]
pos_DTM <- corpus_afinn %>%
  tokens_keep(names(positive_terms_score)) %>%
  dfm()
positive_terms_score <- positive_terms_score[colnames(pos_DTM)]
# caution: to multiply all rows of a matrix with a vector of ncol(matrix) length,
# you need to transpose the matrix, multiply, and transpose the result again
pos_DTM <- t(t(as.matrix(pos_DTM)) * positive_terms_score)
counts_positive <- rowSums(pos_DTM)
neg_DTM <- corpus_afinn %>%
  tokens_keep(names(negative_terms_score)) %>%
  dfm()
negative_terms_score <- negative_terms_score[colnames(neg_DTM)]
# caution: to multiply all rows of a matrix with a vector of ncol(matrix) length,
# you need to transpose the matrix, multiply, and transpose the result again
neg_DTM <- t(t(as.matrix(neg_DTM)) * negative_terms_score)
counts_negative <- rowSums(neg_DTM)
counts_all_terms <- corpus_afinn %>% dfm() %>% rowSums()
relative_sentiment_frequencies <- data.frame(
positive = counts_positive / (counts_positive + counts_negative),
negative = counts_negative / (counts_positive + counts_negative)
)
sentiments_per_president <- aggregate(
relative_sentiment_frequencies,
by = list(president = textdata$president),
mean)
head(sentiments_per_president)
df <- melt(sentiments_per_president, id.vars = "president")
# order by positive sentiments
ggplot(data = df, aes(x = reorder(president, value, head, 1), y = value, fill = variable)) + geom_bar(stat="identity", position="stack") + coord_flip()
```
4. Draw a heatmap of the terms "i", "you", "he", "she", "we", "they" aggregated per president. Caution: you need to modify the preprocessing in a certain way! Also consider setting the parameter `scale='none'` when calling the `heatmap` function.
```{r ex4, echo=F, results='hide', message=FALSE, warning=FALSE}
# do not use stop word removal!
DTM <- sotu_corpus %>%
  tokens(remove_punct = TRUE, remove_numbers = TRUE, remove_symbols = TRUE) %>%
  tokens_tolower() %>%
  dfm()
# aggregate relative counts per president
terms_to_observe <- c("i", "you", "he", "she", "we", "they")
DTM_reduced <- as.matrix(DTM[, terms_to_observe])
abs_counts_per_president <- aggregate(
DTM_reduced,
by = list(president = textdata$president),
sum)
lengths_speeches_per_president <- aggregate(
rowSums(DTM),
by = list(president = textdata$president),
sum)
rel_counts_per_president <- abs_counts_per_president[, -1] / lengths_speeches_per_president[, -1]
rownames(rel_counts_per_president) <- abs_counts_per_president$president
# temporal re-ordering
temporally_ordered_presidents <- unique(textdata$president)
rel_counts_per_president <- rel_counts_per_president[temporally_ordered_presidents, ]
# plot
heatmap(t(rel_counts_per_president), Colv = NA, col = rev(heat.colors(256)),
        keep.dendro = FALSE, margins = c(5, 10), scale = "none")
```
# References