
Commit

Rebuilt with edits from me.
dajmcdon committed Oct 31, 2021
1 parent bde16eb commit 8bd264e
Showing 4 changed files with 10 additions and 9 deletions.
15 changes: 8 additions & 7 deletions forecast/paper/supplement-text.tex
@@ -22,15 +22,15 @@ \section{Finalized Versus Vintage Data}

\chngcli~(and, to a lesser extent, the other claims-based signals) is the most
affected by this distinction, reflecting the latency in claims-based reporting.
-This underscores the importance of efforts to provide ``nowcasts'' for claims
+This highlights the importance of efforts to provide ``nowcasts'' for claims
signals (which corresponds to a 0-ahead forecast of what the claims signal's
value will be once all data has been collected). Looking at the \chngcli~and
\dv~curves in Figure \ref{fig:fcast-finalized}, we can see that they perform
very similarly when trained on the finalized data. This is reassuring because
they are, in principle, measuring the same thing (namely, the percentage of
outpatient visits that are primarily about COVID-related symptoms), but based on
data from different providers. The substantial difference in their curves in
-Figure 3 of the main paper must therefore reflect their having very different
+Figure 3 of the main paper must, therefore, reflect their having very different
backfill profiles.

While using finalized rather than vintage data affects \dv~the least for
@@ -104,8 +104,8 @@ \section{Comparing COVID-19 Forecast Hub Models}
quantiles, a trailing training window of 21 days, pooling across all locations
jointly, and fitting to case rates rather than counts (as we do in all our
models in the main paper)---can be robust and effective, performing
-competitively many of the top models from COVID-19 Forecast Hub, including the
-Hub ensemble model.
+competitively to the top models submitted to the COVID-19 Forecast Hub,
+including the Hub's ensemble model.

The closest forecast target in the Hub to that used in the main paper is
state-level case incidence over an epiweek---defined by the sum of new case
@@ -128,11 +128,12 @@ \section{Comparing COVID-19 Forecast Hub Models}
combination of their quantiles, but rather, depends intricately on the
correlations between the random variables).
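
The point made above — that quantiles of a sum are not simple combinations of the summands' quantiles, and that dependence between the days matters — can be illustrated with a quick Monte Carlo sketch (Python used here purely for illustration; the daily count distribution is a made-up assumption, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
q = 0.9

# Seven hypothetical daily case-count distributions, drawn independently.
indep_days = rng.gamma(shape=2.0, scale=50.0, size=(7, n))

# Seven perfectly dependent (comonotonic) days: the identical draw each day.
comono_days = np.tile(rng.gamma(shape=2.0, scale=50.0, size=n), (7, 1))

def compare(days):
    # Sum of the per-day q-quantiles vs. the q-quantile of the weekly sum.
    sum_of_quantiles = np.quantile(days, q, axis=1).sum()
    quantile_of_sum = np.quantile(days.sum(axis=0), q)
    return sum_of_quantiles, quantile_of_sum

s_i, q_i = compare(indep_days)   # quantile of the sum falls well below the sum
s_c, q_c = compare(comono_days)  # the two agree under perfect dependence
```

Under independence the upper quantile of the weekly sum is far smaller than the sum of the daily upper quantiles, while under perfect dependence they coincide — which is why daily quantile forecasts cannot be mechanically aggregated into epiweek quantile forecasts.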

-Therefore, to make a comparison to models in the Hub as direct as possible, we
+Therefore, to make the comparison to models in the Hub as direct as possible, we
retrained our models over the same forecast period as in the main paper, and
with the same general setup entirely, except at the state rather than HRR level.
We then rescaled them post hoc to account for the different temporal resolution
-and the rate versus count scale (first and third points in the above list). The
+and the rate-versus-count scaling (first and third points in the above list). The
results are given in Figure \ref{fig:compare-to-hub}. The evaluation was
carried out exactly as in the main paper, and the figure displays both mean WIS
and geometric mean WIS, as a function of ahead, relative to the baseline model.
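
The WIS evaluation referenced here can be sketched as twice the average pinball loss over the forecast's quantile levels (one standard equivalent form of the weighted interval score; the exact quantile set and interval weighting used in the paper are not reproduced, and the function names below are illustrative):

```python
import numpy as np

def pinball(y, q_pred, tau):
    # Quantile (pinball) loss for a single predicted quantile at level tau.
    return np.where(y >= q_pred, tau * (y - q_pred), (1 - tau) * (q_pred - y))

def wis(y, q_preds, taus):
    # WIS written as twice the mean pinball loss across quantile levels.
    return 2 * np.mean([pinball(y, q, t) for q, t in zip(q_preds, taus)], axis=0)

def geo_mean_relative(wis_model, wis_baseline):
    # Geometric mean of per-forecast WIS ratios against a baseline,
    # the kind of relative summary plotted as "geometric mean WIS".
    return np.exp(np.mean(np.log(wis_model) - np.log(wis_baseline)))
```

For example, a forecast of quantiles (8, 10, 12) at levels (0.25, 0.5, 0.75) against an observed value of 10 scores a WIS of 2/3 under this form.
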
@@ -146,7 +147,7 @@ \section{Comparing COVID-19 Forecast Hub Models}
in Figure \ref{fig:compare-to-hub} that the AR model examined in this paper is
competitive with top models in the Hub, even outperforming the Hub ensemble
model for smaller ahead values. The same general conclusion can be drawn for
-the indicators models as well. However, interestingly, a close inspection
+the indicator-assisted models as well. However, interestingly, a close inspection
reveals that the AR model here is for the most part in the ``middle of the
pack'' when compared to the indicator models, and only the Google-AA model
offers clear improvement over AR for all aheads. This is likely due to the fact
2 changes: 1 addition & 1 deletion forecast/paper/supplement.Rmd
@@ -212,7 +212,7 @@ plotter(fcasts_honest,

<!-- Comparison to the Hub -->

-```{r compare-to-hub, fig.cap="Forecast performance for AR and indicators models, each retrained at the state level, compared to models submitted to the COVID-19 Forecast Hub over the same period. The thin grey lines individual models from the Hub; the blue line is the Hub ensemble model. (Note that, to align prediction dates as best as possible, we look at the AR and indicator model forecasts for 5, 12, 19, and 26 days ahead; this roughly corresponds to 1, 2, 3, and 4 weeks ahead, respectively, since in the Hub, models typically submit forecasts on a Tuesday for the epiweeks aligned to end on each of the following 4 Saturdays.)"}
+```{r compare-to-hub, fig.cap="Forecast performance for AR and indicator models, each retrained at the state level, compared to models submitted to the COVID-19 Forecast Hub over the same period. The thin grey lines are individual models from the Hub; the blue line is the Hub ensemble model. (To align prediction dates as best as possible, we look at the AR and indicator model forecasts for 5, 12, 19, and 26 days ahead; this roughly corresponds to 1, 2, 3, and 4 weeks ahead, respectively, since in the Hub, models typically submit forecasts on a Tuesday for the epiweeks aligned to end on each of the following 4 Saturdays.)"}
knitr::include_graphics("fig/compare-states-to-hub.pdf")
```

Binary file modified forecast/paper/supplement.pdf
Binary file not shown.
2 changes: 1 addition & 1 deletion forecast/paper/supplement.tex
@@ -100,7 +100,7 @@

}

-\caption{Forecast performance for AR and indicators models, each retrained at the state level, compared to models submitted to the COVID-19 Forecast Hub over the same period. The thin grey lines individual models from the Hub; the blue line is the Hub ensemble model. (Note that, to align prediction dates as best as possible, we look at the AR and indicator model forecasts for 5, 12, 19, and 26 days ahead; this roughly corresponds to 1, 2, 3, and 4 weeks ahead, respectively, since in the Hub, models typically submit forecasts on a Tuesday for the epiweeks aligned to end on each of the following 4 Saturdays.)}\label{fig:compare-to-hub}
+\caption{Forecast performance for AR and indicator models, each retrained at the state level, compared to models submitted to the COVID-19 Forecast Hub over the same period. The thin grey lines are individual models from the Hub; the blue line is the Hub ensemble model. (To align prediction dates as best as possible, we look at the AR and indicator model forecasts for 5, 12, 19, and 26 days ahead; this roughly corresponds to 1, 2, 3, and 4 weeks ahead, respectively, since in the Hub, models typically submit forecasts on a Tuesday for the epiweeks aligned to end on each of the following 4 Saturdays.)}\label{fig:compare-to-hub}
\end{figure}

\clearpage
