An AR(1) model can be specified as:

$$
x_t = \phi x_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \sigma^2),
$$

where $\phi$ and $\sigma^2$ are the parameters to be estimated.
3. Using the above likelihood and priors, give an equation for each of the conditional posterior densities of each parameter of an AR(1) model up to a constant of proportionality.
From our priors, both parameters have densities proportional to 1 over their respective ranges, so the joint posterior density is:
$$
p(\phi, \sigma^2 \mid x) \propto L\, p(\phi)\, p(\sigma^2) \propto L,
$$
where $L$ denotes the likelihood given above. So the conditional posterior density of each parameter is likewise proportional to the likelihood, viewed as a function of that parameter with the other held fixed (and restricted to the range of its prior):
$$
p(\phi \mid \sigma^2, x) \propto L, \qquad p(\sigma^2 \mid \phi, x) \propto L.
$$
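As a concrete sketch, assuming the likelihood referred to above is the Gaussian likelihood conditional on the first observation, $L = \prod_{t=2}^{n} (2\pi\sigma^2)^{-1/2}\exp\{-(x_t - \phi x_{t-1})^2/(2\sigma^2)\}$, and the flat priors just described, the two conditional posteriors reduce to recognizable kernels:

$$
p(\phi \mid \sigma^2, x) \propto \exp\!\left(-\frac{\sum_{t=2}^{n} x_{t-1}^2}{2\sigma^2}\,(\phi - \hat{\phi})^2\right), \qquad \hat{\phi} = \frac{\sum_{t=2}^{n} x_t x_{t-1}}{\sum_{t=2}^{n} x_{t-1}^2},
$$

$$
p(\sigma^2 \mid \phi, x) \propto (\sigma^2)^{-(n-1)/2}\exp\!\left(-\frac{\sum_{t=2}^{n} (x_t - \phi x_{t-1})^2}{2\sigma^2}\right),
$$

i.e., a normal kernel in $\phi$ and an inverse-gamma kernel in $\sigma^2$, which is what the Gibbs sampler in qu. 4 exploits.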
4. Based on the above, write down, in detail, an MCMC algorithm for conducting posterior inference for an AR(1) model.
Step 1: Start with uninformative starting values for $\phi$ and $\sigma^2$.

Step 2: Plug in the current value of $\sigma^2$ and draw $\phi$ from its conditional posterior $p(\phi \mid \sigma^2, x)$.

Step 3: Plug in the newly drawn $\phi$ and draw $\sigma^2$ from its conditional posterior $p(\sigma^2 \mid \phi, x)$.

Step 4: Repeat the above steps N times (N is large).

Then we get a sample from the joint posterior distribution of the parameters $(\phi, \sigma^2)$, from which posterior summaries can be computed (after discarding an initial burn-in).
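A minimal Julia sketch of these steps, assuming the conditional posteriors derived in qu. 3 (normal for $\phi$, inverse-gamma for $\sigma^2$ under the flat priors); the function name `ar1_gibbs` and its defaults are illustrative only:

```julia
using Distributions

# Minimal sketch of the Gibbs steps above for x_t = phi*x_{t-1} + eps_t, eps_t ~ N(0, sigma2),
# under flat priors (any range restrictions from the priors are ignored here for simplicity).
function ar1_gibbs(x; N = 5_000)
    xt, xlag = x[2:end], x[1:end-1]
    T = length(xt)
    phidraws, sig2draws = zeros(N), zeros(N)
    phi, sig2 = 0.0, 1.0                       # Step 1: starting values
    for i in 1:N
        # Step 2: draw phi | sigma2, x from its normal conditional posterior
        phihat = sum(xt .* xlag) / sum(xlag .^ 2)
        V      = sig2 / sum(xlag .^ 2)
        phi    = rand(Normal(phihat, sqrt(V)))
        # Step 3: draw sigma2 | phi, x from its inverse-gamma conditional posterior
        sse  = sum((xt .- phi .* xlag) .^ 2)
        sig2 = rand(InverseGamma(T / 2 - 1, sse / 2))
        # Step 4: store the draws and repeat
        phidraws[i], sig2draws[i] = phi, sig2
    end
    return phidraws, sig2draws
end
```

For example, `phidraws, sig2draws = ar1_gibbs(y)` would return N draws from the joint posterior for a series `y`.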
5. What is a suitable prior to enforce a stationarity assumption in an AR(1) model, i.e., impose the constraint(s) necessary to ensure the variable is stationary?
Stationarity (weak form) by definition means that the mean, the variance, and the autocorrelation at each lag are constant over time.

For the AR(1) model to be stationary we need $|\phi| < 1$, so a suitable prior is any prior whose support is restricted to $(-1, 1)$, for example a uniform prior on that interval.
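For illustration only (one possible encoding, not the only one), in Turing.jl this constraint can be imposed through the prior's support:

```julia
using Turing

# Restrict phi to the stationary region (-1, 1) via the support of its prior.
@model function ar1_stationary(x)
    phi ~ Uniform(-1, 1)                 # flat prior on the stationary region
    sig ~ Uniform(0.01, 10)
    for t in 2:length(x)
        x[t] ~ Normal(phi * x[t-1], sig)
    end
end
```

A truncated normal on $(-1, 1)$, e.g. `truncated(Normal(0, 0.5), -1, 1)`, would serve the same purpose.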
6. Using Julia, generate pseudo-data for two variables, 100 observations, each from an AR(1) DGP; one stationary, one nonstationary (you can experiment with different parameter values), and provide a time plot for each one.
using Plots

n = 1000          # length of each generated series (after burn-in)

Stationary (phi = 0.5):

phi = 0.5
alpha = 0.0
z = zeros(n+20)
for t = 2:(n+20)
    z[t] = alpha + phi*z[t-1] + randn()
end
y = z[21:end]     # discard the first 20 draws as burn-in
plot(y)

Non-Stationary (phi = 1.001, explosive root):

let               # local scope, so the stationary series y above is kept for qu. 7
    phi = 1.001
    z = zeros(n+20)
    for t = 2:(n+20)
        z[t] = 0.0 + phi*z[t-1] + randn()
    end
    y = z[21:(n+20)]
    plot(y)
end
7. Using Julia and your data generated in qu. 6, estimate an AR(1) model (i) using Bayesian HMC/MCMC methods in Turing.jl, and also (ii) using Gibbs sampling (not via Turing.jl). Provide estimation summary statistics and plots of the posterior densities, along with plots and/or information about the prior distributions used.
(1) Using Turing.jl

yt = y[2:n]             # dependent variable: y_t
yt1 = y[1:(n-1)]        # regressor: lagged value y_{t-1}
X = [ones(n-1) yt1]     # design matrix: intercept and lag

using Turing, StatsPlots
@model function simple_regression(y, X, ::Type{TV}=Vector{Float64}) where {TV}
    n, D = size(X)
    sig ~ Uniform(0.01, 10)              # prior on the error standard deviation
    beta = TV(undef, D)
    for k in 1:D
        beta[k] ~ Normal(0, 20.0)        # diffuse prior; sd < 10 was too restrictive for the coefficients
    end
    for i in 1:n
        y[i] ~ Normal(X[i, :]' * beta, sig)
    end
end
model = simple_regression(yt, X)
Turing.setprogress!(true)
@time cc = sample(model, NUTS(0.65),3000)
cc
plot(cc)
**Parameter distributions:**
Summary statistics:

| parameters | mean | std | mcse | ess_bulk | ess_tail | rhat | ess_per_sec |
|---|---|---|---|---|---|---|---|
| sig | 0.9925 | 0.0221 | 0.0003 | 4246.3725 | 2070.5354 | 1.0002 | 866.9605 |
| beta[1] | 0.0223 | 0.0321 | 0.0005 | 4977.8410 | 2198.9361 | 1.0028 | 1016.3007 |
| beta[2] | 0.5031 | 0.0277 | 0.0004 | 4540.3554 | 2170.7153 | 0.9999 | 926.9815 |

Quantiles:

| parameters | 2.5% | 25.0% | 50.0% | 75.0% | 97.5% |
|---|---|---|---|---|---|
| sig | 0.9506 | 0.9774 | 0.9924 | 1.0072 | 1.0375 |
| beta[1] | -0.0396 | 0.0009 | 0.0212 | 0.0445 | 0.0850 |
| beta[2] | 0.4489 | 0.4848 | 0.5027 | 0.5210 | 0.5592 |
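The question also asks for plots and/or information about the priors. The priors used in the Turing model above (Normal(0, 20) on each coefficient and Uniform(0.01, 10) on sig) can be plotted directly; a small sketch, assuming StatsPlots' distribution plotting recipes:

```julia
using StatsPlots, Distributions

# Priors used in the Turing model above: Normal(0, 20) on each coefficient,
# Uniform(0.01, 10) on the error s.d. sig.
p1 = plot(Normal(0, 20.0), title = "prior for beta[k]", label = "")
p2 = plot(Uniform(0.01, 10), title = "prior for sig", label = "")
plot(p1, p2, layout = (1, 2))
```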
(2) Using Gibbs sampling

Since the AR(1) model takes the same form as a linear regression model if we set $y_t$ as the dependent variable and $[1,\ y_{t-1}]$ as the regressors, we can reuse a Gibbs sampler for the linear regression model (gsreg.jl from the course materials); a hedged sketch of such a sampler is given at the end of this answer.
yt = y[2:end]
yt1 = y[1:end-1]

using Statistics
include("gsreg.jl")                      # Gibbs sampler for the linear regression model (course code)

b = [0.0; 0.0]                           # prior coefficient means
iB = inv([0.0001 0.0; 0.0 0.0001])       # iB = inv(B): inverse of the prior covariance B
X = [ones(length(yt1)) yt1]              # intercept and lagged y
bdraws, s2draws = gsreg(yt, X)
plot(bdraws[:,2], st=:density, fill=true, alpha=0.5, title = "phi posterior")
mean(bdraws[:,2])
std(bdraws[:,2])
quantile(bdraws[:,2],[0.025,0.975])
plot(s2draws[:,1], st=:density, fill=true, alpha=0.5, title = "sigma^2 posterior" )
mean(s2draws[:,1])
std(s2draws[:,1])
quantile(s2draws[:,1],[0.025,0.975])
**Distribution for $\phi$:** the posterior density of $\phi$ is plotted above; its posterior mean, standard deviation, and 95% interval are given by the mean, std, and quantile calls on bdraws[:,2].

**Distribution for $\sigma^2$:** the posterior density of $\sigma^2$ is plotted above; its posterior mean, standard deviation, and 95% interval are given by the corresponding calls on s2draws[:,1].
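The sampler itself is in gsreg.jl (course code, not reproduced here). For reference only, a minimal sketch of a conjugate Gibbs sampler of this kind, assuming a normal prior on the coefficients with mean `b` and precision `iB` and a flat prior on $\sigma^2$; this is a guess at the structure, not the actual contents of gsreg.jl:

```julia
using LinearAlgebra, Distributions

# Sketch of a conjugate Gibbs sampler for y = X*beta + e, e ~ N(0, sigma2*I),
# with beta ~ N(b, inv(iB)) and a flat prior on sigma2. Illustrative only;
# the results above come from the course's gsreg.jl, not from this function.
function gsreg_sketch(y, X; b = zeros(size(X, 2)), iB = 1e-4 * I(size(X, 2)),
                      ndraws = 10_000)
    T, k = size(X)
    bdraws  = zeros(ndraws, k)
    s2draws = zeros(ndraws, 1)
    sig2 = 1.0                              # starting value for sigma^2
    XX, Xy = X' * X, X' * y
    for i in 1:ndraws
        # beta | sigma2, y ~ N(bbar, Bbar): conjugate normal update
        Bbar = Symmetric(inv(XX ./ sig2 .+ iB))   # Symmetric guards against round-off asymmetry
        bbar = Bbar * (Xy ./ sig2 .+ iB * b)
        beta = rand(MvNormal(bbar, Bbar))
        # sigma2 | beta, y ~ inverse-gamma under the flat prior
        sse  = sum((y .- X * beta) .^ 2)
        sig2 = rand(InverseGamma(T / 2 - 1, sse / 2))
        bdraws[i, :] .= beta
        s2draws[i, 1] = sig2
    end
    return bdraws, s2draws
end
```

Usage would mirror the call above, e.g. `bdraws, s2draws = gsreg_sketch(yt, X)`.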