library(astsa)
# your code here
Assignment 2 Due 10/7 at Midnight
NOTE: I forgot to include a relevant detail for Part 2, number 7. The change is bolded. Sorry!!! Part 2 number 8 should be easier to answer now.
Part 1: Math
In class, we have worked with the “signal plus noise” model (equation 1.5):
\[ \begin{aligned} \text{Model: }& x_t = 2\cos(2\pi\frac{t+15}{50}) + w_t\\ \text{Mean function: }& \mathbb{E}(x_t) = 2\cos(2\pi\frac{t+15}{50}) \end{aligned} \]
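For intuition only, here is a brief illustrative R sketch of one realization of this model; the series length, seed, and standard normal white noise (sd = 1) are my assumptions, not part of the assignment.

```r
set.seed(1)                                # arbitrary seed for reproducibility
t <- 1:500                                 # arbitrary series length
signal <- 2 * cos(2 * pi * (t + 15) / 50)  # the mean function E(x_t)
w_t <- rnorm(length(t))                    # white noise; sd = 1 assumed here
x_t_demo <- signal + w_t
plot.ts(x_t_demo, main = "Signal plus noise")
lines(signal, col = 2, lwd = 2)            # overlay the mean function
```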
[5 points] The mean function is derived in Example 2.4. Describe what happens in each step of the computation [3 points], and provide a “math stress” rating (1 = effortless, 100 = nightmare) and 3 emojis [2 points]. This is personal and there is no right answer.
[5 points] Is the signal plus noise model stationary in the mean?
[5 points] Write down \(\gamma_x(s,t)\), the autocovariance function of \(x_t\) [3 points]. You may accomplish this in any way, including asking me personally in office hours or asking a classmate. Just make sure you cite the source! [2 points]
[6 points] Consider the model:
\[ y_t = x_t - 2\cos(2\pi\frac{t+15}{50}) \]
Compute the mean function of \(y_t\) [3 points]. Is \(y_t\) stationary in the mean? [1 point] How do you know? [2 points]
Part 2: Code
Note: I have set the code chunks here to have `eval: false` in the chunk options. Change that to `eval: true` so that I can run your code easily.
[5 points] All your code runs without errors (unless that’s the point), and if there is a message, explain what it means. (Bonus: to be nice to me, submit a rendered pdf)
[5 points] Simulate from an AR(1) process with coefficient 0.7 and 10 data points.
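If you want a starting point, one common approach (a sketch, not the required method) uses `stats::arima.sim`; the seed is arbitrary.

```r
set.seed(42)                                      # arbitrary seed
x_t <- arima.sim(model = list(ar = 0.7), n = 10)  # AR(1) with coefficient 0.7
x_t
```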
- [6 points] Look at the documentation for the `stats::lag` function (run `?lag` in the console). State what package the function is in and what the function does [4 points]. Using `k = 1`, compute a lag(1) version of the `x_t` you simulated above [2 points].
x_t_lag1 <- # your code here
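A minimal sketch of what that call might look like, assuming the `x_t` simulated above (treat this as a hint, not the definitive answer):

```r
# stats::lag shifts the time base of a ts object; see ?lag for the sign of k
x_t_lag1 <- stats::lag(x_t, k = 1)
```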
- [3 points] Run the following code and compare `x_t` and `x_t_lag1`.
cbind(x_t, x_t_lag1)
- Make a time series plot of `x_t` and `x_t_lag1`. Do you notice the same features as in the previous question?
# your code here
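One possible sketch for this plot, assuming both objects are `ts` objects from `stats::lag` (any equivalent plotting code is fine):

```r
# Overlay the series and its lagged version on a common time axis
ts.plot(x_t, x_t_lag1, gpars = list(col = c(1, 2), lty = c(1, 2)))
legend("topleft", legend = c("x_t", "x_t_lag1"), col = c(1, 2), lty = c(1, 2))
```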
- Run the code below. Why are the plots different? Is either particularly useful?
plot(x_t, x_t_lag1)
plot(as.vector(x_t), as.vector(x_t_lag1))
- Instead of using `stats::lag`, use `dplyr::lag` to create a new version of `x_t_lag1`. Repeat the code from steps 2-5. Describe how the output has changed.
x_t_lag1 <- dplyr::lag(# your code here)
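A hedged sketch of one way to fill this in; `dplyr::lag` expects a plain vector, so converting the `ts` object first with `as.vector` is one reasonable choice (the conversion is my assumption, not a requirement):

```r
# dplyr::lag returns the value n positions earlier, padding the start with NA
x_t_lag1 <- dplyr::lag(as.vector(x_t), n = 1)
```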
- Re-simulate an AR(1) process as in number 1, but this time with 100 observations. Also recompute `x_t_lag1`. Fit an intercept-free regression model to predict `x_t` from `x_t_lag1`. Provide the value of the slope estimate and interpret the value in the context of this simulation.
linear_model <- # your code here
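A sketch of one way the regression might look, assuming `x_t_lag1` was rebuilt with `dplyr::lag` on the plain vector; the `- 1` in the formula drops the intercept:

```r
# Intercept-free regression of x_t on its first lag (lm drops the NA row)
linear_model <- lm(as.vector(x_t) ~ x_t_lag1 - 1)
summary(linear_model)
```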
- [11 points] Plot the `acf` of `x_t` [2 points] and the `acf` of the residuals from the regression model [4 points]. Which looks more like white noise? [2 points] What does this tell you about the temporal structure in `x_t` and its residuals? [3 points]
# your code here
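One possible sketch, assuming the `x_t` and `linear_model` objects from the previous step:

```r
# Sample ACF of the simulated series and of the regression residuals
acf(x_t, main = "ACF of x_t")
acf(residuals(linear_model), main = "ACF of regression residuals")
```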
Part 3: Reading
[9 points] Read sections 2.8 and 2.9 from Forecasting: Principles and Practice. Make 3 connections [3 points each] to content from the course textbook (equations or similar examples).