library(tidyverse)
## ── Attaching packages ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── tidyverse 1.3.0 ──
## ✓ ggplot2 3.3.2 ✓ purrr 0.3.4
## ✓ tibble 3.0.3 ✓ dplyr 1.0.2
## ✓ tidyr 1.1.1 ✓ stringr 1.4.0
## ✓ readr 1.3.1 ✓ forcats 0.5.0
## ── Conflicts ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── tidyverse_conflicts() ──
## x dplyr::filter() masks stats::filter()
## x dplyr::lag() masks stats::lag()
df <- readr::read_rds("df.rds")
step_1_df <- readr::read_rds("step_1_df.rds")
step_2_a_df <- readr::read_rds("step_2_a_df.rds")
step_2_b_df <- readr::read_rds("step_2_b_df.rds")
mod_lm_basis <- readr::read_rds("mod_lm_basis.rds")
mod_lm_step1 <- readr::read_rds("mod_lm_step1.rds")
This R Markdown file tackles part iiB: using Bayesian linear models to refit the two linear models we previously fit with lm() in part iiA.
iiA. Regression models - lm()
iiB. Regression models - Bayesian
iiC. Regression models - Models with Resampling
We will use the rstanarm package's stan_lm() function to fit full Bayesian linear models, with syntax similar to the lm() function.
library(rstanarm)
## Loading required package: Rcpp
## This is rstanarm version 2.21.1
## - See https://mc-stan.org/rstanarm/articles/priors for changes to default priors!
## - Default priors may change, so it's safest to specify priors, even if equivalent to the defaults.
## - For execution on a local, multicore CPU with excess RAM we recommend calling
## options(mc.cores = parallel::detectCores())
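The startup message recommends enabling parallel execution of the MCMC chains when multiple cores are available. A minimal sketch of doing so before calling stan_lm() (we do not rely on it below):
# Optional: run the MCMC chains in parallel, one per available core,
# as the rstanarm startup message recommends.
options(mc.cores = parallel::detectCores())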
response_1 to all step 1 inputs
Let's first fit a linear model for response_1, using a linear relationship with all step 1 inputs, as we did in part iiA.
First, recall the frequentist coefficient estimates from the lm() model:
round(coef(mod_lm_step1), 3)
## (Intercept) xAA2 xBB2 xBB3 xBB4 x01
## 180.492 0.432 -2.408 -7.868 -1.273 0.861
## x02 x03 x04 x05 x06
## -4.137 0.065 0.037 -0.003 -0.158
Use stan_lm() to fit the Bayesian linear model:
post_stanlm_step1 <- stan_lm(response_1 ~ .,
                             data = step_1_df,
                             prior = R2(what = "mode", location = 0.5),
                             seed = 12345)
##
## SAMPLING FOR MODEL 'lm' NOW (CHAIN 1).
## Chain 1:
## Chain 1: Gradient evaluation took 5.9e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.59 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1:
## Chain 1:
## Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 1:
## Chain 1: Elapsed Time: 27.0842 seconds (Warm-up)
## Chain 1: 31.7419 seconds (Sampling)
## Chain 1: 58.8261 seconds (Total)
## Chain 1:
##
## SAMPLING FOR MODEL 'lm' NOW (CHAIN 2).
## Chain 2:
## Chain 2: Gradient evaluation took 1.2e-05 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2:
## Chain 2:
## Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 2:
## Chain 2: Elapsed Time: 28.7937 seconds (Warm-up)
## Chain 2: 26.6203 seconds (Sampling)
## Chain 2: 55.4139 seconds (Total)
## Chain 2:
##
## SAMPLING FOR MODEL 'lm' NOW (CHAIN 3).
## Chain 3:
## Chain 3: Gradient evaluation took 1e-05 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3:
## Chain 3:
## Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 3:
## Chain 3: Elapsed Time: 29.1279 seconds (Warm-up)
## Chain 3: 35.6271 seconds (Sampling)
## Chain 3: 64.755 seconds (Total)
## Chain 3:
##
## SAMPLING FOR MODEL 'lm' NOW (CHAIN 4).
## Chain 4:
## Chain 4: Gradient evaluation took 1.3e-05 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.13 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4:
## Chain 4:
## Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 4:
## Chain 4: Elapsed Time: 31.6533 seconds (Warm-up)
## Chain 4: 27.3117 seconds (Sampling)
## Chain 4: 58.965 seconds (Total)
## Chain 4:
post_stanlm_step1
## stan_lm
## family: gaussian [identity]
## formula: response_1 ~ .
## observations: 2013
## predictors: 11
## ------
## Median MAD_SD
## (Intercept) 179.4 28.0
## xAA2 0.4 0.2
## xBB2 -2.4 0.2
## xBB3 -7.8 0.2
## xBB4 -1.3 0.3
## x01 0.9 0.1
## x02 -4.1 0.2
## x03 0.1 0.0
## x04 0.0 0.0
## x05 0.0 0.0
## x06 -0.2 0.2
##
## Auxiliary parameter(s):
## Median MAD_SD
## R2 0.5 0.0
## log-fit_ratio 0.0 0.0
## sigma 3.8 0.1
##
## ------
## * For help interpreting the printed output see ?print.stanreg
## * For info on the priors used see ?prior_summary.stanreg
round(coef(post_stanlm_step1), 3)
## (Intercept) xAA2 xBB2 xBB3 xBB4 x01
## 179.366 0.429 -2.399 -7.826 -1.271 0.858
## x02 x03 x04 x05 x06
## -4.115 0.064 0.037 -0.004 -0.155
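For a quick side-by-side comparison, the lm() estimates and the stan_lm() posterior medians can be gathered into one table. A minimal sketch (assuming the coefficient names match, which they do here since both models use the same inputs):
# Sketch: compare lm() point estimates with stan_lm() posterior medians
tibble::tibble(
  term = names(coef(mod_lm_step1)),
  lm_estimate = round(unname(coef(mod_lm_step1)), 3),
  stan_lm_median = round(unname(coef(post_stanlm_step1)[names(coef(mod_lm_step1))]), 3)
)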
response_1 to basis function
Let's now fit a linear model for response_1, using a linear relationship with the basis function we defined previously in part iiA.
First, recall the frequentist coefficient estimates from the lm() model:
round(coef(mod_lm_basis), 3)
## (Intercept) xAA2
## 44.064 0.325
## xBB2 xBB3
## -20.771 -96.085
## xBB4 splines::ns(x01, df = 2)1
## -21.366 -25.011
## splines::ns(x01, df = 2)2 splines::ns(x02, df = 2)1
## 15.460 -24.719
## splines::ns(x02, df = 2)2 splines::ns(x03, df = 2)1
## -4.968 -23.922
## splines::ns(x03, df = 2)2 xBB2:splines::ns(x01, df = 2)1
## 7.489 25.215
## xBB3:splines::ns(x01, df = 2)1 xBB4:splines::ns(x01, df = 2)1
## 68.747 7.313
## xBB2:splines::ns(x01, df = 2)2 xBB3:splines::ns(x01, df = 2)2
## -9.313 -20.177
## xBB4:splines::ns(x01, df = 2)2 xBB2:splines::ns(x02, df = 2)1
## -3.288 0.505
## xBB3:splines::ns(x02, df = 2)1 xBB4:splines::ns(x02, df = 2)1
## 24.252 22.155
## xBB2:splines::ns(x02, df = 2)2 xBB3:splines::ns(x02, df = 2)2
## -0.701 -8.780
## xBB4:splines::ns(x02, df = 2)2 xBB2:splines::ns(x03, df = 2)1
## -9.637 6.866
## xBB3:splines::ns(x03, df = 2)1 xBB4:splines::ns(x03, df = 2)1
## 62.252 6.233
## xBB2:splines::ns(x03, df = 2)2 xBB3:splines::ns(x03, df = 2)2
## -2.672 -22.961
## xBB4:splines::ns(x03, df = 2)2
## -0.895
With stan_lm(), the linear model is fit using the formula interface, specifying a prior mode for R-squared:
post_stanlm_basis <- stan_lm(response_1 ~ xA + xB*(splines::ns(x01, df = 2) + splines::ns(x02, df = 2) + splines::ns(x03, df = 2)),
                             data = step_1_df,
                             prior = R2(what = "mode", location = 0.5),
                             iter = 4000,
                             seed = 12345)
##
## SAMPLING FOR MODEL 'lm' NOW (CHAIN 1).
## Chain 1:
## Chain 1: Gradient evaluation took 1.9e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.19 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1:
## Chain 1:
## Chain 1: Iteration: 1 / 4000 [ 0%] (Warmup)
## Chain 1: Iteration: 400 / 4000 [ 10%] (Warmup)
## Chain 1: Iteration: 800 / 4000 [ 20%] (Warmup)
## Chain 1: Iteration: 1200 / 4000 [ 30%] (Warmup)
## Chain 1: Iteration: 1600 / 4000 [ 40%] (Warmup)
## Chain 1: Iteration: 2000 / 4000 [ 50%] (Warmup)
## Chain 1: Iteration: 2001 / 4000 [ 50%] (Sampling)
## Chain 1: Iteration: 2400 / 4000 [ 60%] (Sampling)
## Chain 1: Iteration: 2800 / 4000 [ 70%] (Sampling)
## Chain 1: Iteration: 3200 / 4000 [ 80%] (Sampling)
## Chain 1: Iteration: 3600 / 4000 [ 90%] (Sampling)
## Chain 1: Iteration: 4000 / 4000 [100%] (Sampling)
## Chain 1:
## Chain 1: Elapsed Time: 5.73242 seconds (Warm-up)
## Chain 1: 6.4968 seconds (Sampling)
## Chain 1: 12.2292 seconds (Total)
## Chain 1:
##
## SAMPLING FOR MODEL 'lm' NOW (CHAIN 2).
## Chain 2:
## Chain 2: Gradient evaluation took 1.4e-05 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.14 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2:
## Chain 2:
## Chain 2: Iteration: 1 / 4000 [ 0%] (Warmup)
## Chain 2: Iteration: 400 / 4000 [ 10%] (Warmup)
## Chain 2: Iteration: 800 / 4000 [ 20%] (Warmup)
## Chain 2: Iteration: 1200 / 4000 [ 30%] (Warmup)
## Chain 2: Iteration: 1600 / 4000 [ 40%] (Warmup)
## Chain 2: Iteration: 2000 / 4000 [ 50%] (Warmup)
## Chain 2: Iteration: 2001 / 4000 [ 50%] (Sampling)
## Chain 2: Iteration: 2400 / 4000 [ 60%] (Sampling)
## Chain 2: Iteration: 2800 / 4000 [ 70%] (Sampling)
## Chain 2: Iteration: 3200 / 4000 [ 80%] (Sampling)
## Chain 2: Iteration: 3600 / 4000 [ 90%] (Sampling)
## Chain 2: Iteration: 4000 / 4000 [100%] (Sampling)
## Chain 2:
## Chain 2: Elapsed Time: 6.62946 seconds (Warm-up)
## Chain 2: 9.55221 seconds (Sampling)
## Chain 2: 16.1817 seconds (Total)
## Chain 2:
##
## SAMPLING FOR MODEL 'lm' NOW (CHAIN 3).
## Chain 3:
## Chain 3: Gradient evaluation took 1.3e-05 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.13 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3:
## Chain 3:
## Chain 3: Iteration: 1 / 4000 [ 0%] (Warmup)
## Chain 3: Iteration: 400 / 4000 [ 10%] (Warmup)
## Chain 3: Iteration: 800 / 4000 [ 20%] (Warmup)
## Chain 3: Iteration: 1200 / 4000 [ 30%] (Warmup)
## Chain 3: Iteration: 1600 / 4000 [ 40%] (Warmup)
## Chain 3: Iteration: 2000 / 4000 [ 50%] (Warmup)
## Chain 3: Iteration: 2001 / 4000 [ 50%] (Sampling)
## Chain 3: Iteration: 2400 / 4000 [ 60%] (Sampling)
## Chain 3: Iteration: 2800 / 4000 [ 70%] (Sampling)
## Chain 3: Iteration: 3200 / 4000 [ 80%] (Sampling)
## Chain 3: Iteration: 3600 / 4000 [ 90%] (Sampling)
## Chain 3: Iteration: 4000 / 4000 [100%] (Sampling)
## Chain 3:
## Chain 3: Elapsed Time: 5.7479 seconds (Warm-up)
## Chain 3: 6.28153 seconds (Sampling)
## Chain 3: 12.0294 seconds (Total)
## Chain 3:
##
## SAMPLING FOR MODEL 'lm' NOW (CHAIN 4).
## Chain 4:
## Chain 4: Gradient evaluation took 1.5e-05 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.15 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4:
## Chain 4:
## Chain 4: Iteration: 1 / 4000 [ 0%] (Warmup)
## Chain 4: Iteration: 400 / 4000 [ 10%] (Warmup)
## Chain 4: Iteration: 800 / 4000 [ 20%] (Warmup)
## Chain 4: Iteration: 1200 / 4000 [ 30%] (Warmup)
## Chain 4: Iteration: 1600 / 4000 [ 40%] (Warmup)
## Chain 4: Iteration: 2000 / 4000 [ 50%] (Warmup)
## Chain 4: Iteration: 2001 / 4000 [ 50%] (Sampling)
## Chain 4: Iteration: 2400 / 4000 [ 60%] (Sampling)
## Chain 4: Iteration: 2800 / 4000 [ 70%] (Sampling)
## Chain 4: Iteration: 3200 / 4000 [ 80%] (Sampling)
## Chain 4: Iteration: 3600 / 4000 [ 90%] (Sampling)
## Chain 4: Iteration: 4000 / 4000 [100%] (Sampling)
## Chain 4:
## Chain 4: Elapsed Time: 5.24899 seconds (Warm-up)
## Chain 4: 6.05385 seconds (Sampling)
## Chain 4: 11.3028 seconds (Total)
## Chain 4:
post_stanlm_basis
## stan_lm
## family: gaussian [identity]
## formula: response_1 ~ xA + xB * (splines::ns(x01, df = 2) + splines::ns(x02,
## df = 2) + splines::ns(x03, df = 2))
## observations: 2013
## predictors: 29
## ------
## Median MAD_SD
## (Intercept) 43.9 1.0
## xAA2 0.3 0.1
## xBB2 -20.7 1.5
## xBB3 -95.8 1.4
## xBB4 -21.3 1.6
## splines::ns(x01, df = 2)1 -24.9 1.0
## splines::ns(x01, df = 2)2 15.4 0.5
## splines::ns(x02, df = 2)1 -24.6 0.9
## splines::ns(x02, df = 2)2 -5.0 0.6
## splines::ns(x03, df = 2)1 -23.8 1.1
## splines::ns(x03, df = 2)2 7.5 0.8
## xBB2:splines::ns(x01, df = 2)1 25.2 1.6
## xBB3:splines::ns(x01, df = 2)1 68.5 1.4
## xBB4:splines::ns(x01, df = 2)1 7.3 1.7
## xBB2:splines::ns(x01, df = 2)2 -9.3 0.7
## xBB3:splines::ns(x01, df = 2)2 -20.1 0.9
## xBB4:splines::ns(x01, df = 2)2 -3.3 0.7
## xBB2:splines::ns(x02, df = 2)1 0.5 1.5
## xBB3:splines::ns(x02, df = 2)1 24.2 1.2
## xBB4:splines::ns(x02, df = 2)1 22.1 1.5
## xBB2:splines::ns(x02, df = 2)2 -0.7 0.7
## xBB3:splines::ns(x02, df = 2)2 -8.8 0.9
## xBB4:splines::ns(x02, df = 2)2 -9.6 0.8
## xBB2:splines::ns(x03, df = 2)1 6.9 1.7
## xBB3:splines::ns(x03, df = 2)1 62.0 1.6
## xBB4:splines::ns(x03, df = 2)1 6.2 2.0
## xBB2:splines::ns(x03, df = 2)2 -2.6 0.9
## xBB3:splines::ns(x03, df = 2)2 -22.9 1.0
## xBB4:splines::ns(x03, df = 2)2 -0.9 1.0
##
## Auxiliary parameter(s):
## Median MAD_SD
## R2 0.9 0.0
## log-fit_ratio 0.0 0.0
## sigma 1.8 0.0
##
## ------
## * For help interpreting the printed output see ?print.stanreg
## * For info on the priors used see ?prior_summary.stanreg
round(coef(post_stanlm_basis), 3)
## (Intercept) xAA2
## 43.917 0.324
## xBB2 xBB3
## -20.718 -95.758
## xBB4 splines::ns(x01, df = 2)1
## -21.310 -24.944
## splines::ns(x01, df = 2)2 splines::ns(x02, df = 2)1
## 15.409 -24.646
## splines::ns(x02, df = 2)2 splines::ns(x03, df = 2)1
## -4.957 -23.831
## splines::ns(x03, df = 2)2 xBB2:splines::ns(x01, df = 2)1
## 7.457 25.157
## xBB3:splines::ns(x01, df = 2)1 xBB4:splines::ns(x01, df = 2)1
## 68.529 7.336
## xBB2:splines::ns(x01, df = 2)2 xBB3:splines::ns(x01, df = 2)2
## -9.284 -20.098
## xBB4:splines::ns(x01, df = 2)2 xBB2:splines::ns(x02, df = 2)1
## -3.283 0.540
## xBB3:splines::ns(x02, df = 2)1 xBB4:splines::ns(x02, df = 2)1
## 24.177 22.096
## xBB2:splines::ns(x02, df = 2)2 xBB3:splines::ns(x02, df = 2)2
## -0.701 -8.751
## xBB4:splines::ns(x02, df = 2)2 xBB2:splines::ns(x03, df = 2)1
## -9.597 6.852
## xBB3:splines::ns(x03, df = 2)1 xBB4:splines::ns(x03, df = 2)1
## 62.029 6.220
## xBB2:splines::ns(x03, df = 2)2 xBB3:splines::ns(x03, df = 2)2
## -2.639 -22.893
## xBB4:splines::ns(x03, df = 2)2
## -0.870
loo_post_stanlm_basis <- loo(post_stanlm_basis)
loo_post_stanlm_step1 <- loo(post_stanlm_step1)
loo_compare(loo_post_stanlm_basis, loo_post_stanlm_step1)
## elpd_diff se_diff
## post_stanlm_basis 0.0 0.0
## post_stanlm_step1 -1517.0 52.3
The Bayesian linear model using the basis function is preferred: it has the higher expected log predictive density (ELPD), equivalently the lower LOO Information Criterion (LOOIC), which estimates predictive performance on new, out-of-sample data. The LOOIC is a Bayesian analogue of the Akaike Information Criterion (AIC) that integrates over the posterior uncertainty in the parameters.
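To see the LOOIC values themselves, alongside the ELPD and the effective number of parameters, the estimates stored in each loo object can be inspected. A minimal sketch, assuming the standard loo object structure:
# Each loo object stores an estimates matrix with elpd_loo, p_loo, and looic
loo_post_stanlm_basis$estimates
loo_post_stanlm_step1$estimates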
First, visualize the coefficient estimates, as we did in part iiA.
plot(post_stanlm_basis) +
  geom_vline(xintercept = 0, color = "grey", linetype = "dashed", size = 1.0)
Next we plot the coefficient distributions.
as.data.frame(post_stanlm_basis) %>% tibble::as_tibble() %>%
  select(names(post_stanlm_basis$coefficients)) %>%
  tibble::rowid_to_column("post_id") %>%
  tidyr::gather(key = "key", value = "value", -post_id) %>%
  ggplot(mapping = aes(x = value)) +
  geom_histogram(bins = 55) +
  facet_wrap(~key, scales = "free") +
  theme_bw() +
  theme(axis.text.y = element_blank())
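Numerical posterior uncertainty intervals complement the histograms. A minimal sketch using posterior_interval() to get 90% credible intervals for the model's parameters:
# 90% posterior credible intervals (coefficients plus auxiliary parameters)
posterior_interval(post_stanlm_basis, prob = 0.9) %>%
  round(3)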
Display the maximum likelihood estimate of the noise, i.e. the residual standard error \(\sigma\), obtained from the lm() fit of the basis model:
SSE <- sum(mod_lm_basis$residuals**2)
n <- length(mod_lm_basis$residuals)
mle_sigma <- sqrt(SSE/(n - length(mod_lm_basis$coefficients)))
mle_sigma
## [1] 1.796463
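As a quick check, base R's sigma() extractor returns the same residual standard error from the lm() fit (a sketch, equivalent to the manual calculation above since it uses the same n - p denominator):
# Equivalent: residual standard error reported directly for the lm() fit
sigma(mod_lm_basis)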
Display the posterior uncertainty on \(\sigma\):
# noise quantiles
as.data.frame(post_stanlm_basis) %>% tibble::as_tibble() %>%
  select(sigma) %>%
  pull() %>%
  quantile(c(0.05, 0.5, 0.95))
## 5% 50% 95%
## 1.760379 1.806893 1.854134
# noise 95% posterior interval
posterior_interval(post_stanlm_basis, prob = 0.95, pars = "sigma")
## 2.5% 97.5%
## sigma 1.751596 1.864234
Visualize the posterior distribution on \(\sigma\), marking the MLE of \(\sigma\) from part iiA:
as.data.frame(post_stanlm_basis) %>% tibble::as_tibble() %>%
  ggplot(mapping = aes(x = sigma)) +
  geom_histogram(bins = 55) +
  geom_vline(xintercept = mle_sigma,
             color = "red", linetype = "dashed", size = 1.1)
The MLE of \(\sigma\) falls within the 95% posterior uncertainty interval, near the mode.
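A quick programmatic confirmation of that statement, recomputing the 95% interval from above; a minimal sketch:
# Check that the lm() estimate of sigma lies inside the 95% posterior interval
sigma_interval <- posterior_interval(post_stanlm_basis, prob = 0.95, pars = "sigma")
mle_sigma >= sigma_interval[, "2.5%"] & mle_sigma <= sigma_interval[, "97.5%"]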
post_stanlm_basis %>% readr::write_rds("post_stanlm_basis.rds")
post_stanlm_step1 %>% readr::write_rds("post_stanlm_step1.rds")