Hello CausalImpact team,
Thank you for this great package! I have some questions about how the burn-in (or warm-up) period is handled in this original R implementation.
From my understanding, this package determines the burn-in adaptively by analyzing the log-likelihood trajectory, roughly as follows (a code sketch of my understanding appears after this list):
- Compute the log-likelihood for each sample.
- Consider the last fraction of samples (e.g., the final 10%) and compute a high quantile (e.g., the 90th percentile) of the log-likelihood values in this tail portion.
- Identify the earliest point in the chain where the log-likelihood exceeds this quantile, and drop all samples before this point.
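To make this concrete, here is a rough R sketch of the process as I understand it. `AdaptiveBurnIn`, its argument names, and the defaults are my own illustration, not the package's actual code:

```r
# Hypothetical sketch of my understanding of the adaptive burn-in rule.
# (Illustrative only; not the package's actual implementation.)
AdaptiveBurnIn <- function(log.lik, tail.fraction = 0.1, prob = 0.9) {
  n <- length(log.lik)
  # High quantile of the log-likelihood over the final fraction of the chain.
  tail.start <- floor((1 - tail.fraction) * n) + 1
  threshold <- quantile(log.lik[tail.start:n], probs = prob)
  # Earliest iteration whose log-likelihood reaches that threshold;
  # everything before it is treated as burn-in and dropped.
  first.kept <- which(log.lik >= threshold)[1]
  seq(from = first.kept, to = n)
}

# Toy example: a log-likelihood that keeps drifting upward, as in a
# slowly converging chain.
set.seed(1)
log.lik <- cumsum(rnorm(1000, mean = 0.02, sd = 0.5))
kept <- AdaptiveBurnIn(log.lik)
length(kept)  # can be a small fraction of 1000
```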
This adaptive approach can, in some cases, lead to dropping the first 90% of the samples: with a steadily climbing trajectory like the toy one above, the threshold lands near the end of the chain, so nearly everything before it is discarded. I have a few questions about this:
- Could you confirm whether my understanding of the burn-in behavior in the R implementation is correct? If not, I’d appreciate any corrections or clarifications.
- I’m curious about the logic of using the last fraction of samples to set the burn-in threshold. Wouldn’t it make more sense to derive the threshold from the first fraction (e.g., the first 20%), as in the sketch after this list? This might prevent discarding a large proportion of samples and ensure that the point estimates and credible intervals are based on a sufficiently large set of samples.
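To make that suggestion concrete, here is the same sketch with the threshold taken from the head of the chain instead (again hypothetical code, purely for illustration):

```r
# Hypothetical variant: derive the threshold from the FIRST fraction
# of the chain rather than the last.
AlternativeBurnIn <- function(log.lik, head.fraction = 0.2, prob = 0.9) {
  n <- length(log.lik)
  head.end <- ceiling(head.fraction * n)
  # High quantile of the log-likelihood over the initial fraction.
  threshold <- quantile(log.lik[seq_len(head.end)], probs = prob)
  first.kept <- which(log.lik >= threshold)[1]
  seq(from = first.kept, to = n)
}
```

Because the threshold now comes from the initial samples, the first iteration that reaches it necessarily lies within that initial fraction, so at most the first 20% of the chain can ever be discarded.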
Thank you for your time and for considering these questions. I look forward to hearing your insights!