Description
Before
```python
# Current workflow (manual & repetitive)
import arviz as az
import pymc as pm

idata = pm.sample(...)
az.summary(idata)
az.plot_trace(idata)
az.plot_energy(idata)
# Manually inspect:
# - R-hat values
# - ESS
# - Divergences
# - Tree depth saturation
# Then interpret results and decide next steps
```
After
```python
pm.diagnostics.summary(
    trace=idata,
    model=model,
    warn=True,
)
```
Context for the issue:
Interpreting sampling diagnostics is one of the biggest pain points for new PyMC users and a repetitive task for advanced users. While ArviZ already provides excellent low-level tools, PyMC currently lacks a high-level, opinionated diagnostics summary that answers the question:
“Is my sampling result healthy, and what should I do next?”
This feature would:
- Improve onboarding and usability
- Encourage best practices
- Reduce misinterpretation of posterior samples
- Build on existing ArviZ diagnostics without introducing new algorithms
- Avoid breaking changes
The proposal intentionally focuses on aggregation, interpretation, and presentation, keeping the scope small and maintainable.
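For concreteness, here is a minimal sketch of what the aggregation layer could look like, built only on ArviZ functions that already exist (`az.rhat`, `az.ess`, `az.bfmi`) and the `sample_stats.diverging` array PyMC already records. The function name, thresholds, and report structure below are placeholders for discussion, not a proposed final API:

```python
# Sketch only: illustrative thresholds and a hypothetical function name.
import warnings

import arviz as az
import numpy as np


def diagnostics_summary(idata, rhat_threshold=1.01, ess_threshold=400, warn=True):
    """Aggregate existing ArviZ diagnostics into a single health report."""
    report = {}

    # R-hat: flag any parameter whose split R-hat exceeds the threshold.
    rhat = az.rhat(idata)
    report["max_rhat"] = float(rhat.to_array().max())
    report["rhat_ok"] = report["max_rhat"] < rhat_threshold

    # Bulk effective sample size: flag parameters below the threshold.
    ess = az.ess(idata, method="bulk")
    report["min_ess_bulk"] = float(ess.to_array().min())
    report["ess_ok"] = report["min_ess_bulk"] > ess_threshold

    # Divergences: count transitions that ended in a divergence.
    diverging = idata.sample_stats["diverging"].values
    report["n_divergences"] = int(diverging.sum())
    report["divergences_ok"] = report["n_divergences"] == 0

    # Energy / BFMI: low values suggest poor exploration of the energy
    # distribution (0.3 is the commonly cited warning level).
    report["min_bfmi"] = float(np.min(az.bfmi(idata)))
    report["bfmi_ok"] = report["min_bfmi"] > 0.3

    if warn and not all(v for k, v in report.items() if k.endswith("_ok")):
        warnings.warn(
            "Sampling diagnostics indicate potential problems; "
            "inspect the report and consider reparameterizing or sampling longer."
        )
    return report
```

A real implementation would also need per-variable reporting, handling of samplers without NUTS statistics (e.g. no `diverging` field), tree-depth saturation checks, and configurable thresholds, but the core remains pure aggregation and presentation of existing diagnostics.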
I’m happy to discuss API design, implementation location, and testing strategy based on maintainer feedback.