diff --git a/Curriculum/Day_2/day_2.qmd b/Curriculum/Day_2/day_2.qmd
index 4a4cb2c..d6898fb 100644
--- a/Curriculum/Day_2/day_2.qmd
+++ b/Curriculum/Day_2/day_2.qmd
@@ -486,16 +486,22 @@ D) You can't
::: {.callout-note appearance="minimal"}
-20 minute break for questions & practicing
+Break for questions & practice
:::
{{< include ../snippets/complex_asar_exercises.qmd >}}
+# NEFSC-specific content
+
+This information will be available after the workshop in the sidebar's [Regionally-specific content tab](../../resources/regional_info.qmd).
+
+{{< include ../snippets/nefsc_transition.qmd >}}
+
# NOAA Standard Assessment Report Guidelines
Part of the approach to standardizing workflows is designing report guidelines that identify the audience of assessment reports and streamline the review process through succinct reporting, which will increase throughput.
-The standard guidelines were designed using a thorough review of assessment reports from across the agency. Theses guidelines were reviewed by a steering committee of assessment scientists across NOAA, leadership at NOAA Fisheries HQ, and U.S. fishery management councils. The guidelines are also under formal review in the Northwest grounfish SSC subcommittee for adoptions for the 2026-2027 assessment cycle.
+The standard guidelines were designed through a thorough review of assessment reports from across the agency. These guidelines were reviewed by a steering committee of assessment scientists across NOAA, leadership at NOAA Fisheries HQ, and U.S. fishery management councils. The guidelines are also under formal review in the Northwest groundfish SSC subcommittee for adoption in the 2026-2027 assessment cycle.
diff --git a/Curriculum/snippets/nefsc_transition.qmd b/Curriculum/snippets/nefsc_transition.qmd
new file mode 100644
index 0000000..37a40e2
--- /dev/null
+++ b/Curriculum/snippets/nefsc_transition.qmd
@@ -0,0 +1,125 @@
+### Adaptation of NEFSC Template to {asar}
+
+Dan Hennen, the NEFSC representative on the workflows steering committee, has designed a streamlined workflow to aid the transition from the typical management track assessment to the NOAA standard guidelines. The standard guidelines executive summary closely mimics the current management track reports, so in this section we will automate the executive summary.
+
+Use the following code to download files into your working directory for the next process:
+
+```{r}
+#| eval: false
+get_nefsc_files <- function(dir){
+ NEFSCtoASAR_folder <- file.path(dir, "ExportLegacyNEFSCtoASARproject")
+ testStocks_folder <- file.path(NEFSCtoASAR_folder, "testStocks")
+ asarreport_folder <- file.path(NEFSCtoASAR_folder, "ASARreportFiles")
+  # create the folders if they don't already exist
+  dir.create(NEFSCtoASAR_folder, showWarnings = FALSE)
+  dir.create(testStocks_folder, showWarnings = FALSE)
+  dir.create(asarreport_folder, showWarnings = FALSE)
+ file_names_to_download <- c(
+ # "ASARreportFiles",
+ "CheckLatexInstall.R",
+ "create_asar_object.R",
+ "Example.R",
+ "MapAutoUpdateToAsar.R",
+ "plot_survey_indices.R",
+ "plot_total_catch.R",
+ "table_brp1.1.R",
+ "table_catch_status1.1.R",
+ "table_projections1.1.R",
+ "testAdditionalStocks.R",
+ "TestMoreStocksReportOnly.R"
+ )
+ teststocks_files <- c(
+ "BSBUNITAutoAss.RData",
+ "BUTUNITAutoAss.RData",
+ "CODGBAutoAss.RData",
+ "CODWGOMAutoAss.RData",
+ "SCUNITAutoAss.RData"
+ )
+ asarreport_files <- c(
+ "01_executive_summary.qmd",
+ "in-header.tex",
+ "preamble.R"
+ )
+
+ for (i in file_names_to_download) {
+ cli::cli_alert_info("📥 Downloading {i}...")
+ download.file(
+ glue::glue("https://raw.githubusercontent.com/nmfs-ost/workflows-workshop/nefsc-transition/resources/ExportLegacyNEFSCtoASARproject/{i}"),
+ glue::glue("{NEFSCtoASAR_folder}/{i}"),
+ mode = "wb"
+ )
+ }
+
+ for (i in teststocks_files) {
+ cli::cli_alert_info("📥 Downloading {i}...")
+ download.file(
+ glue::glue("https://raw.githubusercontent.com/nmfs-ost/workflows-workshop/nefsc-transition/resources/ExportLegacyNEFSCtoASARproject/testStocks/{i}"),
+ glue::glue("{testStocks_folder}/{i}"),
+ mode = "wb"
+ )
+ }
+
+ for (i in asarreport_files) {
+ cli::cli_alert_info("📥 Downloading {i}...")
+ download.file(
+ glue::glue("https://raw.githubusercontent.com/nmfs-ost/workflows-workshop/nefsc-transition/resources/ExportLegacyNEFSCtoASARproject/ASARreportFiles/{i}"),
+ glue::glue("{asarreport_folder}/{i}"),
+ mode = "wb"
+ )
+ }
+
+ message("✅ Download complete.")
+}
+get_nefsc_files(getwd())
+```
+
+The following steps guide you through adapting the {asar} workflow and template and migrating the content of your current documents into the new ones:
+
+ 1. Change the paths in `TestMoreStocksReportOnly.R` and source it (run all the code in the file).
+
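+For example, if you downloaded the files into your working directory with `get_nefsc_files()` above, sourcing the script might look like this (adjust the path to wherever your copy lives):
+
+```{r}
+#| eval: false
+source(file.path(getwd(), "ExportLegacyNEFSCtoASARproject", "TestMoreStocksReportOnly.R"))
+```
+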
+ 2. Copy the 3 files in `ASARreportFiles` into the report directory made in step 1, overwriting any existing files.
+
+```{r}
+#| eval: false
+file.copy(
+ from = list.files(file.path(getwd(), "ExportLegacyNEFSCtoASARproject", "ASARreportFiles"), full.names = TRUE),
+ to = file.path(getwd(), "report"),
+ recursive = FALSE,
+ overwrite = TRUE
+)
+```
+
+ 3. Move `in-header.tex` to `report/support_files` and overwrite (this just adds the LaTeX package {float}).
+
+```{r}
+#| eval: false
+fs::file_move(
+ file.path(getwd(), "report", "in-header.tex"),
+ file.path(getwd(), "report", "support_files", "in-header.tex")
+)
+```
+
+ 4. Change the paths at the top of `report/preamble.R`.
+
+ * For the third line (starts with `load`), the example rda file ("Black_Sea_Bass2024.rda") will probably be in your working directory.
+
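+After editing, the top of `report/preamble.R` might look like the sketch below. The user name, source path, and rda file name here are placeholders, not the definitive values; substitute your own:
+
+```{r}
+#| eval: false
+myDir <- "your_user_name"  # change to your network home name
+sourcePath <- file.path(getwd(), "ExportLegacyNEFSCtoASARproject")  # where the downloaded scripts live
+load(file.path(getwd(), "Black_Sea_Bass2024.rda"))  # the example rda in your working directory
+```
+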
+ 5. Add the following code to the YAML in the skeleton.qmd (probably named "sar_N_Black_Sea_Bass_skeleton.qmd"):
+
+```yaml
+execute:
+ echo: false
+ message: false
+ warning: false
+```
+
+ 6. From the skeleton, delete the following lines, 70-75 (lines 66-71 before completing step 5), and *render*:
+
+```r
+# load converted output from stockplotr::convert_output()
+load(fname)
+# Call reference points and quantities below
+output <- out_new |>
+ dplyr::mutate(estimate = as.numeric(estimate),
+ uncertainty = as.numeric(uncertainty))
+```
diff --git a/_quarto.yml b/_quarto.yml
index 3e6cd62..bc77d6e 100644
--- a/_quarto.yml
+++ b/_quarto.yml
@@ -1,5 +1,8 @@
project:
type: website
+ render:
+ - "*.qmd" # Render all qmd files in the root
+ - "!resources/ExportLegacyNEFSCtoASARproject" # Exclude this folder from rendering
website:
page-navigation: true
@@ -48,6 +51,8 @@ website:
text: "Day 3: stockplotr + asar"
- href: Curriculum/nsaw.qmd
text: "NSAW workshop"
+ - href: resources/regional_info.qmd
+ text: "Regionally-specific content"
format:
diff --git a/agenda.qmd b/agenda.qmd
index 6fa3358..408933b 100644
--- a/agenda.qmd
+++ b/agenda.qmd
@@ -20,6 +20,8 @@ format: html
## Day 2
+(See @sec-nefsc for the NEFSC Workshop adjustment)
+
| Time | Topic |
|-----------|---------------------------|
| 0:00-0:15 |[ Introduction & Icebreaker](https://nmfs-ost.github.io/workflows-workshop/Curriculum/Day_2/day_2.html#introduction) |
@@ -51,3 +53,21 @@ format: html
| 2:15-2:40 | [Creating accessible documents](https://nmfs-ost.github.io/workflows-workshop/Curriculum/Day_3/day_3.html#sec-a11y) |
| 2:40-2:50 | [Preparing for the Future](https://nmfs-ost.github.io/workflows-workshop/Curriculum/Day_3/day_3.html#sec-future) |
| 2:50-3:00 | [Final remarks and questions](https://nmfs-ost.github.io/workflows-workshop/Curriculum/Day_3/day_3.html#questions-comments-feedback-and-closing) |
+
+
+# NEFSC Day 2 {#sec-nefsc}
+
+| Time | Topic |
+|-----------|---------------------------|
+| 0:00-0:15 |[ Introduction & Icebreaker](https://nmfs-ost.github.io/workflows-workshop/Curriculum/Day_2/day_2.html#introduction) |
+| 0:15-0:35 | [{asar}: installation and overview](https://nmfs-ost.github.io/workflows-workshop/Curriculum/Day_2/day_2.html#installation) |
+| 0:35-1:05 | [asar::create_template()](https://nmfs-ost.github.io/workflows-workshop/Curriculum/Day_2/day_2.html#asarcreate_template) |
+| **1:05-1:20** | **Break** |
+| 1:20-1:30 | [Mid-session icebreaker](https://docs.google.com/document/d/1nVlmmZgp8vCHZKCMHnBm43bf-mtk7pSA1t2FLHEoyZA/edit?tab=t.0#heading=h.lwix6dbx3lr8) |
+| 1:30-1:45 | [stockplotr::convert_output()](https://nmfs-ost.github.io/workflows-workshop/Curriculum/Day_2/day_2.html#stockplotrconvert_output) |
+| 1:45-2:05 | [Advanced workflow](https://nmfs-ost.github.io/workflows-workshop/Curriculum/Day_2/day_2.html#advanced-asar-workflow) |
+| **2:05-2:20** | **Break for questions** |
+| 2:20-2:25 | [Mid-session icebreaker](https://docs.google.com/document/d/1nVlmmZgp8vCHZKCMHnBm43bf-mtk7pSA1t2FLHEoyZA/edit?tab=t.0#heading=h.lwix6dbx3lr8) |
+| 2:25-2:45 | [Adaptation of NEFSC Template](https://nmfs-ost.github.io/workflows-workshop/Curriculum/Day_2/day_2.html#adaptation-of-nefsc-template-to-asar) |
+| 2:45-2:55 | [Standard Guidelines Discussion](https://nmfs-ost.github.io/workflows-workshop/Curriculum/Day_2/day_2.html#noaa-standard-assessment-report-guidelines) |
+| 2:55-3:00 | [Wrap-up](https://nmfs-ost.github.io/workflows-workshop/Curriculum/Day_2/day_2.html#summary-10-mins) |
diff --git a/resources/ExportLegacyNEFSCtoASARproject/ASARreportFiles/01_executive_summary.qmd b/resources/ExportLegacyNEFSCtoASARproject/ASARreportFiles/01_executive_summary.qmd
new file mode 100644
index 0000000..8de343e
--- /dev/null
+++ b/resources/ExportLegacyNEFSCtoASARproject/ASARreportFiles/01_executive_summary.qmd
@@ -0,0 +1,370 @@
+# Executive Summary {#sec-exec-sum}
+
+
+```{r}
+#| label: 'preamble'
+#| warning: false
+#| eval: true
+#| include: false
+#| results: asis
+
+#source("preamble.R")
+#Preamble text
+# Dynamically find the final projection year from the data
+# This looks for the latest year in the 'fore' (forecast) era
+final_proj_yr <- model_results |>
+ dplyr::filter(era == "fore") |>
+ dplyr::pull(year) |>
+ max(na.rm = TRUE)
+
+# Build the Preamble using glue
+preamble_text <- glue::glue("
+ This assessment of the {metadata$spp_name} ({metadata$spp_latin}) stock is a \\
+ Management Track assessment update of the {metadata$last_ass} Research Track \\
+ assessment. This assessment updates fishery catch data, research survey \\
+ indices of abundance, and the Research Track assessment model and \\
+ reference points through {metadata$term_yr}. Additionally, stock \\
+ projections have been updated through {final_proj_yr}.
+")
+
+# Display the result
+cat(preamble_text)
+
+#You can then change what needs to be changed for the update, or set up a nice paragraph with dynamic variables.
+```
+
+
+```{r}
+#| label: 'state_of_stock'
+#| eval: true
+#| include: false
+
+# STATE OF THE STOCK TEXT
+# Calculate Percentages on the fly
+calc_perc <- function(num, den) {
+ if (is.null(num) || is.null(den) || is.na(num) || is.na(den) || den == 0) return("___")
+ return(round((as.numeric(num) / as.numeric(den)) * 100))
+}
+
+# Calculate percentages using the safety function
+ssb_perc <- calc_perc(metadata$terminal_b_adj, metadata$val_ssb_msy)
+f_perc <- calc_perc(metadata$terminal_f_adj, metadata$val_f_msy)
+
+
+# Force values to be simple, single characters to prevent glue from collapsing
+spp_name <- to_latex_caption(metadata$spp_name %||% "___")
+spp_latin <- to_latex_caption(metadata$spp_latin %||% "___")
+ssb_status <- to_latex_caption(metadata$status_ssb_now %||% "___")
+f_status <- to_latex_caption(metadata$status_f_now %||% "___")
+term_yr <- as.numeric(metadata$term_yr %||% 0)
+ssb_val <- to_latex_caption(metadata$terminal_b_adj %||% "___")
+ssb_units <- to_latex_caption(metadata$ssb_units %||% "___")
+ssb_name <- to_latex_caption(metadata$ssb_name %||% "___")
+msy_val <- to_latex_caption(metadata$val_ssb_msy %||% "___")
+f_val <- to_latex_caption(metadata$terminal_f_adj %||% "___")
+f_name <- to_latex_caption(metadata$f_name %||% "___")
+f_msy_val <- to_latex_caption(metadata$val_f_msy %||% "___")
+
+# Clean the rho variables using your to_latex_caption function
+rho_ssb <- to_latex_caption("rho") # This will return the actual Greek symbol ρ
+rho_f <- to_latex_caption("rho")
+
+# Logic for the Rho Adjustment sentence
+if (isTRUE(metadata$rho_adj_used)) {
+ rho_val_ssb <- metadata$rho_ssb_now %||% 0
+ rho_val_f <- metadata$rho_f_now %||% 0
+
+ rho_sentence <- glue::glue("Retrospective adjustments were made to the terminal year model results ({rho_ssb} = {rho_val_ssb}; {rho_f} = {rho_val_f}).")
+} else {
+ rho_sentence <- "Retrospective adjustments were not made to the model results because the retrospective pattern was minor."
+}
+
+
+# Build the final state of the stock text
+state_of_stock_text <- glue::glue("
+ Based on this updated assessment, the {spp_name} ({spp_latin}) \\
+ stock is {ssb_status} and {f_status}. {rho_sentence} \\
+ Spawning stock biomass (SSB) in {term_yr} was estimated to be \\
+ {ssb_val} {ssb_units}, which is {ssb_perc}% of the \\
+ biomass target ({ssb_name} = {msy_val} {ssb_units}). \\
+ The {term_yr} fully selected fishing mortality was estimated to be \\
+ {f_val}, which is {f_perc}% of the overfishing \\
+ threshold proxy ({f_name} = {f_msy_val}).
+")
+
+# Display result
+#print(state_of_stock_text)
+
+```
+
+```{r}
+#| label: 'projections'
+#| warning: false
+#| eval: true
+#| include: false
+#| results: asis
+
+# PROJECTION TEXT
+# Calculate the projection span dynamically
+# This determines how many years out the forecast goes
+projection_span <- final_proj_yr - metadata$term_yr
+
+# Define the average year range used for model inputs
+# This aligns the window length with the projection length
+avg_start_yr <- metadata$term_yr - (projection_span - 1)
+avg_end_yr <- metadata$term_yr
+
+# Build the Projection text using your metadata$proj_text (just paste it in your console and copy)
+# if there are parts you can automate - do it!
+# This one uses the null-coalescing operator to default to WHAM if model_type is missing
+projection_text <- glue::glue("
+ Short-term {projection_span}-year projections of biomass and catch were \\
+ performed through {final_proj_yr} using standard {metadata$model_type %||% 'WHAM'} \\
+ projections from the assessment model. The most recent {projection_span}-year \\
+ averages ({avg_start_yr}-{avg_end_yr}) of annual fishery selectivity, \\
+ maturity, and mean weight-at-age were used in the projections.
+")
+
+report_text <- list(
+ preamble = preamble_text,
+ sos = state_of_stock_text,
+ projection = projection_text
+)
+
+#print(to_latex_caption(metadata$cap_brp))
+```
+
+
+
+## Introduction
+
+`r report_text$preamble`
+
+## State of the Stock
+
+`r report_text$sos`
+
+```{r}
+#| echo: false
+#| label: tbl-catch
+#| tbl-cap: !expr to_latex_caption(metadata$cap_status)
+
+table_catch_status(output)
+```
+
+
+```{r}
+#| echo: false
+#| label: tbl-brp
+#| tbl-cap: !expr to_latex_caption(metadata$cap_brp)
+table_brp(dat = output)
+```
+
+
+
+
+## Projections
+
+`r report_text$projection`
+
+
+```{r}
+#| echo: false
+#| results: asis
+
+# Extract and clean the caption string
+cap_text <- to_latex_caption(metadata$cap_proj)
+
+# Prepare the table object
+# We apply the caption internally to ensure LaTeX compatibility
+tab_out <- table_projections(output) |>
+ gt::tab_caption(caption = cap_text)
+
+# Render output based on format
+if (knitr::is_latex_output()) {
+ # Print the raw LaTeX to the document
+ cat(gt::as_latex(tab_out))
+} else {
+ # Standard HTML display
+ tab_out
+}
+```
+
+\clearpage
+
+```{r}
+#| label: fig-ssb
+#| fig-cap: !expr to_latex_caption(metadata$cap_ssb)
+#| fig-pos: 'H'
+#| fig-height: 8
+#| fig-width: 7
+
+# Start with the base biomass plot
+plt_ssb <- stockplotr::plot_biomass(output)
+
+# Filter and add the previous assessment line
+plt_ssb <- plt_ssb + ggplot2::geom_line(
+ data = dplyr::filter(output,
+ module_name == "model_results",
+ label == "biomass",
+ era == "prev"),
+ ggplot2::aes(x = year, y = estimate, color = "Previous"),
+ linetype = "dashed",
+ linewidth = 0.8
+ )
+
+# Check for Rho Adjusted SSB value
+if (!is.null(metadata$terminal_b_adj) && !is.na(metadata$terminal_b_adj)) {
+
+ # Add point and annotation for Rho Adj
+ plt_ssb <- plt_ssb +
+ ggplot2::geom_point(
+ data = data.frame(x = metadata$term_yr, y = metadata$terminal_b_adj),
+ ggplot2::aes(x = x, y = y, color = "Rho Adj"),
+ size = 3
+ ) +
+ ggplot2::annotate(
+ "text",
+ x = metadata$term_yr,
+ y = metadata$terminal_b_adj,
+ label = "Rho Adj",
+ color = "red",
+ vjust = -1.5
+ ) +
+ ggplot2::scale_color_manual(
+ name = "Assessment",
+ values = c("Current" = "black", "Previous" = "blue", "Rho Adj" = "red")
+ ) +
+ ggplot2::labs(
+ subtitle = "Current Assessment vs. Previous and Rho-Adjusted Terminal Year"
+ )
+
+} else {
+
+ # Standard styling if no Rho Adj exists
+ plt_ssb <- plt_ssb +
+ ggplot2::scale_color_manual(
+ name = "Assessment",
+ values = c("Current" = "black", "Previous" = "blue")
+ ) +
+ ggplot2::labs(
+ subtitle = "Current Assessment vs. Previous Assessment"
+ )
+}
+
+plt_ssb
+
+
+
+
+```
+
+
+```{r}
+#| label: fig-f
+#| fig-cap: !expr to_latex_caption(metadata$cap_f)
+#| fig-pos: 'H'
+#| fig-height: 8
+#| fig-width: 7
+
+# Start with the base F plot
+plt_f <- stockplotr::plot_fishing_mortality(output)
+
+# Filter and add the previous assessment line
+plt_f <- plt_f + ggplot2::geom_line(
+ data = dplyr::filter(output,
+ module_name == "model_results",
+ label == "fishing_mortality",
+ era == "prev"),
+ ggplot2::aes(x = year, y = estimate, color = "Previous"),
+ linetype = "dashed",
+ linewidth = 0.8
+ )
+
+# Check for Rho Adjusted F value
+if (!is.null(metadata$terminal_f_adj) && !is.na(metadata$terminal_f_adj)) {
+
+ # Add point and annotation for Rho Adj
+ plt_f <- plt_f +
+ ggplot2::geom_point(
+ data = data.frame(x = metadata$term_yr, y = metadata$terminal_f_adj),
+ ggplot2::aes(x = x, y = y, color = "Rho Adj"),
+ size = 3
+ ) +
+ ggplot2::annotate(
+ "text",
+ x = metadata$term_yr,
+ y = metadata$terminal_f_adj,
+ label = "Rho Adj",
+ color = "red",
+ vjust = -1.5
+ ) +
+ ggplot2::scale_color_manual(
+ name = "Assessment",
+ values = c("Current" = "black", "Previous" = "blue", "Rho Adj" = "red")
+ ) +
+ ggplot2::labs(
+ subtitle = "Current Assessment vs. Previous and Rho-Adjusted Terminal Year"
+ )
+
+} else {
+
+ # Standard styling if no Rho Adj exists
+ plt_f <- plt_f +
+ ggplot2::scale_color_manual(
+ name = "Assessment",
+ values = c("Current" = "black", "Previous" = "blue")
+ ) +
+ ggplot2::labs(
+ subtitle = "Current Assessment vs. Previous Assessment"
+ )
+}
+
+plt_f
+
+```
+
+
+```{r}
+#| label: fig-fish
+#| fig-cap: !expr to_latex_caption(metadata$cap_fish)
+#| fig-pos: 'H'
+#| fig-height: 7
+#| fig-width: 7
+
+#plot_total_catch(dat = output)
+#or if you want to keep the legacy style
+plot_total_catch(dat = output, type = "bar")
+
+```
+
+
+```{r}
+#| label: fig-surveys
+#| fig-cap: !expr to_latex_caption(metadata$cap_surv)
+#| fig-pos: 'H'
+#| fig-height: 8.5
+#| fig-width: 7
+
+plot_survey_indices(dat = output, interactive = FALSE)
+
+```
+
+
+
+```{r}
+#| label: fig-recr
+#| fig-cap: !expr to_latex_caption(metadata$cap_recr)
+#| fig-pos: 'H'
+#| fig-height: 6
+#| fig-width: 7
+
+#Recruitment - this one seems to work fine
+stockplotr::plot_recruitment(dat = output)
+
+
+
+
+
+
+```
diff --git a/resources/ExportLegacyNEFSCtoASARproject/ASARreportFiles/in-header.tex b/resources/ExportLegacyNEFSCtoASARproject/ASARreportFiles/in-header.tex
new file mode 100644
index 0000000..45605a6
--- /dev/null
+++ b/resources/ExportLegacyNEFSCtoASARproject/ASARreportFiles/in-header.tex
@@ -0,0 +1,46 @@
+\usepackage{hyphenat}
+\usepackage{graphicx}
+% and their extensions so you won't have to specify these with
+ % every instance of \includegraphics
+\usepackage{pdfcomment}
+\DeclareGraphicsExtensions{.pdf,.jpeg,.png}
+\usepackage{wallpaper} % for the background image on title page
+\usepackage{geometry}
+
+\usepackage{lastpage}
+% set font
+% \usepackage{fontspec}
+% \setsansfont{Latin Modern Sans}
+% \renewcommand{\setmainfont}[2][]{\fontspec[#1]{Latin Modern Sans}}
+% \renewcommand{\rmdefault}{lmss}
+
+% Acronyms
+\usepackage[acronym]{glossaries}
+\glsdisablehyper
+\makenoidxglossaries
+\loadglsentries{report_glossary.tex}
+
+% \usepackage{lmodern}
+\usepackage[T1]{fontenc}
+
+\newfontfamily\sectionfont[Color=Black]{Latin Modern Sans}
+\newfontfamily\subsectionfont[Color=Black]{Latin Modern Sans}
+\newfontfamily\subsubsectionfont[Color=Black]{Latin Modern Sans}
+% \addtokomafont{section}{\sectionfont}
+% \addtokomafont{subsection}{\subsectionfont}
+% \addtokomafont{subsubsection}{\subsubsectionfont}
+\usepackage[headsepline=0.005pt:,footsepline=0.005pt:,plainfootsepline,automark]{scrlayer-scrpage}
+\clearpairofpagestyles
+\ohead[]{\headmark} \cofoot[\pagemark]{\pagemark}
+\lohead{Western Gulf of Maine cod assessment 2026}
+\ModifyLayer[addvoffset=-.6ex]{scrheadings.foot.above.line}
+\ModifyLayer[addvoffset=-.6ex]{plain.scrheadings.foot.above.line}
+\setkomafont{pageheadfoot}{\small}
+
+% add soul package to remove latex error
+\usepackage{soul}
+
+%\usepackage{pdflscape} %not worth it! Too hard to implement in quarto
+
+\usepackage{float}
+\floatplacement{table}{H}
diff --git a/resources/ExportLegacyNEFSCtoASARproject/ASARreportFiles/preamble.R b/resources/ExportLegacyNEFSCtoASARproject/ASARreportFiles/preamble.R
new file mode 100644
index 0000000..958bb41
--- /dev/null
+++ b/resources/ExportLegacyNEFSCtoASARproject/ASARreportFiles/preamble.R
@@ -0,0 +1,123 @@
+myDir <- "dhennen" #change to your network home name
+sourcePath <- file.path("/home",myDir,"EIEIO","ASAR","MapLegacyAutoUpdateToAsar") #change to your path
+load(file.path(sourcePath,"MyStockHere.rda")) #change to your rda name
+#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+output <- model_results |>
+ dplyr::mutate(estimate = as.numeric(estimate),
+ uncertainty = as.numeric(uncertainty))
+
+metadata <- attr(model_results, "metadata")
+source(file.path(sourcePath,"MapAutoUpdateToAsar.R"))
+source(file.path(sourcePath,"table_catch_status1.1.R"))
+source(file.path(sourcePath,"table_projections1.1.R"))
+source(file.path(sourcePath,"table_brp1.1.R"))
+source(file.path(sourcePath,"plot_total_catch.R"))
+source(file.path(sourcePath,"plot_survey_indices.R"))
+
+
+# Pre-calculated quantities and output for use in a stock assessment report
+start_year <- output |>
+ dplyr::filter(era == 'time') |>
+ dplyr::summarise(min_year = min(year)) |>
+ dplyr::pull(min_year) |>
+ as.numeric()
+
+end_year <- output |>
+ dplyr::filter(era == 'time') |>
+ dplyr::summarise(max_year = max(year)) |>
+ dplyr::pull(max_year) |>
+ as.numeric()
+
+# subset output to remove quantities that are split by factor
+output2 <- output |>
+ dplyr::filter(is.na(season),
+ is.na(fleet),
+ is.na(sex),
+ is.na(area),
+ is.na(growth_pattern),
+ is.na(subseason),
+ is.na(age))
+
+# terminal fishing mortality
+Fend <- output2 |>
+ dplyr::filter((label == 'fishing_mortality' & year == end_year) | (label == 'terminal_fishing_mortality' & is.na(year))) |>
+ dplyr::pull(estimate) |>
+ # unique() |>
+ dplyr::first()
+
+# fishing mortality at msy
+# please change target if desired
+Ftarg <- output2 |>
+ dplyr::filter(grepl('f_target', label) | grepl('f_msy', label) | (grepl('fishing_mortality_msy', label) & is.na(year))) |>
+ dplyr::pull(estimate)
+
+# Terminal year F respective to F target
+F_Ftarg <- Fend / Ftarg
+
+# terminal year biomass
+Bend <- output2 |>
+ dplyr::filter(grepl('^biomass$', label),
+ year == end_year) |>
+ dplyr::pull(estimate)
+
+# target biomass (msy)
+# please change target if desired
+Btarg <- output2 |>
+ dplyr::filter(
+ !grepl("spawning|catch", label),
+ (grepl('biomass', label) & grepl('target', label) & estimate >1) | label == 'biomass_msy') |>
+ dplyr::pull(estimate)
+
+# total catch in the last year
+total_catch <- output |>
+ dplyr::filter(grepl('^catch$', label),
+ year == end_year) |>
+ dplyr::group_by(year) |>
+ dplyr::summarise(total_catch = sum(estimate)) |>
+ dplyr::ungroup() |>
+ dplyr::pull(total_catch)
+
+# total landings in the last year
+total_landings <- output |>
+ dplyr::filter(grepl('landings_observed', label), year == end_year) |>
+ dplyr::group_by(year) |>
+ dplyr::summarise(total_land = sum(estimate)) |>
+ dplyr::ungroup() |>
+ dplyr::pull(total_land)
+
+# spawning biomass in the last year
+SBend <- output2 |>
+ dplyr::filter(
+ grepl('spawning_biomass$', label), year == end_year,
+ !is.na(estimate)
+ ) |>
+ dplyr::pull(estimate) |>
+ unique()
+
+# overall natural mortality or at age
+M <- output |>
+ dplyr::filter(grepl('natural_mortality', label)) |>
+ dplyr::pull(estimate) |>
+ unique()
+
+# Biomass at msy
+# to change to another reference point, replace msy in the following lines with other label
+Bmsy <- output2 |>
+ dplyr::filter((grepl('^biomass', label) & grepl('msy', label) & estimate >1) | grepl('^biomass_msy$', label)) |>
+ dplyr::pull(estimate)
+
+# target spawning biomass(msy)
+# please change target if desired
+SBmsy <- output2 |>
+ dplyr::filter((grepl('spawning_biomass', label) & grepl('msy$', label) & estimate > 1) | label == 'spawning_biomass_msy') |>
+ dplyr::pull(estimate)
+
+# steepness
+h <- output |>
+ dplyr::filter(grepl('steep', label)) |>
+ dplyr::pull(estimate)
+
+# recruitment
+R0 <- output |>
+ dplyr::filter(grepl('recruitment_unfished$', label)) |>
+ dplyr::pull(estimate)
diff --git a/resources/ExportLegacyNEFSCtoASARproject/CheckLatexInstall.R b/resources/ExportLegacyNEFSCtoASARproject/CheckLatexInstall.R
new file mode 100644
index 0000000..a0c0fcb
--- /dev/null
+++ b/resources/ExportLegacyNEFSCtoASARproject/CheckLatexInstall.R
@@ -0,0 +1,123 @@
+#' Multi-Engine LaTeX Dependency Checker
+#'
+#' Scans for .sty files and attempts to identify the parent package
+#' regardless of the underlying LaTeX distribution (TeX Live or MiKTeX).
+#'
+#' @param sty_list Character vector of style files (e.g., "zref-base.sty")
+#'
+#' @return A data frame with the file status and the required package name.
+#' @export
+check_latex_dependencies_multi <- function(sty_list) {
+
+ # Determine the distribution type
+ tex_info <- tryCatch({
+ system2("pdflatex", "--version", stdout = TRUE, stderr = NULL)[1]
+ }, error = function(e) "Unknown")
+
+ is_miktex <- grepl("MiKTeX", tex_info, ignore.case = TRUE)
+ dist_name <- if (is_miktex) "MiKTeX" else "TeX Live/Other"
+
+ cli::cli_alert_info("Detected LaTeX Distribution: {dist_name}")
+
+ results <- purrr::map_df(sty_list, function(sty) {
+ # Check if file exists using kpsewhich (works on both)
+ path <- system2("kpsewhich", sty, stdout = TRUE, stderr = NULL)
+ is_installed <- length(path) > 0
+
+ pkg_name <- NA_character_
+
+ # If missing, find the owner package
+ if (!is_installed) {
+ if (is_miktex) {
+ # MiKTeX specific lookup using mpm (MiKTeX Package Manager)
+ pkg_search <- system2("mpm", c("--list-file-owners", sty),
+ stdout = TRUE, stderr = NULL)
+ if (length(pkg_search) > 0) pkg_name <- pkg_search[1]
+ } else {
+ # TeX Live specific lookup
+ pkg_search <- system2("tlmgr", c("search", "--file", paste0("/", sty)),
+ stdout = TRUE, stderr = NULL)
+ if (length(pkg_search) > 0) pkg_name <- gsub(":.*", "", pkg_search[1])
+ }
+ }
+
+ tibble::tibble(
+ file = sty,
+ installed = is_installed,
+ required_package = pkg_name,
+ path = if (is_installed) path[1] else "MISSING"
+ )
+ })
+
+ # Summary and call to action
+ missing_data <- results |> dplyr::filter(!installed)
+ if (nrow(missing_data) > 0) {
+ unique_pkgs <- unique(na.omit(results$required_package))
+
+ if (is_miktex) {
+ cat("\nRun this in your terminal to install missing MiKTeX packages:\n")
+ cat("mpm --install", paste(unique_pkgs, collapse = " "), "\n\n")
+ } else {
+ cat("\nRun this in your terminal to install missing TeX Live packages:\n")
+ cat("tlmgr install", paste(unique_pkgs, collapse = " "), "\n\n")
+ }
+ } else {
+ cli::cli_alert_success("All style files are present and accounted for.")
+ }
+
+ return(results)
+}
+
+
+# List of LaTeX dependencies from Sam.
+
+sty_list <- c(
+ "pdfmanagement-testphase.sty", "tagpdf-base.sty", "latex-lab-testphase-latest.sty",
+ "tagpdf.sty", "tagpdf-mc-code-lua.sty", "latex-lab-testphase-names.sty",
+ "latex-lab-testphase-new-or-2.sty", "latex-lab-testphase-block.sty",
+ "latex-lab-kernel-changes.sty", "latex-lab-testphase-context.sty",
+ "latex-lab-testphase-sec.sty", "latex-lab-testphase-toc.sty",
+ "latex-lab-testphase-minipage.sty", "latex-lab-testphase-new-or-1.sty",
+ "latex-lab-testphase-graphic.sty", "latex-lab-testphase-float.sty",
+ "latex-lab-testphase-bib.sty", "latex-lab-testphase-text.sty",
+ "latex-lab-testphase-marginpar.sty", "latex-lab-testphase-title.sty",
+ "latex-lab-testphase-table.sty", "array.sty", "latex-lab-testphase-math.sty",
+ "latex-lab-testphase-firstaid.sty", "latex-lab-testphase-tikz.sty",
+ "pdfmanagement-firstaid.sty", "scrkbase.sty", "scrbase.sty", "scrlfile.sty",
+ "scrlfile-hook.sty", "scrlogo.sty", "keyval.sty", "tocbasic.sty",
+ "typearea.sty", "xcolor.sty", "xcolor-patches-tmp-ltx.sty", "amsmath.sty",
+ "amstext.sty", "amsgen.sty", "amsbsy.sty", "amsopn.sty", "amssymb.sty",
+ "amsfonts.sty", "iftex.sty", "expl3.sty", "unicode-math-luatex.sty",
+ "xparse.sty", "l3keys2e.sty", "fontspec.sty", "fontspec-luatex.sty",
+ "fontenc.sty", "fix-cm.sty", "lualatex-math.sty", "etoolbox.sty",
+ "lmodern.sty", "longtable.sty", "booktabs.sty", "calc.sty", "footnote.sty",
+ "graphicx.sty", "graphics.sty", "trig.sty", "babel.sty", "luatexbase.sty",
+ "ctablestack.sty", "selnolig.sty", "ifluatex.sty", "selnolig-english-patterns.sty",
+ "selnolig-english-hyphex.sty", "hyphenat.sty", "pdfcomment.sty", "xkeyval.sty",
+ "luatex85.sty", "datetime2.sty", "tracklang.sty", "zref-savepos.sty",
+ "zref-base.sty", "ltxcmds.sty", "infwarerr.sty", "kvsetkeys.sty",
+ "kvdefinekeys.sty", "pdftexcmds.sty", "etexcmds.sty", "auxhook.sty",
+ "refcount.sty", "ifthen.sty", "marginnote.sty", "ifpdf.sty", "soulpos.sty",
+ "hyperref.sty", "pdfescape.sty", "hycolor.sty", "nameref.sty",
+ "gettitlestring.sty", "kvoptions.sty", "stringenc.sty", "intcalc.sty",
+ "url.sty", "bitset.sty", "bigintcalc.sty", "wallpaper.sty", "eso-pic.sty",
+ "geometry.sty", "ifvtex.sty", "scrlayer-scrpage.sty", "scrlayer.sty",
+ "pdflscape.sty", "lscape.sty", "glossaries.sty", "mfirstuc.sty", "xfor.sty",
+ "datatool-base.sty", "glossary-hypernav.sty", "glossary-list.sty",
+ "glossary-long.sty", "glossary-tree.sty", "multirow.sty", "wrapfig.sty",
+ "float.sty", "colortbl.sty", "tabu.sty", "varwidth.sty", "threeparttable.sty",
+ "threeparttablex.sty", "environ.sty", "trimspaces.sty", "ulem.sty",
+ "makecell.sty", "caption.sty", "caption3.sty", "ltcaption.sty",
+ "anyfontsize.sty", "subcaption.sty", "bookmark.sty", "luamml.sty",
+ "luamml-patches-kernel.sty", "luamml-patches-amsmath.sty", "epstopdf-base.sty",
+ "soulutf8.sty", "soul.sty", "soul-ori.sty"
+)
+
+
+# Run the check on the full list above
+dependency_report <- check_latex_dependencies_multi(sty_list)
+
+
diff --git a/resources/ExportLegacyNEFSCtoASARproject/Example.R b/resources/ExportLegacyNEFSCtoASARproject/Example.R
new file mode 100644
index 0000000..7d7b36d
--- /dev/null
+++ b/resources/ExportLegacyNEFSCtoASARproject/Example.R
@@ -0,0 +1,160 @@
+# Example conversion from autoUpdate script to asar/stockplotr
+# use the R container for this; asar and stockplotr are already loaded and available there
+
+myDir <- "dhennen" #change to your network home name
+sourcePath <- file.path("/home",myDir,"EIEIO","ASAR","MapLegacyAutoUpdateToAsar")
+source(file.path(sourcePath,"MapAutoUpdateToAsar.R"))
+
+
+rdata_path <- file.path("/home","dhennen","EIEIO","ASAR","MapLegacyAutoUpdateToAsar"
+ ,"testStocks","CODWGOMAutoAss.RData")
+CODWGOM <- map_autoUpdate_to_stockplotr(rdata_path) #this is meant to mimic stockplotr::convert_output
+#structure for use in other stockplotr functions
+str(CODWGOM)
+
+#%%%%%%%%%%%% Recreating Figures and Tables we use, but with asar/stockplotr %%%%%%%%%%%%%%%%
+
+#try using this object in a stockplotr plotting function
+stockplotr::plot_biomass(
+ dat = CODWGOM, # dataset
+ geom = "line", # show a line graph
+ group = NULL, # don't group by sex, fleet, etc.
+ facet = NULL, # not faceting by any variable
+  ref_line = "MSY", # set reference line at MSY
+ unit_label = "mt", # unit label: metric tons
+ scale_amount = 1, # do not scale biomass
+ relative = FALSE, # show biomass, NOT relative biomass
+ interactive = TRUE, # prompt user for MODULE_NAME in console
+ module = NULL # MODULE_NAME not specified here
+)
+
+#how about for fishing mortality?
+stockplotr::plot_fishing_mortality(
+ dat = CODWGOM
+ #,era = "current"
+ ,ref_line = "msy"
+)
+
+#Nothing available specifically for catch in stockplotr - so make our own!
+source(file.path(sourcePath,"plot_total_catch.R"))
+plot_total_catch(dat = CODWGOM)
+#or if you want to keep the legacy style
+plot_total_catch(dat = CODWGOM, type = "bar")
+
+# Let's see how the indices look - once again make a function to copy our figure style,
+# but using stockplotr's style and inputs.
+source(file.path(sourcePath,"plot_survey_indices.R"))
+plot_survey_indices(dat = CODWGOM)
+
+#Recruitment - this one seems to work fine
+stockplotr::plot_recruitment(dat = CODWGOM)
+
+#%%%%%%%%%%%%%%%%%%%% How about tables? %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+# stockplotr::table_landings() is the only one in production currently, but
+# we can copy their style...
+source(file.path(sourcePath,"table_brp1.1.R"))
+table_brp(dat = CODWGOM)
+
+
+#Next we need our catch and status table
+source(file.path(sourcePath,"table_catch_status1.1.R"))
+table_catch_status(dat = CODWGOM)
+
+#projection table
+source(file.path(sourcePath,"table_projections1.1.R"))
+table_projections(dat = CODWGOM)
+#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+#How to update and produce a new report
+source(file.path(sourcePath,"create_asar_object.R"))
+
+#library(asarTableUtils) # Assuming we wrap these
+
+# Load the existing Source of Truth
+cod <- create_asar_object(CODWGOM)
+
+# Update the numbers for the new year
+cod <- cod |>
+ add_stock_data(module = "catch"
+ , label = "total_catch"
+ , year = 2024
+ , estimate = 712
+ ,era = "time"
+ , fleet = "Total") |>
+ update_stock_info(report_yr = 2026
+ , cap_proj = "Updated based on 2026 spring survey.")
+
+# Check if everything is still correct
+if(validate_asar_object(cod)) {
+ # Rename the object to what the template likely expects internally
+ model_results <- cod
+ save(model_results, file = "CODWGOM_2026.rda")
+}
+
+# Generate the report tables immediately
+table_catch_status(cod)
+
+
+
+#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+# Time to try to use asar
+
+
+# Create a new assessment directory and template
+asar::create_template(
+ office = "NEFSC"
+ ,output_dir = "CODWGOM_2026_Report"
+ ,format = "pdf"
+ ,stock = "Western Gulf of Maine Cod"
+ ,region = "Northeast"
+ ,authors = c("Jane Doe"="NEFSC")
+ ,species = attr(cod, "metadata")$spp_name
+ ,year = attr(cod, "metadata")$report_yr
+ ,model_results = "CODWGOM_2026.rda"
+)
+
+#open the report directory and find sar_N_Your_Stock_Name_skeleton.qmd
+# you can access all the variables from your autoReport like this:
+metadata <- attr(CODWGOM, "metadata") #paste into the above .qmd file
+#and you will have all you need to make an MT style report.
+#metadata$spp_name #etc...
+metadata$preamble
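+# A sketch using the caption helpers defined in MapAutoUpdateToAsar.R: legacy
+# LaTeX captions stored in the metadata can be converted before pasting into
+# the skeleton, e.g.:
+#clean_assessment_latex(metadata$cap_ssb) # HTML/plain-text friendly version
+#to_latex_caption(metadata$cap_ssb)       # Markdown version for the Quarto .qmd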
+
+
+
+
+# %%%%%%%%%%%%%% TROUBLE SHOOTING RENDER ISSUES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+LatexErrors <- FALSE # Change to TRUE if you can't render due to LaTeX/LuaTeX errors
+
+if (LatexErrors) {
+ # Standard R package check
+ if (!requireNamespace("tinytex", quietly = TRUE)) {
+ install.packages("tinytex")
+ }
+
+ # Check if the current root is a system path or empty
+ current_root <- tinytex::tinytex_root()
+ is_system_path <- grepl("/usr/local", current_root)
+
+ if (current_root == "" || is_system_path) {
+ message("System LaTeX found or root empty. Installing local TinyTeX for user permissions...")
+ # 'force = TRUE' is key here to ignore the /usr/local version
+ tinytex::install_tinytex(force = TRUE)
+ }
+
+ # Final check/link to the local home directory version
+ # Quarto will now use this local version to install missing .sty files
+ # Manually point to the directory we just saw the installer create
+ tinytex::use_tinytex(from = glue::glue("/home/{myDir}/.TinyTeX") )
+
+  # Now verify it stuck
+  if (tinytex::tinytex_root() == glue::glue("/home/{myDir}/.TinyTeX")) {
+    cli::cli_alert_success("Active LaTeX Root: {tinytex::tinytex_root()}")
+  } else {
+    cli::cli_alert_danger("Active LaTeX Root: {tinytex::tinytex_root()}")
+  }
+ # Tell Quarto specifically where your new, writable pdflatex lives
+ Sys.setenv(QUARTO_PDF_LATEX = glue::glue("/home/{myDir}/.TinyTeX/bin/x86_64-linux/pdflatex"))
+
+
+ }
+
+
+
diff --git a/resources/ExportLegacyNEFSCtoASARproject/MapAutoUpdateToAsar.R b/resources/ExportLegacyNEFSCtoASARproject/MapAutoUpdateToAsar.R
new file mode 100644
index 0000000..b65b2ad
--- /dev/null
+++ b/resources/ExportLegacyNEFSCtoASARproject/MapAutoUpdateToAsar.R
@@ -0,0 +1,654 @@
+#' Map Legacy AutoUpdate RData to stockplotr Structure
+#'
+#' This function acts as a bridge between the legacy assessment report environment
+#' and the standardized asar_stock format. It extracts model results, catch,
+#' surveys, projections, and reference points, while preserving metadata as attributes.
+#'
+#' @param rdata_path Character string; path to the .RData file generated by the legacy report software.
+#'
+#' @return A tibble with the standardized stockplotr columns. Attributes include:
+#' \itemize{
+#' \item \code{metadata}: A list of stock names, report years, and table/figure captions.
+#' \item \code{legacy_brp}: A sidecar data frame containing raw biological reference point strings.
+#' }
+#'
+#' @details The function uses explicit lookup for survey CVs based on the \code{SurveyN.CV}
+#' convention found in the legacy environment. It also performs regex cleaning
+#' on projection strings to extract high/low confidence bounds.
+#'
+#' @export
+#_______________________________________________________________________________
+map_autoUpdate_to_stockplotr <- function(rdata_path = "AutoAss.RData") {
+
+  legacy_env <- new.env()
+  load(rdata_path, envir = legacy_env)
+
+  # `%||%` (null coalescing) is base R >= 4.4; define a fallback for older R
+  `%||%` <- function(x, y) if (is.null(x)) y else x
+
+ # Convert matrices to data frames
+ dt2_df <- as.data.frame(legacy_env$dt2)
+ dt1_df <- as.data.frame(legacy_env$dt1)
+ dt3_df <- as.data.frame(legacy_env$dt3)
+
+ # Define the standard stockplotr structure
+ out_new <- data.frame(
+ module_name = character(), label = character(), time = character(),
+ era = character(), year = numeric(), month = numeric(),
+ season = numeric(), subseason = numeric(), birthseas = numeric(),
+ initial = numeric(), estimate = numeric(), uncertainty = numeric(),
+ uncertainty_label = character(), likelihood = numeric(),
+ fleet = character(), platoon = character(), area = character(),
+ age = character(), sex = character(), growth_pattern = character(),
+ bio_pattern = character(), settlement = character(),
+ morph = character(), type = character(), factor = character(),
+ part = character(), kind = character(), nsim = numeric(),
+ bin = numeric(), age_a = character(), len_bins = character(),
+ count = numeric(), block = character(),
+ estimate_chr = character() # Added to store pre-formatted strings
+ )
+
+ # Map Model Results
+ model_results <- dt2_df |>
+ tidyr::pivot_longer(
+ cols = tidyselect::any_of(c(legacy_env$ModSSB, legacy_env$FF, legacy_env$Recruits)),
+ names_to = "legacy_label",
+ values_to = "estimate"
+ ) |>
+ dplyr::mutate(
+ module_name = "model_results",
+ era = "time",
+ year = as.numeric(Year),
+ label = dplyr::case_when(
+ legacy_label == legacy_env$ModSSB ~ "biomass",
+ legacy_label == legacy_env$FF ~ "fishing_mortality",
+ legacy_label == legacy_env$Recruits ~ "recruitment",
+ TRUE ~ legacy_label
+ ),
+ uncertainty = dplyr::case_when(
+ legacy_label == legacy_env$ModSSB ~ dt2_df[[legacy_env$ModSSB.CV]][match(Year, dt2_df$Year)],
+ legacy_label == legacy_env$FF ~ dt2_df[[legacy_env$FF.CV]][match(Year, dt2_df$Year)],
+ legacy_label == legacy_env$Recruits ~ dt2_df[[legacy_env$Recruits.CV]][match(Year, dt2_df$Year)],
+ TRUE ~ as.numeric(NA)
+ ),
+ uncertainty_label = "cv"
+ )
+
+ # Map Previous Assessment Results
+ old_cols_to_pivot <- c(legacy_env$SSB.old, legacy_env$F.old, legacy_env$Recruits.old)
+ old_cols_present <- old_cols_to_pivot[old_cols_to_pivot %in% colnames(dt2_df)]
+
+ if (length(old_cols_present) > 0) {
+ ssb_old_lab <- legacy_env$SSB.old %||% "___MISSING___"
+ f_old_lab <- legacy_env$F.old %||% "___MISSING___"
+ rec_old_lab <- legacy_env$Recruits.old %||% "___MISSING___"
+
+ model_old <- dt2_df |>
+ tidyr::pivot_longer(
+ cols = tidyselect::all_of(old_cols_present),
+ names_to = "legacy_label",
+ values_to = "estimate"
+ ) |>
+ dplyr::mutate(
+ module_name = "model_results",
+ era = "prev",
+ year = as.numeric(Year),
+ label = dplyr::case_when(
+ legacy_label == ssb_old_lab ~ "biomass",
+ legacy_label == f_old_lab ~ "fishing_mortality",
+ legacy_label == rec_old_lab ~ "recruitment",
+ TRUE ~ legacy_label
+ ),
+ uncertainty = NA_real_,
+ uncertainty_label = "cv"
+ )
+ model_results <- dplyr::bind_rows(model_results, model_old)
+ }
+
+ # Map Catch Data
+ fleet_indices <- 1:8
+ fleet_cols <- purrr::map_chr(fleet_indices, ~ legacy_env[[paste0("Fleet", .x)]] %||% NA_character_)
+ fleet_names <- purrr::map_chr(fleet_indices, ~ legacy_env[[paste0("Fleet", .x, ".name")]] %||% NA_character_)
+ fleet_map <- stats::setNames(fleet_names, fleet_cols)
+ fleet_map <- fleet_map[!is.na(names(fleet_map))]
+
+ catch_results <- dt1_df |>
+ tidyr::pivot_longer(
+ cols = tidyselect::any_of(c(names(fleet_map), legacy_env$Total)),
+ names_to = "legacy_fleet_id",
+ values_to = "estimate"
+ ) |>
+ dplyr::mutate(
+ module_name = "catch",
+ era = "time",
+ year = as.numeric(Year),
+ label = "total_catch",
+ fleet = dplyr::case_when(
+ legacy_fleet_id == legacy_env$Total ~ "Total",
+ legacy_fleet_id %in% names(fleet_map) ~ fleet_map[legacy_fleet_id],
+ TRUE ~ legacy_fleet_id
+ )
+ )
+
+ # Map Survey Data
+ survey_indices <- 1:8
+ survey_cols <- purrr::map_chr(survey_indices, ~ legacy_env[[paste0("Survey", .x)]] %||% NA_character_)
+ survey_cv_cols <- purrr::map_chr(survey_indices, ~ legacy_env[[paste0("Survey", .x, ".CV")]] %||% NA_character_)
+ survey_names <- purrr::map_chr(survey_indices, ~ legacy_env[[paste0("Survey", .x, ".name")]] %||% NA_character_)
+
+ surv_map <- stats::setNames(survey_names, survey_cols)
+ surv_map <- surv_map[!is.na(names(surv_map))]
+ cv_lookup <- stats::setNames(survey_cv_cols, survey_cols)
+ cv_lookup <- cv_lookup[!is.na(names(cv_lookup))]
+
+ survey_results <- dt3_df |>
+ tidyr::pivot_longer(
+ cols = tidyselect::any_of(names(surv_map)),
+ names_to = "legacy_survey_id",
+ values_to = "estimate"
+ ) |>
+ dplyr::mutate(
+ module_name = "survey",
+ era = "time",
+ year = as.numeric(Year),
+ label = "index",
+ fleet = surv_map[legacy_survey_id],
+ cv_col_to_pull = cv_lookup[legacy_survey_id]
+ ) |>
+ dplyr::rowwise() |>
+ dplyr::mutate(
+ uncertainty = if (!is.na(cv_col_to_pull) && cv_col_to_pull %in% colnames(dt3_df)) {
+ as.numeric(dt3_df[dt3_df$Year == Year, cv_col_to_pull])
+ } else {
+ as.numeric(NA)
+ },
+ uncertainty_label = "cv"
+ ) |>
+ dplyr::ungroup() |>
+ dplyr::select(-cv_col_to_pull)
+
+ # REVISED Projection Logic for Butterfish/Surfclam
+ projection_results <- data.frame(
+ year = as.numeric(legacy_env$PYear),
+ catch_raw = as.character(legacy_env$PCatch),
+ ssb_raw = as.character(legacy_env$PSSB),
+ f_raw = as.character(legacy_env$PF)
+ ) |>
+ dplyr::rowwise() |>
+ dplyr::mutate(
+ # Safely extract numeric estimate and uncertainty bounds for SSB
+ ssb_est = as.numeric(gsub(",", "", stringr::str_extract(ssb_raw, "^[^ ]+"))),
+ ssb_lo = as.numeric(gsub(",", "", stringr::str_extract(ssb_raw, "(?<=\\()[0-9,]+(?= -)"))),
+ ssb_hi = as.numeric(gsub(",", "", stringr::str_extract(ssb_raw, "(?<=- )[0-9,]+(?=\\))"))),
+
+ # Safely extract numeric estimate for Catch (handles strings with parentheses)
+ catch_est = as.numeric(gsub(",", "", stringr::str_extract(catch_raw, "^[^ ]+"))),
+
+ # Safely extract numeric estimate for F
+ f_est = as.numeric(gsub(",", "", f_raw))
+ ) |>
+ dplyr::ungroup() |>
+ tidyr::pivot_longer(
+ cols = c(catch_est, ssb_est, f_est),
+ names_to = "proj_label",
+ values_to = "estimate"
+ ) |>
+ dplyr::mutate(
+ module_name = "projections",
+ era = "fore",
+ label = dplyr::case_when(
+ proj_label == "catch_est" ~ "total_catch",
+ proj_label == "ssb_est" ~ "biomass",
+ proj_label == "f_est" ~ "fishing_mortality"
+ ),
+ # Save the raw formatted strings into estimate_chr to preserve parentheses
+ estimate_chr = dplyr::case_when(
+ label == "total_catch" ~ catch_raw,
+ label == "biomass" ~ ssb_raw,
+ label == "fishing_mortality" ~ f_raw
+ )
+ )
+
+ # Capture SSB uncertainty bounds
+ proj_uncertainty <- projection_results |>
+ dplyr::filter(label == "biomass") |>
+ tidyr::pivot_longer(
+ cols = c(ssb_lo, ssb_hi),
+ names_to = "u_label",
+ values_to = "u_val"
+ ) |>
+ dplyr::mutate(
+ uncertainty = u_val,
+ uncertainty_label = dplyr::if_else(u_label == "ssb_lo", "low", "high")
+ ) |>
+ dplyr::select(year, label, uncertainty, uncertainty_label)
+
+ projection_results <- projection_results |>
+ dplyr::left_join(proj_uncertainty, by = c("year", "label")) |>
+ dplyr::select(-c(catch_raw, ssb_raw, f_raw, proj_label, ssb_lo, ssb_hi))
+
+ # Map Biological Reference Points
+ ref_map <- c(
+ "biomass_msy" = "SSBMSYpt.est",
+ "biomass_target" = "SSBtarget",
+ "biomass_threshold" = "SSBthreshold",
+ "fishing_mortality_msy" = "FMSYpt.est",
+ "fishing_mortality_target" = "Ftarget",
+ "fishing_mortality_threshold" = "Fthreshold",
+ "msy" = "MSY",
+ "median_recruits" = "Recr"
+ )
+
+ if (!is.null(legacy_env$SSBMSY)) ref_map["biomass_msy"] <- "SSBMSY"
+ if (!is.null(legacy_env$FMSY)) ref_map["fishing_mortality_msy"] <- "FMSY"
+
+ legacy_sidecar <- purrr::map_df(names(ref_map), function(lab) {
+ raw_val <- legacy_env[[ref_map[lab]]]
+ if (is.null(raw_val)) return(NULL)
+ data.frame(
+ label = lab,
+ val = as.character(raw_val),
+ era = if(length(raw_val) == 2) c("prev", "current") else "current"
+ )
+ })
+
+ reference_results <- purrr::map_df(names(ref_map), function(lab) {
+ raw_val <- legacy_env[[ref_map[lab]]]
+ if (is.null(raw_val)) return(NULL)
+ current_str <- if(length(raw_val) == 2) raw_val[2] else raw_val[1]
+ data.frame(
+ label = lab,
+ module_name = "reference_points",
+ era = "time",
+ year = NA_real_,
+ estimate = as.numeric(stringr::str_extract(gsub(",", "", current_str), "[-+]?[0-9]*\\.?[0-9]+")),
+ estimate_chr = as.character(current_str)
+ )
+ })
+
+ # Final Combination
+ final_out <- dplyr::bind_rows(
+ model_results, catch_results, survey_results, projection_results, reference_results
+ )
+
+ final_out <- dplyr::bind_rows(out_new, final_out) |>
+ dplyr::select(tidyselect::all_of(names(out_new))) |>
+ tibble::as_tibble()
+
+ # Extract numeric values from reference_results for easy metadata access
+ # This pulls the specific estimates for SSB and F MSY
+ val_ssb_msy_num <- reference_results |>
+ dplyr::filter(label == "biomass_msy") |>
+ dplyr::pull(estimate)
+
+ val_f_msy_num <- reference_results |>
+ dplyr::filter(label == "fishing_mortality_msy") |>
+ dplyr::pull(estimate)
+
+ # Metadata attachment
+ metadata <- list(
+ spp_name = legacy_env$SppName,
+ spp_latin = legacy_env$SppLatinName,
+ report_yr = legacy_env$report.yr,
+ term_yr = legacy_env$term.yr,
+ last_ass = legacy_env$last.assessment,
+ status_f_now = legacy_env$Fstatus,
+ status_ssb_now = legacy_env$Bstatus,
+ status_f_old = legacy_env$Fstatus_old,
+ status_ssb_old = legacy_env$Bstatus_old,
+ is_draft = legacy_env$Draft,
+ cap_brp = legacy_env$BRPTab.cap,
+ cap_status = legacy_env$CatchStatusTab.cap,
+ cap_proj = legacy_env$ProjTab.cap,
+ cap_fish = legacy_env$figFish.cap,
+ cap_surv = legacy_env$figSurv.cap,
+ cap_ssb = legacy_env$figSSB.cap,
+ cap_f = legacy_env$figF.cap,
+ cap_recr = legacy_env$figRecr.cap,
+ rho_adj_used = legacy_env$Rho.adj,
+ rho_ssb_now = legacy_env$BRho.now,
+ rho_f_now = legacy_env$FRho.now,
+ rho_ssb_old = legacy_env$BRho.old,
+ rho_f_old = legacy_env$FRho.old,
+ terminal_f_adj = legacy_env$F.retro.adj,
+ terminal_b_adj = legacy_env$B.retro.adj,
+ f_name = legacy_env$FMSY.name,
+ ssb_name = legacy_env$SSBMSY.name,
+ msy_name = legacy_env$MSY.name,
+ recr_name = legacy_env$BRP.Recruits.name,
+ ff_name = legacy_env$FF.name,
+ mod_ssb_name = legacy_env$ModSSB.name,
+ mod_f_name = legacy_env$FF.name,
+ mod_recr_name = legacy_env$Recruits.name,
+ catch_units = legacy_env$CatchUnits,
+ ssb_units = legacy_env$SSBUnits,
+ recr_units = legacy_env$RecrUnits,
+ preamble = legacy_env$Preamble,
+ sos_text = legacy_env$StateOfStock,
+ proj_text = legacy_env$Project,
+ comments = legacy_env$SpecialComments,
+ references = legacy_env$References,
+ ci_method = legacy_env$ci.method,
+ ci_bounds = legacy_env$bounds,
+    val_ssb_msy = val_ssb_msy_num,
+    val_f_msy = val_f_msy_num
+  )
+
+ attributes(final_out)$metadata <- metadata
+ attributes(final_out)$legacy_brp <- legacy_sidecar
+
+ return(final_out)
+}
+#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+#' Clean Assessment LaTeX Strings for HTML Output
+#'
+#' Converts legacy LaTeX commands and formatting into Markdown/HTML compatible
+#' tags. This is primarily used for converting technical captions and reference
+#' point names for use in web-based reports or interactive plots.
+#'
+#' @param x A character vector containing LaTeX strings (e.g., \code{\\SSBMSY}, \code{\\textbf{...}}).
+#'
+#' @return A character vector of cleaned strings with HTML tags (e.g., \code{SSBMSY}).
+#'
+#' @details This function uses a comprehensive translation map for common fishery
+#' reference points and Greek letters. It also handles structural LaTeX
+#' cleaning like removing center environments and converting double
+#' backslashes into line breaks.
+#'
+#' @export
+#______________________________________________________________________________
+clean_assessment_latex <- function(x) {
+ if (is.null(x)) return(NULL)
+
+ vapply(x, function(val) {
+ val <- as.character(val)
+ if (is.na(val) || val == "") return(val)
+
+ clean_key <- gsub("\\\\|\\.tx|\\.ref", "", val)
+
+ translation_map <- c(
+ # Spawning Biomass and Reference Point Terms
+ "SSBMSY" = "SSBMSY",
+ "SSBMSYproxy" = "SSBMSY proxy",
+ "FMSY" = "FMSY",
+ "FMSYproxy" = "FMSY proxy",
+ "FMSYpr" = "FMSY proxy",
+ "EMSY" = "EMSY",
+ "EMSYproxy" = "EMSY proxy",
+ "RMSY" = "RMSY",
+ "RMSYproxy" = "RMSY proxy",
+ "BMSY" = "BMSY",
+ "BMSYproxy" = "BMSY proxy",
+ "SSB0" = "SSB0",
+
+ # SPR Reference Points
+ "B30SPR" = "B30%SPR",
+ "B35SPR" = "B35%SPR",
+ "B40SPR" = "B40%SPR",
+ "B45SPR" = "B45%SPR",
+ "B50SPR" = "B50%SPR",
+ "F30SPR" = "F30%SPR",
+ "F35SPR" = "F35%SPR",
+ "F40SPR" = "F40%SPR",
+ "F45SPR" = "F45%SPR",
+ "F50SPR" = "F50%SPR",
+
+ # Mortality and Exploitation Rate Terms
+ "FFull" = "FFull",
+ "Favg5to7" = "F5-7",
+ "Fbar" = "F",
+ "Ftarg" = "FTarget",
+ "Fthresh" = "FThreshold",
+ "Frebuild" = "FRebuild",
+ "F0.1" = "F0.1",
+ "EFull" = "EFull",
+ "Etarg" = "ETarget",
+ "Ethresh" = "EThreshold",
+ "Ebar" = "E",
+
+ # Spawning Biomass and Ratios
+ "SSBbar" = "SSB",
+ "SSBtarg" = "SSBTarget",
+ "SSBthresh" = "SSBThreshold",
+ "Btarg" = "BTarget",
+ "Bthresh" = "BThreshold",
+ "Fratio" = "F⁄FThreshold",
+ "Bratio" = "B⁄BThreshold",
+ "Rratio" = "R⁄R0",
+ "SSBratio" = "SSB⁄SSBThreshold", # Matches SSBratio.tx
+ "half" = "½",
+
+ # Status Indicators
+ "FStatus" = "Overfishing",
+ "BStatus" = "Overfished",
+
+ # Greek Letters
+ "rho" = "ρ",
+ "alpha" = "α",
+ "beta" = "β",
+ "gamma" = "γ",
+ "delta" = "δ",
+ "epsilon" = "ε",
+ "mu" = "μ",
+ "lambda" = "λ",
+ "sigma" = "σ",
+ "rhoSSB" = "SSBρ",
+ "rhoF" = "Fρ",
+ "rhoB" = "Bρ",
+
+ # Structural and Headers
+ "sosHead" = "State of Stock: ",
+ "ProjHead" = "Projections: ",
+ "SpecComHead" = "Special Comments: ",
+ "RefHead" = "References: ",
+ "item" = "• ",
+ "lbreak" = "
",
+ "mskip" = "
",
+ "indent" = " "
+ )
+
+ # Static lookup
+ if (clean_key %in% names(translation_map)) {
+ return(translation_map[[clean_key]])
+ }
+
+ res <- val
+
+ # Handle fractions before stripping braces
+ #res <- gsub("\\\\frac\\{([^\\}]+)\\}\\{([^\\}]+)\\}", "\\1⁄\\2", res)
+ # More aggressive fraction handler for Surfclam ratios
+ res <- gsub("\\\\frac\\s*\\{([^\\}]+)\\}\\s*\\{([^\\}]+)\\}", "\\1⁄\\2", res)
+
+ # Proxy logic
+ res <- gsub("\\\\(F|SSB|B|E|R)_?\\{?MSY\\}?pr(\\.|oxy)?", "\\1MSY proxy", res)
+ res <- gsub("\\\\(F|SSB|B|E|R)MSYpr(\\.|oxy)?", "\\1MSY proxy", res)
+
+ # Standard fishery commands
+ res <- gsub("\\\\FMSY", "FMSY", res)
+ res <- gsub("\\\\SSBMSY", "SSBMSY", res)
+ res <- gsub("\\\\FFull", "FFull", res)
+
+ # Structural cleanup
+ res <- gsub("\\\\ref\\{[^\\}]+\\}", "", res)
+ res <- gsub("\\\\label\\{[^\\}]+\\}", "", res)
+ res <- gsub("\\\\begin\\{center\\}|\\\\end\\{center\\}", "", res)
+ res <- gsub("\\\\textbf\\{([^\\}]+)\\}", "\\1", res)
+ res <- gsub("\\\\textit\\{([^\\}]+)\\}", "\\1", res)
+
+ # Line breaks
+ res <- gsub("\\\\\\\\(?![a-zA-Z])", "
", res, perl = TRUE)
+
+ # Remove remaining LaTeX symbols
+ res <- gsub("\\$", "", res)
+ res <- gsub("\\\\", "", res)
+ res <- gsub("\\{", "", res)
+ res <- gsub("\\}", "", res)
+
+ # Whitespace normalization
+ res <- gsub("\\s+", " ", res)
+
+ return(trimws(res))
+ }, character(1), USE.NAMES = FALSE)
+}
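+
+# Illustrative calls (a sketch; results follow from the translation_map above):
+#clean_assessment_latex("\\SSBMSY")                 # dictionary hit -> "SSBMSY"
+#clean_assessment_latex("\\textbf{State of Stock}") # strips \textbf{} -> "State of Stock"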
+
+#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+
+
+# A converter that preserves HTML logic but satisfies the LaTeX compiler for the PDF caption
+# annoying to have to go both directions, but here we are.
+
+#' Create Unicode Subscripts
+#' @param text Standard text (e.g., "40")
+#' @return Unicode subscript version (e.g., "₄₀")
+make_subscript <- function(text) {
+ chars <- c("0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "a", "e", "h", "i", "j", "k", "l", "m", "n", "o", "p", "r", "s", "t", "u", "v", "x")
+  subs <- c("₀", "₁", "₂", "₃", "₄", "₅", "₆", "₇", "₈", "₉", "ₐ", "ₑ", "ₕ", "ᵢ", "ⱼ", "ₖ", "ₗ", "ₘ", "ₙ", "ₒ", "ₚ", "ᵣ", "ₛ", "ₜ", "ᵤ", "ᵥ", "ₓ")
+
+ utf8_vec <- strsplit(as.character(text), "")[[1]]
+ res <- vapply(utf8_vec, function(char) {
+ idx <- match(tolower(char), chars)
+ if (!is.na(idx)) subs[idx] else char
+ }, character(1))
+
+ return(paste(res, collapse = ""))
+}
+
+# Example usage:
+# make_subscript("40") -> "₄₀"
+
+#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+#' Convert Legacy LaTeX and HTML to Clean Markdown
+#'
+#' This function bridges legacy assessment report strings into a format
+#' compatible with Quarto rendering by converting LaTeX/HTML to Markdown.
+#'
+#' @param x Character vector of strings to be cleaned.
+#'
+#' @return A character vector of cleaned strings using Markdown syntax.
+#' @export
+to_latex_caption <- function(x) {
+ if (is.null(x)) return(NULL)
+
+ vapply(x, function(val) {
+ val <- as.character(val)
+ if (is.na(val) || val == "") return(val)
+
+ # Extract clean key to match legacy logic
+ clean_key <- gsub("\\\\|\\.tx|\\.ref", "", val)
+
+ # Established Dictionary: Legacy LaTeX keys mapped to Markdown
+ translation_map <- c(
+ # Spawning Biomass / Reference Points
+ "SSBMSY" = "*SSB*~MSY~",
+ "SSBMSYproxy" = "*SSB*~MSY~ *proxy*",
+ "FMSY" = "*F*~MSY~",
+ "FMSYproxy" = "*F*~MSY~ *proxy*",
+ "FMSYpr" = "*F*~MSY~ *proxy*",
+ "EMSY" = "*E*~MSY~",
+ "EMSYproxy" = "*E*~MSY~ *proxy*",
+ "RMSY" = "*R*~MSY~",
+ "RMSYproxy" = "*R*~MSY~ *proxy*",
+ "BMSY" = "*B*~MSY~",
+ "BMSYproxy" = "*B*~MSY~ *proxy*",
+ "SSB0" = "*SSB*~0~",
+ "SSB40" = "*SSB*~40%~",
+
+ # SPR Reference Points
+ "B30SPR" = "*B*~30%SPR~", "B35SPR" = "*B*~35%SPR~", "B40SPR" = "*B*~40%SPR~",
+ "B45SPR" = "*B*~45%SPR~", "B50SPR" = "*B*~50%SPR~",
+ "F30SPR" = "*F*~30%SPR~", "F35SPR" = "*F*~35%SPR~", "F40SPR" = "*F*~40%SPR~",
+ "F45SPR" = "*F*~45%SPR~", "F50SPR" = "*F*~50%SPR~",
+ "F30" = "*F*~30%~", "F35" = "*F*~35%~", "F40" = "*F*~40%~", "F45" = "*F*~45%~", "F50" = "*F*~50%~",
+ "E30" = "*E*~30%~", "E35" = "*E*~35%~", "E40" = "*E*~40%~", "E45" = "*E*~45%~", "E50" = "*E*~50%~",
+
+ # Mortality / Exploitation / Averages
+ "FFull" = "*F*~Full~",
+ "Favg5to7" = "Fฬ~5-7~",
+ "Fbar" = "Fฬ",
+ "Ftarg" = "*F*~Target~",
+ "Fthresh" = "*F*~Threshold~",
+ "Frebuild" = "*F*~Rebuild~",
+ "F0.1" = "*F*~0.1~",
+ "EFull" = "*E*~Full~",
+ "Etarg" = "*E*~Target~",
+ "Ethresh" = "*E*~Threshold~",
+ "Erebuild" = "*E*~Rebuild~",
+ "Ebar" = "Eฬ",
+ "SSBbar" = "SSBฬ",
+ "SSBtarg" = "*SSB*~Target~",
+ "SSBthresh"= "*SSB*~Threshold~",
+ "Btarg" = "*B*~Target~",
+ "Bthresh" = "*B*~Threshold~",
+
+ # Ratios / Fractions
+ "Fratio" = "*F*/*F*~Threshold~",
+ "Bratio" = "*B*/*B*~Threshold~",
+ "Rratio" = "*R*/*R*~0~",
+ "SSBratio" = "*SSB*/*SSB*~Threshold~",
+ "half" = "1/2",
+ "dfrac12" = "1/2",
+
+ # Status / Greek
+ "FStatus" = "*Overfishing*",
+ "BStatus" = "*Overfished*",
+ "rho" = "ฯ", "alpha" = "ฮฑ", "beta" = "ฮฒ", "gamma" = "ฮณ", "delta" = "ฮด",
+ "rhoSSB" = "*SSB*~ฯ~", "rhoF" = "*F*~ฯ~", "rhoB" = "*B*~ฯ~",
+ "Linf" = "*L*~โ~", "k" = "*k*",
+
+ # Structural
+ "sosHead" = "**State of Stock:** ",
+ "ProjHead" = "**Projections:** ",
+ "SpecComHead" = "**Special Comments:** ",
+ "RefHead" = "**References:** ",
+ "item" = "* ",
+ "lbreak" = "\n\n",
+ "mskip" = "\n\n",
+ "indent" = " "
+ )
+
+ # Priority Match: Use exact dictionary hit first
+ if (clean_key %in% names(translation_map)) {
+ return(translation_map[[clean_key]])
+ }
+
+ res <- val
+
+ # Clean up italics formatting for terms like proxy
+ res <- stringr::str_replace_all(res, "\\\\textit\\{([^\\}]+)\\}", "*\\1*")
+
+ # Convert remaining LaTeX subscripts to Markdown tildes
+ # This targets patterns like _{40%} or _MSY
+ res <- stringr::str_replace_all(res, "_\\{([^\\}]+)\\}", "~\\1~")
+ res <- stringr::str_replace_all(res, "_([0-9a-zA-Z%]+)", "~\\1~")
+
+ # Standardize HTML formatting to Markdown
+    res <- res |>
+      stringr::str_replace_all("<i>|</i>", "*") |>
+      stringr::str_replace_all("<b>|</b>", "**") |>
+      stringr::str_replace_all("<sub>", "~") |>
+      stringr::str_replace_all("</sub>", "~")
+
+ # Final strip of LaTeX symbols and braces
+ res <- res |>
+ stringr::str_replace_all("\\\\+FFull", "*F*~Full~") |>
+ stringr::str_replace_all("\\\\+SSBMSY", "*SSB*~MSY~") |>
+ # Remove ALL remaining backslashes (no matter how many)
+ stringr::str_replace_all("\\\\+", "") |>
+ # Remove empty braces and math signs
+ stringr::str_replace_all("\\$|\\{\\}", "") |>
+ # Remove stray braces
+ stringr::str_replace_all("\\{|\\}", "") |>
+ stringr::str_replace_all("d?frac12", "1/2")
+
+ res <- res |>
+ stringr::str_replace_all("\\n", " ") |> # Remove newlines
+ stringr::str_replace_all("\\s+", " ") |> # Collapse multiple spaces
+ stringr::str_replace_all(":::", "") # Explicitly nuke any triple colons
+
+
+ res <- trimws(res)
+ return(if(res == "") val else res)
+ }, character(1), USE.NAMES = FALSE)
+}
\ No newline at end of file
diff --git a/resources/ExportLegacyNEFSCtoASARproject/TestMoreStocksReportOnly.R b/resources/ExportLegacyNEFSCtoASARproject/TestMoreStocksReportOnly.R
new file mode 100644
index 0000000..9899402
--- /dev/null
+++ b/resources/ExportLegacyNEFSCtoASARproject/TestMoreStocksReportOnly.R
@@ -0,0 +1,35 @@
+#Additional Test Stocks
+myDir <- "~/Stock Assessment Workflow/NEFSC transition/ExportLegacyNEFSCtoASARproject" #change to your local project path
+sourcePath <- file.path(myDir) # ,"EIEIO","ASAR","MapLegacyAutoUpdateToAsar"
+source(file.path(sourcePath,"MapAutoUpdateToAsar.R"))
+source(file.path(sourcePath,"create_asar_object.R"))
+
+AutoAss <- "BSBUNITAutoAss.RData"
+rdata_path <- file.path(myDir
+ ,"testStocks",AutoAss)
+
+#make a report!
+# Load the existing Source of Truth
+stock <- create_asar_object(map_autoUpdate_to_stockplotr(rdata_path))
+
+# Check if everything is correct for use in stockplotr
+if(validate_asar_object(stock)) {
+ model_results <- stock
+ metadata <- attr(stock, "metadata")
+ fname <- gsub(" ","_",metadata$spp_name) |> paste0(metadata$report_yr) |> paste0(".rda")
+ save(model_results, file = fname)
+}
+
+# Create a new assessment directory and template
+asar::create_template(
+ office = "NEFSC",
+ format = "pdf",
+ region = "Northeast",
+ authors = c("Joe Blow"="NEFSC"),
+ species = metadata$spp_name,
+ year = metadata$report_yr,
+ model_results = fname,
+ title = glue::glue("Management Track Assessment of {metadata$spp_name} {metadata$report_yr}"),
+  # Pass the execution rules here; note that report_options is assumed to be
+  # defined before this call (it is not created in this script)
+  quarto_options = report_options
+)
diff --git a/resources/ExportLegacyNEFSCtoASARproject/create_asar_object.R b/resources/ExportLegacyNEFSCtoASARproject/create_asar_object.R
new file mode 100644
index 0000000..42ff797
--- /dev/null
+++ b/resources/ExportLegacyNEFSCtoASARproject/create_asar_object.R
@@ -0,0 +1,197 @@
+# Constructor for a standardized ASAR stock object
+# This bakes in the data, metadata, and BRP sidecar into a single RDS-ready unit
+create_asar_object <- function(main_data,
+ metadata = NULL,
+ legacy_brp = NULL,
+ stock_id = NULL) {
+
+ # Ensure the primary data is a clean tibble
+ # This facilitates compatibility with dplyr and stockplotr
+ asar_obj <- tibble::as_tibble(main_data)
+
+ # If metadata isn't provided, we check if it exists as an attribute
+ # This allows the function to 'refresh' an existing object
+ if (is.null(metadata)) {
+ metadata <- attr(main_data, "metadata")
+ }
+
+ if (is.null(legacy_brp)) {
+ legacy_brp <- attr(main_data, "legacy_brp")
+ }
+
+ # Add a stock_id to the metadata for easier report indexing
+ if (!is.null(stock_id)) {
+ metadata$stock_id <- stock_id
+ }
+
+ # Attach the Source of Truth attributes
+ # These are the 'glue' that table_brp() and table_catch_status() rely on
+ attr(asar_obj, "metadata") <- metadata
+ attr(asar_obj, "legacy_brp") <- legacy_brp
+
+ # Assign the custom class for S3 method dispatching in ASAR
+ class(asar_obj) <- c("asar_stock", class(asar_obj))
+
+ return(asar_obj)
+}
+#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+# # Add a new row of data to the object while maintaining structure
+# # This version handles the 'dots' safely to avoid duplicated column errors
+# add_stock_data <- function(dat, module, label, year, estimate, ...) {
+#
+# # Capture additional arguments from dots
+# extra_args <- list(...)
+#
+# # Controlled Vocabulary Check
+# valid_labels <- c("biomass", "total_catch", "fishing_mortality", "recruitment", "index")
+# if (!(label %in% valid_labels)) {
+# warning(paste0("'", label, "' is not in the standard vocabulary (",
+# paste(valid_labels, collapse=", "), "). Tables may not render correctly."))
+# }
+#
+# # Define baseline values for a new row
+# # We use a list first so we can check for user-supplied overrides in extra_args
+# row_list <- list(
+# module_name = module,
+# label = label,
+# year = year,
+# estimate = estimate,
+# estimate_chr = as.character(estimate),
+# era = "time"
+# )
+#
+# # Merge extra_args into our row_list
+# # If the user provided 'era' or 'estimate_chr', their values will overwrite the defaults
+# for (n in names(extra_args)) {
+# row_list[[n]] <- extra_args[[n]]
+# }
+#
+# # Convert the finalized list to a single-row tibble
+# new_row <- tibble::as_tibble(row_list)
+#
+# # Save current metadata and class before merging
+# # This ensures the 'Sidecar' data stays attached to the main data frame
+# meta_tmp <- attr(dat, "metadata")
+# brp_tmp <- attr(dat, "legacy_brp")
+# cls_tmp <- class(dat)
+#
+# # Use bind_rows to merge the new data
+# # Missing columns (like age, sex, etc.) will be filled with NA automatically
+# updated_dat <- dplyr::bind_rows(dat, new_row)
+#
+# # Restore the metadata and original object class
+# attr(updated_dat, "metadata") <- meta_tmp
+# attr(updated_dat, "legacy_brp") <- brp_tmp
+# class(updated_dat) <- cls_tmp
+#
+# return(updated_dat)
+# }
+#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+# Add a new row of data to the object while maintaining the 33-column structure
+# This is now compatible with the 'estimate_chr' column used for Surfclam/Butterfish
+add_stock_data <- function(dat, module, label, year, estimate, ...) {
+
+ # Controlled Vocabulary Check for core metrics
+ valid_labels <- c("biomass", "total_catch", "fishing_mortality", "recruitment", "index")
+ if (!(label %in% valid_labels)) {
+ warning(paste0("'", label, "' is not in the standard vocabulary (",
+ paste(valid_labels, collapse=", "), "). Tables may not render correctly."))
+ }
+
+  # Create the new observation with standard structure
+  # We include estimate_chr to match the mapper's output
+  # Build it as a list first so values supplied through ... (e.g. era or
+  # estimate_chr) override the defaults instead of raising duplicate-column errors
+  row_list <- list(
+    module_name = module,
+    label = label,
+    year = year,
+    estimate = estimate,
+    estimate_chr = as.character(estimate), # Default string version of the estimate
+    era = "time" # Default to time series era
+  )
+  extra_args <- list(...)
+  for (n in names(extra_args)) {
+    row_list[[n]] <- extra_args[[n]]
+  }
+  new_row <- tibble::as_tibble(row_list)
+
+ # Save attributes and class before binding
+ # dplyr::bind_rows can be aggressive with stripping custom metadata
+ meta_tmp <- attr(dat, "metadata")
+ brp_tmp <- attr(dat, "legacy_brp")
+ cls_tmp <- class(dat)
+
+ # Use rows_append or bind_rows to merge the new data
+ # This ensures the new row gets NAs for all the columns not explicitly named (like bio_pattern)
+ updated_dat <- dplyr::bind_rows(dat, new_row)
+
+ # Restore the metadata and legacy sidecar attributes
+ attr(updated_dat, "metadata") <- meta_tmp
+ attr(updated_dat, "legacy_brp") <- brp_tmp
+ class(updated_dat) <- cls_tmp
+
+ return(updated_dat)
+}
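+
+# Hypothetical usage (values invented for illustration), assuming `stock` is an
+# object produced by create_asar_object():
+# stock <- add_stock_data(stock, module = "model_results", label = "biomass",
+#                         year = 2024, estimate = 12345)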
+
+#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+# Pre-flight check to ensure the object is ready for ASAR reporting
+validate_asar_object <- function(dat) {
+
+ issues <- c()
+
+ # Check 1: Structure and Class
+ if (!inherits(dat, "asar_stock")) {
+ issues <- c(issues, "Object is missing 'asar_stock' class. Use create_asar_object().")
+ }
+
+ # Check 2: Required Columns
+  req_cols <- c("module_name", "label", "year", "estimate", "uncertainty")
+ missing_cols <- setdiff(req_cols, colnames(dat))
+ if (length(missing_cols) > 0) {
+ issues <- c(issues, paste("Missing required data columns:", paste(missing_cols, collapse = ", ")))
+ }
+
+ # Check 3: Metadata Integrity
+ meta <- attr(dat, "metadata")
+ if (is.null(meta)) {
+ issues <- c(issues, "Metadata attribute is missing.")
+ } else {
+ req_meta <- c("spp_name", "report_yr", "cap_brp", "cap_status", "cap_proj")
+ missing_meta <- setdiff(req_meta, names(meta))
+ if (length(missing_meta) > 0) {
+ issues <- c(issues, paste("Missing metadata fields:", paste(missing_meta, collapse = ", ")))
+ }
+ }
+
+ # Check 4: BRP Sidecar
+ if (is.null(attr(dat, "legacy_brp"))) {
+ issues <- c(issues, "Legacy BRP sidecar is missing. BRP tables will not render.")
+ }
+
+ # Check 5: Comprehensive Vocabulary Check
+ # This list now includes the reference point labels generated by the mapping function
+ valid_labels <- c(
+ "biomass", "total_catch", "fishing_mortality", "recruitment", "index",
+ "biomass_msy", "biomass_target", "biomass_threshold",
+ "fishing_mortality_msy", "fishing_mortality_target", "fishing_mortality_threshold",
+ "msy", "median_recruits"
+ )
+
+ found_labels <- unique(dat$label)
+ invalid_labels <- setdiff(found_labels, valid_labels)
+ invalid_labels <- invalid_labels[!is.na(invalid_labels)]
+
+ if (length(invalid_labels) > 0) {
+ issues <- c(issues, paste("Non-standard labels found:", paste(invalid_labels, collapse = ", ")))
+ }
+
+  # Final Reporting
+  obj_name <- deparse(substitute(dat))
+  if (length(issues) == 0) {
+    cli::cli_alert_success("Object '{obj_name}' passed all ASAR validation checks.")
+    return(TRUE)
+ } else {
+ cli::cli_alert_danger("Validation failed with {length(issues)} issue(s):")
+ bullet_list <- setNames(issues, rep("*", length(issues)))
+ cli::cli_bullets(bullet_list)
+ return(FALSE)
+ }
+}
+
+
+
+
diff --git a/resources/ExportLegacyNEFSCtoASARproject/plot_survey_indices.R b/resources/ExportLegacyNEFSCtoASARproject/plot_survey_indices.R
new file mode 100644
index 0000000..567a69a
--- /dev/null
+++ b/resources/ExportLegacyNEFSCtoASARproject/plot_survey_indices.R
@@ -0,0 +1,138 @@
+#' Plot Survey Abundance Indices
+#'
+#' @param dat A standardized asar_stock object containing survey module data.
+#' @param unit_label Character string for the Y-axis units (e.g., "kg", "mt").
+#' @param geom Character string defining the primary geometry type for stockplotr filtering.
+#' @param group Column name used to group/color the surveys (default is "fleet").
+#' @param facet Column name used to panel the plot (default is "fleet").
+#' @param era Filter for specific time periods (e.g., "time", "fore").
+#' @param scale_amount Numeric multiplier for scaling Y-axis values (e.g., 1000).
+#' @param module The data module to extract. Default is "survey".
+#' @param interactive Logical; if TRUE, prepares data for plotly compatibility.
+#' @param make_rda Logical; if TRUE, saves plot and data as an .rda file.
+#' @param figures_dir Character string of the directory path for saving .rda files.
+#' @param show_line Logical; toggles the interpolation line between survey points.
+#' @param line_type Character string defining the style of the interpolation line.
+#' @param ... Additional arguments passed to stockplotr::filter_data.
+#'
+#' @return A ggplot2 object with adaptive uncertainty layers (ribbons for annual data,
+#' error bars for sparse data).
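+#'
+#' @examples
+#' \dontrun{
+#' # Hypothetical usage; assumes `stock` is a standardized asar_stock object
+#' plot_survey_indices(stock, unit_label = "kg")
+#' # Hide the interpolation line between survey points
+#' plot_survey_indices(stock, show_line = FALSE)
+#' }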
+#' @export
+plot_survey_indices <- function(dat,
+ unit_label = "kg",
+ geom = "line",
+ group = "fleet",
+ facet = "fleet",
+ era = NULL,
+ scale_amount = 1,
+ module = "survey",
+ interactive = FALSE,
+ make_rda = FALSE,
+ figures_dir = getwd(),
+ show_line = TRUE,
+ line_type = "solid",
+ ...) {
+
+ # Filter data for 'index' labels using stockplotr core
+ prepared_data <- stockplotr::filter_data(
+ dat = dat,
+ label_name = "index",
+ geom = geom,
+ era = era,
+ group = group,
+ facet = facet,
+ module = module,
+ scale_amount = scale_amount,
+ interactive = interactive
+ )
+
+ if (nrow(prepared_data) == 0) {
+ stop("No data found for label 'index' in the survey module.")
+ }
+
+ # Prepare data and determine sampling continuity per fleet
+ p_dat_final <- prepared_data |>
+ dplyr::mutate(
+ year = as.numeric(year),
+ estimate = as.numeric(estimate),
+ cv_val = as.numeric(uncertainty)
+ ) |>
+ dplyr::filter(!is.na(estimate)) |>
+ dplyr::group_by(!!ggplot2::sym(group)) |>
+ dplyr::mutate(
+      # Identify if survey is annual (gap <= 1) or sparse (gap > 1);
+      # single-observation fleets have no gaps, so treat them as annual
+      max_gap = if (dplyr::n() > 1) max(diff(sort(year)), na.rm = TRUE) else 0,
+      is_consecutive = max_gap <= 1,
+ has_cv = !is.na(cv_val) & cv_val > 0,
+ sdlog = ifelse(has_cv, sqrt(log(cv_val^2 + 1)), NA),
+ lci = ifelse(has_cv, estimate * exp(-1.645 * sdlog), NA),
+ uci = ifelse(has_cv, estimate * exp(1.645 * sdlog), NA),
+ group_var = !!ggplot2::sym(group)
+ ) |>
+ dplyr::ungroup()
+
+ # Initialize plot
+ plt <- ggplot2::ggplot(p_dat_final, ggplot2::aes(x = year, y = estimate))
+
+ # Shaded ribbons for consecutive (annual) data
+ plt <- plt + ggplot2::geom_ribbon(
+ data = subset(p_dat_final, is_consecutive == TRUE),
+ ggplot2::aes(ymin = lci, ymax = uci, fill = group_var),
+ alpha = 0.3,
+ na.rm = TRUE
+ )
+
+ # Discrete error bars for sparse (periodic) data
+ plt <- plt + ggplot2::geom_errorbar(
+ data = subset(p_dat_final, is_consecutive == FALSE),
+ ggplot2::aes(ymin = lci, ymax = uci, color = group_var),
+ width = 0.5,
+ alpha = 0.8,
+ na.rm = TRUE
+ )
+
+ # Conditional interpolation line
+ if (show_line) {
+ plt <- plt +
+ ggplot2::geom_line(
+ ggplot2::aes(color = group_var, linetype = is_consecutive),
+ linewidth = 0.8,
+ alpha = 0.6
+ ) +
+ # Use solid for annual, dashed for sparse to visually signal interpolation
+ ggplot2::scale_linetype_manual(values = c("TRUE" = "solid", "FALSE" = "dashed"), guide = "none")
+ }
+
+ # Plot actual observed points
+ plt <- plt + ggplot2::geom_point(ggplot2::aes(color = group_var), size = 2)
+
+ # Faceting logic
+ if (!is.null(facet)) {
+ plt <- plt + ggplot2::facet_wrap(
+ ggplot2::vars(!!ggplot2::sym(facet)),
+ scales = "free_y",
+ ncol = 2
+ )
+ }
+
+ # Final styling and metadata attribution
+ plt <- plt +
+ stockplotr::theme_noaa() +
+ ggplot2::scale_y_continuous(
+ expand = ggplot2::expansion(mult = c(0, 0.1)),
+ limits = c(0, NA)
+ ) +
+    ggplot2::labs(
+      y = paste0("Index (", unit_label, ")"),
+      x = "Year",
+      color = "Survey",
+      fill = "Survey"
+      # caption = attr(dat, "metadata")$cap_surv
+    ) +
+    ggplot2::theme(
+      legend.position = "bottom"
+      # plot.caption = ggplot2::element_text(hjust = 0, size = 8)
+    )
+
+  # Optional artifact generation (mirrors plot_total_catch);
+  # "survey_indices" is a local topic_label choice
+  if (make_rda) {
+    stockplotr::create_rda(
+      object = plt, topic_label = "survey_indices", fig_or_table = "figure",
+      dat = dat, dir = figures_dir, scale_amount = scale_amount, unit_label = unit_label
+    )
+  }
+
+  return(plt)
+}
\ No newline at end of file
diff --git a/resources/ExportLegacyNEFSCtoASARproject/plot_total_catch.R b/resources/ExportLegacyNEFSCtoASARproject/plot_total_catch.R
new file mode 100644
index 0000000..6bc5c10
--- /dev/null
+++ b/resources/ExportLegacyNEFSCtoASARproject/plot_total_catch.R
@@ -0,0 +1,115 @@
+#' Plot Total Catch by Fleet
+#'
+#' @param dat A standardized asar_stock object containing catch module data.
+#' @param unit_label Character string for the Y-axis units (e.g., "metric tons", "kg").
+#' @param type Character string defining the plot type: "line" (default), "bar", or "histogram".
+#' @param group Column name used to group/color the data (default is "fleet").
+#' @param facet Column name used to panel the plot (default is NULL).
+#' @param era Filter for specific time periods (e.g., "time", "fore").
+#' @param scale_amount Numeric multiplier for scaling Y-axis values (e.g., 1000).
+#' @param module The data module to extract. Default is "catch".
+#' @param interactive Logical; if TRUE, prepares data for plotly compatibility.
+#' @param make_rda Logical; if TRUE, saves plot and data as an .rda file.
+#' @param figures_dir Character string of the directory path for saving .rda files.
+#' @param ... Additional arguments passed to stockplotr::filter_data.
+#'
+#' @return A ggplot2 object showing total catch. If type is "bar", the function
+#' automatically filters out 'Total' fleet rows to ensure proper stacking.
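+#'
+#' @examples
+#' \dontrun{
+#' # Hypothetical usage; assumes `stock` is a standardized asar_stock object
+#' plot_total_catch(stock, unit_label = "metric tons")
+#' # Legacy-style stacked bars, scaled to thousands
+#' plot_total_catch(stock, type = "bar", scale_amount = 1000)
+#' }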
+#' @export
+plot_total_catch <- function(dat,
+ unit_label = "metric tons",
+ type = "line",
+ group = "fleet",
+ facet = NULL,
+ era = NULL,
+ scale_amount = 1,
+ module = "catch",
+ interactive = FALSE,
+ make_rda = FALSE,
+ figures_dir = getwd(),
+ ...) {
+
+ # Determine magnitude label based on scale_amount
+ magnitude_text <- if (scale_amount == 1000) {
+ " (000s "
+ } else if (scale_amount == 1e+06) {
+ " (millions "
+ } else {
+ " ("
+ }
+
+ catch_label <- paste0("Total Catch", magnitude_text, unit_label, ")")
+
+ # Filter data using stockplotr core logic
+ prepared_data <- stockplotr::filter_data(
+ dat = dat,
+ label_name = "total_catch",
+ geom = "line",
+ era = era,
+ group = group,
+ facet = facet,
+ module = module,
+ scale_amount = scale_amount,
+ interactive = interactive
+ )
+
+ # Ensure numeric types for plotting
+ prepared_data$year <- as.numeric(prepared_data$year)
+ prepared_data$estimate <- as.numeric(prepared_data$estimate)
+
+ # Logic to prevent double-counting in stacked bars
+ if (type %in% c("bar", "histogram")) {
+ prepared_data <- prepared_data |>
+ dplyr::filter(!tolower(fleet) %in% c("total", "all"))
+ }
+
+ # Process and aggregate data through stockplotr
+ processed_data <- stockplotr::process_data(
+ dat = prepared_data,
+ group = group,
+ facet = facet,
+ method = "sum"
+ )
+
+ p_dat <- processed_data[[1]]
+ g_var <- processed_data[[2]]
+ f_var <- processed_data[[3]]
+
+ plt <- ggplot2::ggplot(p_dat, ggplot2::aes(x = year, y = estimate))
+
+ # Render appropriate geometry based on type
+ if (type %in% c("bar", "histogram")) {
+ plt <- plt +
+ ggplot2::geom_col(ggplot2::aes(fill = !!ggplot2::sym(g_var)), position = "stack") +
+ ggplot2::labs(fill = "Fleet")
+ } else {
+ plt <- plt +
+ ggplot2::geom_line(ggplot2::aes(color = !!ggplot2::sym(g_var)), linewidth = 1) +
+ ggplot2::labs(color = "Fleet")
+ }
+
+ # Faceting if specified
+ if (!is.null(f_var)) {
+ plt <- plt + ggplot2::facet_wrap(ggplot2::vars(!!ggplot2::sym(f_var)))
+ }
+
+ # Final styling
+ plt <- plt +
+ stockplotr::theme_noaa() +
+ ggplot2::labs(y = catch_label, x = "Year")
+
+ # Remove legend if only one group exists
+ if (length(unique(p_dat[[g_var]])) == 1) {
+ plt <- plt + ggplot2::theme(legend.position = "none")
+ }
+
+ # Artifact generation
+ if (make_rda) {
+ stockplotr::create_rda(
+ object = plt, topic_label = "total_catch", fig_or_table = "figure",
+ dat = dat, dir = figures_dir, scale_amount = scale_amount, unit_label = unit_label
+ )
+ }
+
+ return(plt)
+}
\ No newline at end of file
diff --git a/resources/ExportLegacyNEFSCtoASARproject/table_brp1.1.R b/resources/ExportLegacyNEFSCtoASARproject/table_brp1.1.R
new file mode 100644
index 0000000..58ed436
--- /dev/null
+++ b/resources/ExportLegacyNEFSCtoASARproject/table_brp1.1.R
@@ -0,0 +1,183 @@
+#' Create Biological Reference Point (BRP) Table
+#'
+#' @param dat A standardized asar_stock object.
+#' @param stock_name Character; display name for the stock.
+#' @param caption Character; table subtitle/caption.
+#' @param current_year Numeric; year of the current assessment.
+#' @param previous_year Numeric; year of the previous assessment.
+#' @param label Character; internal ID for the table.
+#' @param status_f_current Character; optional explicit override for overfishing status.
+#' @param status_ssb_current Character; optional explicit override for overfished status.
+#'
+#' @return A formatted gt table object.
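+#'
+#' @examples
+#' \dontrun{
+#' # Hypothetical usage; assumes `stock` is a standardized asar_stock object
+#' table_brp(stock)
+#' # Explicitly override status when the automated determination is not possible
+#' table_brp(stock, status_f_current = "Unknown", status_ssb_current = "Unknown")
+#' }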
+#' @export
+table_brp <- function(dat,
+ stock_name = attr(dat, "metadata")$spp_name,
+ caption = attr(dat, "metadata")$cap_brp,
+ current_year = attr(dat, "metadata")$report_yr,
+ previous_year = attr(dat, "metadata")$last_ass,
+ label = "BRP_Table",
+ status_f_current = NULL,
+ status_ssb_current = NULL) {
+
+ meta <- attr(dat, "metadata")
+ sidecar <- attr(dat, "legacy_brp")
+
+ is_latex <- !is.null(knitr::opts_knit$get("rmarkdown.pandoc.to")) &&
+ knitr::opts_knit$get("rmarkdown.pandoc.to") == "latex"
+
+ display_prev_yr <- if(is.null(previous_year) || is.na(previous_year)) "Previous" else as.character(previous_year)
+
+ # Helper to find values in the sidecar
+ get_val <- function(lab, target_era) {
+ if (is.null(sidecar)) return("")
+ val <- sidecar |>
+ dplyr::filter(label == lab, era == target_era) |>
+ dplyr::pull(val)
+ return(if(length(val) > 0) val[1] else "")
+ }
+
+ # Standardize status strings
+ get_prev_status <- function(val) {
+ if (is.null(val) || is.na(val) || val == "" || tolower(val) == "unknown") return("Unknown")
+ clean_val <- tolower(val)
+ if (grepl("not", clean_val)) {
+ return(if(grepl("overfishing", clean_val)) "Not Overfishing" else "Not Overfished")
+ }
+ if (grepl("overfishing", clean_val)) return("Overfishing")
+ if (grepl("overfished", clean_val)) return("Overfished")
+ return(val)
+ }
+
+ # Logical status check with NULL protection
+ check_status <- function(type = c("f", "ssb")) {
+ type <- match.arg(type)
+ manual_arg <- if(type == "f") status_f_current else status_ssb_current
+ if (!is.null(manual_arg)) return(manual_arg)
+
+ meta_val <- if(type == "f") meta$status_f_now else meta$status_ssb_now
+ if (!is.null(meta_val) && !is.na(meta_val) && meta_val != "" && tolower(meta_val) != "unknown") {
+ return(get_prev_status(meta_val))
+ }
+
+ threshold_label <- if(type == "f") "fishing_mortality_threshold" else "biomass_threshold"
+ model_label <- if(type == "f") "fishing_mortality" else "biomass"
+
+ thresh_val <- dat |>
+ dplyr::filter(module_name == "reference_points", label == threshold_label, era == "time") |>
+ dplyr::pull(estimate) |> as.numeric()
+
+ curr_val <- dat |>
+ dplyr::filter(module_name == "model_results", label == model_label, era == "time") |>
+ dplyr::filter(year == max(year, na.rm = TRUE)) |>
+ dplyr::pull(estimate) |> as.numeric()
+
+ if (length(curr_val) == 0 || length(thresh_val) == 0 || is.na(curr_val) || is.na(thresh_val)) return("Unknown")
+
+ if (type == "f") {
+ return(if(curr_val > thresh_val) "Overfishing" else "Not Overfishing")
+ } else {
+ return(if(curr_val < thresh_val) "Overfished" else "Not Overfished")
+ }
+ }
+
+ # NULL-safe label cleaning
+ safe_clean <- function(x, fallback) {
+ input <- if(is.null(x)) fallback else x
+ if (is_latex) {
+ res <- to_latex_caption(input)
+ } else {
+ res <- clean_assessment_latex(input)
+ }
+ return(if (is.null(res) || length(res) == 0) fallback else res)
+ }
+
+ f_label <- safe_clean(meta$f_name, "F")
+ ssb_label <- safe_clean(meta$ssb_name, "SSB")
+ msy_label <- safe_clean(meta$msy_name, "MSY")
+
+ df <- data.frame(
+ Metric = c(f_label, ssb_label, msy_label, "Overfishing", "Overfished"),
+ Previous = c(
+ get_val("fishing_mortality_msy", "prev"),
+ get_val("biomass_msy", "prev"),
+ get_val("msy", "prev"),
+ get_prev_status(meta$status_f_old),
+ get_prev_status(meta$status_ssb_old)
+ ),
+ Current = c(
+ get_val("fishing_mortality_msy", "current"),
+ get_val("biomass_msy", "current"),
+ get_val("msy", "current"),
+ check_status("f"),
+ check_status("ssb")
+ )
+ )
+
+ df <- df |> dplyr::filter(!(Metric == "" & Current %in% c("Unknown", "", NA)))
+
+ final_table <- df |>
+ gt::gt(id = label) |>
+ gt::fmt_markdown(columns = Metric) |>
+ gt::tab_style(
+ style = gt::cell_fill(color = "#CCCCCC"),
+ locations = gt::cells_body(columns = Previous)
+ ) |>
+ gt::cols_label(
+ Metric = "",
+ Previous = display_prev_yr,
+ Current = as.character(current_year %||% "Current")
+ ) |>
+ gt::tab_header(
+ title = if (is_latex) NULL else stock_name,
+ subtitle = if (is_latex) gt::md(to_latex_caption(caption)) else gt::html(clean_assessment_latex(caption))
+ ) |>
+ gt::tab_style(
+ style = gt::cell_text(color = "red", weight = "bold"),
+ locations = gt::cells_body(columns = Current, rows = Current %in% c("Overfishing", "Overfished"))
+ ) |>
+ gt::tab_style(
+ style = gt::cell_text(color = "darkgreen", weight = "bold"),
+ locations = gt::cells_body(columns = Current, rows = Current %in% c("Not Overfishing", "Not Overfished"))
+ ) |>
+ gt::cols_align(align = "right", columns = c(Previous, Current)) |>
+ gt::opt_row_striping() |>
+ gt::tab_options(
+ table.font.size = if (is_latex) gt::px(12) else gt::px(14),
+ heading.title.font.size = if (is_latex) gt::px(14) else gt::px(16),
+ heading.subtitle.font.size = if (is_latex) gt::px(11) else gt::px(13),
+ # Explicitly set width to NULL for LaTeX to prevent \linewidth
+ table.width = if (is_latex) NULL else gt::pct(100),
+      # Forcing a fixed width (e.g., table.width = gt::px(1) for LaTeX) stops the
+      # table from using the full horizontal space but breaks other layouts;
+      # table.layout = "auto" keeps the table compact instead
+ table.layout = "auto",
+ # Ensure data rows aren't overly tall
+ data_row.padding = gt::px(4),
+ column_labels.font.weight = "bold"
+ ) |>
+  # This is one way to try to get the horizontal spacing right, but the
+  # current version of gt we have loaded does not support it.
+ # gt::tab_options(
+ # table.font.size = if (is_latex) gt::px(12) else gt::px(14),
+ # heading.title.font.size = if (is_latex) gt::px(14) else gt::px(16),
+ # heading.subtitle.font.size = if (is_latex) gt::px(11) else gt::px(13),
+ #
+ # # Force standard tabular instead of tabular*
+ # table.width = NULL,
+ # table.layout = "auto",
+ #
+ # # This tells gt to stop trying to be smart with LaTeX width
+ # latex.use_tabular = TRUE,
+ #
+ # data_row.padding = gt::px(4),
+ # column_labels.font.weight = "bold"
+ # ) |>
+ gt::tab_style(
+ style = gt::cell_text(align = "left"),
+ locations = list(gt::cells_title(groups = "title"), gt::cells_title(groups = "subtitle"))
+ )
+
+ return(final_table)
+}
\ No newline at end of file
diff --git a/resources/ExportLegacyNEFSCtoASARproject/table_catch_status1.1.R b/resources/ExportLegacyNEFSCtoASARproject/table_catch_status1.1.R
new file mode 100644
index 0000000..a49c50c
--- /dev/null
+++ b/resources/ExportLegacyNEFSCtoASARproject/table_catch_status1.1.R
@@ -0,0 +1,81 @@
+#' Create Catch and Status Trend Table
+#'
+#' @param dat A standardized asar_stock object.
+#' @param n_years Numeric; number of recent years to display (default is 7).
+#' @param stock_name Character; display name for the stock.
+#' @param caption Character; table subtitle/caption.
+#' @param label Character; internal ID for the table.
+#'
+#' @return A formatted gt table object.
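+#'
+#' @examples
+#' \dontrun{
+#' # Hypothetical usage; assumes `stock` is a standardized asar_stock object
+#' table_catch_status(stock, n_years = 10)
+#' }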
+#' @export
+table_catch_status <- function(dat,
+ n_years = 7,
+ stock_name = attr(dat, "metadata")$spp_name,
+ caption = attr(dat, "metadata")$cap_status,
+ label = "Catch_Status_Table") {
+
+ meta <- attr(dat, "metadata")
+ is_latex <- !is.null(knitr::opts_knit$get("rmarkdown.pandoc.to")) &&
+ knitr::opts_knit$get("rmarkdown.pandoc.to") == "latex"
+
+ # Null-safe labels
+ ssb_lab <- meta$mod_ssb_name %||% "Spawning Stock Biomass"
+ f_lab <- meta$mod_f_name %||% "Fishing Mortality"
+ rec_lab <- meta$mod_recr_name %||% "Recruitment"
+
+ terminal_year <- max(dat$year[dat$module_name %in% c("model_results", "catch")], na.rm = TRUE)
+ year_range <- (terminal_year - n_years + 1):terminal_year
+
+ model_metrics <- dat |>
+ dplyr::filter(module_name == "model_results", year %in% year_range) |>
+ dplyr::mutate(label = dplyr::case_when(
+ label == "biomass" ~ ssb_lab,
+ label == "fishing_mortality" ~ f_lab,
+ label == "recruitment" ~ rec_lab,
+ TRUE ~ label
+ )) |>
+ dplyr::select(year, label, estimate)
+
+ catch_metrics <- dat |>
+ dplyr::filter(module_name == "catch", year %in% year_range) |>
+ dplyr::select(year, fleet, estimate) |>
+ dplyr::rename(label = fleet)
+
+ df_wide <- dplyr::bind_rows(catch_metrics, model_metrics) |>
+ dplyr::mutate(label_chr = as.character(label)) |>
+ dplyr::mutate(Metric = if (is_latex) to_latex_caption(label_chr) else clean_assessment_latex(label_chr)) |>
+ dplyr::mutate(Metric = dplyr::if_else(is.na(Metric) | Metric == "", label_chr, Metric)) |>
+ dplyr::select(Metric, year, estimate) |>
+ dplyr::distinct(Metric, year, .keep_all = TRUE) |>
+ tidyr::pivot_wider(names_from = year, values_from = estimate)
+
+ final_table <- df_wide |>
+ gt::gt() |>
+ gt::fmt_number(
+ columns = where(is.numeric),
+ rows = grepl("ratio|⁄|~|/|R0", Metric, ignore.case = TRUE),
+ decimals = 3,
+ use_seps = FALSE
+ ) |>
+ gt::fmt_number(
+ columns = where(is.numeric),
+ rows = !grepl("ratio|⁄|~|/|R0", Metric, ignore.case = TRUE),
+ decimals = 0,
+ use_seps = TRUE
+ ) |>
+ gt::fmt_markdown(columns = Metric) |>
+ gt::cols_label(Metric = "") |>
+
+ gt::tab_header(
+ title = if (is_latex) NULL else stock_name,
+ subtitle = if (is_latex) NULL else gt::html(clean_assessment_latex(caption))
+ ) |>
+ gt::opt_row_striping() |>
+ gt::tab_options(
+ table.font.size = if (is_latex) gt::px(12) else gt::px(14),
+ data_row.padding = gt::px(4),
+ table.width = if (is_latex) gt::pct(100) else NULL
+ )
+
+ return(final_table)
+}
\ No newline at end of file
diff --git a/resources/ExportLegacyNEFSCtoASARproject/table_projections1.1.R b/resources/ExportLegacyNEFSCtoASARproject/table_projections1.1.R
new file mode 100644
index 0000000..2fb009f
--- /dev/null
+++ b/resources/ExportLegacyNEFSCtoASARproject/table_projections1.1.R
@@ -0,0 +1,64 @@
+#' Create Projection Summary Table
+#'
+#' Generates a table showing assumed and projected years of Catch, SSB, and F.
+#' This version dynamically maps the fishing mortality label and includes
+#' NULL-safety for metadata attributes.
+#'
+#' @param dat A standardized asar_stock object.
+#' @param stock_name Character; display name for the stock.
+#' @param caption Character; table subtitle/caption.
+#' @param label Character; internal ID for the table.
+#'
+#' @return A formatted gt table object.
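+#'
+#' @examples
+#' \dontrun{
+#' # Hypothetical usage; assumes `stock` is a standardized asar_stock object
+#' table_projections(stock)
+#' }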
+#' @export
+table_projections <- function(dat,
+ stock_name = attr(dat, "metadata")$spp_name,
+ caption = attr(dat, "metadata")$cap_proj,
+ label = "Projection_Table") {
+
+ meta <- attr(dat, "metadata")
+ is_latex <- !is.null(knitr::opts_knit$get("rmarkdown.pandoc.to")) &&
+ knitr::opts_knit$get("rmarkdown.pandoc.to") == "latex"
+
+ # Dynamic F Label logic
+ legacy_f_label <- meta$mod_f_name %||% "F"
+ f_display_label <- if (is_latex) to_latex_caption(legacy_f_label) else clean_assessment_latex(legacy_f_label)
+
+ # Data processing
+ proj_raw <- dat |> dplyr::filter(module_name == "projections")
+ proj_wide <- proj_raw |>
+ dplyr::select(year, label, estimate_chr) |>
+ dplyr::distinct(year, label, .keep_all = TRUE) |>
+ tidyr::pivot_wider(names_from = label, values_from = estimate_chr)
+
+ proj_data <- proj_wide |>
+ dplyr::mutate(Period = dplyr::case_when(year == min(year, na.rm = TRUE) ~ "Assumed", TRUE ~ "Projected")) |>
+ dplyr::select(Period, Year = year, `Catch (mt)` = total_catch, `SSB (mt)` = biomass, F_VAL = fishing_mortality)
+
+ # Build table
+ final_table <- proj_data |>
+ gt::gt(groupname_col = "Period", id = label) |>
+ gt::cols_align(align = "right", columns = dplyr::everything()) |>
+ gt::cols_align(align = "left", columns = Year) |>
+ gt::cols_label(F_VAL = if (is_latex) gt::md(f_display_label) else gt::html(f_display_label)) |>
+
+ # Titling: Only used for HTML; suppressed for LaTeX to avoid double-titles
+ gt::tab_header(
+ title = if (is_latex) NULL else stock_name,
+ subtitle = if (is_latex) NULL else gt::html(clean_assessment_latex(caption))
+ ) |>
+
+ gt::opt_row_striping() |>
+ gt::tab_options(
+ #table.width = if (is_latex) gt::pct(100) else gt::px(700),
+ table.font.size = if (is_latex) gt::px(12) else gt::px(14),
+ # Use this instead to keep the table compact
+ table.width = if (is_latex) NULL else gt::pct(100),
+ # Ensure data rows aren't overly tall
+ data_row.padding = gt::px(4),
+ row_group.font.weight = "bold",
+ row_group.background.color = "#eeeeee"
+ )
+
+ return(final_table)
+}
\ No newline at end of file
diff --git a/resources/ExportLegacyNEFSCtoASARproject/testAdditionalStocks.R b/resources/ExportLegacyNEFSCtoASARproject/testAdditionalStocks.R
new file mode 100644
index 0000000..1fea3c6
--- /dev/null
+++ b/resources/ExportLegacyNEFSCtoASARproject/testAdditionalStocks.R
@@ -0,0 +1,289 @@
+#Additional Test Stocks
+myDir <- "dhennen" #change to your network home name
+sourcePath <- file.path("/home",myDir,"EIEIO","ASAR","MapLegacyAutoUpdateToAsar")
+source(file.path(sourcePath,"MapAutoUpdateToAsar.R"))
+
+
+rdata_path <- file.path(sourcePath, "testStocks", "BUTUNITAutoAss.RData")
+BUTUNIT <- map_autoUpdate_to_stockplotr(rdata_path) #this is meant to mimic stockplotr::convert_output
+#structure for use in other stockplotr functions
+#str(BUTUNIT)
+#test <- BUTUNIT |> dplyr::filter(module_name=="survey") |> str()
+
+stockplotr::plot_biomass(dat = BUTUNIT)
+#how about for fishing mortality?
+stockplotr::plot_fishing_mortality(
+ dat = BUTUNIT
+ #,era = "current"
+ ,ref_line = "msy"
+)
+
+#Nothing available specifically for catch in stockplotr - so make our own!
+source(file.path(sourcePath,"plot_total_catch.R"))
+plot_total_catch(dat = BUTUNIT)
+#or if you want to keep the legacy style
+plot_total_catch(dat = BUTUNIT, type = "bar")
+
+# Let's see how the indices look - once again make a function to copy our figure style,
+# but using stockplotr's style and inputs.
+source(file.path(sourcePath,"plot_survey_indices.R"))
+plot_survey_indices(dat = BUTUNIT)
+
+#Recruitment - this one seems to work fine
+stockplotr::plot_recruitment(dat = BUTUNIT)
+
+#%%%%%%%%%%%%%%%%%%%% How about tables? %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+# stockplotr::table_landings() is the only one in production currently, but
+# we can copy their style...
+source(file.path(sourcePath,"table_brp1.1.R"))
+table_brp(dat = BUTUNIT)
+
+
+#Next we need our catch and status table
+source(file.path(sourcePath,"table_catch_status1.1.R"))
+table_catch_status(dat = BUTUNIT)
+
+#projection table
+source(file.path(sourcePath,"table_projections1.1.R"))
+table_projections(dat = BUTUNIT)
+#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+#_______________________________________________
+# Example of using the metadata
+metadata <- attr(BUTUNIT, "metadata")
+
+#to plot a rho adjusted value and previous assessment
+# Generate the comparison plot
+stockplotr::plot_biomass(BUTUNIT) +
+
+ # Add the previous assessment line by filtering for the 'prev' era
+ ggplot2::geom_line(
+ data = dplyr::filter(BUTUNIT,
+ module_name == "model_results",
+ label == "biomass",
+ era == "prev"),
+ ggplot2::aes(x = year, y = estimate, color = "Previous"),
+ linetype = "dashed",
+ linewidth = 0.8
+ ) +
+
+ # Add the Rho Adjusted terminal point from metadata attributes
+ ggplot2::geom_point(
+ ggplot2::aes(x = metadata$term_yr, y = metadata$terminal_b_adj, color = "Rho Adj"),
+ size = 3
+ ) +
+
+ # Text annotation for the Rho Adjusted point
+ ggplot2::annotate(
+ "text",
+ x = metadata$term_yr,
+ y = metadata$terminal_b_adj,
+ label = "Rho Adj",
+ color = "red",
+ vjust = -1.5
+ ) +
+
+ # Define the aesthetic mapping for the legend and line colors
+ ggplot2::scale_color_manual(
+ name = "Assessment",
+ values = c("Current" = "black", "Previous" = "blue", "Rho Adj" = "red")
+ ) +
+
+ # Update labels to reflect the inclusion of the comparison data
+ ggplot2::labs(
+ subtitle = "Current Assessment vs. Previous and Rho-Adjusted Terminal Year"
+ )
+#________________________________________________________________________
+#make a report!
+source(file.path(sourcePath,"create_asar_object.R"))
+
+# Load the existing Source of Truth
+stock <- create_asar_object(BUTUNIT)
+
+# Check if everything is still correct
+if(validate_asar_object(stock)) {
+ model_results <- stock
+ metadata <- attr(stock, "metadata")
+ fname <- gsub(" ","_",metadata$spp_name) |> paste0(metadata$report_yr) |> paste0(".rda")
+ save(model_results, file = fname)
+}
+
+# Define Quarto global execution options
+# This ensures all chunks default to no echo, no warnings, and no messages
+report_options <- list(
+ execute = list(
+ warning = FALSE,
+ message = FALSE,
+ echo = FALSE
+ )
+)
+
+# Create a new assessment directory and template
+asar::create_template(
+ office = "NEFSC",
+ format = "pdf",
+ region = "Northeast",
+ authors = c("Joe Blow"="NEFSC"),
+ species = metadata$spp_name,
+ year = metadata$report_yr,
+ model_results = fname,
+ title = glue::glue("Management Track Assessment of {metadata$spp_name} {metadata$report_yr}"),
+ # Pass the execution rules here
+ quarto_options = report_options
+)
+
+
+#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+
+
+
+
+#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+#Additional Test Stocks
+myDir <- "dhennen" #change to your network home name
+sourcePath <- file.path("/home",myDir,"EIEIO","ASAR","MapLegacyAutoUpdateToAsar")
+source(file.path(sourcePath,"MapAutoUpdateToAsar.R"))
+
+
+rdata_path <- file.path(sourcePath, "testStocks", "SCUNITAutoAss.RData")
+SCUNIT <- map_autoUpdate_to_stockplotr(rdata_path) #this is meant to mimic stockplotr::convert_output
+#structure for use in other stockplotr functions
+#str(SCUNIT)
+# test <- SCUNIT |> dplyr::filter(module_name=="model_results" & label=="biomass") |>
+# dplyr::pull(estimate) |> range()
+
+
+# Surfclam has unusual reference point plots: the values in the BRP table are not
+# useful in the plot - we can use ggplot to address this
+p <- stockplotr::plot_biomass(
+ dat = SCUNIT
+ ,ref_line = "") #don't plot the reference line here - there may be a better way to do this!
+
+# Add your custom horizontal line
+threshold_val <- 1
+
+p_updated <- p +
+ ggplot2::geom_hline(yintercept = threshold_val,
+ color = "darkred",
+ linetype = "dashed",
+ linewidth = 1) +
+ # Place the text at the middle year, slightly above the line
+ ggplot2::annotate("text",
+ x = mean(range(as.numeric(SCUNIT$year), na.rm = TRUE)),
+ y = 1.2,
+ label = "Threshold",
+ color = "darkred",
+ fontface = "bold")
+plot(p_updated)
+
+
+#how about for fishing mortality?
+p <- stockplotr::plot_fishing_mortality(
+ dat = SCUNIT
+ #,era = "current"
+ ,ref_line = ""
+)
+# Add your custom horizontal line
+threshold_val <- 1
+
+p_updated <- p +
+ ggplot2::geom_hline(yintercept = threshold_val,
+ color = "darkred",
+ linetype = "dashed",
+ linewidth = 1) +
+ # Place the text at the middle year, slightly above the line
+ ggplot2::annotate("text",
+ x = mean(range(as.numeric(SCUNIT$year), na.rm = TRUE)),
+ y = 1.05,
+ label = "Threshold",
+ color = "darkred",
+ fontface = "bold")
+plot(p_updated)
+
+
+# Nothing available specifically for catch in stockplotr - so make our own!
+source(file.path(sourcePath, "plot_total_catch.R"))
+plot_total_catch(dat = SCUNIT)
+# Or, if you want to keep the legacy style:
+plot_total_catch(dat = SCUNIT, type = "bar")
+
+# Let's see how the indices look - once again we make a function that reproduces
+# our legacy figure style, but uses stockplotr's inputs.
+source(file.path(sourcePath,"plot_survey_indices.R"))
+plot_survey_indices(dat = SCUNIT)
+plot_survey_indices(SCUNIT, show_line = TRUE)
+
+
+# Recruitment - this one seems to work fine
+stockplotr::plot_recruitment(dat = SCUNIT)
+
+#%%%%%%%%%%%%%%%%%%%% How about tables? %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+# Surfclam is unusual because the F and SSB model results are relative,
+# so we have to supply the status ourselves - the automation won't work.
+source(file.path(sourcePath, "table_brp1.1.R"))
+table_brp(dat = SCUNIT)
+
+
+# Next we need our catch and status table
+source(file.path(sourcePath, "table_catch_status1.1.R"))
+table_catch_status(dat = SCUNIT)
+
+# Projection table
+source(file.path(sourcePath, "table_projections1.1.R"))
+table_projections(dat = SCUNIT)
+#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+#attr(SCUNIT, "metadata")
+
+# Make a report!
+source(file.path(sourcePath, "create_asar_object.R"))
+
+# Create the "Source of Truth" object
+stock <- create_asar_object(SCUNIT)
+
+# Check that everything is still valid before saving
+if (validate_asar_object(stock)) {
+  model_results <- stock
+  metadata <- attr(stock, "metadata")
+  fname <- gsub(" ", "_", metadata$spp_name) |> paste0(metadata$report_yr, ".rda")
+  save(model_results, file = fname)
+}
+
+# Define Quarto global execution options
+# This ensures all chunks default to no echo, no warnings, and no messages
+report_options <- list(
+ execute = list(
+ warning = FALSE,
+ message = FALSE,
+ echo = FALSE
+ )
+)
+
+# Create a new assessment directory and template
+asar::create_template(
+ office = "NEFSC",
+ format = "pdf",
+ region = "Northeast",
+ authors = c("Joe Blow"="NEFSC"),
+ species = metadata$spp_name,
+ year = metadata$report_yr,
+ model_results = fname,
+ title = glue::glue("Management Track Assessment of {metadata$spp_name} {metadata$report_yr}"),
+ # Pass the execution rules here
+ quarto_options = report_options
+)
+
+
diff --git a/resources/ExportLegacyNEFSCtoASARproject/testStocks/BSBUNITAutoAss.RData b/resources/ExportLegacyNEFSCtoASARproject/testStocks/BSBUNITAutoAss.RData
new file mode 100644
index 0000000..5f9aab9
Binary files /dev/null and b/resources/ExportLegacyNEFSCtoASARproject/testStocks/BSBUNITAutoAss.RData differ
diff --git a/resources/ExportLegacyNEFSCtoASARproject/testStocks/BUTUNITAutoAss.RData b/resources/ExportLegacyNEFSCtoASARproject/testStocks/BUTUNITAutoAss.RData
new file mode 100644
index 0000000..61e3e2a
Binary files /dev/null and b/resources/ExportLegacyNEFSCtoASARproject/testStocks/BUTUNITAutoAss.RData differ
diff --git a/resources/ExportLegacyNEFSCtoASARproject/testStocks/CODGBAutoAss.RData b/resources/ExportLegacyNEFSCtoASARproject/testStocks/CODGBAutoAss.RData
new file mode 100644
index 0000000..f4d7772
Binary files /dev/null and b/resources/ExportLegacyNEFSCtoASARproject/testStocks/CODGBAutoAss.RData differ
diff --git a/resources/ExportLegacyNEFSCtoASARproject/testStocks/CODWGOMAutoAss.RData b/resources/ExportLegacyNEFSCtoASARproject/testStocks/CODWGOMAutoAss.RData
new file mode 100644
index 0000000..820cc1e
Binary files /dev/null and b/resources/ExportLegacyNEFSCtoASARproject/testStocks/CODWGOMAutoAss.RData differ
diff --git a/resources/ExportLegacyNEFSCtoASARproject/testStocks/SCUNITAutoAss.RData b/resources/ExportLegacyNEFSCtoASARproject/testStocks/SCUNITAutoAss.RData
new file mode 100644
index 0000000..69bc7c4
Binary files /dev/null and b/resources/ExportLegacyNEFSCtoASARproject/testStocks/SCUNITAutoAss.RData differ
diff --git a/resources/regional_info.qmd b/resources/regional_info.qmd
new file mode 100644
index 0000000..8fd2ef2
--- /dev/null
+++ b/resources/regional_info.qmd
@@ -0,0 +1,5 @@
+# Regionally-specific content
+
+## NEFSC
+
+{{< include ../Curriculum/snippets/nefsc_transition.qmd >}}