From 2828447d90ac759da48de56a5d69be8d21bf04db Mon Sep 17 00:00:00 2001 From: k-doering-NOAA Date: Thu, 10 Aug 2023 12:24:57 +0000 Subject: [PATCH] =?UTF-8?q?Deploying=20to=20gh-pages=20from=20@=20nmfs-fis?= =?UTF-8?q?h-tools/SSMSE@47445d973a537eaf9a7361f842d3f7a404bca247=20?= =?UTF-8?q?=F0=9F=9A=80?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- manual/404.html | 2 +- manual/M-case-study-ex.html | 2 +- manual/SSMSE.html | 2 +- ...d-options-use-a-custom-management-strategyprocedure.html | 2 +- manual/glossary-of-mse-terms.html | 2 +- manual/helper.html | 2 +- manual/index.html | 6 +++--- manual/intro.html | 2 +- manual/output.html | 2 +- manual/plotting-output-and-performance-metrics.html | 2 +- manual/search_index.json | 2 +- manual/simple.html | 4 ++-- 12 files changed, 15 insertions(+), 15 deletions(-) diff --git a/manual/404.html b/manual/404.html index bbb9168b..d5acdb99 100644 --- a/manual/404.html +++ b/manual/404.html @@ -23,7 +23,7 @@ - + diff --git a/manual/M-case-study-ex.html b/manual/M-case-study-ex.html index fec12614..94883ca4 100644 --- a/manual/M-case-study-ex.html +++ b/manual/M-case-study-ex.html @@ -23,7 +23,7 @@ - + diff --git a/manual/SSMSE.html b/manual/SSMSE.html index 784fa291..4df06dba 100644 --- a/manual/SSMSE.html +++ b/manual/SSMSE.html @@ -23,7 +23,7 @@ - + diff --git a/manual/advanced-options-use-a-custom-management-strategyprocedure.html b/manual/advanced-options-use-a-custom-management-strategyprocedure.html index 28802da4..cceb1569 100644 --- a/manual/advanced-options-use-a-custom-management-strategyprocedure.html +++ b/manual/advanced-options-use-a-custom-management-strategyprocedure.html @@ -23,7 +23,7 @@ - + diff --git a/manual/glossary-of-mse-terms.html b/manual/glossary-of-mse-terms.html index 98d255d7..7bcf0e9d 100644 --- a/manual/glossary-of-mse-terms.html +++ b/manual/glossary-of-mse-terms.html @@ -23,7 +23,7 @@ - + diff --git a/manual/helper.html b/manual/helper.html index f3fda62f..ad9166c1 100644 --- a/manual/helper.html +++ b/manual/helper.html @@ -23,7 +23,7 @@ - + diff --git a/manual/index.html b/manual/index.html index 41577fe3..c70ef556 100644 --- a/manual/index.html +++ b/manual/index.html @@ -23,7 +23,7 @@ - + @@ -230,7 +230,7 @@

Preface

@@ -244,7 +244,7 @@

R session information
sessionInfo()

R version 4.3.1 (2023-06-16)
 Platform: x86_64-pc-linux-gnu (64-bit)
-Running under: Ubuntu 22.04.2 LTS
+Running under: Ubuntu 22.04.3 LTS
 
 Matrix products: default
 
diff --git a/manual/intro.html b/manual/intro.html
index f91c8c04..8d847fa7 100644
--- a/manual/intro.html
+++ b/manual/intro.html
@@ -23,7 +23,7 @@
 
 
 
-
+
 
   
   
diff --git a/manual/output.html b/manual/output.html
index 1c29d30e..e0f4e034 100644
--- a/manual/output.html
+++ b/manual/output.html
@@ -23,7 +23,7 @@
 
 
 
-
+
 
   
   
diff --git a/manual/plotting-output-and-performance-metrics.html b/manual/plotting-output-and-performance-metrics.html
index a401d815..31c1ac25 100644
--- a/manual/plotting-output-and-performance-metrics.html
+++ b/manual/plotting-output-and-performance-metrics.html
@@ -23,7 +23,7 @@
 
 
 
-
+
 
   
   
diff --git a/manual/search_index.json b/manual/search_index.json
index d8925f45..a535d357 100644
--- a/manual/search_index.json
+++ b/manual/search_index.json
@@ -1 +1 @@
-[["index.html", "SSMSE user manual Preface R session information", " SSMSE user manual Kathryn Doering and Nathan Vaughan 2023-08-09 Preface Logo for the SSMSE package This is the user manual for SSMSE, an R package for Management Strategy Evaluation with Stock Synthesis Operating models. This document is still a work in progress! If you would like to suggest changes, the R markdown files used to generate the user manual are available in a bookdown folder within the SSMSE repository. Please feel free to open an issue or submit a pull request to suggest changes. R session information sessionInfo() R version 4.3.1 (2023-06-16) Platform: x86_64-pc-linux-gnu (64-bit) Running under: Ubuntu 22.04.2 LTS Matrix products: default locale: [1] LC_CTYPE=C.UTF-8 LC_NUMERIC=C LC_TIME=C.UTF-8 [4] LC_COLLATE=C.UTF-8 LC_MONETARY=C.UTF-8 LC_MESSAGES=C.UTF-8 [7] LC_PAPER=C.UTF-8 LC_NAME=C LC_ADDRESS=C [10] LC_TELEPHONE=C LC_MEASUREMENT=C.UTF-8 LC_IDENTIFICATION=C time zone: UTC tzcode source: system (glibc) attached base packages: [1] stats graphics grDevices utils datasets methods base loaded via a namespace (and not attached): [1] ss3sim_1.20.0 r4ss_1.48.1 "],["intro.html", "1 Introduction 1.1 Purpose 1.2 Functions in SSMSE 1.3 Brief description of the SSMSE MSE simulation procedure using run_SSMSE()", " 1 Introduction 1.1 Purpose SSMSE allows users to directly use Stock Synthesis (SS3) as an operating model (OM) in a Management Strategy Evaluation (MSE) through R functions. The approach requires a conditioned Stock Synthesis model, which is treated as the OM. ## Decide if SSMSE is the right tool Another simulation tool available is ss3sim. {ss3sim} helps users conduct simulation studies with SS3 operating models and estimation models. {ss3sim} is different than {SSMSE} beacuse: In {SSMSE}, there is feedback from the estimation method to the operating model and the operating model is projected forward. In {ss3sim}, there is no feedback and the operating model is not projected forward in time. {ss3sim} has more helper functions which modify the operating model and estimation model, but ss3sim is less flexible in the initial Operating models and estimation methods/models that can be used compared to {SSMSE}. {SSMSE} can use custom estimation methods; {ss3sim} must use a Stock Synthesis Estimation Model. Please post this question regarding the suitability of {SSMSE} for your study in SSMSE discussions q+a. 1.2 Functions in SSMSE The main functions users can call in SSMSE are: Function Description run_SSMSE() Run the MSE simulations SSMSE_summary_all() Summarize MSE output The helper functions to create inputs to run_SSMSE are: Helper Function Description create_sample_struct() Helper function to create a list for future sampling from a model to use as input in run_SSMSE() create_future_om_list() Helper function that provides examples of the structure for the future_om_list input to run_SSMSE(). develop_OMs() Helper function to turn one OM into many Exported functions that can be used for writing custom management strategies are: Function Description run_EM() Run an SS3 estimation model (uses run_ss_model) run_ss_model() Run an SS3 model get_bin() Get location of the SS3 binary. parse_MS() Function that runs the management strategy and returns catch by fleet for the projections. A reference function for those setting up custom management strategies. 
Finally, some plotting functions are available: Plotting Function Description plot_index_sampling() Plot to compare the sampled index values to the operating model expected values and original operating model conditioning index data. plot_comp_sampling() Plot to compare the sampled composition values to the operating model expected values and original operating model conditioning composition data. 1.3 Brief description of the SSMSE MSE simulation procedure using run_SSMSE() In general, the steps for a scenario after calling run_SSMSE are: Create the OM Sample data from the OM Run management strategy Determine if more years need to be projected. If yes, continue; if no, end simulation. Update OM with n years of catch where n is the number of years between assessments Sample n years of data Repeat steps 3-6 until the end of the simulation. The basic steps are contained in functions: User calls run_SSMSE Steps for one iteration are passed to run_SSMSE_scen() and then run_SSMSE_iter(). The OM is created with create_OM() run_OM() runs the OM and creates the dataset to pass to the management strategy parse_MS() is called, and runs the management strategy function (e.g., EM()), which projects catch forward. update_OM() is called, which puts catch into the operating model. run_OM() runs the OM and creates the dataset to pass to the management strategy parse_MS() is called, and runs the management strategy function (e.g., EM()), which projects catch forward. Determine if more years need to be projected. If yes, continue; if no, end simulation. Repeat steps 5-8. 1.3.1 Conditioning the OM and sampling from the OM For each scenario, SSMSE starts with the user providing a fitted SS3 model (or selecting a model from the SSMSE package) to use as an OM. For each iteration of the scenario, SSMSE turns the fitted SS3 model into an OM and runs it once with no estimation in Stock Synthesis in order to get the “true” values and a bootstrapped data set from SS3. Note that any modifications to the OM’s parameters specified by the user as it is extended forward in time are also applied. 1.3.2 First run of the management strategy in the MSE simulation The bootstrapped dataset is then used in a management strategy to project catch by fleet to use for the next \\(n\\) years, where \\(n\\) is the number of years between assessments. 1.3.3 Feedback from management strategy into OM: extending model years The catch for the next \\(n\\) years before the next assessment is then added to the OM, as well as any recruitment or time varying parameter deviations. The OM is again run with no estimation where it can be used to produce sampled data for the next \\(n\\) years. These new data values are appended to the original dataset. 1.3.4 Subsequent runs of the management strategy The appended data set is then used in the management strategy again, and new catch by fleet is produced that can then be fed back to the OM. "],["simple.html", "2 A simple example 2.1 Setup R workspace folders 2.2 Create the operating models (OMs) 2.3 Adding process error through recruitment deviations and time-varying selectivity 2.4 Examine the management procedure used 2.5 Run SSMSE 2.6 run_SSMSE output 2.7 Performance metrics 2.8 Summarize results 2.9 Simple Convergence Check 2.10 Plot Spawning Stock Biomass (SSB) 2.11 Example MSE Results 2.12 Delete the files", " 2 A simple example Suppose we want to look at how well we are able to achieve a performance metric under uncertainty in the operating model (OM). 
We will look at 2 scenarios, one where steepness (h) is specified correctly and one where it is specified incorrectly in an estimation model (EM): Scenario 1. h-ctl: Cod operating model (h = 0.65) with correctly specified cod model EM (fixed h = 0.65). The OM is the same as the EM. Scenario 2. h-1: Cod operating model (h = 1) with misspecified cod model EM (fixed h = 0.65); the OM is not the same as the EM. Note that this is a simple example where the OM and EM structures for both scenarios are identical, except for different steepness between the OM and EM in scenario 2 and some process error we will include in the operating model. We will assume we want to run the MSE loop for 6 years, with a stock assessment occurring every 3 years (and forecasting catch to maintain 40% of unfished spawning stock biomass). The cod model’s last year is 100, so the OM is initially conditioned through year 100. Then, after conditioning the operating model through year 100, assessments will occur in years 100 and 103. The operating model runs through year 106. We chose not to run the assessment in year 106, as there was no need for its output in this example. 2.1 Setup R workspace folders First, we will load the SSMSE package and create a folder in which to run the example: Installing 4 packages: mvtnorm, bdsmatrix, numDeriv, bbmle library(SSMSE) #load the package library(r4ss) #install using remotes::install_github("r4ss/r4ss") library(foreach) #if using run_parallel = TRUE library(doParallel) #if using run_parallel = TRUE # Create a folder for the output in the working directory. run_SSMSE_dir <- file.path("run_SSMSE-ex") dir.create(run_SSMSE_dir) 2.2 Create the operating models (OMs) 2.2.1 Specify alternative values for h The cod model with h = 0.65 (as in scenario 1) is included as external package data in SSMSE. However, we will need to modify it to use as an operating model with h = 1 (as in scenario 2). Note in this case that refit_OMs is FALSE, so the model is not being refit, just run through without fitting. To condition the new model on the same data as the input model, refit_OMs should be TRUE. First, we identify where the base cod model is stored, modify it such that the steepness parameter is 1, and save the modified cod OM for scenario 2 in a new folder in the run_SSMSE_dir directory. cod_mod_path <- system.file("extdata", "models", "cod", package = "SSMSE") # develop_OMs will save a model called "cod_SR_BH_steep_1" in the out_dir # specified develop_OMs(OM_name = "cod", out_dir = run_SSMSE_dir, par_name = "SR_BH_steep", par_vals = 1, refit_OMs = FALSE, hess = FALSE) # OM model for scenario 2 cod_1_path <- file.path(run_SSMSE_dir, "cod_SR_BH_steep_1") 2.3 Adding process error through recruitment deviations and time-varying selectivity Recruitment deviations, implementation error, and changes in parameters in the projection period of the OM can be added through the future_om_list input to run_SSMSE. First, we’ll set up the list to add recruitment deviations in the projection period. The same recruitment deviation patterns are used across scenarios, but different patterns are used across iterations in the same scenario. We also want these deviations to have the same standard deviations as the historical deviations with 0 mean (the assumed default). 
# Start from a list created by a helper function template_mod_change <- create_future_om_list() # add recruitment deviations rec_dev_specify <- template_mod_change[[1]] rec_dev_specify$pars <- "rec_devs" # apply change to rec devs rec_dev_specify$scen <- c("replicate", "all") # using 1 to 100 means the sd or mean will be calculated by taking the sd across years # from 1 to 100 rec_dev_specify$input$first_yr_averaging <- 1 rec_dev_specify$input$last_yr_averaging <- 100 # The following 2 lines suggest that this change is immediately applied in year # 101, with no transition period from using sd 0 to the new sd. rec_dev_specify$input$last_yr_orig_val <- 100 rec_dev_specify$input$first_yr_final_val <- 101 rec_dev_specify$input$ts_param <- "sd" # this change is for the sd # no input value needed since it will be calculated from the historical rec devs. rec_dev_specify$input$value <- NA rec_dev_specify $pars [1] "rec_devs" $scen [1] "replicate" "all" $pattern [1] "model_change" $input first_yr_averaging last_yr_averaging last_yr_orig_val first_yr_final_val 1 1 100 100 101 ts_param method value 1 sd absolute NA Next, suppose we want to allow selectivity to vary annually for 1 selectivity parameter of the fishery throughout the projection period. The following specifies that the value for selectivity varies randomly around the base value with a sd of 0.2. # put together the change for selectivity (random values around the orig val, with # an sd of 0.2) mod_change_sel <- template_mod_change[[1]] mod_change_sel$scen[2] <- "all" # apply to all scenarios # The following 2 lines suggest that this change is immediately applied in year # 101, with no transition period from using sd 0 to the new sd. # historical values are NA in this case, because they are not used to determine # the sd to use. mod_change_sel$input$last_yr_orig_val <- 100 mod_change_sel$input$first_yr_final_val <- 101 mod_change_sel$input$ts_param <- "sd" # this change is for the sd mod_change_sel$input$value <- 0.2 # sd to use in the projection period mod_change_sel $pars [1] "SizeSel_P_3_Fishery(1)" $scen [1] "replicate" "all" $pattern [1] "model_change" $input first_yr_averaging last_yr_averaging last_yr_orig_val first_yr_final_val 1 NA NA 100 101 ts_param method value 1 sd absolute 0.2 Finally, add these two changes together into an object to pass to run_SSMSE: future_om_list_recdevs_sel <- list(rec_dev_specify, mod_change_sel) 2.3.1 Add observation error through sampling from OM The argument sample_struct specifies the structure for sampling from the OM (and passing to the EM). The function create_sample_struct can be used to construct a simple sampling structure consistent with an input data file: datfile <- system.file("extdata", "models", "cod", "ss3.dat", package = "SSMSE") sample_struct_1_scen <- create_sample_struct(dat = datfile, nyrs = 6) # note warning Warning in FUN(X[[i]], ...): Pattern not found for lencomp: FltSvy 1, Seas 1. Returning NA for Yr in this dataframe. sample_struct_1_scen $catch Yr Seas FltSvy SE 1 101 1 1 0.005 2 102 1 1 0.005 3 103 1 1 0.005 4 104 1 1 0.005 5 105 1 1 0.005 6 106 1 1 0.005 $CPUE Yr Seas FltSvy SE 1 105 7 2 0.2 $lencomp Yr Seas FltSvy Sex Part Nsamp 1 NA 1 1 0 0 125 $agecomp Yr Seas FltSvy Sex Part Ageerr Lbin_lo Lbin_hi Nsamp 1 105 1 2 0 0 1 -1 -1 500 $meanbodywt [1] NA $MeanSize_at_Age_obs [1] NA By default, create_sample_struct identifies sampling patterns from the historical period of the OM and replicates those patterns in the projection period. 
In our cod example, the sample structure specifies that catch will be added to the estimation model every year (years 101 to 106), but an index of abundance (i.e., CPUE) and age composition (i.e., agecomp) will only be added in year 105. We will use the same sampling scheme for both scenarios, but it is possible to specify different sampling for each scenario. The user could modify this sampling strategy (for example, maybe age composition should also be sampled from FltSvy 2 in Yr 102; the user could add another line to the dataframe in sample_struct$agecomp). Note that length comp (lencomp) includes an NA value for year. This is because no consistent pattern was identified, so the user must define their own input. In this case, we will remove sampling length comps altogether: sample_struct_1_scen$lencomp <- NULL # don't use length sampling The same sampling structure will be used for both scenarios, which we define in a list below: sample_struct_list_all <- list("h-ctl" = sample_struct_1_scen, "h-1" = sample_struct_1_scen) 2.4 Examine the management procedure used We will use the same management procedure for both scenarios: Conduct a stock assessment every 3 years to get stock status. Project from this stock assessment using the SS3 forecast file to get projected future catch. Put this projected catch (without implementation error, in the case of this example) back into the OM. Extend the OM forward in time to get the true values for the population. Let’s take a look at step 2 in the management procedure, which is implemented using the forecasting module in SS3. We will examine the forecast file for the estimation model to better understand how catches will be forecasted from the assessment. We will use the same management procedure in both of these scenarios, although for a full MSE analysis, it is likely that multiple management procedures would be compared. fore <- r4ss::SS_readforecast( system.file("extdata", "models", "cod", "forecast.ss", package = "SSMSE"), verbose = FALSE) fore$Forecast [1] 3 fore$Btarget [1] 0.4 fore$Forecast = 3 means our forecasts from the assessment will use fishing mortality (F) to attempt to achieve a relative (to unfished) spawning stock biomass target. Based on fore$Btarget, the relative biomass target is 40% of unfished spawning stock biomass. Note also that the control rule fore$BforconstantF and fore$BfornoF values are set low to make it unlikely that they will be used (these parameters are used for a ramp harvest control rule, which we do not want to use here): fore$BforconstantF [1] 0.03 fore$BfornoF [1] 0.01 Furthermore, fore$Flimitfraction is set to 1 so that the forecasted catch is set equal to the overfishing limit (for simplicity): fore$Flimitfraction [1] 1 Note that the number of forecast years is 1: fore$Nforecastyrs [1] 1 However, an assessment will be conducted every 3 years and thus 3 years of projections are required. SSMSE will automatically modify this value in the estimation model to the appropriate number of forecasting years. More information on using the forecast module in SS3 to forecast catches is available in the Stock Synthesis user manual. 
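As an aside, if a different forecast target were desired for the management procedure, the forecast file could be edited with r4ss before running the MSE, using the same SS_readforecast() and SS_writeforecast() functions shown in this manual. The short sketch below is illustrative only: the 0.35 target and the my_cod_EM directory are hypothetical and not part of the example above.
library(r4ss)
# read the forecast file from a copy of the cod model (my_cod_EM is a hypothetical directory)
fore <- r4ss::SS_readforecast(file.path("my_cod_EM", "forecast.ss"), verbose = FALSE)
# hypothetical change: target 35% of unfished spawning stock biomass instead of 40%
fore$Btarget <- 0.35
# write the modified forecast file back into the same directory
r4ss::SS_writeforecast(fore, dir = "my_cod_EM", verbose = FALSE, overwrite = TRUE)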
Users can also specify their own custom management procedures (see the chapter on advanced options). 2.5 Run SSMSE Now, we create a directory to store our results, and use run_SSMSE to run the MSE analysis loop (note this will take some time to run, ~ 20 min): run_res_path <- file.path(run_SSMSE_dir, "results") dir.create(run_res_path) res <- run_SSMSE( scen_name_vec = c("h-ctl", "h-1"), # names of the scenarios out_dir_scen_vec = run_res_path, # directory in which to run the scenario iter_vec = c(5,5), # run with 5 iterations each OM_name_vec = NULL, # specify directories instead OM_in_dir_vec = c(cod_mod_path, normalizePath(cod_1_path)), # OM files EM_name_vec = c("cod", "cod"), # cod is included in package data MS_vec = c("EM","EM"), # The management strategy is specified in the EM nyrs_vec = c(6, 6), # Years to project OM forward nyrs_assess_vec = c(3, 3), # Years between assessments future_om_list = future_om_list_recdevs_sel, run_parallel = TRUE, # Run iterations in parallel sample_struct_list = sample_struct_list_all, # How to sample data for running the EM. sample_struct_hist_list = NULL, # because this is null, will just use sampling # as in the current OM data file for the historical period. seed = 12345) # Set a fixed integer seed that allows replication See ?run_SSMSE for more details on function arguments. In a real MSE analysis, running 100+ iterations to reflect the full range of uncertainty (given observation and process errors) in the results would be preferred. However, we are only running 5 iterations per scenario in this demonstration to reduce computing time. 2.6 run_SSMSE output run_SSMSE will create new folders in the folders specified in out_dir_scen_vec (note that in this case, we are running both scenarios in the same folder). After it is complete, there will be a folder for each scenario in run_res_path (since out_dir_scen_vec = run_res_path in this example). Within each scenario folder is a folder for each iteration, and within each iteration folder there are folders containing the SS3 models that were run by run_SSMSE. There should be 1 folder for the OM, which is run multiple times in this same folder during the MSE analysis. There are multiple folders for the EMs, as a new folder is created each time an assessment is done. The first run is the folder with a name ending in init; then, each assessment after is named for the updated end year of the model. With many iterations, the number of files adds up; in the future, we hope to add options to save less output. 2.7 Performance metrics Quantitative performance metrics should be specified before conducting an MSE. Typically, a suite of performance metrics will be examined; however, for simplicity in this example, we will only look at what the achieved relative biomass was for the last 3 years of projection in the MSE to determine how it compares to the intended management target of 40% of unfished spawning stock biomass. Note that we are only running our MSE projection for 6 years, but longer projections are typical in MSE analyses. 2.8 Summarize results The function SSMSE_summary_all can be used to summarize the model results in a list of 3 dataframes, one for scalar outputs (named scalar), one for timeseries outputs (ts), and one for derived quantities (dq). This function also creates summary csv files in the folder where the results are stored. 
# Summarize 1 iteration of output summary <- SSMSE_summary_all(run_res_path) ## Extracting results from 2 scenarios ## Starting h-1 with 5 iterations ## Starting h-ctl with 5 iterations Plotting and data manipulation can then be done with these summaries. For example, SSB over time by model can be plotted. The models include the operating model (cod_OM), the estimation model (EM) for the historical period of years 0-100 (cod_EM_init), and the EM run with the last year of data in year 103 (cod_EM_103). The operating models are shown in blue or black (depending on the scenario), the estimation model runs are shown in orange, and the scenarios are shown in different subplots: library(ggplot2) # use install.packages("ggplot2") to install package if needed library(tidyr) # use install.packages("tidyr") to install package if needed library(dplyr) # use install.packages("dplyr") to install package if needed Attaching package: 'dplyr' The following objects are masked from 'package:stats': filter, lag The following objects are masked from 'package:base': intersect, setdiff, setequal, union 2.9 Simple Convergence Check Check that there are no parameters on bounds and that no estimated SSB values are far too small or far too large relative to the OM: check_convergence <- function(summary, min_yr = 101, max_yr = 120, n_EMs = 5) { require(dplyr) # note: not the best way to do this if(any(!is.na(summary$scalar$params_on_bound))) { warning("Params on bounds") } else { message("No params on bounds") } summary$ts$model_type <- ifelse(grepl("_EM_", summary$ts$model_run), "EM", "OM") calc_SSB <- summary$ts %>% filter(year >= min_yr & year <= max_yr) %>% select(iteration, scenario, year, model_run, model_type, SpawnBio) OM_vals <- calc_SSB %>% filter(model_type == "OM") %>% rename(SpawnBio_OM = SpawnBio ) %>% select(iteration, scenario, year, SpawnBio_OM) EM_vals <- calc_SSB %>% filter(model_type == "EM") %>% rename(SpawnBio_EM = SpawnBio) %>% select(iteration, scenario, year, model_run, SpawnBio_EM) bind_vals <- full_join(EM_vals, OM_vals, by = c("iteration", "scenario", "year")) %>% mutate(SSB_ratio = SpawnBio_EM/SpawnBio_OM) filter_SSB <- bind_vals %>% filter(SSB_ratio > 2 | SSB_ratio < 0.5) if(nrow(filter_SSB) > 0 ) { warning("Some large/small SSBs relative to OM") } else { message("All SSBs in EM are no greater than double and no less than half SSB vals in the OM") } return_val <- bind_vals } values <- check_convergence(summary = summary, min_yr = 101, max_yr = 106, n_EMs = 5) No params on bounds All SSBs in EM are no greater than double and no less than half SSB vals in the OM 2.10 Plot Spawning Stock Biomass (SSB) This plot shows that the estimated SSB does not match the operating model perfectly. A similar plot could be made for any parameter of interest. # plot SSB by year and model run ggplot2::ggplot(data = subset(summary$ts, model_run %in% c("cod_OM", "cod_SR_BH_steep_1_OM", "cod_EM_103")), ggplot2::aes(x = year, y = SpawnBio)) + ggplot2::geom_vline(xintercept = 100, color = "gray") + ggplot2::geom_line(ggplot2::aes(linetype = as.character(iteration), color = model_run))+ ggplot2::scale_color_manual(values = c("#D65F00", "black", "blue")) + ggplot2::scale_linetype_manual(values = rep("solid", 50)) + ggplot2::guides(linetype = FALSE) + ggplot2::facet_wrap(. ~ scenario) + ggplot2::theme_classic() Warning: The `<scale>` argument of `guides()` cannot be `FALSE`. Use "none" instead as of ggplot2 3.3.4. This warning is displayed once every 8 hours. Call `lifecycle::last_lifecycle_warnings()` to see where this warning was generated. 
Now, we calculate and plot the performance metric, which is average spawning stock biomass (SSB) from years 104 to 106. # get_SSB_avg calculates the SSB in each year for each # iteration of the operating model, then takes the average over the years from # min_yr to max_yr. It uses the summary object as input to do these # calculations. get_SSB_avg <- function(summary, min_yr, max_yr) { OM_vals <- unique(summary$ts$model_run) OM_vals <- grep("_OM$", OM_vals, value = TRUE) SSB_yr <- summary$ts %>% filter(year >= min_yr & year <= max_yr) %>% filter(model_run %in% OM_vals) %>% select(iteration, scenario, year, SpawnBio) %>% group_by(iteration, scenario) %>% summarize(avg_SSB = mean(SpawnBio), .groups = "keep") %>% ungroup() SSB_yr } avg_SSB <- get_SSB_avg(summary, min_yr = 104, max_yr = 106) # function to summarize data in plot data_summary <- function(x) { m <- mean(x) ymin <- m - sd(x) ymax <- m + sd(x) return(c(y = m, ymin = ymin, ymax = ymax)) } # Now, plot the average relative spawning stock biomass for years 104 - 106 ggplot(data = avg_SSB, aes(x = scenario, y = avg_SSB)) + stat_summary(fun.data = data_summary, position = position_dodge(width = 0.9), color = "blue") + labs(title = "Long-term average SSB\\n(years 104-106)", x = "Scenario", y = "SSB") + theme_classic() From the above plot, we see differences in the average SSB between the 2 scenarios. 2.11 Example MSE Results We can see from the performance metric that mis-specifying the value of steepness will result in higher realized relative spawning stock biomass than correctly specifying it. This gives us some idea of the consequences of misspecifying steepness in the stock assessment. 2.12 Delete the files If you wish to delete the files created from this example, you can use: unlink(run_SSMSE_dir, recursive = TRUE) "],["advanced-options-use-a-custom-management-strategyprocedure.html", "3 Advanced options: use a custom management strategy/procedure", " 3 Advanced options: use a custom management strategy/procedure Users can outline a custom management strategy as an R function to use. As long as the correct inputs and outputs are used, any estimation method and management procedure can be used. For example, here is a simple function that just sets future catches as half the sampled catches in a specified year: constant_catch_MS <- function(OM_dat, nyrs_assess, catch_yr = 100, frac_catch = 0.5, ...) { # need to include ... to allow function to work # set catch as a fraction (frac_catch) of the sampled catch in a specified year (catch_yr). # catch is in the same units as the operating model, in this case it is in # biomass. catch <- data.frame( year = (OM_dat$endyr + 1):(OM_dat$endyr + nyrs_assess), # the years to project the model forward seas = 1, # hard coded from looking at model fleet = 1, # hard coded from looking at model catch = OM_dat$catch[OM_dat$catch$year == catch_yr, "catch"]*frac_catch, catch_se = 0.05) # hard coded from looking at model catch_bio <- catch # catch in biomass. In this case, catch is in biomass for both. Could also be left as NULL catch_F <- NULL # catch in terms of F, can be left as NULL. discards <- NULL # discards can be left as NULL if there are no discards catch_list <- list(catch = catch, catch_bio = catch_bio, catch_F = catch_F, discards = discards) } The function should be created in a separate file. In this case, assume this function is available in a file custom_funs.R. 
This function can then be used in a call to run_SSMSE: run_result_custom <- run_SSMSE(scen_name_vec = "constant-catch", # name of the scenario out_dir_scen_vec = run_res_path, # directory in which to run the scenario iter_vec = 1, OM_name_vec = "cod", # specify directories instead OM_in_dir_vec = NULL, MS_vec = "constant_catch_MS", # use the custom function custom_MS_source = "custom_funs.R", # File where the custom function is available. nyrs_vec = 6, # Years to project OM forward nyrs_assess_vec = 3, # Years between assessments future_om_list = future_om_list_recdevs_sel, sample_struct_list = list(sample_struct_list[[1]]), # How to sample data for running the MS. seed = 12345) #Set a fixed integer seed that allows replication "],["M-case-study-ex.html", "4 A more complex example, Natural Mortality case study", " 4 A more complex example, Natural Mortality case study This code was used to examine how different natural mortality scenarios performed in two management strategies. # load pkgs set options ---- #devtools::install_github("r4ss/r4ss", ref = "155a521") #devtools::install_github("nmfs-fish-tools/SSMSE@v0.2.5") library(SSMSE) library(r4ss) # functions for convergence and performance metrics, get from other gh repo source("https://raw.githubusercontent.com/k-doering-NOAA/ssmse-afs/master/code/get_metrics.R") # specify locations, create folders ---- cod_mod_path <- system.file("extdata", "models", "cod", package = "SSMSE") datfile_path <- file.path(cod_mod_path, "ss3.dat") fig_path <- "figures" runs_path <- "model_runs" mods_path <- "input_models" dir.create(fig_path) dir.create(runs_path) dir.create(mods_path) # define the scenarios ---- niters <- 100 start_iters <- 1 # the scenarios are: # three levels of M changes in the OM (none, more frequent, less frequent) # 2 different management scenarios # in all scenarios, uncertainty in the selectivity moving forward # metrics: long term avg catch, long term catch variability, long term biomass scen_red_tide <- c("no-red-tide", "low-red-tide", "hi-red-tide") scen_HCR <- c("F-spr-30", "F-spr-45") scenarios <- data.frame( scen_name = c(paste0(scen_red_tide, "-",scen_HCR[1]), paste0(scen_red_tide, "-", scen_HCR[2])), EM_path = rep(c(file.path(mods_path, scen_HCR)), times = c(3,3)) ) # manipulate EM Forecasting ---- # no need to re-run model for the EM, if(start_iters == 1) { # dont need to re run this for each new set of runs for (i in scen_HCR) { tmp_cod_path <- file.path(mods_path, i) file.copy(from = cod_mod_path, to = mods_path, recursive = TRUE) file.rename(from = file.path(mods_path, "cod"), to = tmp_cod_path) fore <- r4ss::SS_readforecast(file.path(tmp_cod_path, "forecast.ss"), verbose = FALSE) forecast_method <- switch(i, "F-msy" = 2, "F-spr-30" = 1, "F-spr-45" = 1) fcast_target <- switch(i, "F-msy" = 0.45, "F-spr-30" = 0.3, "F-spr-45" = 0.45) # manipulate the forecasting file. fore$MSY <- 2 # calculate Fmsy, needed for F-msy scenario fore$SPRtarget <- fcast_target # differs between scenarios fore$Forecast <- forecast_method # differs between scenarios fore$ControlRuleMethod <- 0 # don't use a ramp HCR at all r4ss::SS_writeforecast(fore, tmp_cod_path, verbose = FALSE, overwrite = TRUE) file.remove(file.path(tmp_cod_path, "forecast.ss_new")) # to make sure it is not used. } } # set up the future om deviations ---- # Set this up for the 3 different operating mode scenarios # in all cases, we want to use random fluctuations on selectivity # changing M depends on the scenario. 
# put together the change for selectivity (random values around the orig val, with # an sd of 0.2) template_mod_change <- create_future_om_list() mod_change_sel <- template_mod_change[[1]] mod_change_sel$scen[2] <- "all" mod_change_sel$input$last_yr_orig_val <- 100 mod_change_sel$input$first_yr_final_val <- 101 mod_change_sel$input$ts_param <- "sd" mod_change_sel$input$value <- 0.2 # put together change for M # more stochasticity could be added, but starting with this is still interesting template_custom_change <- create_future_om_list(example_type = "custom") mod_change_M <- template_custom_change[[1]] M_no_scen <- rep(rep(0.2, 50), times = niters) M_low_scen <- rep(rep(c(0.2, 0.2, 0.2, 0.2, 0.3), length.out = 50), times = niters) M_hi_scen <- rep(rep(c(0.2, 0.2, 0.2, 0.2, 0.4), length.out = 50), times = niters) M_custom_dataframe <- data.frame( par = "NatM_p_1_Fem_GP_1", scen = rep(scenarios$scen_name, times = rep(50*niters, 6)), iter = rep(rep(seq(from = start_iters, to = start_iters + niters - 1), times = rep(50, niters)), times = 6), yr = rep(101:150, times = 6*niters), value = c(M_no_scen, M_low_scen, M_hi_scen, M_no_scen, M_low_scen, M_hi_scen)) mod_change_M$pars <- "NatM_p_1_Fem_GP_1" mod_change_M$scen <- c("replicate", "all") mod_change_M$input <- M_custom_dataframe # add recruitment deviations rec_dev_specify <- template_mod_change[[1]] rec_dev_specify$pars <- "rec_devs" rec_dev_specify$scen <- c("replicate", "all") rec_dev_specify$input$first_yr_averaging <- 1 # use same sd as from the orig model. rec_dev_specify$input$last_yr_averaging <- 100 rec_dev_specify$input$last_yr_orig_val <- 100 rec_dev_specify$input$first_yr_final_val <- 101 rec_dev_specify$input$ts_param <- "sd" rec_dev_specify$input$value <- NA # put together a complete list future_om_list <- list(mod_change_M, mod_change_sel, rec_dev_specify) # get the sampling scheme ---- # use the historical sampling scheme, so don't need to create one # for sampling scheme in the projections, use the historical sampling scheme to # the extent possible; if no pattern found, then create a manual one. 
sample_struct <- SSMSE::create_sample_struct(dat = datfile_path, nyrs = 50) sample_struct$meanbodywt <- NULL sample_struct$MeanSize_at_Age_obs <- NULL # modify, because there were NAs sample_struct$lencomp <- data.frame(Yr = seq(105, 150, by = 5), Seas = sample_struct$lencomp$Seas, FltSvy = sample_struct$lencomp$FltSvy, Sex = sample_struct$lencomp$Sex, Part = sample_struct$lencomp$Part, Nsamp = sample_struct$lencomp$Nsamp) sample_struct_list <- list(sample_struct, sample_struct, sample_struct, sample_struct, sample_struct, sample_struct ) # call SSMSE ---- out <- SSMSE::run_SSMSE(out_dir_scen_vec = rep("model_runs", 6), scen_name_vec = scenarios$scen_name, iter_vec = rep(niters, 6), OM_name_vec = rep("cod", 6), OM_in_dir_vec = NULL, EM_in_dir_vec = scenarios$EM_path, run_EM_last_yr = FALSE, MS_vec = "EM", use_SS_boot_vec = TRUE, nyrs_vec = rep(50, 6), nyrs_assess_vec = rep(5, 6), sample_struct_list = sample_struct_list, future_om_list = future_om_list, verbose = FALSE, seed = 456, # changing each time a chunk of runs is done will help ensure there is stochasticity run_parallel = TRUE, n_cores = 6 ) saveRDS(out, file = file.path("model_runs", "run_SSMSE_out.rda")) # # # look at results ---- summary <- SSMSE::SSMSE_summary_all(dir = "model_runs", run_parallel = TRUE) # #check for errored iterations lapply(out, function(x) x$errored_iterations) "],["SSMSE.html", "5 Options for run_SSMSE 5.1 Scenarios in SSMSE 5.2 Operating model 5.3 The Management Strategy and (if applicable) Estimation model (EM) 5.4 Sampling options 5.5 Future changes to the operating model", " 5 Options for run_SSMSE Many inputs are possible for the run_SSMSE() function. Here, we will describe some of the options available. For detailed documentation, type ?SSMSE::run_SSMSE() into the R console. 5.1 Scenarios in SSMSE Note that multiple scenarios can be called in run_SSMSE(), often through vector inputs to run_SSMSE(). Below, we will describe inputs needed to run 1 scenario. 5.2 Operating model The operating model (OM) for SSMSE should be a Stock Synthesis model. This could be any fitted Stock Synthesis model, except for models that use empirical weight at age. There are 3 built-in models that come with SSMSE: cod, growth_timevary, and SR_env_block. To use a built-in model, specify the name of the model as part of the OM_name_vec. For example, to use the cod model in run_SSMSE for 1 scenario, set OM_name_vec = \"cod\". Otherwise, the path to the OM model should be specified in OM_in_dir_vec. 5.3 The Management Strategy and (if applicable) Estimation model (EM) The management strategy (and EM) can be specified in one of two ways: Using an SS3 model and its forecast file to project catch by fleet Using a custom management strategy via a function in R In theory, any management strategy should work, as long as it can take the data file produced by the OM as input and provide future catches by fleet back to SSMSE. 5.3.1 Specify the Management Strategy in an SS3 model An SS3 model can be set up as the EM for the MSE. To use this option, specify \"EM\" as part of MS_vec. As with the OM, the built-in cod model could be used; just specify \"cod\" in the EM_name_vec. To use any other SS3 model as the EM, specify the path in EM_in_dir_vec. Note that models that use empirical weight at age cannot yet be used as estimation models. Also, the .ss_new input files are used for an OM, while the original input files are used for the EM. Future catches will be determined from the forecasting file settings of the SS3 model. 
SSMSE will change the number of forecast years to match the number of years between assessments, but other specifications need to be made by the user. 5.3.2 Using a custom management strategy/procedure Users can outline a custom management strategy as an R function to use. As long as the correct inputs and outputs are used, any estimation method and management procedure can be used. For example, here is a simple function that just sets future catches as half the sampled catches in a specified year: constant_catch_MS <- function(OM_dat, nyrs_assess, catch_yr = 100, frac_catch = 0.5, ...) { # need to include ... to allow function to work # set catch as a fraction (frac_catch) of the sampled catch in a specified year (catch_yr). # catch is in the same units as the operating model, in this case it is in # biomass. catch <- data.frame( year = (OM_dat$endyr + 1):(OM_dat$endyr + nyrs_assess), # the years to project the model forward seas = 1, # hard coded from looking at model fleet = 1, # hard coded from looking at model catch = OM_dat$catch[OM_dat$catch$year == catch_yr, "catch"]*frac_catch, catch_se = 0.05) # hard coded from looking at model catch_bio <- catch # catch in biomass. In this case, catch is in biomass for both. Could also be left as NULL catch_F <- NULL # catch in terms of F, can be left as NULL. discards <- NULL # discards can be left as NULL if there are no discards catch_list <- list(catch = catch, catch_bio = catch_bio, catch_F = catch_F, discards = discards) } Let’s assume this function is saved in a file within the working directory named constant_catch_MS.R. This function can then be used in a call to run_SSMSE(): # define sample structure datfile <- system.file("extdata", "models", "cod", "ss3.dat", package = "SSMSE") sample_struct <- create_sample_struct(dat = datfile, nyrs = 6) # note warning sample_struct$lencomp <- NULL # don't use length sampling # run the SSMSE routine run_result_custom <- run_SSMSE( scen_name_vec = "constant-catch", out_dir_scen_vec = "my_results", iter_vec = 1, OM_name_vec = "cod", OM_in_dir_vec = NULL, MS_vec = "constant_catch_MS", # use custom fun custom_MS_source = "constant_catch_MS.R", use_SS_boot_vec = TRUE, nyrs_vec = 6, nyrs_assess_vec = 3, run_EM_last_yr = FALSE, run_parallel = FALSE, sample_struct_list = list(sample_struct), seed = 12345) 5.4 Sampling options Currently, the only available sampling option is to use the bootstrapping module within SS3 itself. This means specifying use_SS_boot_vec = TRUE. Details on how sampling is done using the bootstrapping module in SS3 are available in the “Bootstrap Data Files” section of the SS3 user manual. Users also need to specify how and which data types should be sampled for each future year in the simulation in sample_struct_list. sample_struct_list is a list of lists. The first level is a list for each scenario; then, for the scenario, there is a list of dataframes, each of which specifies which years and fleets of data should be sampled in the future as well as which standard errors or sample sizes should be used. The helper function create_sample_struct() can be used to help users generate the list of dataframes for a scenario. See an example of this function’s use in the simple example or by typing ?SSMSE::create_sample_struct() into the R console. Currently, users can sample data for catches (treated as fixed if sample_catch = FALSE for the scenario), CPUE (i.e., indices of abundance), length and age composition, conditional length at age compositions, mean size at age, and mean size. 
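To make the "list of dataframes" structure concrete, here is a minimal, hand-built sketch of a sampling structure for a single hypothetical scenario, using the same column names as the create_sample_struct() output printed in the simple example (Yr, Seas, FltSvy, SE, Nsamp); the specific years, fleets, and sample sizes are made up for illustration only.
# catch sampled every projection year from fleet 1, plus one survey index and one
# age composition observation from fleet 2 (all values are illustrative only)
manual_sample_struct <- list(
  catch = data.frame(Yr = 101:106, Seas = 1, FltSvy = 1, SE = 0.005),
  CPUE = data.frame(Yr = 105, Seas = 1, FltSvy = 2, SE = 0.2),
  agecomp = data.frame(Yr = 105, Seas = 1, FltSvy = 2, Sex = 0, Part = 0,
                       Ageerr = 1, Lbin_lo = -1, Lbin_hi = -1, Nsamp = 500)
)
# one named entry per scenario in the list passed to run_SSMSE()
manual_sample_struct_list <- list("my-scenario" = manual_sample_struct)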
It is not yet possible to sample data for generalized size compositions, tagging data, and morph compositions. Users can specify the sampling during the historical period of the model through the sample_struct_hist input to run_SSMSE. This is an optional input and has the same structure as sample_struct_list, but the years will be before the original end year of the OM. 5.5 Future changes to the operating model By default, SSMSE will simply extend the structure of the OM into the future using the same parameter values as in the last year of the model. However, the user may want to allow changes in parameter values to occur as the OM is extended forward in time. Users can input values into the future_om_list to accomplish this. This input is a list of lists. For the first level of lists, each represents a separate change to make. Within the first level, there are 4 list components that outline the details of the change to make. There are 2 main choices: 1) to specify model changes by telling SSMSE how values should be sampled or 2) to input your own custom values for future values of a given parameter. 5.5.1 The structure of future_om_list For example, here is a list containing just one future change for the model. It shows that the SizeSel_P_3_Fishery(1) parameter should be changed to 4.5 in year 103 and afterwards: my_future_om_list <- create_future_om_list(list_length = 1) my_future_om_list [[1]] [[1]]$pars [1] "SizeSel_P_3_Fishery(1)" [[1]]$scen [1] "replicate" "scen2" [[1]]$pattern [1] "model_change" [[1]]$input first_yr_averaging last_yr_averaging last_yr_orig_val first_yr_final_val 1 NA NA 102 103 ts_param method value 1 mean absolute 4.5 length(my_future_om_list) # note has length 1 b/c 1 change [1] 1 length(my_future_om_list[[1]]) # has length 4 because there are 4 list components for each change [1] 4 Note there is just one change specified here. For the change, there are four list items that are required for any specified change. The first list item is named “pars”. It contains a vector of parameter name(s) to apply the change to. The names should be the same as the names in r4ss::SS_read_pars() or can include the following: \"rec_devs\" - For specifying recruitment deviations \"impl_error\" - For specifying implementation error in transforming a management procedure’s specified future catch into realized catch. \"all\" - Apply the change to all parameters in the model, with the exception of “SR_sigmaR”, “SR_regime”, “SR_autocorr”, and “impl_error” parameters. Changing the first 3 stock recruitment related parameters would conflict with changes in recruitment deviations. Since implementation error is not a parameter specified in the control file and SSMSE does not rely on the implementation error parameter created through the forecasting file, including the implementation error in “all” did not seem appropriate. In this case, we just want to apply the change to the SizeSel_P_3_Fishery(1) parameter of the model. The second item is “scen”, which contains a vector of information about how to apply the changes within and across scenarios. The first value is an option to specify how the change should be applied among scenarios, either “randomize” (use a different value for each iteration of each scenario) or “replicate” (use the same set of values across scenarios for the same iteration number, but each value will be different across iterations within the same scenario). 
Note that the same values will be applied across iterations and scenarios if there isn’t any stochasticity in the values being drawn (e.g., standard deviation set at 0), regardless of whether “randomize” or “replicate” is chosen. In the example above, there is no stochasticity, so specifying randomize or replicate does not matter. Following the first value are the names of the scenarios to apply the change to. If the change should be applied to all of the scenarios, “all” can be used in place of naming every scenario. In this example, the change will only be applied to the scenario named “scen2”, so the input for “scen” is c(\"replicate\", \"scen2\"). The third item, “pattern”, is a vector of character inputs. The first value should be either “model_change”, meaning that SSMSE will calculate the change values, or “custom”, which allows the user to directly put in the values that they want in the model. In this case, “model_change” is used. A second input can also be added when the “model_change” pattern is used, which specifies the distribution to pull from. This could be “normal” or “lognormal”, but if no input is provided, a normal distribution will be used for sampling (a small sketch of requesting a lognormal distribution is shown after the column descriptions below). The fourth item, named “input”, is a dataframe, which contains different column name values depending on whether the “model_change” pattern is used or the “custom” pattern is used. For the model_change option, the input dataframe specifies a change to a parameter of the sampling distribution. Because we are using the model change option, we have the following columns: first_yr_averaging: The first year to average historical values from the model, if using for the change. This should be NA if historical averaging will not be used. last_yr_averaging: The last year to average historical values from the model, if using for the change. This should be NA if historical averaging will not be used. last_yr_orig_val: The last year of the future deviations with the original model value. This value will not be changed, but the following year’s value will be. first_yr_final_val: The first year where the final value as specified will be reached. If no future changes are made, the final value will continue forward in the OM through all projected years. For step changes, the first_yr_final_val is just 1 year after the last_yr_orig_val. However, if a more gradual adjustment toward the final value is desired, this could be any number of years after the original value. ts_param: The sampling parameter to modify. Options are “mean”, “ar_1_phi” (if using autocorrelation), “sd”, and “cv”. If not included in the dataframe, “mean” is assumed to be the same as in the previous model year, sd is assumed to be 0, and we assume no autocorrelation (as specified through an autoregressive AR1 process). Phi should be between -1 and 1 (this creates a stationary process) and if phi = 1, this creates a random walk (as random walk is a special case of an AR1 model). Note that both sd and cv cannot be specified within the same input dataframe. method: How to apply the change relative to the historical (if specified) or previous year’s value (if first_yr_averaging and last_yr_averaging for that row are NA). Options are “multiplicative”, “additive”, and “absolute”. This example uses “absolute”, meaning that the number in “value” is directly used. value: The value of the parameter change. This may be NA if historical averaging is used. 
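As a small illustration of the optional second element of “pattern” described above, the sketch below modifies my_future_om_list (from the example earlier in this section) so that future values are drawn from a lognormal distribution; the added sd row and its value of 0.1 are arbitrary and included only so that there is some spread to sample.
# request lognormal sampling for the model_change pattern (sketch only)
lognormal_change <- my_future_om_list[[1]]
lognormal_change$pattern <- c("model_change", "lognormal")
# add a row giving the sd of the distribution (0.1 is an arbitrary illustrative value)
lognormal_change$input <- rbind(
  lognormal_change$input,
  data.frame(first_yr_averaging = NA, last_yr_averaging = NA,
             last_yr_orig_val = 102, first_yr_final_val = 103,
             ts_param = "sd", method = "absolute", value = 0.1)
)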
5.5.2 Example of future_om_list denoting a gradual change Suppose we wanted to change the value of “SizeSel_P_3_Fishery(1)” in scen2 over 4 years after year 102 to arrive at a value of 4.5 in year 106. We can do this by using my_future_om_list, but changing the value in the first row of first_yr_final_val to 106: my_future_om_list[[1]][["input"]][1, "first_yr_final_val"] <- 106 my_future_om_list[[1]] $pars [1] "SizeSel_P_3_Fishery(1)" $scen [1] "replicate" "scen2" $pattern [1] "model_change" $input first_yr_averaging last_yr_averaging last_yr_orig_val first_yr_final_val 1 NA NA 102 106 ts_param method value 1 mean absolute 4.5 5.5.3 Example of future_om_list with random deviations Suppose we now wanted the value of “SizeSel_P_3_Fishery(1)” to change randomly according to a normal distribution with a standard deviation of 0.1 around a mean of 4.5 from 104 onwards. This can be done by adding a line specifying a change in standard deviation (which for now, has been assumed to be 0) to the data frame: new_vals <- data.frame(first_yr_averaging = NA, last_yr_averaging = NA, last_yr_orig_val = 103, first_yr_final_val = 104, ts_param = "sd", method = "absolute", value = 0.1) my_future_om_list[[1]][["input"]] <- rbind(my_future_om_list[[1]][["input"]], new_vals) my_future_om_list[[1]] $pars [1] "SizeSel_P_3_Fishery(1)" $scen [1] "replicate" "scen2" $pattern [1] "model_change" $input first_yr_averaging last_yr_averaging last_yr_orig_val first_yr_final_val 1 NA NA 102 106 2 NA NA 103 104 ts_param method value 1 mean absolute 4.5 2 sd absolute 0.1 Note that the last_yr_orig_val and first_yr_final_val are different than the line for the mean, which is allowed. 5.5.4 Example of using historical values for determining parameter values This example applies random annual deviations to all parameters for scenarios scen2 and scen3. future_om_list_2 <- vector(mode = "list", length = 1) future_om_list_2 <- lapply(future_om_list_2, function (x) x <- vector(mode = "list", length = 4)) names(future_om_list_2[[1]]) <- c("pars", "scen", "pattern", "input") future_om_list_2[[1]][["pars"]] <- "all" future_om_list_2[[1]][["scen"]] <- c("randomize", "scen2", "scen3") future_om_list_2[[1]][["pattern"]] <- "model_change" # defaults to using normal dist future_om_list_2[[1]][["input"]] <- data.frame(first_yr_averaging = c(1, 1), last_yr_averaging = c(100, 100), last_yr_orig_val = c(100, 100), first_yr_final_val = c(101, 101), ts_param = c("cv", "mean"), method = c("absolute", "multiplier"), value = c(0.1, 1)) Note that the choice of year range for historical values does not matter unless the parameter is already time-varying in the original operating model or has become time varying through a previous change. Otherwise, the base model parameter will be applied. If no historical years are included, then the base parameter value will be the basis of comparison for relative changes (i.e., method = multiplier or additive). 
5.5.5 Example using “custom” pattern instead of “model_change” custom_future_om_list <- create_future_om_list(example_type = "custom", list_length = 1) custom_future_om_list [[1]] [[1]]$pars [1] "impl_error" [[1]]$scen [1] "randomize" "all" [[1]]$pattern [1] "custom" [[1]]$input par scen iter yr value 1 impl_error scen1 1 101 1.05 2 impl_error scen1 2 101 1.05 3 impl_error scen1 3 101 1.05 4 impl_error scen1 4 101 1.05 5 impl_error scen1 5 101 1.05 6 impl_error scen1 1 102 1.05 7 impl_error scen1 2 102 1.05 8 impl_error scen1 3 102 1.05 9 impl_error scen1 4 102 1.05 10 impl_error scen1 5 102 1.05 11 impl_error scen1 1 103 1.05 12 impl_error scen1 2 103 1.05 13 impl_error scen1 3 103 1.05 14 impl_error scen1 4 103 1.05 15 impl_error scen1 5 103 1.05 16 impl_error scen1 1 104 1.05 17 impl_error scen1 2 104 1.05 18 impl_error scen1 3 104 1.05 19 impl_error scen1 4 104 1.05 20 impl_error scen1 5 104 1.05 21 impl_error scen1 1 105 1.05 22 impl_error scen1 2 105 1.05 23 impl_error scen1 3 105 1.05 24 impl_error scen1 4 105 1.05 25 impl_error scen1 5 105 1.05 26 impl_error scen1 1 106 1.05 27 impl_error scen1 2 106 1.05 28 impl_error scen1 3 106 1.05 29 impl_error scen1 4 106 1.05 30 impl_error scen1 5 106 1.05 31 impl_error scen2 1 101 1.10 32 impl_error scen2 2 101 1.10 33 impl_error scen2 3 101 1.10 34 impl_error scen2 4 101 1.10 35 impl_error scen2 5 101 1.10 36 impl_error scen2 1 102 1.10 37 impl_error scen2 2 102 1.10 38 impl_error scen2 3 102 1.10 39 impl_error scen2 4 102 1.10 40 impl_error scen2 5 102 1.10 41 impl_error scen2 1 103 1.10 42 impl_error scen2 2 103 1.10 43 impl_error scen2 3 103 1.10 44 impl_error scen2 4 103 1.10 45 impl_error scen2 5 103 1.10 46 impl_error scen2 1 104 1.10 47 impl_error scen2 2 104 1.10 48 impl_error scen2 3 104 1.10 49 impl_error scen2 4 104 1.10 50 impl_error scen2 5 104 1.10 51 impl_error scen2 1 105 1.10 52 impl_error scen2 2 105 1.10 53 impl_error scen2 3 105 1.10 54 impl_error scen2 4 105 1.10 55 impl_error scen2 5 105 1.10 56 impl_error scen2 1 106 1.10 57 impl_error scen2 2 106 1.10 58 impl_error scen2 3 106 1.10 59 impl_error scen2 4 106 1.10 60 impl_error scen2 5 106 1.10 61 impl_error scen3 1 101 1.10 62 impl_error scen3 2 101 1.10 63 impl_error scen3 3 101 1.10 64 impl_error scen3 4 101 1.10 65 impl_error scen3 5 101 1.10 66 impl_error scen3 1 102 1.10 67 impl_error scen3 2 102 1.10 68 impl_error scen3 3 102 1.10 69 impl_error scen3 4 102 1.10 70 impl_error scen3 5 102 1.10 71 impl_error scen3 1 103 1.10 72 impl_error scen3 2 103 1.10 73 impl_error scen3 3 103 1.10 74 impl_error scen3 4 103 1.10 75 impl_error scen3 5 103 1.10 76 impl_error scen3 1 104 1.10 77 impl_error scen3 2 104 1.10 78 impl_error scen3 3 104 1.10 79 impl_error scen3 4 104 1.10 80 impl_error scen3 5 104 1.10 81 impl_error scen3 1 105 1.10 82 impl_error scen3 2 105 1.10 83 impl_error scen3 3 105 1.10 84 impl_error scen3 4 105 1.10 85 impl_error scen3 5 105 1.10 86 impl_error scen3 1 106 1.10 87 impl_error scen3 2 106 1.10 88 impl_error scen3 3 106 1.10 89 impl_error scen3 4 106 1.10 90 impl_error scen3 5 106 1.10 The inputs required for pattern = custom are similar to using pattern = model_change, but there is no need to specify a distribution as the second vector input of “pattern” and the column names in input are different. 
Note also that the change uses “randomize”, which indicates that a value for each scenario and each iteration is necessary, whereas if the first value of “scen” was “replicate”, separate values for each scenario should not be specified, but rather a single value for “all” should be used. The columns in the dataframe are: par: the parameter name or “all” if it should be applied to all parameters in the “pars” input scen: which scenario the value should apply to, or “all” if “replicate” is used as the scenario input iter: which number iteration the value should apply to. These are absolute iteration numbers. yr: the year the value applies to. These should be after the original end year of the OM. value: the value to apply to the model 5.5.6 How is the operating model modified to accommodate changes as specified in the future_om_list? All changes are made by converting the parameter(s) with changes to be time varying by using additive annual parameter deviations. Because this is an operating model, Stock Synthesis is not run with estimation, so to get the changes into the OM, values (drawn or calculated for “model_change” or specified by the user using “custom”) are directly input as parameter deviations into the ss.par file. If an OM already contains time varying parameters, these parameters will also be converted to additive parameter deviations before applying the changes specified in the future_om_list. 5.5.7 Example of specifying recruitment deviations This shows some code for putting recruitment deviations with a mean of 0 and the same standard deviation as the historical recruitment deviations in years 1 to 100: template_mod_change <- create_future_om_list(list_length = 1) rec_dev_specify <- template_mod_change[[1]] rec_dev_specify$pars <- "rec_devs" rec_dev_specify$scen <- c("replicate", "all") rec_dev_specify$input$first_yr_averaging <- 1 rec_dev_specify$input$last_yr_averaging <- 100 rec_dev_specify$input$last_yr_orig_val <- 100 rec_dev_specify$input$first_yr_final_val <- 101 rec_dev_specify$input$ts_param <- "sd" rec_dev_specify$input$value <- NA rec_dev_list <- list(rec_dev_specify) rec_dev_list [[1]] [[1]]$pars [1] "rec_devs" [[1]]$scen [1] "replicate" "all" [[1]]$pattern [1] "model_change" [[1]]$input first_yr_averaging last_yr_averaging last_yr_orig_val first_yr_final_val 1 1 100 100 101 ts_param method value 1 sd absolute NA "],["helper.html", "6 Helper functions in SSMSE 6.1 Creating multiple operating models with develop_OMs() 6.2 Set up sampling for a scenario with create_sample_struct() 6.3 Get examples of the future_om_list input with create_future_om_list()", " 6 Helper functions in SSMSE run_SSMSE() requires some detailed inputs that are time consuming to make. In the interest of automating routines as much as possible, helper functions are available in SSMSE. 6.1 Creating multiple operating models with develop_OMs() This function allows users to change a parameter in a Stock Synthesis OM and, if desired, refit the operating model to data. Typically, users will want to create multiple operating models to use in different scenarios in order to account for uncertainties in parameter values, such as biological parameters relating to natural mortality or growth. 6.2 Set up sampling for a scenario with create_sample_struct() run_SSMSE() requires the input sample_struct, which outlines the future sampling structure to use to generate data sets. This object could be difficult to create manually and users may want to just continue sampling as previously done in an existing SS3 model. 
create_sample_struct() will use the time patterns in sampling for different data types found in an SS3 data file and extend it forward in time. If no pattern is found, the function will return NAs and provide a warning to the user. If any NAs are returned (except to indicate that an entire data type is missing), the user will need to remove or fill in the NA value before using the list as an input to run_SSMSE(). See use of create_sample_struct() in simple example. 6.3 Get examples of the future_om_list input with create_future_om_list() Users can modify the future structure of their operating model through the future_om_list input to run_SSMSE. The structure and names of this object matter, so examples are provided by calling create_future_om_list(). See more details about the specification of future_om_list objects. "],["output.html", "7 Output and Plots 7.1 Summarizing output 7.2 Checking estimation model convergence 7.3 Calculating performance metrics", " 7 Output and Plots 7.1 Summarizing output Output is summarized using SSMSE_summary_all(): summary_list <- SSMSE_summary_all(dir = "path/to/scenario/dir", scenarios = c("sample_low", "sample_high"), run_parallel = TRUE) Relying on ss3sim::get_results_all(), this function creates: For each scenario, 3 scenario-level .csv files For all scenarios, 2 cross-scenario .csv files named by default SSMSE_ts and SSMSE_scalar. For all scenarios, the function returns a list object containing data frames of timeseries (ts), scalar, and derived quantities (dq) summaries. Note that run_parallel = TRUE is only faster than run_parallel = FALSE when there is more than one scenario and none of the scenario-level .csv files have been created yet. By default, if a user doesn’t specify scenarios, all scenarios in dir will be summarized. 7.2 Checking estimation model convergence One of the first checks to do if using an estimation model after running an MSE analysis is to check that the estimation model has converged. A number of checks could be done, but a basic one is checking the gradients for the estimation model, which are added to the SSMSE_scalar summary sheet. 7.3 Calculating performance metrics Typically, a suite of performance metrics are used in MSE. Punt et al. (2016) recommend at least using metrics related to: average catch variation in catch over time population size "],["plotting-output-and-performance-metrics.html", "8 Plotting output and performance metrics", " 8 Plotting output and performance metrics Currently, it has been left up to the user to plot summaries and performance metrics, as the potential options for plots and performance metrics users may desire are extensive. There are currently diagnostic plots to examine how sampled data compare to the operating model values and data sets used to condition the operating models. plot_index_sampling() creates a diagnostic plot for indices, while plot_comp_sampling() creates diagnostic plots for composition data. "],["glossary-of-mse-terms.html", "9 Glossary of MSE terms", " 9 Glossary of MSE terms This glossary is based on Rademeyer et al. (2007) and Punt et al. (2016). Assessment error - Error that occurs during the process of conducting an assessment, specifically error “which inform the catch control rule that is being evaluated using the MSE: management advice for any system is based on uncertain data. Consequently, the data that inform catch control rules need to be generated in a manner which is as realistic as possible.
Uncertainty arises when the model used for conducting assessments and providing management advice differs from the operating model, or the data are too noisy to estimate all key parameters reliably (Punt et al. 2016).” Assessment model (AM) - A fitted and well scrutinized Stock Synthesis stock assessment model. The AM is input for SSMSE so that it can build the operating model. Bootstrapped dataset - After running an SS3 model (and if the user turns on the option in the starter file), the output file data.ss_new contains a “dataset” that has the same form as the input data.ss file, but instead of the input values, contains a sampled dataset. This is derived from using the expected values given the parameter estimates and model structure (see glossary entry on Expected values), uncertainty either estimated or input in the model, and an assumed distribution for sampling. As many bootstrapped data sets as the user desires can be output from an SS3 model. These “bootstrap datasets” are the third or greater set of values in the data.ss_new file. Estimation model (EM) - This refers to the model used within the MSE procedure to represent the stock assessment process. Note that an estimation model proxy could also be used to represent the stock assessment process. Expected values - After running an SS3 model (and if the user turns on the option in the starter file), the output file data.ss_new contains a “dataset” that has the same form as the input data.ss file, but instead of the input values, contains the expected values given the parameter estimates and model structure. This “expected values dataset” is the second set of values in the data.ss_new file. Implementation error - Also called implementation uncertainty or outcome uncertainty. Broadly, this includes error of implementing the management action(s), which is often not done perfectly. Definition from Punt et al. (2016): “The most obvious form of this type of uncertainty is when catches are not the same as the TACs – typically more is taken or the decision-makers do not implement the TACs suggested by the management strategy. However, there are many other sources of outcome uncertainty, such as that associated with catch limits set for recreational fisheries and regulating discards.” Index - An index of abundance. In SS3 files, this is sometimes used interchangably with CPUE. Plural, indices. Management strategy - Following Punt et al. (2016), there are 2 types: model-based management strategies and empirical management strategies. Ideally, SSMSE will allow for users to use either of these management strategies. Model-based management strategies include conducting a stock assessment and using output for determining harvest control rules. Empirical management strategies do not involve conducting a stock assessment but rather setting regulations from data (although summarization is possible). In some cases, management strategies are computationally intensive/time consuming to formally include in the context of an MSE; to speed up the process, it is commen to use management strategy proxies, typically an assumed error distribution about the operating model values. Management strategies are also known as management procedures. Model uncertainty - Definition from Punt et al. (2016): “the form of relationships within an operating model will always be subject to uncertainty. 
The simplest type of model uncertainty involves, for example, whether the stock–recruitment relationship is Beverton–Holt or Ricker, whether a fixed value for a model parameter is correct, or whether fishery selectivity is asymptotic or dome-shaped. However, there are other more complicated types of model structure uncertainty such as how many stocks are present in the area modelled, the error structure of the data used for assessment purposes, the impact of future climate change on biological relationships such as the stock–recruitment function, and ecosystem impacts on biological and fishery processes.” Observation error - Error that results from not observing the true dynamics of the system. See also bootstrapped dataset. Operating model (OM) - This is an SS3 model that defines the assumed true dynamics of the population and its associated fisheries for the purposes of management strategy evaluation. Most often more than one operating model is necessary in order to adequately characterize the uncertainty in the true dynamics of the system. The SS3 OM may also define how sampling from the true dynamics is done, as SS produces expected values and, if desired, bootstrapped data sets based on sampling assumptions implicit in the model. For more information, see the glossary entries on Expected values and bootstrapped dataset Parameter uncertainty - Definition from Punt et al. (2016): “many operating models are fit to the data available, but the values estimated for the parameters of those operating models (e.g. fishery selectivity-at-age, the parameters of the stock–recruitment relationship and historical deviations in recruitment about the stock–recruitment relationship) are subject to error.” Performance statistics - Definition from Rademeyer et al. (2007): “Statistics that summarize different aspects of the results of a simulation trial used to evaluate how well a specific [management strategy] achieves some or all of the general objectives for management for a particular scenario.” Performance statistics usually fall into one of three categories: catch-related, stability related, or risk related. Process uncertainty - Definition from Punt et al. (2016): “variation (usually assumed to be random, though sometimes incorporating autocorrelation) in parameters often considered fixed in stock assessments such as natural mortality, future recruitment about a stock–recruitment relationship and selectivity.” Stock Synthesis (SS3) - A integrative, general population dynamics modeling program used to assess the effects of fishing on population. Available at the Stock Synthesis website Uncertainty - Incorporating uncertainty into an MSE procedure is extremely important. There are several potential sources of uncertainty, which we have divided as done in Punt et al. (2016) in addition to adding observation error. For more details, see the glossary entries on Process Uncertainty, Parameter uncertainty, Model uncertainty, Assessment error, Implementation error, and observation error. Punt et al. (2016) suggest that MSE should at least consider process uncertainty (particularly deviations from the stock recruitment relationship), parameter uncertainty (particularly which relates to productivity and stock size), and observation error. Note that Rademeyer et al. (2007) divides error into estimation error, implementation error, observation error and process error, which could be used instead. These may be more natural divisions in sources of error. 
"],["404.html", "Page not found", " Page not found The page you requested cannot be found (perhaps it was moved or renamed). You may want to try searching to find the page's new location, or use the table of contents to find the page you are looking for. "]]
+[["index.html", "SSMSE user manual Preface R session information", " SSMSE user manual Kathryn Doering and Nathan Vaughan 2023-08-10 Preface Logo for the SSMSE package This is the user manual for SSMSE, an R package for Management Strategy Evaluation with Stock Synthesis Operating models. This document is still a work in progress! If you would like to suggest changes, the R markdown files used to generate the user manual are available in a bookdown folder within the SSMSE repository. Please feel free to open an issue or submit a pull request to suggest changes. R session information sessionInfo() R version 4.3.1 (2023-06-16) Platform: x86_64-pc-linux-gnu (64-bit) Running under: Ubuntu 22.04.3 LTS Matrix products: default locale: [1] LC_CTYPE=C.UTF-8 LC_NUMERIC=C LC_TIME=C.UTF-8 [4] LC_COLLATE=C.UTF-8 LC_MONETARY=C.UTF-8 LC_MESSAGES=C.UTF-8 [7] LC_PAPER=C.UTF-8 LC_NAME=C LC_ADDRESS=C [10] LC_TELEPHONE=C LC_MEASUREMENT=C.UTF-8 LC_IDENTIFICATION=C time zone: UTC tzcode source: system (glibc) attached base packages: [1] stats graphics grDevices utils datasets methods base loaded via a namespace (and not attached): [1] ss3sim_1.20.0 r4ss_1.48.1 "],["intro.html", "1 Introduction 1.1 Purpose 1.2 Functions in SSMSE 1.3 Brief description of the SSMSE MSE simulation procedure using run_SSMSE()", " 1 Introduction 1.1 Purpose SSMSE allows users to directly use Stock Synthesis (SS3) as an operating model (OM) in a Management Strategy Evaluation (MSE) through R functions. The approach requires a conditioned Stock Synthesis model, which is treated as the OM. ## Decide if SSMSE is the right tool Another simulation tool available is ss3sim. {ss3sim} helps users conduct simulation studies with SS3 operating models and estimation models. {ss3sim} is different than {SSMSE} beacuse: In {SSMSE}, there is feedback from the estimation method to the operating model and the operating model is projected forward. In {ss3sim}, there is no feedback and the operating model is not projected forward in time. {ss3sim} has more helper functions which modify the operating model and estimation model, but ss3sim is less flexible in the initial Operating models and estimation methods/models that can be used compared to {SSMSE}. {SSMSE} can use custom estimation methods; {ss3sim} must use a Stock Synthesis Estimation Model. Please post this question regarding the suitability of {SSMSE} for your study in SSMSE discussions q+a. 1.2 Functions in SSMSE The main functions users can call in SSMSE are: Function Description run_SSMSE() Run the MSE simulations SSMSE_summary_all() Summarize MSE output The helper functions to create inputs to run_SSMSE are: Helper Function Description create_sample_struct() Helper function to create a list for future sampling from a model to use as input in run_SSMSE() create_future_om_list() Helper function that provides examples of the structure for the future_om_list input to run_SSMSE(). develop_OMs() Helper function to turn one OM into many Exported functions that can be used for writing custom management strategies are: Function Description run_EM() Run an SS3 estimation model (uses run_ss_model) run_ss_model() Run an SS3 model get_bin() Get location of the SS3 binary. parse_MS() Function that runs the management strategy and returns catch by fleet for the projections. A reference function for those setting up custom management strategies. 
Finally, some plotting functions are available: Plotting Function Description plot_index_sampling() Plot to compare the sampled index values to the operating model expected values and original operating model conditioning index data. plot_comp_sampling() Plot to compare the sampled composition values to the operating model expected values and original operating model conditioning composition data. 1.3 Brief description of the SSMSE MSE simulation procedure using run_SSMSE() In general, the steps for a scenario after calling run_SSMSE are: Create the OM Sample data from the OM Run management strategy Determine if more years need to be projected. If yes, continue; if no, end simulation. Update OM with n years of catch where n is the number of years between assessments Sample n years of data Repeat steps 3-6 until the end of the simulation. The basic steps are contained in functions: User calls run_SSMSE Steps for one iteration are passed to run_SSMSE_scen() and then run_SSMSE_iter(). The OM is created with create_OM() run_OM() runs the OM and creates the dataset to pass to the management strategy Parse_MS() is called, and runs the Management Strategy function (e.g., EM()), which projects catch forward. Update_OM() is called, which puts catch into the operating model. run_OM() runs the OM and creates the dataset to pass to the management strategy Parse_MS() is called, and runs the Management Strategy function (e.g., EM()), which projects catch forward. Determine if more years need to be projected. If yes, continue; if no, end simulation. Repeat steps 5-8. 1.3.1 Conditioning the OM and sampling from the OM For each scenario, SSMSE starts with the user providing a fitted SS3 model (or selecting a model from the SSMSE package) to use as an OM. For each iteration of the scenario, SSMSE turns the SS fitted model into an OM and runs it once with no estimation with Stock Synthesis in order to get the “true” values and a bootstrapped data set from SS3. Note that any modifications to the OM’s parameters specified by the user as it is extended forward in time are also applied. 1.3.2 First run of the management strategy in the MSE simulation The bootstrapped dataset is then used in a Management strategy to project catch by fleet to use for the next \\(n\\) years, where \\(n\\) is the number of years between assessments. 1.3.3 Feedback from management strategy into OM: extending model years The catch for the next \\(n\\) years before the next assessment is then added to the OM, as well as any recruitment or time varying parameter deviations. The OM is again run with no estimation where it can be used to produce sampled data for the next \\(n\\) years. These new data values are appended to the original dataset. 1.3.4 Subsequent runs of the management strategy The appended data set is then used in the management strategy again, and new catch by fleet is produced that can then be fed back to the OM. "],["simple.html", "2 A simple example 2.1 Setup R workspace folders 2.2 Create the operating models (OMs) 2.3 Adding process error through recruitment deviations and time-varying selectivity 2.4 Examine the management procedure used 2.5 Run SSMSE 2.6 run_SSMSE output 2.7 Performance metrics 2.8 Summarize results 2.9 Simple Convergence Check 2.10 Plot Spawning Stock Biomass (SSB) 2.11 Example MSE Results 2.12 Delete the files", " 2 A simple example Suppose we want to look at how well we are able to achieve a performance metric under uncertainty in the operating model (OM).
We will look at 2 scenarios, one where Steepness (h) is specified correctly and one where it is specified incorrectly in an estimation model (EM): Scenario 1. h-ctl: Cod operating model (h = 0.65) with correctly specified cod model EM (fixed h = 0.65). The OM is the same as the EM. Scenario 2. h-1: Cod operating model (h = 1) with misspecified cod model EM (fixed h = 0.65). The OM is not the same as the EM. Note that this is a simple example where the OM and EM structures for both scenarios are identical, except for different steepness between the OM and EM in scenario 2 and some process error we will include in the operating model. We will assume we want to run the MSE loop for 6 years, with a stock assessment occurring every 3 years (and forecasting catch to maintain 40% of unfished spawning stock biomass). The cod model’s last year is 100, so the OM is initially conditioned through year 100. Then, after conditioning the operating model through year 100, assessments will occur in years 100 and 103. The operating model runs through year 106. We chose not to run the assessment in year 106, as there was no need for its output in this example. 2.1 Setup R workspace folders First, we will load the SSMSE package and create a folder in which to run the example: Installing 5 packages: mvtnorm, bdsmatrix, numDeriv, cpp11, bbmle library(SSMSE) #load the package library(r4ss) #install using remotes::install_github("r4ss/r4ss") library(foreach) #if using run_parallel = TRUE library(doParallel) #if using run_parallel = TRUE # Create a folder for the output in the working directory. run_SSMSE_dir <- file.path("run_SSMSE-ex") dir.create(run_SSMSE_dir) 2.2 Create the operating models (OMs) 2.2.1 Specify alternative values for h The cod model with h = 0.65 (as in scenario 1) is included as external package data in SSMSE. However, we will need to modify it to use as an operating model with h = 1 (as in scenario 2). Note in this case that refit_OMs is FALSE, so the model is not being refit, just run through without fitting. To condition the new model on the same data as the input model, refit_OMs should be TRUE. First, we identify where the base cod model is stored, modify it such that the steepness parameter is 1, and save the modified cod OM for scenario 2 in a new folder in the run_SSMSE_dir directory. cod_mod_path <- system.file("extdata", "models", "cod", package = "SSMSE") # develop_OMs will save a model called "cod_SR_BH_steep_1" in the out_dir # specified develop_OMs(OM_name = "cod", out_dir = run_SSMSE_dir, par_name = "SR_BH_steep", par_vals = 1, refit_OMs = FALSE, hess = FALSE) # OM model for scenario 2 cod_1_path <- file.path(run_SSMSE_dir, "cod_SR_BH_steep_1") 2.3 Adding process error through recruitment deviations and time-varying selectivity Recruitment deviations, implementation error, and changes in parameters in the projection period of the OM can be added through the future_om_list input to run_SSMSE. First, we’ll set up the list to add recruitment deviations in the projection period. The same recruitment deviation patterns are used across scenarios, but different patterns are used across iterations in the same scenario. We also want these deviations to have the same standard deviations as the historical deviations with 0 mean (the assumed default).
# Start from a list created by a helper function template_mod_change <- create_future_om_list() # add recruitment deviations rec_dev_specify <- template_mod_change[[1]] rec_dev_specify$pars <- "rec_devs" # apply change to rec devs rec_dev_specify$scen <- c("replicate", "all") # using 1 to 100 means the sd or mean will be calculated by taking the sd across years # from 1 to 100 rec_dev_specify$input$first_yr_averaging <- 1 rec_dev_specify$input$last_yr_averaging <- 100 # The following 2 lines suggest that this change is immediately applied in year # 101, with no transitory period for using sd 0 to the new sd. rec_dev_specify$input$last_yr_orig_val <- 100 rec_dev_specify$input$first_yr_final_val <- 101 rec_dev_specify$input$ts_param <- "sd" # this change is for the sd # no input value needed since it will be calclated from the historical rec devs. rec_dev_specify$input$value <- NA rec_dev_specify $pars [1] "rec_devs" $scen [1] "replicate" "all" $pattern [1] "model_change" $input first_yr_averaging last_yr_averaging last_yr_orig_val first_yr_final_val 1 1 100 100 101 ts_param method value 1 sd absolute NA Next, suppose we want to allow selectivity to vary annually for 1 selectivity parameter of the fishery throughout the projection period. The following specifies that the value for selectivity varies randomly around the base value with a sd of 0.2. # put together the change for selectivity (random values around the orig val, with # an sd of 0.2) mod_change_sel <- template_mod_change[[1]] mod_change_sel$scen[2] <- "all" # apply to all scenarios # The following 2 lines suggest that this change is immediately applied in year # 101, with no transitory period for using sd 0 to the new sd. # historical values are NA in this case, because they are not used to determine # the sd to use. mod_change_sel$input$last_yr_orig_val <- 100 mod_change_sel$input$first_yr_final_val <- 101 mod_change_sel$input$ts_param <- "sd" # this change is for the sd mod_change_sel$input$value <- 0.2 # se to use in the projection period mod_change_sel $pars [1] "SizeSel_P_3_Fishery(1)" $scen [1] "replicate" "all" $pattern [1] "model_change" $input first_yr_averaging last_yr_averaging last_yr_orig_val first_yr_final_val 1 NA NA 100 101 ts_param method value 1 sd absolute 0.2 Finally, add these two changes together into an object to pass to run_SSMSE future_om_list_recdevs_sel <- list(rec_dev_specify, mod_change_sel) 2.3.1 Add observation error through sampling from OM The argument sample_struct specifies the structure for sampling from the OM (and passing to the EM). The function create_sample_struct can be used to construct a simple sampling structure consistent with an input data file: datfile <- system.file("extdata", "models", "cod", "ss3.dat", package = "SSMSE") sample_struct_1_scen <- create_sample_struct(dat = datfile, nyrs = 6) # note warning Warning in FUN(X[[i]], ...): Pattern not found for lencomp: FltSvy 1, Seas 1. Returning NA for Yr in this dataframe. sample_struct_1_scen $catch Yr Seas FltSvy SE 1 101 1 1 0.005 2 102 1 1 0.005 3 103 1 1 0.005 4 104 1 1 0.005 5 105 1 1 0.005 6 106 1 1 0.005 $CPUE Yr Seas FltSvy SE 1 105 7 2 0.2 $lencomp Yr Seas FltSvy Sex Part Nsamp 1 NA 1 1 0 0 125 $agecomp Yr Seas FltSvy Sex Part Ageerr Lbin_lo Lbin_hi Nsamp 1 105 1 2 0 0 1 -1 -1 500 $meanbodywt [1] NA $MeanSize_at_Age_obs [1] NA By default, create_sample_struct identifies sampling patterns from the historical period of the OM and replicates those patterns in the projection period. 
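The list returned by create_sample_struct() is an ordinary named list of data frames, so it can be edited directly before being passed to run_SSMSE(). As a minimal sketch that is not part of the original cod example (the extra year, fleet, and sample size below are hypothetical values chosen only for illustration), one additional age-composition sample could be appended like this:
# hypothetical extra age-composition sample to append to the default structure
extra_agecomp <- data.frame(Yr = 102, Seas = 1, FltSvy = 2, Sex = 0, Part = 0,
                            Ageerr = 1, Lbin_lo = -1, Lbin_hi = -1, Nsamp = 500)
sample_struct_1_scen$agecomp <- rbind(sample_struct_1_scen$agecomp, extra_agecomp)
Any rows containing NA values (like the lencomp row above) would need to be filled in or removed in the same way before the list is used.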
In our cod example, the sample structure specifies that catch will be added to the estimation model every year (years 101 to 106), but an index of abundance (i.e., CPUE) and age composition (i.e., agecomp) will only be added in year 105. We will use the same sampling scheme for both scenarios, but it is possible to specify different sampling for each scenario. The user could modify this sampling strategy (for example, maybe age composition should also be sampled from FltSvy 2 in Yr 102; the user could add another line to the dataframe in sample_struct$agecomp). Note that length comp (lencomp) includes an NA value for year. This is because no consistent pattern was identified, so the user must define their own input. In this case, we will remove sampling length comps altogether: sample_struct_1_scen$lencomp <- NULL # don't use length sampling The same sampling structure will be used for both scenarios, which we define in a list below: sample_struct_list_all <- list("h-ctl" = sample_struct_1_scen, "h-1" = sample_struct_1_scen) 2.4 Examine the management procedure used We will use the same management procedure for both scenarios: Conduct a stock assessment every 3 years to get stock status. Project from this stock assessment using the SS3 forecast file to get projected future catch. Put this projected catch (without implementation error, in the case of this example) back into the OM. Extend the OM forward in time to get the true values for the population. Let’s take a look at step 2 in the management procedure, which is implemented using the forecasting module in SS3. We will examine the forecast file for the estimation model to better understand how catches will be forecasted from the assessment. We will use the same management procedure in both of these scenarios, although for a full MSE analysis, it is likely that multiple management procedures would be compared. fore <- r4ss::SS_readforecast( system.file("extdata", "models", "cod", "forecast.ss", package = "SSMSE"), verbose = FALSE) fore$Forecast [1] 3 fore$Btarget [1] 0.4 fore$Forecast = 3 means our forecasts from the assessment will use fishing mortality (F) to attempt to achieve a relative (to unfished) spawning stock biomass target. Based on fore$Btarget, the relative biomass target is 40% of unfished spawning stock biomass. Note also that the control rule fore$BforconstantF and fore$BfornoF values are set low to make it unlikely that they will be used (these parameters are used for a ramp harvest control rule, which we do not want to use here): fore$BforconstantF [1] 0.03 fore$BfornoF [1] 0.01 Furthermore, fore$Flimitfraction is set to 1 so that the forecasted catch is set equal to the overfishing limit (for simplicity): fore$Flimitfraction [1] 1 Note that the number of forecast years is 1: fore$Nforecastyrs [1] 1 However, an assessment will be conducted every 3 years and thus 3 years of projections is required. SSMSE will automatically modify this value in the estimation model to the appropriate number of forecasting years. More information on using the forecast module in SS3 to forecast catches is available in the Stock Synthesis users manual.
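As a hedged illustration of how a user might adjust these forecast settings for their own estimation model before passing it to run_SSMSE() (the directory path and the new target values below are placeholders, not part of the cod example), the forecast file can be read, modified, and rewritten with r4ss:
EM_dir <- file.path("path", "to", "my_EM") # placeholder path to the EM input files
fore <- r4ss::SS_readforecast(file.path(EM_dir, "forecast.ss"), verbose = FALSE)
fore$Btarget <- 0.35        # hypothetical alternative relative biomass target
fore$Flimitfraction <- 0.75 # hypothetical buffer so forecasted catch sits below the overfishing limit
r4ss::SS_writeforecast(fore, dir = EM_dir, overwrite = TRUE, verbose = FALSE)
There is no need to edit fore$Nforecastyrs in this way, since SSMSE adjusts the number of forecast years to match the assessment interval automatically.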
Users can also specify their own [custom management procedures] 2.5 Run SSMSE Now, we create a directory to store our results, and use run_SSMSE to run the MSE analysis loop (note this will take some time to run, ~ 20 min): run_res_path <- file.path(run_SSMSE_dir, "results") dir.create(run_res_path) res <- run_SSMSE( scen_name_vec = c("h-ctl", "h-1"),# name of the scenario out_dir_scen_vec = run_res_path, # directory in which to run the scenario iter_vec = c(5,5), # run with 5 iterations each OM_name_vec = NULL, # specify directories instead OM_in_dir_vec = c(cod_mod_path, normalizePath(cod_1_path)), # OM files EM_name_vec = c("cod", "cod"), # cod is included in package data MS_vec = c("EM","EM"), # The management strategy is specified in the EM nyrs_vec = c(6, 6), # Years to project OM forward nyrs_assess_vec = c(3, 3), # Years between assessments future_om_list = future_om_list_recdevs_sel, run_parallel = TRUE, # Run iterations in parallel sample_struct_list = sample_struct_list_all, # How to sample data for running the EM. sample_struct_hist_list = NULL, # because this is null, will just use sampling # as in the current OM data file for the historical period. seed = 12345) #Set a fixed integer seed that allows replication See ?run_SSMSE for more details on function arguments. In a real MSE analysis, running 100+ iterations to reflect the full range of uncertainty (given observation and process errors) in the results would be preferred. However, we are only running 5 iterations per scenario in this demonstration to reduce computing time. 2.6 run_SSMSE output run_SSMSE will create new folders in the folders specified in out_dir_scen_vec (note that in this case, we are running both scenarios in the same folder). After the run is complete, there will be a folder for each scenario in run_res_path (since out_dir_scen_vec = run_res_path in this example). Within each scenario is a folder for each iteration. And within each iteration folder, there are folders containing the SS3 models that were run by run_SSMSE. There should be 1 folder for the OM, which is run multiple times in this same folder during the MSE analysis. There are multiple folders for the EMs, as a new folder is created each time an assessment is done. The first run is the folder with a name ending in init; then, each assessment after is named for the updated end year of the model. With many iterations, the number of files adds up; in the future, we hope to add options to save less output. 2.7 Performance metrics Quantitative performance metrics should be specified before conducting an MSE. Typically, a suite of performance metrics will be examined; however, for simplicity in this example, we will only look at what the achieved relative biomass was for the last 3 years of projection in the MSE to determine how it compares to the intended management target of 40% of unfished Spawning Stock Biomass. Note that we are only running our MSE projection for 6 years, but longer projections are typical in MSE analyses. 2.8 Summarize results The function SSMSE_summary_all can be used to summarize the model results in a list of 3 dataframes, one for scalar outputs (named scalar), one for timeseries outputs (ts), and one for derived quantities (dq). This function also creates summary csv files in the folder where the results are stored.
# Summarize 1 iteration of output summary <- SSMSE_summary_all(run_res_path) ## Extracting results from 2 scenarios ## Starting h-1 with 5 iterations ## Starting h-ctl with 5 iterations Plotting and data manipulation can then be done with these summaries. For example, SSB over time by model can be plotted. The models include the Operating Model (cod_OM), Estimation model (EM) for the historical period of years 0-100 (cod_EM_init), and the EM run with last year of data in year 103 (cod_EM_103). The operating models are shown in blue or black (depending on the scenario), and the estimation model runs are shown in orange, and the scenarios are shown on different subplots: library(ggplot2) # use install.packages("ggplot2") to install package if needed library(tidyr) # use install.packages("tidyr") to install package if needed library(dplyr) # use install.packages("dplyr") to install package if needed Attaching package: 'dplyr' The following objects are masked from 'package:stats': filter, lag The following objects are masked from 'package:base': intersect, setdiff, setequal, union 2.9 Simple Convergence Check Check there are no params on bounds or SSB that is way too small or way too large check_convergence <- function(summary, min_yr = 101, max_yr = 120, n_EMs = 5) { require(dplyr) # note: not the best way to do this if(any(!is.na(summary$scalar$params_on_bound))) { warning("Params on bounds") } else { message("No params on bounds") } summary$ts$model_type <- ifelse(grepl("_EM_", summary$ts$model_run), "EM", "OM") calc_SSB <- summary$ts %>% filter(year >= min_yr & year <= max_yr) %>% select(iteration, scenario, year, model_run, model_type, SpawnBio) OM_vals <- calc_SSB %>% filter(model_type == "OM") %>% rename(SpawnBio_OM = SpawnBio ) %>% select(iteration, scenario, year, SpawnBio_OM) EM_vals <- calc_SSB %>% filter(model_type == "EM") %>% rename(SpawnBio_EM = SpawnBio) %>% select(iteration, scenario, year, model_run, SpawnBio_EM) bind_vals <- full_join(EM_vals, OM_vals, by = c("iteration", "scenario", "year")) %>% mutate(SSB_ratio = SpawnBio_EM/SpawnBio_OM) filter_SSB <- bind_vals %>% filter(SSB_ratio > 2 | SSB_ratio < 0.5) if(nrow(filter_SSB) > 0 ) { warning("Some large/small SSBs relative to OM") } else { message("All SSBs in EM are no greater than double and no less than half SSB vals in the OM") } return_val <- bind_vals } values <- check_convergence(summary = summary, min_yr = 101, max_yr = 106, n_EMs = 5) No params on bounds All SSBs in EM are no greater than double and no less than half SSB vals in the OM 2.10 Plot Spawning Stock Biomass (SSB) This plot shows that SSB estimated does not match perfectly with the operating model. A similar plot could be made for any parameter of interest. # plot SSB by year and model run ggplot2::ggplot(data = subset(summary$ts, model_run %in% c("cod_OM", "cod_SR_BH_steep_1_OM", "cod_EM_103")), ggplot2::aes(x = year, y = SpawnBio)) + ggplot2::geom_vline(xintercept = 100, color = "gray") + ggplot2::geom_line(ggplot2::aes(linetype = as.character(iteration), color = model_run))+ ggplot2::scale_color_manual(values = c("#D65F00", "black", "blue")) + ggplot2::scale_linetype_manual(values = rep("solid", 50)) + ggplot2::guides(linetype = FALSE) + ggplot2::facet_wrap(. ~ scenario) + ggplot2::theme_classic() Warning: The `<scale>` argument of `guides()` cannot be `FALSE`. Use "none" instead as of ggplot2 3.3.4. This warning is displayed once every 8 hours. Call `lifecycle::last_lifecycle_warnings()` to see where this warning was generated. 
Now, we calculate and plot the performance metric, which is average spawning stock biomass (SSB) from years 104 to 106. # get_SSB_avg calculates the SSB in each year for each # iteration of the operating model, then takes the average over the years from # min_yr, to max_year. It uses the summary object as input to do these # calculations. get_SSB_avg <- function(summary, min_yr, max_yr) { OM_vals <- unique(summary$ts$model_run) OM_vals <- grep("_OM$", OM_vals, value = TRUE) SSB_yr <- summary$ts %>% filter(year >= min_yr & year <= max_yr) %>% filter(model_run %in% OM_vals) %>% select(iteration, scenario, year, SpawnBio) %>% group_by(iteration, scenario) %>% summarize(avg_SSB = mean(SpawnBio), .groups = "keep") %>% ungroup() SSB_yr } avg_SSB <- get_SSB_avg(summary, min_yr = 104, max_yr = 106) # function to summarize data in plot data_summary <- function(x) { m <- mean(x) ymin <- m - sd(x) ymax <- m + sd(x) return(c(y = m, ymin = ymin, ymax = ymax)) } # Now, plot the average relative spawning stock biomass for years 104 - 106 ggplot(data = avg_SSB, aes(x = scenario, y = avg_SSB)) + stat_summary(fun.data = data_summary, position = position_dodge(width = 0.9), color = "blue") + labs(title = "Long-term average SSB\\n(years 104-106)", x = "Scenario", y = "SSB") + theme_classic() From the above plot, we see differences in the average SSb between the 2 scenarios. 2.11 Example MSE Results We can see from the performance metric that mis-specifying the value of steepness will results in higher realized relative spawning stock biomass than correctly specifying it. This gives us some idea of the consequences of misspecifying steepness in the stock assessment. 2.12 Delete the files If you wish to delete the files created from this example, you can use: unlink(run_SSMSE_dir, recursive = TRUE) "],["advanced-options-use-a-custom-management-strategyprocedure.html", "3 Advanced options: use a custom management strategy/procedure", " 3 Advanced options: use a custom management strategy/procedure Users can outline a custom managment strategy as an R function to use. As long as the correct inputs and outputs are used, any estimation method and management procedure can be used. For example, here is a simple function that just sets future catches as half the sampled catches in a specified year: constant_catch_MS <- function(OM_dat, nyrs_assess, catch_yr = 100, frac_catch = 0.5, ...) { # need to include ... to allow function to work # set catch the same as the previous year (sampled catch). # catch is in the same units as the operating model, in this case it is in # biomass. catch <- data.frame( year = (OM_dat$endyr + 1):(OM_dat$endyr + nyrs_assess), # the years to project the model forward seas = 1, # hard coded from looking at model fleet = 1, # hard coded from looking at model catch = OM_dat$catch[OM_dat$catch$year == catch_yr, "catch"]*frac_catch, catch_se = 0.05) # hard coded from looking at model catch_bio <- catch # catch in biomass. In this case, catch is in biomass for both. Could also be left as NULL catch_F <- NULL # catch in terms of F, can be left as NULL. discards <- NULL # discards can be left as NULL if there are no discards catch_list <- list(catch = catch, catch_bio = catch_bio, catch_F = catch_F, discards = discards) } The function should be created in a separate file. In this case, assume this function is available in a file custom_funs.R. 
This function can then be used in a call to run_SSMSE: run_result_custom <- run_SSMSE(scen_name_vec = "constant-catch", # name of the scenario out_dir_scen_vec = run_res_path, # directory in which to run the scenario iter_vec = 1, OM_name_vec = "cod", # specify directories instead OM_in_dir_vec = NULL, MS_vec = "constant_catch_MS", # use the custom function custom_MS_source = "custom_funs.R", # File where the custom function is available. nyrs_vec = 6, # Years to project OM forward nyrs_assess_vec = 3, # Years between assessments future_om_list = future_om_list_recdevs_sel, sample_struct_list = list(sample_struct_list[[1]]), # How to sample data for running the MS. seed = 12345) #Set a fixed integer seed that allows replication "],["M-case-study-ex.html", "4 A more complex example, Natural Mortality case study", " 4 A more complex example, Natural Mortality case study This code was used to examine how different natural mortality scenarios performed in two management strategies. # load pkgs set options ---- #devtools::install_github("r4ss/r4ss", ref = "155a521") #devtools::install_github("nmfs-fish-tools/SSMSE@v0.2.5") library(SSMSE) library(r4ss) # functions for convergence and performance metrics, get from other gh repo source("https://raw.githubusercontent.com/k-doering-NOAA/ssmse-afs/master/code/get_metrics.R") # specify locations, create folders ---- cod_mod_path <- system.file("extdata", "models", "cod", package = "SSMSE") datfile_path <- file.path(cod_mod_path, "ss3.dat") fig_path <- "figures" runs_path <- "model_runs" mods_path <- "input_models" dir.create(fig_path) dir.create(runs_path) dir.create(mods_path) # define the scenarios ---- niters <- 100 start_iters <- 1 # the scenarios are: # three levels of M changes in the OM (none, more frequent, less frequent) # 2 different management scenarios # in all scenarios, uncertainty in the selectivity moving forward # metrics: long term avg catch, long term catch variability, long term biomass scen_red_tide <- c("no-red-tide", "low-red-tide", "hi-red-tide") scen_HCR <- c("F-spr-30", "F-spr-45") scenarios <- data.frame( scen_name = c(paste0(scen_red_tide, "-",scen_HCR[1]), paste0(scen_red_tide, "-", scen_HCR[2])), EM_path = rep(c(file.path(mods_path, scen_HCR)), times = c(3,3)) ) # manipulate EM Forecasting ---- # no need to re-run model for the EM, if(start_iters == 1) { # dont need to re run this for each new set of runs for (i in scen_HCR) { tmp_cod_path <- file.path(mods_path, i) file.copy(from = cod_mod_path, to = mods_path, recursive = TRUE) file.rename(from = file.path(mods_path, "cod"), to = tmp_cod_path) fore <- r4ss::SS_readforecast(file.path(tmp_cod_path, "forecast.ss"), verbose = FALSE) forecast_method <- switch(i, "F-msy" = 2, "F-spr-30" = 1, "F-spr-45" = 1) fcast_target <- switch(i, "F-msy" = 0.45, "F-spr-30" = 0.3, "F-spr-45" = 0.45) # manipulate the forecasting file. fore$MSY <- 2 # calculate Fmsy, needed for F-msy scenario fore$SPRtarget <- fcast_target # differs between scenarios fore$Forecast <- forecast_method # differs between scenarios fore$ControlRuleMethod <- 0 # don't use a ramp HCR at all r4ss::SS_writeforecast(fore, tmp_cod_path, verbose = FALSE, overwrite = TRUE) file.remove(file.path(tmp_cod_path, "forecast.ss_new")) # to make sure it is not used. } } # set up the future om deviations ---- # Set this up for the 3 different operating mode scenarios # in all cases, we want to use random fluctuations on selectivity # changing M depends on the scenario. 
# put together the change for selectivity (random values around the orig val, with # an sd of 0.2) template_mod_change <- create_future_om_list() mod_change_sel <- template_mod_change[[1]] mod_change_sel$scen[2] <- "all" mod_change_sel$input$last_yr_orig_val <- 100 mod_change_sel$input$first_yr_final_val <- 101 mod_change_sel$input$ts_param <- "sd" mod_change_sel$input$value <- 0.2 # put together change for M # more stochasisity could be added, but starting with this is still interesting template_custom_change <- create_future_om_list(example_type = "custom") mod_change_M <- template_custom_change[[1]] M_no_scen <- rep(rep(0.2, 50), times = niters) M_low_scen <- rep(rep(c(0.2, 0.2, 0.2, 0.2, 0.3), length.out = 50), times = niters) M_hi_scen <- rep(rep(c(0.2, 0.2, 0.2, 0.2, 0.4), length.out = 50), times = niters) M_custom_dataframe <- data.frame( par = "NatM_p_1_Fem_GP_1", scen = rep(scenarios$scen_name, times = rep(50*niters, 6)), iter = rep(rep(seq(from = start_iters, to = start_iters + niters - 1), times = rep(50, niters)), times = 6), yr = rep(101:150, times = 6*niters), value = c(M_no_scen, M_low_scen, M_hi_scen, M_no_scen, M_low_scen, M_hi_scen)) mod_change_M$pars <- "NatM_p_1_Fem_GP_1" mod_change_M$scen <- c("replicate", "all") mod_change_M$input <- M_custom_dataframe # add recruitment deviations rec_dev_specify <- template_mod_change[[1]] rec_dev_specify$pars <- "rec_devs" rec_dev_specify$scen <- c("replicate", "all") rec_dev_specify$input$first_yr_averaging <- 1 # use same sd as from the orig model. rec_dev_specify$input$last_yr_averaging <- 100 rec_dev_specify$input$last_yr_orig_val <- 100 rec_dev_specify$input$first_yr_final_val <- 101 rec_dev_specify$input$ts_param <- "sd" rec_dev_specify$input$value <- NA # put together a complete list future_om_list <- list(mod_change_M, mod_change_sel, rec_dev_specify) # get the sampling scheme ---- # use the historical sampling scheme, so don' t need to create one # for sampling scheme in the projections, use the historical sampling scheme to # the extent possible; if no pattern found, then create a manual one. 
sample_struct <- SSMSE::create_sample_struct(dat = datfile_path, nyrs = 50) sample_struct$meanbodywt <- NULL sample_struct$MeanSize_at_Age_obs <- NULL # modify, because there were NAs sample_struct$lencomp <- data.frame(Yr = seq(105, 150, by = 5), Seas = sample_struct$lencomp$Seas, FltSvy = sample_struct$lencomp$FltSvy, Sex = sample_struct$lencomp$Sex, Part = sample_struct$lencomp$Part, Nsamp = sample_struct$lencomp$Nsamp) sample_struct_list <- list(sample_struct, sample_struct, sample_struct, sample_struct, sample_struct, sample_struct ) # call SSSMSE ---- out <- SSMSE::run_SSMSE(out_dir_scen_vec = rep("model_runs", 6), scen_name_vec = scenarios$scen_name, iter_vec = rep(niters, 6), OM_name_vec = rep("cod", 6), OM_in_dir_vec = NULL, EM_in_dir_vec = scenarios$EM_path, run_EM_last_yr = FALSE, MS_vec = "EM", use_SS_boot_vec = TRUE, nyrs_vec = rep(50, 6), nyrs_assess_vec = rep(5, 6), sample_struct_list = sample_struct_list, future_om_list = future_om_list, verbose = FALSE, seed = 456, # changing each time a chunk of runs is done will help ensure there is stochacisity run_parallel = TRUE, n_cores = 6 ) saveRDS(out, file = file.path("model_runs", "run_SSMSE_out.rda")) # # # look at results ---- summary <- SSMSE::SSMSE_summary_all(dir = "model_runs", run_parallel = TRUE) # #check for errored iterations lapply(out, function(x) x$errored_iterations) "],["SSMSE.html", "5 Options for run_SSMSE 5.1 Scenarios in SSMSE 5.2 Operating model 5.3 The Management Strategy and (if applicable) Estimation model (EM) 5.4 Sampling options 5.5 Future changes to the operating model (#future)", " 5 Options for run_SSMSE Many inputs are possible for the run_SSMSE() option. Here, we will describe some of the options available. For detailed documentation, type ?SSMSE::run_SSMSE() into the R console. 5.1 Scenarios in SSMSE Note that multiple scenarios can be called in run_SSMSE(), often through vector inputs to run_SSMSE(). Below, we will describe inputs needed to run 1 scenario. 5.2 Operating model The operating model (OM) for SSMSE should be a Stock Synthesis model. This could be any fitted Stock Synthesis model, except for models that use Empirical Weight at age. There are 3 built-in models that comes with SSMSE: cod, growth_timevary, and SR_env_block. To use a built in model, specify the name of the model as part of the OM_name_vec. For example, to use the cod model in run_SSMSE for 1 scenario, set OM_name_vec = \"cod\". Otherwise, the path to the OM model should be specified in OM_in_dir_vec. 5.3 The Management Strategy and (if applicable) Estimation model (EM) The management strategy (and EM) can be specified in one of two ways: Using an SS3 model and its forcast file to project catch by fleet Using a custom management strategy via a function in R In theory, any management strategy should work, as long as it can take the data file produced by the OM as output and provide back to SSMSE future catches by fleet. 5.3.1 Specify the Management Strategy in a SS3 model An SS3 model can be set up as the EM for the MSE. To use this option, specify \"EM\" as part of MS_vec. As with the OM, the built-in cod model could be used; just specify \"cod\" in the EM_name_vec. To use any other SS3 model as the EM, specify the path in EM_in_dir_vec.Note that models that use empirical weight at age can not yet be used as estimation models. Also, the .ss_new input files are used for an OM, while the original input files are used for the EM. Future catches will be determined from the forecasting file settings of the SS3 model. 
SSMSE will change the number of forecast years to match the number of years between assessments, but other specifications need to be made by the user. 5.3.2 Using a custom management strategy/procedure Users can outline a custom managment strategy as an R function to use. As long as the correct inputs and outputs are used, any estimation method and management procedure can be used. For example, here is a simple function that just sets future catches as half the sampled catches in a specified year: constant_catch_MS <- function(OM_dat, nyrs_assess, catch_yr = 100, frac_catch = 0.5, ...) { # need to include ... to allow function to work # set catch the same as the previous year (sampled catch). # catch is in the same units as the operating model, in this case it is in # biomass. catch <- data.frame( year = (OM_dat$endyr + 1):(OM_dat$endyr + nyrs_assess), # the years to project the model forward seas = 1, # hard coded from looking at model fleet = 1, # hard coded from looking at model catch = OM_dat$catch[OM_dat$catch$year == catch_yr, "catch"]*frac_catch, catch_se = 0.05) # hard coded from looking at model catch_bio <- catch # catch in biomass. In this case, catch is in biomass for both. Could also be left as NULL catch_F <- NULL # catch in terms of F, can be left as NULL. discards <- NULL # discards can be left as NULL if there are no discards catch_list <- list(catch = catch, catch_bio = catch_bio, catch_F = catch_F, discards = discards) } Let’s assume this function is saved in a file within the working directory named constant_catch_MS.R. This function can then be used in a call to run_SSMSE(): # define sample structure datfile <- system.file("extdata", "models", "cod", "ss3.dat", package = "SSMSE") sample_struct <- create_sample_struct(dat = datfile, nyrs = 6) # note warning sample_struct$lencomp <- NULL # don't use length sampling # run the SSMSE routine run_result_custom <- run_SSMSE( scen_name_vec = "constant-catch", out_dir_scen_vec = "my_results", iter_vec = 1, OM_name_vec = "cod", OM_in_dir_vec = NULL, MS_vec = "constant_catch_MS", # use custom fun custom_MS_source = "constant_catch_MS.R", use_SS_boot_vec = TRUE, nyrs_vec = 6, nyrs_assess_vec = 3, run_EM_last_yr = FALSE, run_parallel = FALSE, sample_struct_list = list(sample_struct), seed = 12345) 5.4 Sampling options Currently, the only available sampling option is to use the bootstrapping module within SS3 itself. This means specifying use_SS_boot_vec = TRUE. Details on how sampling is done using the bootstrapping module in SS3 is available in the “Bootstrap Data Files” section of the SS3 user manual. Users also need to specify how and which data types should be sampled for each future year in the simulation in sample_struct_list. sample_struct_list is a list of lists. The first level is a list for each scenario; then, for the scenario, there is a list of dataframes, each of which specifying which years and fleets of data should be sampled in the future as well as which standard errors or sample sizes should be used. The helper function create_sample_struct() can be used to help users generate the list of dataframes for a scenario. See an example of this function’s use in the simple example or by typing ?SSMSE::create_sample_struct() into the R console. Currently, users can sample data for catches (treated as fixed if sample_catch = FALSE for the scenario), CPUE (i.e., indices of abundance), length and age composition, conditional length at age compositions, mean size at age, and mean size. 
It is not yet possible to sample data for generalized size compositions, tagging data, and morph compositions. Users can specify the sampling during the historical period of the model through the sample_struct_hist input to run_SSMSE. This is an optional input and has the same structure as sample_struct_list, but the years will be before the original end year of the OM. 5.5 Future changes to the operating model (#future) By default, SSMSE will simply extend the structure of the OM into the future using the same parameter values as in the last year of the model. However, the user may want to allow changes in parameter values to occur as the OM is extended forward in time. Users can input values into the future_om_list to accomplish this. This input is a list of lists. For the first level of lists, each represents a separate change to make. Within the first level, there are 4 list components that outline the details of the change to make. There are 2 main choices: 1) to specify model changes by telling SSMSE how values should be sampled or 2) to input your own custom values for future values of a given parameter. 5.5.1 The structure of future_om_list For example, here is an example list containing just one future change for the model. It shows that a model parameter should be changed to 4.5 in year 103 and afterwards: my_future_om_list <- create_future_om_list(list_length = 1) my_future_om_list [[1]] [[1]]$pars [1] "SizeSel_P_3_Fishery(1)" [[1]]$scen [1] "replicate" "scen2" [[1]]$pattern [1] "model_change" [[1]]$input first_yr_averaging last_yr_averaging last_yr_orig_val first_yr_final_val 1 NA NA 102 103 ts_param method value 1 mean absolute 4.5 length(my_future_om_list) # note has length 1 b/c 1 change [1] 1 length(my_future_om_list[[1]]) # has length 4 because 4 list components, as for each change [1] 4 Note there is just one change specified here. For the change, there are four list items that are required for any specified change. The first list item is named “pars”. It contains a vector of parameter name(s) to apply the change to. The names should be the same as the names in r4ss::SS_read_pars() or can include the following: \"rec_devs\" - For specifying recruitment deviations \"impl_error\" - For specifying implementation error in transforming a management procedure’s specified future catch into realized catch. \"all\" - Apply the change to all parameters in the model, with the exception of “SR_sigmaR”, “SR_regime”, “SR_autocorr”, and “impl_error” parameters. Changing the first 3 stock recruitment related parameters would conflict with changes in recruitment deviations. Since implementation error is not a parameter specified in the control file and SSMSE does not rely on the implementation error parameter created through the forecasting file, including the implementation error in “all” did not seem appropriate. In this case, we just want to apply the change to the SizeSel_P_3_Fishery(1) parameter of the model. The second item is “scen”, which contains a vector of information about how to apply the changes within and across scenarios. The first value is an option to specify how the change should be applied among scenarios, either “randomize” (use a different value for each iteration of each scenario) or “replicate” (use the same set of values across scenarios for the same iteration number, but each value will be different across iterations within the same scenario).
Note that the same values will be applied across iterations and scenarios if there isn’t any stochasticity in the values being drawn (e.g., standard deviation set at 0), regardless of whether “randomize” or “replicate” is chosen. In the example above, there is no stochasticity, so specifying randomize or replicate does not matter. Following the first value are the names of the scenarios to apply the change to. If the change should be applied to all of the scenarios, “all” can be used in place of naming every scenario. In this example, the change will only be applied to the scenario named “scen2”, so the input for “scen” is c(\"replicate\", \"scen2\"). The third item, “pattern”, is a vector of character inputs. The first value should be either “model_change”, meaning that SSMSE will calculate the change values, or “custom” which allows the user to directly put in the values that they want in the model. In this case, “model_change” is used. A second input can also be added when the “model_change” pattern is used, which specifies the distribution to pull from. This could be “normal” or “lognormal”, but if no input is provided, a normal distribution will be used for sampling. The fourth item, named “input”, is a dataframe, which contains different column name values depending on whether the “model_change” pattern is used or the “custom” pattern is used. For the model_change options, a dataframe in input specifies a change for the parameter of a distribution. Because we are using the model change option, we have the following columns: first_yr_averaging: The first year to average historical values from the model, if using for the change. This should be NA if historical averaging will not be used. last_yr_averaging: The last year to average historical values from the model, if using for the change. This should be NA if historical averaging will not be used. last_yr_orig_val: The last year of the future deviations with the original model value. This value will not be changed, but the following year’s value will be. first_yr_final_val: The first year where the final value as specified will be reached. If no future changes are made, the final value will continue forward in the OM through all projected years. For step changes, the first_yr_final_val is just 1 year after the last_yr_orig_val. However, if a more gradual adjustment toward the final value is desired, this could be any number of years after the original value. ts_param: The sampling parameter to modify. Options are “mean”, “ar_1_phi” (if using autocorrelation), “sd”, and “cv”. If not included in the dataframe, “mean” is assumed to be the same as in the previous model year, sd is assumed to be 0, and we assume no autocorrelation (as specified through an autoregressive AR1 process). Phi should be between -1 and 1 (this creates a stationary process) and if phi = 1, this creates a random walk (as random walk is a special case of an AR1 model). Note that both sd and cv cannot be specified within the same input dataframe. method: How to apply the change relative to the historical (if specified) or previous year’s value (if first_yr_averaging and last_yr_averaging for that row are NA). Options are “multiplicative”, “additive”, and “absolute”. This example uses “absolute”, meaning that the number in “value” is directly used. value: The value of the parameter change. This may be NA if historical averaging is used.
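As a brief sketch of how these columns fit together (this is an illustration rather than an example from the original text; the parameter name is taken from the natural mortality case study and the step to 0.3 is a hypothetical value), a complete model_change entry could be assembled from scratch as follows:
M_step_change <- list(
  pars = "NatM_p_1_Fem_GP_1",                 # parameter to modify, named as in the OM
  scen = c("replicate", "all"),               # same values for matching iterations, applied to all scenarios
  pattern = "model_change",                   # let SSMSE calculate the future values
  input = data.frame(first_yr_averaging = NA, # no historical averaging
                     last_yr_averaging = NA,
                     last_yr_orig_val = 102,  # keep the original value through year 102
                     first_yr_final_val = 103,# reach the new value in year 103 (a step change)
                     ts_param = "mean",
                     method = "absolute",     # use the number in value directly
                     value = 0.3))            # hypothetical new natural mortality value
future_om_list_M <- list(M_step_change)       # could then be passed to the future_om_list argument of run_SSMSE()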
5.5.2 Example of future_om_list denoting a gradual change Suppose we wanted to change the value of “SizeSel_P_3_Fishery(1)” in scen2 over 4 years after year 102 to arrive at a value of 4.5 in year 106. We can do this by using my_future_om_list, but changing the value in the first row of first_yr_final_val to 106: my_future_om_list[[1]][["input"]][1, "first_yr_final_val"] <- 106 my_future_om_list[[1]] $pars [1] "SizeSel_P_3_Fishery(1)" $scen [1] "replicate" "scen2" $pattern [1] "model_change" $input first_yr_averaging last_yr_averaging last_yr_orig_val first_yr_final_val 1 NA NA 102 106 ts_param method value 1 mean absolute 4.5 5.5.3 Example of future_om_list with random deviations Suppose we now wanted the value of “SizeSel_P_3_Fishery(1)” to change randomly according to a normal distribution with a standard deviation of 0.1 around a mean of 4.5 from year 104 onwards. This can be done by adding a line specifying a change in standard deviation (which, until now, has been assumed to be 0) to the data frame: new_vals <- data.frame(first_yr_averaging = NA, last_yr_averaging = NA, last_yr_orig_val = 103, first_yr_final_val = 104, ts_param = "sd", method = "absolute", value = 0.1) my_future_om_list[[1]][["input"]] <- rbind(my_future_om_list[[1]][["input"]], new_vals) my_future_om_list[[1]] $pars [1] "SizeSel_P_3_Fishery(1)" $scen [1] "replicate" "scen2" $pattern [1] "model_change" $input first_yr_averaging last_yr_averaging last_yr_orig_val first_yr_final_val 1 NA NA 102 106 2 NA NA 103 104 ts_param method value 1 mean absolute 4.5 2 sd absolute 0.1 Note that the last_yr_orig_val and first_yr_final_val values in this row differ from those in the row for the mean, which is allowed. 5.5.4 Example of using historical values for determining parameter values This example applies random annual deviations to all parameters for scenarios scen2 and scen3. future_om_list_2 <- vector(mode = "list", length = 1) future_om_list_2 <- lapply(future_om_list_2, function (x) x <- vector(mode = "list", length = 4)) names(future_om_list_2[[1]]) <- c("pars", "scen", "pattern", "input") future_om_list_2[[1]][["pars"]] <- "all" future_om_list_2[[1]][["scen"]] <- c("randomize", "scen2", "scen3") future_om_list_2[[1]][["pattern"]] <- "model_change" # defaults to using normal dist future_om_list_2[[1]][["input"]] <- data.frame(first_yr_averaging = c(1, 1), last_yr_averaging = c(100, 100), last_yr_orig_val = c(100, 100), first_yr_final_val = c(101, 101), ts_param = c("cv", "mean"), method = c("absolute", "multiplier"), value = c(0.1, 1)) Note that the choice of year range for historical values does not matter unless the parameter is already time-varying in the original operating model or has become time-varying through a previous change. Otherwise, the base model parameter will be applied. If no historical years are included, then the base parameter value will be the basis of comparison for relative changes (i.e., method = multiplier or additive). 
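Because future_om_list is simply a list with one element per change, multiple changes can be supplied together. As a sketch (assuming the objects built in the examples above are still in the workspace), the selectivity change and the all-parameter deviations could be combined into a single input:
combined_future_om_list <- c(my_future_om_list, future_om_list_2)  # two separate changes
length(combined_future_om_list)  # 2, one element per change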
5.5.5 Example using “custom” pattern instead of “model_change” custom_future_om_list <- create_future_om_list(example_type = "custom", list_length = 1) custom_future_om_list [[1]] [[1]]$pars [1] "impl_error" [[1]]$scen [1] "randomize" "all" [[1]]$pattern [1] "custom" [[1]]$input par scen iter yr value 1 impl_error scen1 1 101 1.05 2 impl_error scen1 2 101 1.05 3 impl_error scen1 3 101 1.05 4 impl_error scen1 4 101 1.05 5 impl_error scen1 5 101 1.05 6 impl_error scen1 1 102 1.05 7 impl_error scen1 2 102 1.05 8 impl_error scen1 3 102 1.05 9 impl_error scen1 4 102 1.05 10 impl_error scen1 5 102 1.05 11 impl_error scen1 1 103 1.05 12 impl_error scen1 2 103 1.05 13 impl_error scen1 3 103 1.05 14 impl_error scen1 4 103 1.05 15 impl_error scen1 5 103 1.05 16 impl_error scen1 1 104 1.05 17 impl_error scen1 2 104 1.05 18 impl_error scen1 3 104 1.05 19 impl_error scen1 4 104 1.05 20 impl_error scen1 5 104 1.05 21 impl_error scen1 1 105 1.05 22 impl_error scen1 2 105 1.05 23 impl_error scen1 3 105 1.05 24 impl_error scen1 4 105 1.05 25 impl_error scen1 5 105 1.05 26 impl_error scen1 1 106 1.05 27 impl_error scen1 2 106 1.05 28 impl_error scen1 3 106 1.05 29 impl_error scen1 4 106 1.05 30 impl_error scen1 5 106 1.05 31 impl_error scen2 1 101 1.10 32 impl_error scen2 2 101 1.10 33 impl_error scen2 3 101 1.10 34 impl_error scen2 4 101 1.10 35 impl_error scen2 5 101 1.10 36 impl_error scen2 1 102 1.10 37 impl_error scen2 2 102 1.10 38 impl_error scen2 3 102 1.10 39 impl_error scen2 4 102 1.10 40 impl_error scen2 5 102 1.10 41 impl_error scen2 1 103 1.10 42 impl_error scen2 2 103 1.10 43 impl_error scen2 3 103 1.10 44 impl_error scen2 4 103 1.10 45 impl_error scen2 5 103 1.10 46 impl_error scen2 1 104 1.10 47 impl_error scen2 2 104 1.10 48 impl_error scen2 3 104 1.10 49 impl_error scen2 4 104 1.10 50 impl_error scen2 5 104 1.10 51 impl_error scen2 1 105 1.10 52 impl_error scen2 2 105 1.10 53 impl_error scen2 3 105 1.10 54 impl_error scen2 4 105 1.10 55 impl_error scen2 5 105 1.10 56 impl_error scen2 1 106 1.10 57 impl_error scen2 2 106 1.10 58 impl_error scen2 3 106 1.10 59 impl_error scen2 4 106 1.10 60 impl_error scen2 5 106 1.10 61 impl_error scen3 1 101 1.10 62 impl_error scen3 2 101 1.10 63 impl_error scen3 3 101 1.10 64 impl_error scen3 4 101 1.10 65 impl_error scen3 5 101 1.10 66 impl_error scen3 1 102 1.10 67 impl_error scen3 2 102 1.10 68 impl_error scen3 3 102 1.10 69 impl_error scen3 4 102 1.10 70 impl_error scen3 5 102 1.10 71 impl_error scen3 1 103 1.10 72 impl_error scen3 2 103 1.10 73 impl_error scen3 3 103 1.10 74 impl_error scen3 4 103 1.10 75 impl_error scen3 5 103 1.10 76 impl_error scen3 1 104 1.10 77 impl_error scen3 2 104 1.10 78 impl_error scen3 3 104 1.10 79 impl_error scen3 4 104 1.10 80 impl_error scen3 5 104 1.10 81 impl_error scen3 1 105 1.10 82 impl_error scen3 2 105 1.10 83 impl_error scen3 3 105 1.10 84 impl_error scen3 4 105 1.10 85 impl_error scen3 5 105 1.10 86 impl_error scen3 1 106 1.10 87 impl_error scen3 2 106 1.10 88 impl_error scen3 3 106 1.10 89 impl_error scen3 4 106 1.10 90 impl_error scen3 5 106 1.10 The inputs required for pattern = custom are similar to using pattern = model_change, but there is no need to specify a distribution as the second vector input of “pattern” and the column names in input are different. 
Note also that this change uses “randomize”, which indicates that a value for each scenario and each iteration is necessary, whereas if the first value of “scen” were “replicate”, separate values for each scenario should not be specified, but rather a single value for “all” should be used. The columns in the dataframe are: par: the parameter name or “all” if it should be applied to all parameters in the “pars” input scen: which scenario the value should apply to, or “all” if “replicate” is used as the scenario input iter: which iteration number the value should apply to. These are absolute iteration numbers. yr: the year the value applies to. These should be after the original end year of the OM. value: the value to apply to the model 5.5.6 How is the operating model modified to accommodate changes as specified in the future_om_list? All changes are made by converting the parameter(s) with changes to be time-varying by using additive annual parameter deviations. Because this is an operating model, Stock Synthesis is not run with estimation, so to get the changes into the OM, values (drawn or calculated with “model_change” or specified by the user using “custom”) are directly input as parameter deviations into the ss.par file. If an OM already contains time-varying parameters, these parameters will also be converted to additive parameter deviations before applying the changes specified in the future_om_list. 5.5.7 Example of specifying recruitment deviations This shows some code for specifying recruitment deviations with a mean of 0 and the same standard deviation as the historical recruitment deviations in years 1 to 100: template_mod_change <- create_future_om_list(list_length = 1) rec_dev_specify <- template_mod_change[[1]] rec_dev_specify$pars <- "rec_devs" rec_dev_specify$scen <- c("replicate", "all") rec_dev_specify$input$first_yr_averaging <- 1 rec_dev_specify$input$last_yr_averaging <- 100 rec_dev_specify$input$last_yr_orig_val <- 100 rec_dev_specify$input$first_yr_final_val <- 101 rec_dev_specify$input$ts_param <- "sd" rec_dev_specify$input$value <- NA rec_dev_list <- list(rec_dev_specify) rec_dev_list [[1]] [[1]]$pars [1] "rec_devs" [[1]]$scen [1] "replicate" "all" [[1]]$pattern [1] "model_change" [[1]]$input first_yr_averaging last_yr_averaging last_yr_orig_val first_yr_final_val 1 1 100 100 101 ts_param method value 1 sd absolute NA "],["helper.html", "6 Helper functions in SSMSE 6.1 Creating multiple operating models with develop_OMs() 6.2 Set up sampling for a scenario with create_sample_struct() 6.3 Get examples of the future_om_list input with create_future_om_list()", " 6 Helper functions in SSMSE run_SSMSE() requires some detailed inputs that are time-consuming to make. In the interest of automating routines as much as possible, helper functions are available in SSMSE. 6.1 Creating multiple operating models with develop_OMs() This function allows users to change a parameter in a Stock Synthesis OM and, if desired, refit the operating model to data. Typically, users will want to create multiple operating models to use in different scenarios in order to account for uncertainties in parameter values, such as biological parameters relating to natural mortality or growth. 6.2 Set up sampling for a scenario with create_sample_struct() run_SSMSE() requires the input sample_struct, which outlines the future sampling structure to use to generate data sets. This object could be difficult to create manually and users may want to just continue sampling as previously done in an existing SS3 model. 
create_sample_struct() will use the time patterns in sampling for different data types found in an SS3 data file and extend them forward in time. If no pattern is found, the function will return NAs and provide a warning to the user. If any NAs are returned (except to indicate that an entire data type is missing), the user will need to remove or fill in the NA value before using the list as an input to run_SSMSE(). See the use of create_sample_struct() in the simple example. 6.3 Get examples of the future_om_list input with create_future_om_list() Users can modify the future structure of their operating model through the future_om_list input to run_SSMSE. The structure and names of this object matter, so examples are provided by calling create_future_om_list(). See more details about the specification of future_om_list objects. "],["output.html", "7 Output and Plots 7.1 Summarizing output 7.2 Checking estimation model convergence 7.3 Calculating performance metrics", " 7 Output and Plots 7.1 Summarizing output Output is summarized using SSMSE_summary_all(): summary_list <- SSMSE_summary_all(dir = "path/to/scenario/dir", scenarios = c("sample_low", "sample_high"), run_parallel = TRUE) Relying on ss3sim::get_results_all(), this function creates: For each scenario, 3 scenario-level .csv files For all scenarios, 2 cross-scenario .csv files named SSMSE_ts and SSMSE_scalar by default. For all scenarios, the function returns a list object containing data frames of timeseries (ts), scalar, and derived quantities (dq) summaries. Note that run_parallel = TRUE is only faster than run_parallel = FALSE when there is more than one scenario and none of the scenario-level .csv files have been created yet. By default, if a user doesn’t specify scenarios, all scenarios in dir will be summarized. 7.2 Checking estimation model convergence One of the first checks to do if using an estimation model after running an MSE analysis is to check that the estimation model has converged. A number of checks could be done, but a basic one is checking the gradients for the estimation model, which are added to the SSMSE_scalar summary sheet. 7.3 Calculating performance metrics Typically, a suite of performance metrics is used in MSE. Punt et al. (2016) recommend at least using metrics related to: average catch variation in catch over time population size "],["plotting-output-and-performance-metrics.html", "8 Plotting output and performance metrics", " 8 Plotting output and performance metrics Plotting summaries and performance metrics is currently left up to the user, as the range of plots and performance metrics users may desire is extensive. There are currently diagnostic plots to examine how sampled data compare to the operating model values and data sets used to condition the operating models. plot_index_sampling() creates a diagnostic plot for indices, while plot_comp_sampling() creates diagnostic plots for composition data. "],["glossary-of-mse-terms.html", "9 Glossary of MSE terms", " 9 Glossary of MSE terms This glossary is based on Rademeyer et al. (2007) and Punt et al. (2016). Assessment error - Error that occurs during the process of conducting an assessment, specifically error “which inform the catch control rule that is being evaluated using the MSE: management advice for any system is based on uncertain data. Consequently, the data that inform catch control rules need to be generated in a manner which is as realistic as possible. 
Uncertainty arises when the model used for conducting assessments and providing management advice differs from the operating model, or the data are too noisy to estimate all key parameters reliably (Punt et al. 2016).” Assessment model (AM) - A fitted and well-scrutinized Stock Synthesis stock assessment model. The AM is the input to SSMSE so that it can build the operating model. Bootstrapped dataset - After running an SS3 model (and if the user turns on the option in the starter file), the output file data.ss_new contains a “dataset” that has the same form as the input data.ss file, but instead of the input values, contains a sampled dataset. This is derived from using the expected values given the parameter estimates and model structure (see glossary entry on Expected values), uncertainty either estimated or input in the model, and an assumed distribution for sampling. As many bootstrapped data sets as the user desires can be output from an SS3 model. These “bootstrap datasets” are the third or greater set of values in the data.ss_new file. Estimation model (EM) - This refers to the model used within the MSE procedure to represent the stock assessment process. Note that an estimation model proxy could also be used to represent the stock assessment process. Expected values - After running an SS3 model (and if the user turns on the option in the starter file), the output file data.ss_new contains a “dataset” that has the same form as the input data.ss file, but instead of the input values, contains the expected values given the parameter estimates and model structure. This “expected values dataset” is the second set of values in the data.ss_new file. Implementation error - Also called implementation uncertainty or outcome uncertainty. Broadly, this includes error in implementing the management action(s), which is often not done perfectly. Definition from Punt et al. (2016): “The most obvious form of this type of uncertainty is when catches are not the same as the TACs – typically more is taken or the decision-makers do not implement the TACs suggested by the management strategy. However, there are many other sources of outcome uncertainty, such as that associated with catch limits set for recreational fisheries and regulating discards.” Index - An index of abundance. In SS3 files, this is sometimes used interchangeably with CPUE. Plural, indices. Management strategy - Following Punt et al. (2016), there are 2 types: model-based management strategies and empirical management strategies. Ideally, SSMSE will allow users to use either of these management strategies. Model-based management strategies include conducting a stock assessment and using output for determining harvest control rules. Empirical management strategies do not involve conducting a stock assessment but rather setting regulations from data (although summarization is possible). In some cases, management strategies are computationally intensive and time-consuming to formally include in the context of an MSE; to speed up the process, it is common to use management strategy proxies, typically an assumed error distribution about the operating model values. Management strategies are also known as management procedures. Model uncertainty - Definition from Punt et al. (2016): “the form of relationships within an operating model will always be subject to uncertainty. 
The simplest type of model uncertainty involves, for example, whether the stock–recruitment relationship is Beverton–Holt or Ricker, whether a fixed value for a model parameter is correct, or whether fishery selectivity is asymptotic or dome-shaped. However, there are other more complicated types of model structure uncertainty such as how many stocks are present in the area modelled, the error structure of the data used for assessment purposes, the impact of future climate change on biological relationships such as the stock–recruitment function, and ecosystem impacts on biological and fishery processes.” Observation error - Error that results from not observing the true dynamics of the system. See also bootstrapped dataset. Operating model (OM) - This is an SS3 model that defines the assumed true dynamics of the population and its associated fisheries for the purposes of management strategy evaluation. Most often more than one operating model is necessary in order to adequately characterize the uncertainty in the true dynamics of the system. The SS3 OM may also define how sampling from the true dynamics is done, as SS3 produces expected values and, if desired, bootstrapped data sets based on sampling assumptions implicit in the model. For more information, see the glossary entries on Expected values and bootstrapped dataset. Parameter uncertainty - Definition from Punt et al. (2016): “many operating models are fit to the data available, but the values estimated for the parameters of those operating models (e.g. fishery selectivity-at-age, the parameters of the stock–recruitment relationship and historical deviations in recruitment about the stock–recruitment relationship) are subject to error.” Performance statistics - Definition from Rademeyer et al. (2007): “Statistics that summarize different aspects of the results of a simulation trial used to evaluate how well a specific [management strategy] achieves some or all of the general objectives for management for a particular scenario.” Performance statistics usually fall into one of three categories: catch-related, stability-related, or risk-related. Process uncertainty - Definition from Punt et al. (2016): “variation (usually assumed to be random, though sometimes incorporating autocorrelation) in parameters often considered fixed in stock assessments such as natural mortality, future recruitment about a stock–recruitment relationship and selectivity.” Stock Synthesis (SS3) - An integrative, general population dynamics modeling program used to assess the effects of fishing on a population. Available at the Stock Synthesis website. Uncertainty - Incorporating uncertainty into an MSE procedure is extremely important. There are several potential sources of uncertainty, which we have divided as done in Punt et al. (2016), in addition to adding observation error. For more details, see the glossary entries on Process uncertainty, Parameter uncertainty, Model uncertainty, Assessment error, Implementation error, and Observation error. Punt et al. (2016) suggest that MSE should at least consider process uncertainty (particularly deviations from the stock–recruitment relationship), parameter uncertainty (particularly that which relates to productivity and stock size), and observation error. Note that Rademeyer et al. (2007) divide error into estimation error, implementation error, observation error and process error, which could be used instead. These may be more natural divisions in sources of error. 
"],["404.html", "Page not found", " Page not found The page you requested cannot be found (perhaps it was moved or renamed). You may want to try searching to find the page's new location, or use the table of contents to find the page you are looking for. "]]
diff --git a/manual/simple.html b/manual/simple.html
index 2fef7a35..4a5d4350 100644
--- a/manual/simple.html
+++ b/manual/simple.html
@@ -23,7 +23,7 @@
 
 
 
-
+
 
   
   
@@ -236,7 +236,7 @@ 

2 A simple example

2.1 Setup R workspace folders

First, we will load the SSMSE package and create a folder in which to run the example:

-
Installing 4 packages: mvtnorm, bdsmatrix, numDeriv, bbmle
+
Installing 5 packages: mvtnorm, bdsmatrix, numDeriv, cpp11, bbmle
library(SSMSE) #load the package
 library(r4ss) #install using remotes::install_github("r4ss/r4ss")
 library(foreach) #if using run_parallel = TRUE
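# A sketch of the folder-creation step mentioned above; the folder name
# "run_SSMSE-ex" is illustrative, not prescribed by the package.
run_SSMSE_dir <- file.path("run_SSMSE-ex")
dir.create(run_SSMSE_dir)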