portalcasting (development version)
Version numbers follow Semantic Versioning.
Automating editing of the Docker Hub description
- Using a third-party GitHub Action until Docker integrates this functionality natively
- addresses #361
- Tightening and improving the language in the Title and Description fields of the DESCRIPTION file
Testing docker images prior to pushing them
- A simple “can you run `library(portalcasting)`?” check to start; this entry point can be used to expand the image testing suite in the future
Pointing the Dockerfile to the current build SHA
- Previously, we used the default settings for `remotes::install_github`, which point to HEAD, when testing the build-and-push Docker action. That was acceptable (although not great) when we only ran that action on tagged version releases, since the main branch was typically up to date with the tag, but it is not exact, and it also meant that any testing of the build on a PR was actually still grabbing from main, which is not what we want.
- We now use the SHA for the specific event that triggers the build by passing it into the Dockerfile as an ARG
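A minimal sketch of the idea (the environment-variable name is illustrative, and this is not the exact Dockerfile contents): the triggering commit's SHA is passed into the image build as an ARG and then used as the installation ref, so the image contains exactly that commit rather than HEAD.

```r
# Inside the image build, install the package at the commit that triggered the action.
# "COMMIT_SHA" is a hypothetical ARG/ENV name used here for illustration only.
sha <- Sys.getenv("COMMIT_SHA")
remotes::install_github("weecology/portalcasting", ref = sha)
```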
Including CI tests of examples and `eval = FALSE` vignette code
- Because of the long run time of some of the code, most documentation examples and vignette code are wrapped to prevent evaluation at build time. As a result, much of the documentation code is not run and therefore would not break builds if it errored.
- To address this, we add two scripts in the new inst/extra_testing folder and a GitHub Actions runner for each.
Bringing the app code into the package
- Improves robustness of building the app (includes code and dependencies in the docker image, allows for unit testing app components, etc.)
- Also allows users to spin up a local version of the app
Evaluation figures now read from the evaluations file
- Avoids computing evaluations while generating plots
Elimination of model-named functions
- The models are now implemented via the `fit` and `cast` elements in their control lists (sketched below)
- Only models that need new functions have them (e.g., `fit_runjags` for fitting)
- The `forecast` method is now used generally for casting
- Introduction of generalized fitting and casting helper functions
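A hypothetical sketch of the control-list idea (the element and argument names here are illustrative, not the package's exact structure): a model is defined by its fit and cast components rather than by a dedicated model-named function.

```r
# Illustrative only: each model's controls carry a fit and a cast component
# that the generalized machinery dispatches on.
model_controls <- list(
  AutoArima = list(
    fit  = list(fun = "auto.arima", args = list(y = "abundance")),
    cast = list(fun = "forecast",   args = list(h = "lead_time"))
  )
)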
Shifting covariates to a daily level as an initial build step
- Covariates need to be handled at the daily level so we can manage when a newmoon is split between historic and forecast days
Putting species under dataset in the models’ controls lists
- This is more articulated and allows for finer control to help avoid fitting issues, etc.
Moving arguments into functionalities
- `cast_date` is no longer an argument; it is filled automatically
- `dataset` arguments are also being removed where possible to streamline (they are pulled from the model controls instead)
Model functions are now species-level
- To facilitate a lot of downstream functionality, we’re breaking up the model functions to operate on the species-level rather than the dataset-level, according to the new control lists
- Species that were failing the negative binomial GARCH models (seasonal or not) have been removed from those models, since a failure throws a warning and then fits a Poisson version, and we are now fitting Poisson versions for everyone anyway.
`process_model_output` replaces `save_cast_output` and various model-processing bits
- Provides a much more general way to produce a forecast that can be integrated into the system, leveraging the metadata files
Casts metadata table includes a new column
- To facilitate backwards compatibility, it is filled with `NA` for previous tables if missing when loaded
Updates to prefab models to forecast 13 time steps forward (addressing issue #297)
- `pevGARCH`, `nbGARCH`, and `nbsGARCH` all get `past_mean` set to 13
- All models are set with a `lead_time` of 13
Settings updates to avoid re-downloading the archive
- Set to `FALSE` by default to manage the version-match decision making
- The directory resources portalPredictions version is updated to be the correct value
- The `overwrite` setting is temporarily removed from the file-saving functions to prevent argument-name confusion
- We need an external record of the file version to compare against
`dm_controls` as a separate dataset
- Model controls now indicate the species to which they are applied
Major updates to JAGS models
- Modeling and tracking sigma not tau (sd, not precision)
- No max caps for density or counts (aka removing guardrails)
- No use of the 0.1 offset for logging; this is managed differently now
- Chains increased from 2 to 4; silent JAGS is now `FALSE`
- Removal of the `jags_SS` wrapper, which limited adaptation of the model
Subdirectory internal naming changed to remove spaces
- The “model fits” subdirectory is now back to a name without spaces
- The “model scripts” subdirectory is back to a name without spaces
Building out the evaluation pipeline
- Starting with what already occurs, but formalizing it as part of an `evaluate_cast` / `evaluate_casts` pair of functions
- The `evaluate_casts` function now works automatically to evaluate all the casts using `evaluate_cast`, generating the error table as it goes, but nothing is saved out or updated (see the usage sketch below)
- There is also no filter on evaluated casts by default, so the output from forecasts without observations to evaluate is a table with a single row of `NaN`, and these get wrapped up into the list
- This produces no errors; it is just noteworthy
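A hedged usage sketch of the evaluation pair described above (the `main` path is hypothetical and the exact released signature of `evaluate_casts` may differ):

```r
# Evaluate all the casts in a directory and collect the per-cast error tables.
library(portalcasting)
evals <- evaluate_casts(main = "./production")
str(evals, max.level = 1)  # a list with one error table per cast
```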
Relocation of prefab controls
- Moved from source-code scripts to write functions for both the rodent and model controls lists
- `setup_dir` now takes a `settings` argument that is a `list` of the arguments
- The `directory_settings` function now quickly and cleanly collapses the settings that go into the directory
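A hedged sketch of the settings interface described above (argument names follow the text; the released signatures and the path used here may differ):

```r
# Collapse the directory settings into a single list and pass it to setup.
library(portalcasting)
settings <- directory_settings()
setup_dir(main = "./sandbox", settings = settings)
```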
Generalized functionality for models and rodent data sets
- Control lists are now structured for use with the generalized functions
Codebase formatting [work in progress]
- No longer concerned about the 80 char line limit
- Long argument lists, etc. are now formatted for quick top-to-bottom reading, via alignment on the `=`
- A messaging function was redesigned to align with `message` directly, argument for argument, with the addition of a quieting argument
- It now allows for multiple message arguments via `...` that become pasted together
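An illustrative sketch of a quiet-aware messaging helper in the spirit of this redesign (not the package's exact implementation; the function name here is made up):

```r
# Pastes any number of message arguments together and prints them only when
# not running quietly.
messageq_sketch <- function(..., quiet = FALSE) {
  if (!quiet) {
    message(paste0(...))
  }
  invisible(NULL)
}

messageq_sketch("loading ", "rodent data", quiet = FALSE)
```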
`jags_logistic` model added
- Invoked like `jags_RW` and applied to the same data set
- Building upon the `jags_RW` model, `jags_logistic` expands the “process model” underlying the Poisson observations.
- There are four process parameters: mu (the density of the species at the beginning of the time series) and tau (the precision, i.e. inverse variance, of the random walk, which is Gaussian on the log scale) for the starting value, and r (growth rate) and K (carrying capacity) for the population dynamics. The observation model has no additional parameters.
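An illustrative JAGS model string (not the package's exact model definition; priors and names are placeholders) showing a logistic process model on the log scale with mu and tau for the starting value, r and K for the dynamics, and a Poisson observation model:

```r
jags_logistic_sketch <- "model {
  mu  ~ dnorm(0, 0.01)       # log-scale density at the start of the series
  tau ~ dgamma(0.01, 0.01)   # precision of the log-scale process noise
  r   ~ dnorm(0, 1)          # growth rate
  K   ~ dunif(0, 1000)       # carrying capacity

  log_N[1] ~ dnorm(mu, tau)
  for (t in 2:n) {
    pred[t] <- log_N[t - 1] + r * (1 - exp(log_N[t - 1]) / K)
    log_N[t] ~ dnorm(pred[t], tau)
  }
  for (t in 1:n) {
    count[t] ~ dpois(exp(log_N[t]))   # Poisson observations
  }
}"
```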
Shift to default downloading of the archive from GitHub
- `setup_dir` does not download the archive by default, but `setup_production` does
- The download function is being broken out into components; work is still ongoing, but there is now a separate function for each of the PortalData, portalPredictions, and climate-forecast downloads
- Addresses #132 and #199
- Previous versions used Hao’s fork of the `rEDM` package, which has been deprecated and now breaks because of updates in Rcpp
- Switching to the CRAN version of `rEDM` does not fix the issue, so these models cannot be used in the prefab set
- Removed from the prefab control list and removed the documentation
- No longer exported from the NAMESPACE
Improving GitHub Actions runs
- Examples are no longer run (this needed to be explicitly stated); addressing #206
- Use the RStudio Package Manager to speed up runs; addressing #206
Addresses issues with covariate data
- Missing data from weather stations caused issues
- Now if there is a missing set of data for a month of covariates, the saved covariate forecasts are used
`dm_controls_interp` added to the prefab data sets
- For use in the basic single-species process models
- Note the lowercase name! Using capitals in the actual name of the data set will cause problems because `tolower` gets used elsewhere!
Change in format for saving out model casts
- `model_casts` output is saved out as serialized `.json` files now, instead of the previous format.
- More reliable and generalized.
- Also added functions for reading them in (an illustration follows below).
- Added a vignette that describes how to use the JAGS/runjags API within portalcasting.
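As an illustration of the serialized-`.json` approach described above (using `jsonlite`, which is an assumption about tooling; the package provides its own read functions for cast output):

```r
library(jsonlite)

# A toy cast object, written out and read back in as JSON.
cast <- list(moon = 520:524, estimate = c(10, 12, 11, 9, 13))
write_json(cast, "cast_0001.json", digits = NA)
cast_back <- read_json("cast_0001.json", simplifyVector = TRUE)
```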
Pulls code for `match.call.defaults` into the package
- Using it from `DesignLibrary` causes a problematic dependency chain when building the Docker image
Full writing of `control_files` in model scripts
- Previously, the controls list for the files in the model scripts was taken from the environment in which the script was run, which exposes the script to everything in that environment and is undesirable.
- Once the need to include a control list for runjags models forced explicit writing of the list inputs, that code was available to transfer to the files control list.
- This does mean that the function calls in the scripts are now super long and explicit, but that’s ok.
- To avoid overly long model-script lines (where even default inputs are repeated in the list functions), a function `control_list_arg` was made to generalize what was coded up for the runjags list, for use also with the files control list. This function writes a script component that only includes arguments to the list function that differ from the formal definition.
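A hedged sketch of the idea behind `control_list_arg` (not the package's implementation): build a call to a list-constructing function that names only the arguments whose values differ from the function's formals.

```r
control_list_arg_sketch <- function(arg_list, fun_name) {
  defaults <- formals(fun_name)
  differs  <- vapply(names(arg_list), function(nm) {
    !identical(arg_list[[nm]], defaults[[nm]])
  }, logical(1))
  kept     <- arg_list[differs]
  args_txt <- paste(names(kept), "=",
                    vapply(kept, function(x) deparse(x)[1], character(1)),
                    collapse = ", ")
  paste0(fun_name, "(", args_txt, ")")
}

# Only the non-default argument is written out:
control_list_arg_sketch(list(sep = "_", collapse = NULL), "paste")
# [1] "paste(sep = \"_\")"
```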
`portalcast` updates model scripts according to the model controls
- Previously, if you changed any controls of a prefab model, you had to manually re-write the model scripts; using `fill_models` would result in hand-made scripts being overwritten, so a specific function (`update_models`) for updating the models was created.
- `update_models` by default only updates the models listed in the `controls_model` input, to avoid overwriting model scripts. To change this behavior and also update all of the prefab models’ scripts, set `update_prefab_models = TRUE`. This is particularly handy when changing a global (with respect to the model scripts) argument.
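A hedged usage sketch (the example that originally accompanied these notes was not preserved; the path is hypothetical and the argument names follow the text above but may differ from the released signature):

```r
# Rewrite all prefab model scripts after changing a global setting.
library(portalcasting)
update_models(main = "./sandbox", update_prefab_models = TRUE)
```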
Messaging improved when trying to use an incomplete directory
- An indication is now made that a component of the directory is missing, and a suggestion is made to run the directory setup function
Patching data set bug in plotting
- There was a bug with matching the interpolated to the non interpolated data sets within the ensembling, which has been fixed.
Changed behavior of the rodents data preparation
- There is now no `start_moon` argument, and all of the data are included in the prepared table.
- This aligns the rodents prep functions with the other (moons, covariates) prep functions.
- Facilitates use of data prior to `start_moon` in forecasting models (e.g., for distributions of starting state variables).
- Requires that model functions now explicitly trim the rodents table being used. This has been added to all prefab models.
Fixed codecov targets
- Previous targets were restrictively high due to earlier near-perfect coverage.
- A codecov.yml file is now included in the repo (and ignored for the R build) which sets the target arbitrarily at the still-quite-high-but-not-restrictively-so 95%.
- It can be changed if needed in the future.
Simple EDM model added
JAGS infrastructure added
- Using the runjags package, with extensive access to its API.
- Currently in place with a very simple random walk model.
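An illustrative use of the runjags interface with a very simple random walk model (the model string, data, and settings here are toy values, not the package's defaults):

```r
library(runjags)

jags_RW_sketch <- "model {
  mu  ~ dnorm(0, 0.01)
  tau ~ dgamma(0.01, 0.01)
  log_N[1] ~ dnorm(mu, tau)
  for (t in 2:n) {
    log_N[t] ~ dnorm(log_N[t - 1], tau)
  }
  for (t in 1:n) {
    count[t] ~ dpois(exp(log_N[t]))
  }
}"

fit <- run.jags(model    = jags_RW_sketch,
                monitor  = c("mu", "tau"),
                data     = list(count = rpois(24, 10), n = 24),
                n.chains = 4)
```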
Prepared rodents table includes more content
- Expanded back in time to the start.
- Added effort columns (all default options include `effort = TRUE`).
Updated adding a model and data vignette
- Added section at the end about just extending existing models to new data sets.
- Associated with the reconfiguration of portalcasting from v0.8.1 to 0.9.0, ensembling was removed temporarily.
- A basic ensemble is reintroduced, now as an unweighted average across all selected models, allowing us to have an ensemble but not have it be tied to AIC weighting (because AIC weighting is no longer possible with the split between interpolated and non-interpolated data for model fitting).
- In a major departure from v0.8.1 and earlier, the ensemble’s output is not saved like the actual models’ output; rather, it is only calculated on the fly when needed.
- In plotting, it is now the default to use the ensemble for `plot_cast_point` and for the ensemble to be included in the other plots.
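A minimal sketch of an unweighted ensemble across models (illustrative only; the package computes its ensemble internally from the cast tables):

```r
# Average aligned prediction vectors across models with equal weight.
unweighted_ensemble <- function(predictions) {
  Reduce(`+`, predictions) / length(predictions)
}

unweighted_ensemble(list(AutoArima = c(5, 6, 7), ESSS = c(4, 5, 9)))
```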
Model evaluation and ensembling added back in
- These were removed with the update from v0.8.1 to v0.9.0 to allow time to develop the code with the new infrastructure.
- Model evaluation happens within the cast tab output as before.
Temporarily removed figures returned
- Associated with the evaluation.
- Plotting of error as a function of lead time for multiple species and multiple models. Now has a fall-back arrangement that works for a single species-model combination.
- Plotting RMSE and coverage within species-model combinations.
Flexing model controls to allow user-defined lists for prefab models
- For sandboxing with existing models, it is useful to be able to change a parameter in a model’s controls, such as the data sets. Previously, that would require a lot of hacking around. Now, it is as simple as inputting the desired controls and flipping `arg_checks = FALSE`.
Major API update: increase in explicit top-level arguments
- Moved key arguments to focal top-level inputs, rather than nesting them within a control options list. This allows full control, while the default settings work cleanly.
- Restructuring of the controls lists, retaining usage in situations where necessary: model construction, data set construction, file naming, climate data downloading.
- Openness for new `setup` functions.
- Simplification of model naming inputs: just put in the names you need, and only use the `model_names` functions when necessary (usually when coding inside functions or when setting default argument levels).
Directory tree structure simplified
- `base` (both as a function and a concept) was removed. To reproduce that structure, use `main = "./name"`.
- “PortalData” has been removed as a subdirectory and replaced with “raw”, which includes all raw versions of downloaded files (post unzipping): Portal Data, Portal Predictions, and covariate forecasts (whose saving is also new here).
- Expanded use of `verbose`, connected throughout the pipeline.
- Additional messaging functions to reduce code clutter.
- Formatting of messages to reduce clutter and highlight the outline structure.
Download capacity generalized
- Flexible interface to download capacity through a URL, with generalized and flexible functions for generating Zenodo API URLs (for retrieving the raw data and historical predictions) and NMME API URLs (for retrieving weather forecasts) to port into the `download` function; addresses multiple issues.
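A minimal sketch of the generalized pattern (the wrapper name and file name are illustrative): any resource can be retrieved by handing its URL to a common download step, whether the URL was built for the Zenodo API or the NMME API.

```r
# Fetch any URL to a destination file; the URL-building helpers supply the input.
download_resource <- function(url, destfile) {
  utils::download.file(url = url, destfile = destfile, mode = "wb")
  destfile
}
```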
Changes for users adding their own models to the prefab set
- Substantial reduction in effort for users who wish to add models (i.e., anyone who is sandboxing). You can even just drop in your own R script (which could be a single line calling out to an external program if desired) without having to add any model-script-writing controls, and just add the name of the model to the `models` argument in `portalcast`, and it will be run with everything else (see the sketch after this list).
- Outlined in the updated Getting Started and Adding a Model/Data vignettes.
- Users adding models to the prefab suite should now permanently add their model’s control options to the source code in `model_script_controls`, rather than write their own control functions.
- Users adding models to the prefab suite should permanently add their model’s function code to the `prefab_models` script (reusing and adding to the documentation in `prefab_model_functions`), rather than to its own script.
- Users should still add their model’s name to the source code.
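A hedged sketch of the sandbox workflow described above (the path, the custom model name, and the exact argument signatures are illustrative assumptions):

```r
library(portalcasting)
setup_sandbox(main = "./sandbox")
# Place your own script for "mymodel" in the models subdirectory, then:
portalcast(main = "./sandbox", models = c("AutoArima", "mymodel"))
```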
Relaxed model requirements
- Models are no longer forced to use interpolated data.
- Models are no longer required to output a rigidly formatted data-table. Presently, the requirement is just a list, but soon some specifications will be added to improve reliability.
- Outlined in the updated Adding a Model/Data vignette.
More organization via metadata
- Generalized cast output is now tracked using a unique ID in the file name associated with the cast, which is related to a row in a metadata table, newly included here; addresses multiple issues.
- Additional control information (like data set setup) is sent to the model metadata and saved out.
- Directory setup configuration information is now tracked in a `dir_config.yaml` file, which is drawn on to record what was used to create, set up, and run the particular casts.
Changes for users interested in analyzing their own data sets not in the standard data set configuration
- Users are now able to define rodent observation data sets that are not part of the standard set (“all” and “controls”, each also with interpolation of missing data) by giving the name in the `data_sets` argument and the controls defining the data set (used by portalr’s `summarize_rodent_data` function) in the corresponding controls list.
- In order to actualize this, a user will need to flip off the argument checking (that is the default in a sandbox setting; if using a standard or production setting, set `arg_checks = FALSE` in the relevant function).
- Users interested in permanently adding a treatment level to the available data sets should add the source code to the `rodents_controls` function, just like with the models.
- Internal code points the pipeline to the files named via the data set inputs. The other data files are pointed to using the `file_controls` input list, which allows for some general flexibility with respect to what files the pipeline reads in.
Split of standard data sets
- The prefab “all” and “controls” data sets were both interpolated by default for all models because of the use of AIC for model comparison and ensemble building. That forced all models to use interpolated data.
- Starting in this version, models are not required to have been fit in the same fashion (due to generalization of the comparison and post-processing code), so interpolation is not required when not needed, and the data have been split into standard and interpolated versions.
Application of specific models to specific data sets now facilitated
- Models now have a `data_sets` argument that is used when writing the code out, replacing the hard-coded requirement of analyzing “all” and “controls” for every model. Users who wish to analyze a particular data component can now easily add it to the analysis pipeline.
Generalization of code terms
- Throughout the codebase, terminology has been generalized from “fcast”/“forecast”/“hindcast” to “cast” except where a clear distinction is needed (here primarily due to where the covariate values used come from).
- Nice benefits: highlights commonality between the two (see next section) and reduces code volume.
“Hindcasting” becomes more similar to “forecasting”
- In the codebase now, “hindcasting” is functionally “forecasting” with a forecast origin (`end_moon`) that is not the most recently occurring moon.
- Indeed, “hindcast” is nearly entirely removed from the codebase and “forecast” is nearly exclusively retained in documentation (and barely in the code itself), with both functionally being replaced with the generalized (and shorter) “cast”.
- `cast_type` is retained in the metadata file for posterity, but functionality is more generally checked by considering `end_moon` and `last_moon` in combination, where `end_moon` is the forecast origin and `last_moon` is the most recent moon.
- Rather than the complex machinery previously used to iterate through multiple forecasts (“hindcasting”), which involved working backwards and skipping certain moons (no longer necessary thanks to earlier code updates that allow forecasting even without the most recent samples), a simple for loop now manages the iteration. This is also facilitated by downloading the raw portalPredictions repository from Zenodo and, critically, retaining it in the “raw” subdirectory, which allows quick re-calculation of historic predictions of covariates.
- `cast_type` has been removed as an input; it is now determined automatically based on `end_moon` and the last moon available (if they are equal it is a “forecast”; if not, a “hindcast”).
Softer handling of model failure
- When casting, the model scripts are now sourced within a for loop (rather than via `sapply`) to allow simple error catching for each script (see the sketch below).
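An illustrative sketch of the softer failure handling (the directory name and messages are hypothetical, not the package's code): each model script is sourced inside `tryCatch` so one failing model does not stop the rest of the cast.

```r
scripts <- list.files("models", full.names = TRUE)
for (script in scripts) {
  tryCatch(source(script),
           error = function(e) {
             warning("model script failed: ", script, "\n  ", conditionMessage(e))
           })
}
```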
Improved argument checking flow
- Arg checking is now considerably tighter, code-wise.
- Each argument is either recognized and given a set of attributes (from an internally defined list) or unrecognized, in which case the user is told it is not being checked (to help notify anyone building on the code that there is a new argument).
- The argument’s attributes define the logical checking flow through a series of pretty simple options.
- There is now also an `arg_checks` logical argument that goes into `check_args` to turn off all of the underlying checking code, enabling the user to go outside the production restrictions that would otherwise throw errors, even though the inputs might technically work under the hood.
Substantial re-writes of the vignettes
- `drop_spp` is now changed to `species` (focusing on inclusion rather than exclusion).
- Improved examples.
- Tightened testing, with `skip_on_cran` used judiciously.
- No longer building the AIC-based ensemble.
- Default confidence limit is now the more standard 0.95.
`plot_cov_RMSE_mod_spp` now only plots the most recent -cast by default
- With `cast_dates = NULL` (the default), the plot only uses the most recent -cast, to avoid swamping more current -casts with historic -casts.
Added specific checks for no casts returned in plot functions
- There is a bit of leeway with respect to argument validity, in particular around model names (to facilitate users making new models with new names, we do not want to hardwire a naming scheme in `check_arg`), so there are now checks of whether the tables returned from `select_casts` have any rows.
Handling the edge cases in model function testing
- The trimming of the data sets for model function testing (happens in the AutoArima test script) now includes addition of some dummy values for edge cases (all 0 observations and nearly-all-0 observations), which allows better coverage of testing for the -GARCH model functions in particular.
Handling `nbsGARCH` when even the Poisson fallback fails
- Based on `nbGARCH` and then extended into `nbsGARCH`, the models fall back to a Poisson distribution if the negative binomial fit fails. Previously (with only `nbGARCH`), the Poisson fit always succeeded in those back-ups, but now (with `nbsGARCH`) that sometimes is not the case (because the predictor model is more complex), and even the Poisson fit can fail. So now, for both models, if that fit fails we follow what occurs in `pevGARCH`, which is to use the `fcast0` forecast of 0s and an arbitrarily high AIC.
Addressing covariate forecasts in `pevGARCH` under hindcasting
- `pevGARCH()` was not set up to leverage the saved covariate forecasts.
- It is now set up with a toggle based on the `cast_type` in the metadata list (which has replaced the formerly named `filename_suffix` element) to load the `covariate_forecasts` file (using a new `read_covariate_forecasts` function) and then select the specific hindcast based on the `date_made` columns, as selected by new elements in the metadata list.
`foy()` calculates the fraction of the year for a given date or set of dates.
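A minimal sketch of a fraction-of-year calculation in the spirit of `foy()` (illustrative, not the package's implementation):

```r
foy_sketch <- function(dates) {
  dates      <- as.Date(dates)
  year_start <- as.Date(format(dates, "%Y-01-01"))
  year_end   <- as.Date(paste0(as.numeric(format(dates, "%Y")) + 1, "-01-01"))
  as.numeric(dates - year_start + 1) / as.numeric(year_end - year_start)
}

foy_sketch("2020-07-01")  # roughly 0.5
```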
Move to usage of CRAN portalr
- To aid with stability, we’re now using the CRAN release of portalr
Including the package version message in top-level functions
- A simple message reports the version of portalcasting loaded in the top-level functions.
Tidied functionality for checking function arguments
- Introduction of `check_args` and `check_arg`, which collaborate to check the validity of function arguments using a standardized set of requirements based on the argument names, thereby helping to unify and standardize the use of the codebase’s arguments.
Updated function names
- `read_data` has been split out into separate reader functions.
- The sub-path functions have been merged into `sub_paths`, which returns all subdirectories if none are specified.
Updated argument names
- In multiple functions, `data` has been replaced with `rodents` to be specific.
- `CI_level` is now subsumed by a more general argument.
- `name` is now subsumed by a more general argument.
- `set` is now split into separate arguments.
- The order of arguments in `model_names` is now back to its previous ordering.
- The default
- The four model functions have a reduced set of inputs to leverage the directory tree, and the script generation is updated to match.
- Updating the cast validity checking to `cast_is_valid` and removing the old function.
- The `messageq` function is added to tidy code around messages being printed based on the quiet setting.
Bug fix in `plot_cast_ts`
- `plot_cast_ts` did not cleanly plot time series where observations had been made after the start of the prediction window.
- The function has been set up to split out observations that occurred during the prediction window, execute the plot as if they did not exist, and then add them on top.
- Functionality has also been added to allow toggling those points on and off via the `add_obs` input.
Completed migration of plotting code
- `plot_cast_ts` is now fully vetted and tested.
- Tidied helper functions, including `plotcastts_xaxis`, produce the y label and x axis for `plot_cast_ts`.
- `plot_cast_point` is now added to replace the previous point-plotting code.
- `plotcastpoint_yaxis` provides tidied functionality for the y axis of `plot_cast_point`.
- `select_most_ab_spp` allows for a simple selection of the most abundant species from a -cast.
- `plot_cov_RMSE_mod_spp` is now added to replace the raw code in the evaluation page.
Processing of forecasts
- `read_casts` (old) is now `read_cast` and specifically works for only one -cast.
- `read_casts` (new) reads in multiple -casts.
- `select_casts` now allows a more flexible selection by default.
- `make_ensemble` now returns a set of predictions with non-`NA` bounds when only one model is included (it returns that model as the ensemble).
- `most_recent_cast` returns the date of the most recent -cast; it can be dependent on the presence of a census.
- `verify_cast` and `cast_is_valid` replace `forecast_is_valid` from the repo codebase: `verify_cast` is a logical wrapper on `cast_is_valid` that facilitates pipeline integration, while `cast_is_valid` does the major set of checks of the cast data frame.
- `append_observed_to_cast` is provided to add the observed data to the forecasts, along with columns for the raw error, the in-forecast-window flag, and the lead time.
- `measure_cast_error` allows for summarization of errors at the -cast level.
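An illustrative sketch of cast-level error summaries (RMSE and interval coverage) in the spirit of `measure_cast_error` (not the package's code; inputs here are toy vectors):

```r
measure_cast_error_sketch <- function(observed, predicted, lower, upper) {
  data.frame(RMSE     = sqrt(mean((observed - predicted)^2, na.rm = TRUE)),
             coverage = mean(observed >= lower & observed <= upper, na.rm = TRUE))
}

measure_cast_error_sketch(observed  = c(3, 5, 2),
                          predicted = c(4, 4, 2),
                          lower     = c(1, 2, 0),
                          upper     = c(7, 8, 5))
```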
- Argument order for `models` is reversed (relative to `set`), and the general default is now `set = "prefab"` within the options functions, to make it easy to run a novel model set.
- Argument order for `subdirs` is reversed (relative to `type`), and the general default is now `type = "portalcasting"` within the options functions and `dirtree`, to make it easier to manage a single subdirectory.
- The `fdate` argument has been replaced throughout with a generalized cast-date argument.
Beginning to migrate plotting code
- `plot_cast` is developed but not yet fully vetted and tested, nor integrated into the main repository. It will replace `forecast_viz` as a main plotting function.
`read_data`’s options have been expanded
- Not fully implemented everywhere, but now available.
- All of the code is now tested via automated unit tests.
- Test coverage is tracked via Codecov.
- The only functionality not covered by testing on Codecov is associated with `download_predictions()`, which intermittently hangs on Travis. Testing is available, but requires manually toggling the "local" setting in the relevant test scripts (02-directory and 12-prepare_predictions).
Enforcement of inputs
- Most of the functions previously did not have any checks on input argument classes, sizes, etc.
- Now all functions specifically check each argument’s value for validity and throw specific errors.
- All of the functions have fleshed out documentation that specify argument requirements, link to each other and externally, and include more information.
- To smooth checking of different data structures, we now define data objects with classes in addition to their existing classes.
- These classes do not presently have any specified methods or functions.
Options list classes
- To smooth checking of different list structures, we now define the options list objects with classes in addition to their existing classes.
- Each of these classes is created by a function of that name.
- These classes do not presently have any specified methods or functions that operate on them.
- `classy()` allows for easy application of classes to objects.
- `read_data()` provides a simple interface for loading and applying classes to model-ready data files.
- `remove_incompletes()` removes any incomplete entries in a table, as defined by an `NA` in a specific column (see the sketch after this list).
- `check_options_args()` provides a tidy way to check the input arguments (with respect to class, length, numeric limitations, etc.) to the options functions.
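A sketch of the `remove_incompletes()` idea mentioned above (illustrative, not the package's code): drop any rows of a table that have an `NA` in the specified column.

```r
remove_incompletes_sketch <- function(tab, colname) {
  tab[!is.na(tab[[colname]]), , drop = FALSE]
}

remove_incompletes_sketch(data.frame(moon = c(500, 501, NA), count = c(3, 1, 2)), "moon")
```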
- Three vignettes were added:
- The current models vignette was brought over from the forecasting website.
- The codebase vignette was created from earlier documentation.
- The adding a model vignette was constructed based on two pages from the Portal Predictions repository wiki, with substantial additional text added.
Retention of all forecasts of covariates
- Previous versions retained only one covariate forecast per newmoon.
- We now enable retention of multiple covariate forecasts per newmoon and tag the forecast date with a time stamp as well.
- Added a website driven by pkgdown.
- Added code of conduct and contribution guidelines to the repository.
true code edits 2018-12-14, version number updated 2019-01-02
Migration from Portal Predictions repository
- Code was brought over from the forecasting repository to be housed in its own package.
- Multiple updates to the codebase were included, but intentionally just “under the hood”, meaning little or no change to the output and simplification of the input.
- A major motivation here was also to facilitate model development, which requires being able to set up a local version of the repository to play with in what we might consider a “sandbox”. This will allow someone to develop and test new forecasting models in a space that isn’t the forecasting repo itself (or a clone or a fork), but a truly novel location. At this point, the sandbox setup isn’t fully robust from within this package, but rather requires some additional steps (to be documented).
Development of code pipeline
- The previous implementation of the R codebase driving the forecasting (housed within the portalPredictions repo) was a mix of functions and loose code and hard-coded to the repo.
- The package implementation generalizes the functionality and organizes the code into a set of hierarchical functions that drive creation and use of the code within the repo or elsewhere.
- See the codebase vignette for further details.
Explicit directory tree
- To facilitate portability of the package (a necessity for smooth sandboxing and the development of new models), we now include explicit, controllable definition of the forecasting directory tree.
- See the codebase vignette for further details.
Introduction of options lists
- To facilitate simple control via defaults argument inputs and flexibility to changes to inputs throughout the code hierarchy, we include a set of functions that default options for all aspects of the codebase.
- See the codebase vignette for further details.
- This is the last iteration of the code that now exists in portalcasting in its previous home within the portalPredictions repo.
- It was not referred to by the name portalcasting at the time.