Trim text in model vignette
- Vignette now pulls text from the model list and loops over the model list, rather than having the raw text in the markdown doc.
- This locates the description in a place that is accessible to other content generation (e.g., the portal forecast website)
Data interpolation moved from “dataset” to “within model”
- Patch release: not everything was hit with the previous release.
Data interpolation moved from “dataset” to “within model”
- Previously, datasets included, for example, `all_interp`. Now, only `all` exists and models interpolate as needed.
Argument / nomenclature updates
read_rodents default settings update
- Now pulls all datasets using
Subdirectory internal naming changed to remove spaces
model fits now back to
model scripts back to
cast_evaluations file now saved
- Flattened version of the generated list of evaluations
- Saving (or not) and overwriting the whole file (or not) are controlled by rather crude settings.
- No file saving occurs when a single cast is evaluated
download_timeout now set to default of 600 for
- Allows download of larger directory archive without timeout
If there’s only one model, don’t ensemble
- Prevents warnings / errors
Building out evaluation pipeline
- Starting with what is already occurring, but formalizing it as part of an `evaluate_cast` pair of functions.
`evaluate_casts` function now works automatically to evaluate all the casts using `evaluate_cast`, generating the error table as it does when being used, but nothing is saved out or updated.
- There is also no filter on evaluated casts by default, so forecasts without observations to evaluate produce a table with a single row of NaN, which then gets wrapped up into the list.
- No errors; just noteworthy.
- Now includes
type argument with
- No more `most_abundant_species` function, as we're not using it on the website.
Updating model controls
Developing evaluate functions
evaluate_cast currently just placeholders
- No longer used; internal R code (e.g., `tempdir`) provides the needed functionality.
- Also removing
`setup_production` defaults to `verbose = TRUE`
Relocation of prefab controls
- Moved from source code scripts to `.yaml` files
write_ functions for both rodent and model controls lists
Updating / rectifying terminology
`setup_dir` now takes a `settings` argument that is a list of the arguments
directory_settings function now quickly and cleanly collapses the settings that go into
Generalized functionality for models and rodent data sets
- Control lists are now structured for use with
Codebase formatting [work in progress]
- No longer concerned about the 80 char line limit
- Long argument lists, etc. are now formatted for quick top-to-bottom reading, via alignment on the
Removal of superfluous
- Use of base R functions is sufficient
- Internalized auto-checking relieves user of need to dictate checking
Temporary removal of “adding a model and data” vignette
- Need to update with new API
- Also need to add alt-text to all images
- An argument still needed to be removed
- Function redesigned to align with `message` directly, argument for argument, with the addition of the
- Now allows for multiple message arguments via `...` that become pasted together.
Removal of specialized message functions
- Minimize unnecessary functions
Simplified directory creation function pipeline
jags_logistic model added
- Invoked like `jags_RW`, applied to the DM controls dataset.
- Building upon the jags_RW model, jags_logistic expands upon the “process model” underlying the Poisson observations.
- There are four process parameters: for the starting value, `mu` (the density of the species at the beginning of the time series) and `tau` (the precision, i.e., inverse variance, of the random walk, which is Gaussian on the log scale); and for the dynamic population, `r` (growth rate) and `K` (carrying capacity). The observation model has no additional parameters.
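As a hedged sketch only (priors, node names beyond `mu`, `tau`, `r`, and `K`, and the data structure are assumptions, not the package's actual model definition), the kind of JAGS model described above might look like:

```r
# Hypothetical sketch of a logistic-growth JAGS model of the sort described
# above. The four process parameters follow the text; everything else
# (priors, data names `count` and `N`) is assumed for illustration.
jags_logistic_model <- "
model {
  # starting value of the log-scale state
  mu  ~ dnorm(0, 0.01)
  tau ~ dgamma(0.01, 0.01)
  X[1] ~ dnorm(mu, tau)

  # logistic 'process model' dynamics on the log scale
  r ~ dnorm(0, 1)
  K ~ dunif(1, 1000)
  for (t in 2:N) {
    pred[t] <- X[t - 1] + r * (1 - exp(X[t - 1]) / K)
    X[t] ~ dnorm(pred[t], tau)
  }

  # Poisson observation model (no additional parameters)
  for (t in 1:N) {
    count[t] ~ dpois(exp(X[t]))
  }
}
"
```

Under the runjags infrastructure noted elsewhere in this changelog, a string like this would be passed to `run.jags` along with data and monitors.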
Docker build check issues
Further removal of vestigial rEDM code
- Commenting out as needed to prevent build breaks
Updating title to match JOSS
Getting latest version of portalr
- tagging to update Docker image with portalr 0.3.9
Tidying for JOSS ms
- adding source and version args to check args
- software context added to ms
- overview added to readme
- updating the getting started vignette to use production to allow for historical uses
- Shift to default downloading archive from GitHub
- setup_dir does not download archive by default, but setup_production does
- Download function being broken out into components; work is still ongoing, but there is now a separate function for each of the PortalData, portalPredictions, and climate forecasts downloads
- addresses #132 #199
patching issue with ndvi preparation
- the ndvi data stream is not filling in with new content, resulting in NAs for the latter half of 2021
- using a forecast call to fill in the missing values as a temporary patch
- Previous versions used Hao's fork of the `rEDM` package, which has been deprecated and now breaks because of the updates in Rcpp
- Switching to the CRAN version of `rEDM` does not fix the issue, so these models cannot be used in the prefab set
- Removed from the prefab control list and removed the documentation
- No longer exported from the NAMESPACE
Edit tests for ensembling and figure making
- A few edge case issues arose in test because of fixed moons
- Should be resolved through edits to test scripts without altering functions
Add Henry to the DESCRIPTION file
Add git2r to the docker container
Improving GitHub Actions Running
- Examples are no longer run (this needed to be explicitly stated); addressing #206
- Use the RStudio Package Manager to speed up running; addressing #206
Patch “NA” plotting issue
Highlighting of species in plotting
Stops saving model fits in the portalPredictions repository
Addresses issues with covariate data
- Missing data from weather stations caused issues
- Now if there is a missing set of data for a month of covariates, the saved covariate forecasts are used
dm_controls_interp to prefab data sets
- For use in the basic single-species process models
- Note the lowercase name! Using capitals in the actual name of the data set creation will cause problems because `tolower` gets used elsewhere!
Patches issue with `check_args` when using
- Addressing issues with updated dplyr
Setting the Docker build up with its own folder
Bringing the Dockerfile over from
Addition of GPEDM (model and function)
- Gaussian processes using Empirical Dynamic Modeling
- Actually does this (0.17.0 had a snag)
Addition of GPEDM (model and function)
- Gaussian processes using Empirical Dynamic Modeling
Change in format for saving out `model_casts` as serialized `.json` files now instead of
- More reliable and generalized.
- Also added functions for reading them in (
Using github version of portalr
- Due to backwards incompatible changes in portalr and it not being on CRAN yet
- To address a Zenodo hiccup
- Added a vignette that describes how to use the JAGS/runjags API within portalcasting.
Pulls code for `match.call.defaults` into the package
- Use of it from `DesignLibrary` causes a problematic dependency chain with the Docker image building
Patch bug in
- Wasn’t using the species name function, and so was pulling in the traps column, which was causing a break in plotting.
Adds exclosure data to the prefab models
Full writing of `control_files` in model scripts
- Previously, the controls list for the files in the model scripts was taken from the environment in which the script was run, which exposed the script to everything in that environment; this was undesirable.
- After the need to include a control list for runjags models forced an explicit writing of the list inputs, the code was available to transfer to the files control list.
- This does mean that the function calls in the scripts are now super long and explicit, but that’s ok.
- To avoid super long model script lines (where even default inputs are repeated in the list functions), a function `control_list_arg` was made to generalize what was coded up from the runjags list for use also with the files control list. This function writes a script component that only includes arguments to the list function that differ from the formal definition.
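The core idea, keeping only arguments whose values differ from a function's formal defaults, can be sketched in a few lines (this is an illustrative reimplementation, not the package's `control_list_arg` code; `list_fun` and its arguments are made up):

```r
# Hypothetical sketch of the idea behind `control_list_arg`: compare supplied
# argument values against a function's formal defaults and keep only the ones
# that differ, so generated script lines stay short.
nondefault_args <- function(args, fun) {
  defaults <- formals(fun)
  differs <- vapply(names(args), function(nm) {
    !identical(args[[nm]], eval(defaults[[nm]]))
  }, logical(1))
  args[differs]
}

# toy list-constructing function with defaults
list_fun <- function(n_chains = 4, adapt = 1000) {
  list(n_chains = n_chains, adapt = adapt)
}

# only the non-default argument survives
kept <- nondefault_args(list(n_chains = 4, adapt = 5000), list_fun)
```

A script writer would then emit only `kept` into the generated call, leaving defaulted arguments implicit.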
Fixes to the pkgdown site
- rmarkdown v1.16.0 has some issues with rendering images, so forcing use of v1.16.1 for now.
- Inclusion of new functions in reference list.
portalcast updates model scripts according to
- Previously, if you changed any controls of a prefab model, you had to manually re-write the models using `fill_models` before running
`fill_models` would result in hand-made scripts being overwritten, so a specific function (`update_models`) for updating the models was created.
`update_models` by default only updates the models listed in the `controls_model` input, to avoid overwriting model scripts. To change this behavior and also update all of the prefab models' scripts, set `update_prefab_models = TRUE`. This is particularly handy when changing a global (with respect to model scripts) argument:
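The selection rule described above can be illustrated with a self-contained sketch (the function body and the prefab model list here are illustrative, not the package's implementation):

```r
# Sketch of the update rule: only models named in `controls_model` are
# rewritten unless `update_prefab_models = TRUE`, in which case the prefab
# models' scripts are updated as well.
models_to_update <- function(controls_model, prefab_models,
                             update_prefab_models = FALSE) {
  if (update_prefab_models) {
    unique(c(names(controls_model), prefab_models))
  } else {
    names(controls_model)
  }
}

prefab <- c("AutoArima", "ESSS", "nbGARCH", "nbsGARCH", "pevGARCH")

# default: only the user's model is touched
models_to_update(list(mymod = list()), prefab)

# global change: the prefab scripts are regenerated too
models_to_update(list(mymod = list()), prefab, update_prefab_models = TRUE)
```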
Messaging around trying to use not-complete directory improved
- Indication now made that a component of the directory is missing and suggestion is made to run
Patching data set bug in plotting
- There was a bug with matching the interpolated to the non-interpolated data sets within the ensembling, which has been fixed.
- Moved most of the messaging into tidied functions.
Changed behavior of
- Now there is no `start_moon` argument, and all of the data prior to `end_moon` are returned.
- This aligns the rodents prep functions with the other (moons, covariates) prep functions.
- Facilitates use of data prior to `start_moon` in forecasting models (e.g., for distributions of starting state variables).
- Requires that model functions now explicitly trim the rodents table being used. This has been added to all prefab models.
Fixed codecov targets
- Previous targets were restrictively high due to earlier near-perfect coverage.
- A codecov.yml file is now included in the repo (and ignored for the R build) which sets the target arbitrarily at the still-quite-high-but-not-restrictively-so 95%.
- It can be changed if needed in the future.
JAGS infrastructure added
- Using the runjags package, with extensive access to the API of `run.jags` via a
- Currently in place with a very simple random walk model.
Prepared rodents table includes more content
- Expanded back in time to the start.
- Added effort columns (all default options include `effort = TRUE`).
Updated adding a model and data vignette
- Added section at the end about just extending existing models to new data sets.
- Associated with the reconfiguration of portalcasting from v0.8.1 to 0.9.0, ensembling was removed temporarily.
- A basic ensemble is reintroduced, now as an unweighted average across all selected models, allowing us to have an ensemble but not have it be tied to AIC weighting (because AIC weighting is no longer possible with the split between interpolated and non-interpolated data for model fitting).
- In a major departure from v0.8.1 and earlier, the ensemble’s output is not saved like the actual models’. Rather, it is only calculated when needed on the fly.
- In plotting, it is now the default to use the ensemble for `plot_cast_point` and for the ensemble to be included in
- Function used to select the most common species.
- Now uses the actual data and not the casts to determine the species.
Model evaluation and ensembling added back in
- Were removed with the updated version from 0.8.1 to 0.9.0 to allow time to develop the code with the new infrastructure.
- Model evaluation happens within the cast tab output as before.
Temporarily removed figures returned
- Associated with the evaluation.
- Plotting of error as a function of lead time for multiple species and multiple models. Now has a fall-back arrangement that works for a single species-model combination.
- Plotting RMSE and coverage within species-model combinations.
Flexing model controls to allow user-defined lists for prefab models
- For sandboxing with existing models, it is useful to be able to change a parameter in the model's controls, such as the data sets. Previously, that would require a lot of hacking around; now, it's as simple as inputting the desired controls and flipping `arg_checks = FALSE`.
Major API update: increase in explicit top-level arguments
- Moved key arguments to focal top-level inputs, rather than nesting them within control options lists. Allows full control, but with default settings working cleanly. addresses
- Restructuring of the controls lists, retained usage in situations where necessary: model construction, data set construction, file naming, climate data downloading.
- Openness for new `setup` functions, in particular
- Simplification of model naming inputs. Just put in the names you need; only use the `model_names` functions when you need to (usually when coding inside of functions or setting default argument levels). addresses
Directory tree structure simplified
- `dirtree` was removed
- `base` (both as a function and a concept) was removed. To make that structure, use `main = "./name"`.
- “PortalData” has been removed as a sub and replaced with “raw”, which includes all raw versions of files (post unzipping) downloaded: Portal Data and Portal Predictions and covariate forecasts (whose saving is also new here).
- Expanded use of `verbose` connected throughout the pipeline.
- Additional messaging functions to reduce code clutter.
- Formatting of messages to reduce clutter and highlight the outline structure.
Download capacity generalized
- Flexible interface to downloading capacity through a URL, with generalized and flexible functions for generating Zenodo API URLs (for retrieving the raw data and historical predictions) and NMME API URLs (for retrieving weather forecasts) to port into the `download` function. addresses
Changes for users adding their own models to the prefab set
- Substantial reduction in effort for users who wish to add models (i.e., anyone who is sandboxing). You can even just plunk in your own R script (which could be a single line calling out to an external program if desired) without having to add any model script writing controls; just add the name of the model to the models argument in `portalcast` and it will run with everything else.
- Outlined in the updated Getting Started and Adding a Model/Data vignettes.
- Users adding models to the prefab suite should now permanently add their model's control options to the source code in `model_script_controls`, rather than write their own control functions.
- Users adding models to the prefab suite should permanently add their model's function code to the `prefab_models` script (reusing and adding to the documentation in `prefab_model_functions`), rather than to its own script.
- Users should still add their model’s name to the source code in
Relaxed model requirements
- Models are no longer forced to use interpolated data.
- Models are no longer required to output a rigidly formatted data-table. Presently, the requirement is just a list, but soon some specifications will be added to improve reliability.
- Outlined in the updated Adding a Model/Data vignette.
- Generalized cast output is now tracked using a unique id in the file name associated with the cast, which is related to a row in a metadata table, newly included here. addresses
- Additional control information (like data set setup) is sent to the model metadata and saved out.
- Directory setup configuration information is now tracked in a `dir_config.yaml` file, which is pulled from to save information about what was used to create, set up, and run the particular casts.
Changes for users interested in analyzing their own data sets not in the standard data set configuration
- Users are now able to define rodent observation data sets that are not part of the standard data set ("all" and "controls", each also with interpolation of missing data) by giving the name in the `data_sets` argument and the controls defining the data set (used by portalr's `summarize_rodent_data` function) in the
- In order to actualize this, a user will need to flip off the argument checking (that is the default in a sandbox setting; if using a standard or production setting, set `arg_checks = FALSE` in the relevant function).
- Users interested in permanently adding a treatment level to the available data sets should add the source code to the `rodents_controls` function, just like with the models.
- Internal code points the pipeline to the files named via the data set inputs. The other data files are pointed to using the `file_controls` input list, which allows for some general flexibility with respect to what files the pipeline is reading in from the
Split of standard data sets
- The prefab controls were both being interpolated by default for all models because of the use of AIC for model comparison and ensemble building. That forced all models to use interpolated data.
- Starting in this version, the models are not required to have been fit in the same fashion (due to generalization of comparison and post-processing code), and so interpolation is not required if not needed, and we have split out the data to standard and interpolated versions.
Application of specific models to specific data sets now facilitated
`model_template` has a `data_sets` argument that is used to write the code out, replacing the hard-coded requirement of analyzing "all" and "controls" for every model. Now, users who wish to analyze a particular data component can easily add it to the analysis pipeline.
Generalization of code terms
- Throughout the codebase, terminology has been generalized from “fcast”/“forecast”/“hindcast” to “cast” except where a clear distinction is needed (here primarily due to where the covariate values used come from).
- Nice benefits: highlights commonality between the two (see next section) and reduces code volume.
start_newmoon is now
“Hindcasting” becomes more similar to “forecasting”
- In the codebase now, "hindcasting" is functionally "forecasting" with a forecast origin (`end_moon`) that is not the most recently occurring moon.
- Indeed, “hindcast” is nearly entirely removed from the codebase and “forecast” is nearly exclusively retained in documentation (and barely in the code itself), with both functionally being replaced with the generalized (and shorter) “cast”.
`cast_type` is retained in the metadata file for posterity, but functionality is more generally checked by considering `end_moon` and `last_moon` in combination, where `end_moon` is the forecast origin and `last_moon` is the most recent moon.
- Rather than the complex machinery used to iterate through multiple forecasts (“hindcasting”) that involved working backwards and skipping certain moons (which didn’t need to be skipped anymore due to updated code from a while back that allows us to forecast fine even without the most recent samples yet), a simple for loop is able to manage iterating. This is also facilitated by the downloading of the raw portalPredictions repository from Zenodo and critically its retention in the “raw” subdirectory, which allows quick re-calculation of historic predictions of covariates. addresses
`cast_type` has been removed as an input; it's auto-determined now based on `end_moon` and the last moon available (if they're equal it's a "forecast", if not it's a "hindcast").
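The simplified iteration described above, a plain loop over forecast origins, can be sketched as follows (the `cast_fun` interface and the moon numbers are made up for illustration; this is not the package's code):

```r
# Minimal sketch of "hindcasting as looped forecasting": apply a single
# cast function across a vector of earlier origins (`end_moon` values).
cast_all <- function(end_moons, cast_fun) {
  casts <- vector("list", length(end_moons))
  for (i in seq_along(end_moons)) {
    casts[[i]] <- cast_fun(end_moon = end_moons[i])
  }
  casts
}

# stand-in cast function, just to show the shape of the loop
casts <- cast_all(c(510, 515, 520),
                  function(end_moon) paste("cast from origin", end_moon))
```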
Softer handling of model failure
cast, the model scripts are now sourced within a for-loop (rather than sapply) to allow for simple error catching of each script. addresses
Improved argument checking flow
- Arg checking is now considerably tighter, code-wise.
- Each argument is either recognized and given a set of attributes (from an internally defined list) or unrecognized and stated to the user that it’s not being checked (to help notify anyone building in the code that there’s a new argument).
- The argument’s attributes define the logical checking flow through a series of pretty simple options.
- There is also now an `arg_checks` logical argument that goes into `check_args` to turn off all of the underlying code, enabling the user to go off the production restrictions that would otherwise throw errors, even though their inputs might technically work under the hood.
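The toggle can be illustrated with a tiny self-contained sketch (the checking body here is illustrative; only the `arg_checks` on/off behavior mirrors the text):

```r
# Sketch of an arg_checks-style toggle: when FALSE, all checking is skipped
# (sandbox mode); when TRUE, arguments are validated and errors thrown.
check_args_sketch <- function(arg_checks = TRUE, ...) {
  if (!arg_checks) {
    return(invisible(NULL))  # go off the production restrictions
  }
  args <- list(...)
  for (nm in names(args)) {
    if (is.null(args[[nm]])) {
      stop("argument `", nm, "` cannot be NULL")
    }
  }
  invisible(NULL)
}
```

With `arg_checks = FALSE`, an input that would normally fail validation passes straight through.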
Substantial re-writes of the vignettes
- Done in general to update with the present version of the codebase.
- Broke the adding-a-model-or-data vignette into "working locally" and "adding to the pipeline"; also added checklists and screenshots. addresses
- Reorganized the getting started vignette into an order that makes sense. addresses
`drop_spp` is now changed to `species` (so the focus is on inclusion, not exclusion). addresses
- Improved examples, also now as
- Tightened testing, with `skip_on_cran` used judiciously. addresses
- No longer building the AIC-based ensemble. addresses
- Default confidence limit is now the more standard 0.95.
Hookup with Zenodo
Inclusion of json file and some minor editing of documentation, but no functional coding changes
plot_cov_RMSE_mod_spp now only plots the most recent -cast by default
- With `cast_dates = NULL` (the default), the plot only uses the most recent -cast to avoid swamping more current -casts with historic -casts.
Added specific checks for no casts returned in plot functions
- There’s a bit of leeway with respect to argument validity, in particular around model names (to facilitate users making new models with new names, we don’t want to hardwire a naming scheme in
check_arg), so now there are checks to see if the tables returned from
select_casts have any rows or not.
Handling the edge cases in model function testing
- The trimming of the data sets for model function testing (happens in the AutoArima test script) now includes addition of some dummy values for edge cases (all 0 observations and nearly-all-0 observations), which allows better coverage of testing for the -GARCH model functions in particular.
Fixing a typo bug within
- There was a mismatch between
forecast for one of the edge cases.
nbsGARCH when even the Poisson fallback fails
- As in `nbGARCH` and then extended into `nbsGARCH`, the models fall back to a Poisson distribution if the negative binomial fit fails. Previously (with only `nbGARCH`) the Poisson fit always succeeded in those back-ups, but now (with `nbsGARCH`) that sometimes isn't the case (because the predictor model is more complex) and even the Poisson fit can fail. So now, for both models, if that fit fails we follow what occurs in `pevGARCH`, which is to use the `fcast0` forecast of 0s and an arbitrarily high AIC.
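The nested fallback can be sketched with `tryCatch` (function and element names here are illustrative, not the package's; the "arbitrarily high AIC" value is an assumption):

```r
# Sketch of the fallback chain described above: negative binomial first,
# then Poisson, and if both fail, a forecast of 0s with a huge AIC so the
# model never wins a comparison.
fit_with_fallback <- function(fit_nb, fit_pois, fcast0, big_aic = 1e6) {
  fit <- tryCatch(fit_nb(), error = function(e) NULL)
  if (is.null(fit)) {
    fit <- tryCatch(fit_pois(), error = function(e) NULL)
  }
  if (is.null(fit)) {
    fit <- list(forecast = fcast0, aic = big_aic)
  }
  fit
}

# both fits failing falls through to the 0 forecast
res <- fit_with_fallback(function() stop("nb failed"),
                         function() stop("pois failed"),
                         fcast0 = rep(0, 12))
```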
Addressing covariate forecasts in `pevGARCH` under hindcasting
pevGARCH() was not set up to leverage the
- It's now set up with a toggle based on the `cast_type` in the metadata list (which has replaced the formerly named `filename_suffix` element) to load the `covariate_forecasts` file (using a new `read_covariate_forecasts` function) and then select the specific hindcast based on the `date_made` columns, as selected by new elements in the metadata list.
nbsGARCH has been added to the base set of models.
foy() calculates the fraction of the year for a given date or set of dates.
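A minimal sketch of what a function like `foy()` computes (the implementation details are assumed, not taken from the package):

```r
# Fraction of the year elapsed at a given date (vectorized over dates);
# leap years are handled by computing each year's actual length.
foy <- function(dates) {
  dates <- as.Date(dates)
  yr    <- as.numeric(format(dates, "%Y"))
  start <- as.Date(paste0(yr, "-01-01"))
  end   <- as.Date(paste0(yr, "-12-31"))
  as.numeric(dates - start + 1) / as.numeric(end - start + 1)
}

foy("2020-12-31")  # 1 for the last day of the year
```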
Move to usage of CRAN portalr
- Provides a simple way to list the scripts in the
Including the package version message in
- Including a simple message to report the version of portalcasting loaded in top level functions.
- Adding plot (from pre-constructed images) to the how-to vignette.
Patching a bug in
- There was a lingering old name from the argument switch-over that was causing model templates to be written with a `""` argument for the `model` (model name) input into
Tidied functionality for checking function arguments
- Introduction of `check_args` and `check_arg`, which collaborate to check the validity of function arguments using a standardized set of requirements based on the argument names, thereby helping to unify and standardize the use of the codebase's arguments.
Updated function names
prep_rodents is now
rodents_data is now
update_rodents is now
read_data has been split out into
model_path is now
sub_paths have been merged into
sub_paths, which returns all if
specific_subs is NULL
lag_data is now
Updated argument names to leverage
- In multiple functions, `data` has been replaced with `rodents` to be specific.
CI_level is now subsumed by
name is now subsumed by
`set` is now split into
- The order of arguments in `model_names` is now back to
- The default `subdirs` is now
- The four model functions have a reduced set of inputs to leverage the directory tree, and the script generation is updated to match.
- Updating the `cast` argument to `cast_is_valid` and removing the `verbose` argument from `verify_cast`, to allow `check_arg` to leverage
Removal of classes
- The `models` class has been removed.
- The `subdirs` class has been removed.
messageq function is added to tidy code around messages being printed based on the
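As a hedged sketch (the name of the toggle argument and the pasting behavior are assumptions consistent with the surrounding notes), a `messageq`-style helper might look like:

```r
# Sketch of a quieted messaging helper: paste the inputs together and emit
# them via message() unless the quiet toggle is on.
messageq <- function(..., quiet = FALSE) {
  if (!quiet) {
    message(paste0(...))
  }
  invisible(NULL)
}
```

This lets every call site replace `if (!quiet) message(...)` boilerplate with a single call.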
"wEnsemble" as an option in
- Produces the `prefab` list with an `"Ensemble"` entry added, to allow for that simply without using the `NULL` option, which collects all model names.
- This facilitated the addition of `models` as an argument in the evaluations plots.
Bug fix in `plot_cast_ts`
- `plot_cast_ts` did not cleanly plot time series where observations had been made after the start of the prediction window.
- The function has been set up to now split observations that occurred during the prediction window out, execute the plot as if they didn’t exist, then add them on top.
- Functionality has now been added to allow toggling those points on and off via the `add_obs` input (defaults to
Completed migration of plotting code
`plot_cast` is now `plot_cast_ts`, and is now fully vetted and tested
plotcastts_xaxis provide tidied functions for producing the y label and x axis (respectively) for
plot_cast_point is now added to replace
plotcastpoint_yaxis provides tidied functionality for the y axis of
select_most_ab_spp allows for a simple selection of the most abundant species from a -cast.
plot_cov_RMSE_mod_spp now added to replace the raw code in the evaluation page.
Processing of forecasts
`read_casts` (old) is now `read_cast` and specifically works for only one -cast.
read_casts (new) reads in multiple -casts.
`select_cast` is now `select_casts` and allows a more flexible selection by default.
`make_ensemble` now returns a set of predictions with non-`NA` bounds when only one model is included (it returns that model as the ensemble).
most_recent_cast returns the date of the most recent -cast. Can be dependent on the presence of a census.
forecast_is_valid from the repo codebase.
`verify_cast` is a logical wrapper on `cast_is_valid` that facilitates pipeline integration.
cast_is_valid does the major set of checks of the cast data frame.
append_observed_to_cast is provided to add the observed data to the forecasts and add columns for the raw error, in-forecast-window, and lead time as well.
measure_cast_error allows for summarization of errors at the -cast level.
Processing of data
most_recent_census returns the date of the most recent census.
- Argument order in `models` is reversed (`set`), and defaults in general are now `set = "prefab"` within the options functions, to make it easy to run a novel model set.
- Argument order in `subdirs` is reversed (`type`), and defaults in general are now `type = "portalcasting"` within options functions and `dirtree`, to make it easier to manage a single subdirectory.
The `fdate` argument has been replaced throughout with `cast_date` for generality.
`na_conformer` provides tidy functionality for converting non-character `NA` entries (which can get read in from the data due to the `"NA"` species) to `"NA"`. Works for both vectors and data frames.
Beginning to migrate plotting code
`plot_cast` is developed but not yet fully vetted and tested, nor integrated into the main repository. It will replace `forecast_viz` as a main plotting function.
read_data’s options have been expanded to include
- Not fully implemented everywhere, but now available.
Bug fix in `interpolate_data`
- `interpolate_data` was using `rodent_spp` in a way that assumed the `"NA"` species was coded as `"NA."`, which it wasn't.
- Expansion of `rodent_spp` to include an `nadot` logical argument, with default value of
Bug fix in `read_data`
- `read_data` was reading the All rodents file for Controls as well, which caused the forecasts for the Controls to be duplicates of the All forecasts.
- Simple correction here.
- All of the code is now tested via
- Test coverage is tracked via Codecov.
- The only functionality not covered in testing on Codecov is associated with `download_predictions()`, which intermittently hangs on Travis. Testing is available, but requires manual toggling of the `test_location` value to `"local"` in the relevant test scripts (02-directory and 12-prepare_predictions).
Enforcement of inputs
- Most of the functions previously did not have any checks on input argument classes, sizes, etc.
- Now all functions specifically check each argument’s value for validity and throw specific errors.
- All of the functions have fleshed out documentation that specify argument requirements, link to each other and externally, and include more information.
- To smooth checking of different data structures, we now define data objects with classes in addition to their existing (
- These classes do not presently have any specified methods or functions.
Options list classes
- To smooth checking of different list structures, we now define the options list objects with classes in addition to their existing
- Each of these classes is created by a function of that name.
- These classes do not presently have any specified methods or functions that operate on them.
classy() allows for easy application of classes to objects in a
read_data() provides simple interface for loading and applying classes to model-ready data files.
`remove_incompletes()` removes any incomplete entries in a table, as defined by an `NA` in a specific column.
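The behavior described can be sketched in one line (an illustrative reimplementation, not the package's code; the example table is made up):

```r
# Sketch of remove_incompletes(): drop rows with an NA in the named column.
remove_incompletes <- function(df, colname) {
  df[!is.na(df[[colname]]), , drop = FALSE]
}

df <- data.frame(species = c("DM", NA, "PP"), count = c(3, 1, 0))
remove_incompletes(df, "species")  # keeps the DM and PP rows
```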
check_options_args() provides a tidy way to check the input arguments (wrt class, length, numeric limitations, etc.) to the options functions.
- Three vignettes were added:
Retention of all forecasts of covariates
- Previous versions retained only one covariate forecast per newmoon.
- We now enable retention of multiple covariate forecasts per newmoon and tag the forecast date with a time stamp as well.
- Developed this changelog as part of the package.
Addressing Portal Data download
- Setting default back to the Zenodo link via updated portalr function.
`fill_PortalData()` and new `PortalData_options()` allow for control over download source.
true code edits 2018-12-14, version number updated 2019-01-02
Migration from Portal Predictions repository
- Code was brought over from the forecasting repository to be housed in its own package.
- Multiple updates to the codebase were included, but intentionally just “under the hood”, meaning little or no change to the output and simplification of the input.
- A major motivation here was also to facilitate model development, which requires being able to set up a local version of the repository to play with in what we might consider a “sandbox”. This will allow someone to develop and test new forecasting models in a space that isn’t the forecasting repo itself (or a clone or a fork), but a truly novel location. At this point, the sandbox setup isn’t fully robust from within this package, but rather requires some additional steps (to be documented).
Development of code pipeline
- The previous implementation of the R codebase driving the forecasting (housed within the portalPredictions repo) was a mix of functions and loose code and hard-coded to the repo.
- The package implementation generalizes the functionality and organizes the code into a set of hierarchical functions that drive creation and use of the code within the repo or elsewhere.
- See the codebase vignette for further details.
Explicit directory tree
- To facilitate portability of the package (a necessity for smooth sandboxing and the development of new models), we now include explicit, controllable definition of the forecasting directory tree.
- See the codebase vignette for further details.
Introduction of options lists
- To facilitate simple control via defaults argument inputs and flexibility to changes to inputs throughout the code hierarchy, we include a set of functions that default options for all aspects of the codebase.
- See the codebase vignette for further details.