polle: Policy Learning

Framework for evaluating user-specified finite stage policies and learning realistic policies via doubly robust loss functions. Policy learning methods include doubly robust restricted Q-learning, sequential policy tree learning and outcome weighted learning. See Nordland and Holst (2022) <doi:10.48550/arXiv.2212.02335> for documentation and references.

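A minimal sketch of the intended workflow, based on the interface described in Nordland and Holst (2022). The simulation helper, column names, and default arguments below are assumptions taken from the package examples and may differ between versions; see the reference manual for the authoritative interface:

library(polle)

# simulated single-stage data with covariates Z, B, L, a binary action A, and utility U
# (sim_single_stage() is assumed to be the simulation helper shipped with the package)
d <- sim_single_stage(n = 1000)

# declare the action variable, state covariates, and utility
pd <- policy_data(d, action = "A", covariates = c("Z", "B", "L"), utility = "U")

# doubly robust evaluation of a user-specified static policy (always assign action 1)
policy_eval(pd, policy = policy_def(1))

# learn a policy via sequential policy tree learning ("ptl") and estimate its value
policy_eval(pd, policy_learn = policy_learn(type = "ptl"))
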
Version: 1.3
Depends: R (≥ 4.0), SuperLearner
Imports: data.table (≥ 1.14.5), lava (≥ 1.7.0), future.apply, progressr, methods, policytree (≥ 1.2.0), survival, targeted, DynTxRegime
Suggests: DTRlearn2, glmnet (≥ 4.1-6), mgcv, xgboost, knitr, ranger, rmarkdown, testthat (≥ 3.0), ggplot2
Published: 2023-07-06
Author: Andreas Nordland [aut, cre], Klaus Holst [aut]
Maintainer: Andreas Nordland <andreasnordland at gmail.com>
BugReports: https://github.com/AndreasNordland/polle/issues
License: Apache License (≥ 2)
NeedsCompilation: no
Citation: polle citation info
Materials: NEWS
CRAN checks: polle results

Documentation:

Reference manual: polle.pdf

Downloads:

Package source: polle_1.3.tar.gz
Windows binaries: r-prerel: polle_1.3.zip, r-release: polle_1.3.zip, r-oldrel: polle_1.3.zip
macOS binaries: r-prerel (arm64): polle_1.3.tgz, r-release (arm64): polle_1.3.tgz, r-oldrel (arm64): polle_1.3.tgz, r-prerel (x86_64): polle_1.3.tgz, r-release (x86_64): polle_1.3.tgz
Old sources: polle archive
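
The released version can also be installed directly from CRAN from within R:

install.packages("polle")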

Linking:

Please use the canonical form https://CRAN.R-project.org/package=polle to link to this page.