CRAN Package Check Results for Package modelbased

Last updated on 2025-03-08 01:51:21 CET.

Flavor                              Version  Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang   0.9.0        5.33   97.15  102.48  ERROR
r-devel-linux-x86_64-debian-gcc     0.9.0        3.83   80.11   83.94  ERROR
r-devel-linux-x86_64-fedora-clang   0.9.0                      201.58  ERROR
r-devel-linux-x86_64-fedora-gcc     0.9.0                      206.36  ERROR
r-devel-macos-arm64                 0.9.0                       45.00  OK
r-devel-macos-x86_64                0.9.0                      119.00  OK
r-devel-windows-x86_64              0.9.0        6.00  111.00  117.00  ERROR
r-patched-linux-x86_64              0.9.0                              ERROR
r-release-linux-x86_64              0.9.0        5.24  110.60  115.84  ERROR
r-release-macos-arm64               0.9.0                       40.00  OK
r-release-macos-x86_64              0.9.0                      141.00  OK
r-release-windows-x86_64            0.9.0        7.00  110.00  117.00  ERROR
r-oldrel-macos-arm64                0.9.0                       54.00  OK
r-oldrel-macos-x86_64               0.9.0                      103.00  OK
r-oldrel-windows-x86_64             0.9.0       10.00   50.00   60.00  OK      --no-examples --no-tests --no-vignettes
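
The summary above and the per-flavor details below can also be pulled into R with the CRAN repository tools. A minimal sketch (internet access to a CRAN mirror is assumed, and the column names are assumed from the documented return values of these functions):

    # Check summary (one row per flavor) and non-OK check details for this package.
    res <- tools::CRAN_check_results()
    res[res$Package == "modelbased", c("Flavor", "Version", "Status")]

    det <- tools::CRAN_check_details()
    det[det$Package == "modelbased", c("Flavor", "Check", "Status")]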

Check Details

Version: 0.9.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [41s/22s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 1 | WARN 0 | SKIP 21 | PASS 141 ]

    ══ Skipped tests (21) ══════════════════════════════════════════════════════════
    • On CRAN (13): 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1', 'test-predict-dpar.R:1:1', 'test-vcov.R:1:1'
    • On Linux (3): 'test-plot-facet.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1'
    • {rstanarm} is not installed (5): 'test-estimate_predicted.R:3:3', 'test-estimate_predicted.R:27:3', 'test-estimate_predicted.R:67:3', 'test-estimate_predicted.R:127:3', 'test-estimate_predicted.R:163:3'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
    dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
    `actual`:    3  5
    `expected`: 10  5

    [ FAIL 1 | WARN 0 | SKIP 21 | PASS 141 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-debian-clang
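
The 'estimate_expectation - data-grid' failure is identical on every failing flavor: a 10-row data grid is expected to yield one predicted row per grid row, but only 3 rows come back. A minimal sketch of that kind of check, with an illustrative model and grid variable that are not taken from the package's actual test file:

    library(modelbased)

    # Illustrative only: this is not the code of test-estimate_expectation.R.
    # Fit a simple model and build a 10-point grid over one predictor.
    model <- lm(mpg ~ wt, data = mtcars)
    grid <- insight::get_datagrid(model, by = "wt", length = 10)

    # estimate_expectation() is expected to return one row per grid row.
    estim <- estimate_expectation(model, data = grid)
    dim(estim)  # the test expects 10 rows; the failing platforms report only 3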

Version: 0.9.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [34s/19s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]

    ══ Skipped tests (17) ══════════════════════════════════════════════════════════
    • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:56:3'
    • On CRAN (13): 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1', 'test-predict-dpar.R:1:1', 'test-vcov.R:1:1'
    • On Linux (3): 'test-plot-facet.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
    dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
    `actual`:    3  5
    `expected`: 10  5
    ── Failure ('test-estimate_predicted.R:149:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:155:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0

    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc

Version: 0.9.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [82s/205s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 4 | WARN 22 | SKIP 17 | PASS 165 ]

    ══ Skipped tests (17) ══════════════════════════════════════════════════════════
    • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:56:3'
    • On CRAN (13): 'test-brms.R:1:1', 'test-brms-marginaleffects.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1', 'test-predict-dpar.R:1:1', 'test-vcov.R:1:1'
    • On Linux (3): 'test-plot-facet.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
    dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
    `actual`:    3  5
    `expected`: 10  5
    ── Failure ('test-estimate_predicted.R:149:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:155:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:204:3'): estimate_expectation - predicting RE works ──
    out$Predicted (`actual`) not equal to c(...) (`expected`).
    `actual`:   12.2617 12.0693 11.1560 11.6318 11.1657 10.3811 11.1074 11.0749
    `expected`: 12.2064 12.0631 11.2071 11.6286 11.2327 10.5839 11.2085 11.1229

    [ FAIL 4 | WARN 22 | SKIP 17 | PASS 165 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang
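
The extra failure on the fedora flavors ('test-estimate_predicted.R:204:3') is numeric rather than structural: the random-effects predictions differ from the stored reference values from the second decimal place onwards. Copying the two vectors from the log above shows the size of the discrepancy; the comparison below is only an illustration, not the package's test code:

    # Values copied verbatim from the failure message above.
    actual   <- c(12.2617, 12.0693, 11.1560, 11.6318, 11.1657, 10.3811, 11.1074, 11.0749)
    expected <- c(12.2064, 12.0631, 11.2071, 11.6286, 11.2327, 10.5839, 11.2085, 11.1229)

    # The largest relative difference is roughly 2e-2, far beyond the
    # near-machine-precision tolerance testthat applies by default.
    max(abs(actual - expected) / abs(expected))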

Version: 0.9.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [80s/121s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 4 | WARN 22 | SKIP 17 | PASS 165 ]

    ══ Skipped tests (17) ══════════════════════════════════════════════════════════
    • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:56:3'
    • On CRAN (13): 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1', 'test-predict-dpar.R:1:1', 'test-vcov.R:1:1'
    • On Linux (3): 'test-plot-facet.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
    dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
    `actual`:    3  5
    `expected`: 10  5
    ── Failure ('test-estimate_predicted.R:149:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:155:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:204:3'): estimate_expectation - predicting RE works ──
    out$Predicted (`actual`) not equal to c(...) (`expected`).
    `actual`:   12.2617 12.0693 11.1560 11.6318 11.1657 10.3811 11.1074 11.0749
    `expected`: 12.2064 12.0631 11.2071 11.6286 11.2327 10.5839 11.2085 11.1229

    [ FAIL 4 | WARN 22 | SKIP 17 | PASS 165 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc

Version: 0.9.0
Check: tests
Result: ERROR
  Running 'testthat.R' [28s]
  Running the tests in 'tests/testthat.R' failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 3 | WARN 0 | SKIP 23 | PASS 168 ]

    ══ Skipped tests (23) ══════════════════════════════════════════════════════════
    • On CRAN (23): 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1', 'test-plot-facet.R:7:1', 'test-plot.R:7:1', 'test-predict-dpar.R:1:1', 'test-print.R:14:3', 'test-print.R:26:3', 'test-print.R:37:3', 'test-print.R:50:3', 'test-print.R:65:3', 'test-print.R:78:5', 'test-print.R:92:3', 'test-print.R:106:3', 'test-vcov.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
    dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
    `actual`:    3  5
    `expected`: 10  5
    ── Failure ('test-estimate_predicted.R:149:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:155:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0

    [ FAIL 3 | WARN 0 | SKIP 23 | PASS 168 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-windows-x86_64

Version: 0.9.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [48s/26s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]

    ══ Skipped tests (17) ══════════════════════════════════════════════════════════
    • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:56:3'
    • On CRAN (13): 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1', 'test-predict-dpar.R:1:1', 'test-vcov.R:1:1'
    • On Linux (3): 'test-plot-facet.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
    dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
    `actual`:    3  5
    `expected`: 10  5
    ── Failure ('test-estimate_predicted.R:149:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:155:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0

    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]
    Error: Test failures
    Execution halted
Flavor: r-patched-linux-x86_64

Version: 0.9.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [49s/29s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]

    ══ Skipped tests (17) ══════════════════════════════════════════════════════════
    • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:56:3'
    • On CRAN (13): 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1', 'test-predict-dpar.R:1:1', 'test-vcov.R:1:1'
    • On Linux (3): 'test-plot-facet.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
    dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
    `actual`:    3  5
    `expected`: 10  5
    ── Failure ('test-estimate_predicted.R:149:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:155:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0

    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]
    Error: Test failures
    Execution halted
Flavor: r-release-linux-x86_64

Version: 0.9.0
Check: tests
Result: ERROR
  Running 'testthat.R' [30s]
  Running the tests in 'tests/testthat.R' failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 3 | WARN 0 | SKIP 23 | PASS 168 ]

    ══ Skipped tests (23) ══════════════════════════════════════════════════════════
    • On CRAN (23): 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1', 'test-plot-facet.R:7:1', 'test-plot.R:7:1', 'test-predict-dpar.R:1:1', 'test-print.R:14:3', 'test-print.R:26:3', 'test-print.R:37:3', 'test-print.R:50:3', 'test-print.R:65:3', 'test-print.R:78:5', 'test-print.R:92:3', 'test-print.R:106:3', 'test-vcov.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
    dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
    `actual`:    3  5
    `expected`: 10  5
    ── Failure ('test-estimate_predicted.R:149:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:155:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0

    [ FAIL 3 | WARN 0 | SKIP 23 | PASS 168 ]
    Error: Test failures
    Execution halted
Flavor: r-release-windows-x86_64