The {bidux} package helps Shiny developers create more
effective dashboards using the Behavioral Insight Design (BID)
Framework. If you’ve ever wondered why users struggle with your
carefully crafted dashboards, or why your beautifully visualized data
doesn’t drive the decisions you expected, this package is for you.
The core insight: Technical excellence ≠ User success. Even the most sophisticated analysis can fail if users can’t quickly understand and act on it.
The BID framework bridges this gap by integrating behavioral science, UX best practices, and data storytelling techniques into a systematic 5-stage process. Think of it as applying the same rigor you use for data validation to user experience design.
The BID framework consists of 5 sequential stages that mirror how you might approach a data analysis project:

1. Interpret the user's need: clarify the central question and structure the data story
2. Notice the problem: identify where users struggle, backed by evidence
3. Anticipate user behavior: plan mitigations for cognitive biases
4. Structure the dashboard: choose the layout and key design principles
5. Validate and empower: ensure users leave with clear insights and next steps
Key insight for data professionals: Just as you wouldn’t skip exploratory data analysis before modeling, don’t skip understanding user cognition before building interfaces.
Each stage builds on insights from previous stages, creating a systematic approach to dashboard design that’s both evidence-based and user-centered.
The BID framework is built on established science and design
principles. To explore these concepts, use bid_concepts()
to list all available concepts, or search for specific terms:
library(bidux)
library(dplyr)

# List all concepts
all_concepts <- bid_concepts()
head(select(all_concepts, concept, category, description), 3)

# Search for specific concepts
bid_concepts("cognitive") |>
  select(concept, description, implementation_tips)

For detailed information about a specific concept, use
bid_concept():
# Get information about a specific concept
bid_concept("Processing Fluency") |>
  select(concept, description, implementation_tips)

The bid_concept() function supports case-insensitive and
partial matching:
# Case-insensitive matching
bid_concept("hick's law") |>
  select(concept, description)

# Partial matching
bid_concept("proximity") |>
  select(concept, description)

You can also explore concepts that are new to the BID framework:
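Since the concepts tibble includes a category column, one way is to filter on it. A minimal sketch; the exact category labels are an assumption here, so inspect them first and adjust the pattern to match your version:

# Inspect the category labels, then filter (hypothetical pattern)
unique(all_concepts$category)
all_concepts |>
  filter(grepl("bid", category, ignore.case = TRUE)) |>
  select(concept, category, description)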
Let’s walk through a complete example of using the BID framework to document and improve a dashboard project.
Start by clarifying the central question your dashboard needs to answer and structure the data story:
# Document the user's need using new_data_story() with flat API (recommended)
interpret_result <- bid_interpret(
central_question = "How are our marketing campaigns performing across different channels?",
data_story = new_data_story(
hook = "Recent campaign performance varies significantly across channels",
context = "We've invested in 6 different marketing channels over the past quarter",
tension = "ROI metrics show inconsistent results, with some channels underperforming",
resolution = "Identify top-performing channels and key performance drivers",
audience = "Marketing team and executives",
metrics = "Channel ROI, Conversion Rate, Cost per Acquisition",
visual_approach = "Comparative analysis with historical benchmarks"
),
# Recommended: use data.frame for personas (cleaner, more explicit)
user_personas = data.frame(
name = c("Marketing Manager", "CMO"),
goals = c(
"Optimize marketing spend across channels",
"Strategic oversight of marketing effectiveness"
),
pain_points = c(
"Difficulty comparing performance across different metrics",
"Needs high-level insights without technical details"
),
technical_level = c("intermediate", "basic"),
stringsAsFactors = FALSE
)
)
interpret_result |>
  select(central_question, hook, tension, resolution)

The function evaluates our data story elements and provides
suggestions for improvement (in the suggestions column).
We’ve also added user personas to better target our design.
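To review those suggestions directly, you can select the suggestions column named above, following the same pattern used throughout this vignette:

interpret_result |>
  select(suggestions)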
Now identify the specific problems users are encountering with your dashboard or interface:
# Document the problem
notice_result <- bid_notice(
previous_stage = interpret_result,
problem = "Users are overwhelmed by too many filter options and struggle to find relevant insights",
evidence = "User testing shows 65% of first-time users fail to complete their intended task within 2 minutes"
)
notice_result |>
  select(problem, theory, evidence)

Notice that the function automatically selected an appropriate theory
based on our problem description. It also provides suggestions for
addressing cognitive load which you can access from the
suggestions column.
Next, identify potential cognitive biases that might affect how users interpret your dashboard:
# Document bias mitigation strategies
anticipate_result <- bid_anticipate(
previous_stage = notice_result,
bias_mitigations = list(
anchoring = "Include previous period performance as reference points",
framing = "Provide toggle between ROI improvement vs. ROI gap views",
confirmation_bias = "Highlight unexpected patterns that contradict common assumptions"
)
)
anticipate_result |>
  select(bias_mitigations)

The function evaluates our bias mitigation strategies, providing implementation suggestions.
Now determine the layout and key design principles to implement:
# Document the dashboard structure
structure_result <- bid_structure(previous_stage = anticipate_result)
structure_result |>
  select(layout, concepts, suggestions)

The function automatically selects an appropriate layout based on the content from previous stages and provides ranked, actionable suggestions organized by UX concept. The layout selection is transparent, with a clear rationale for why a particular layout was chosen.
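One way to inspect that rationale is simply to print the stage object; the exact fields displayed depend on your package version:

print(structure_result)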
Finally, document how you’ll ensure users leave with clear insights and the ability to collaborate:
# Document validation approach
validate_result <- bid_validate(
previous_stage = structure_result,
summary_panel = "Executive summary highlighting top and bottom performers, key trends, and recommended actions for the next marketing cycle",
collaboration = "Team annotation capability allowing marketing team members to add context and insights to specific data points",
next_steps = c(
"Review performance of bottom 2 channels",
"Increase budget for top-performing channel",
"Schedule team meeting to discuss optimization strategy",
"Export findings for quarterly marketing review"
)
)
validate_result |>
  select(summary_panel, collaboration, next_steps)

The validate function acknowledges our implementation of the Peak-End Rule through next steps and provides suggestions for refining our approach.
Once you’ve documented your dashboard with the BID framework, you can generate concrete suggestions for implementing the principles using common R packages:
# Get {bslib} component suggestions
bid_suggest_components(structure_result, package = "bslib") |>
  select(component, description) |>
  head(2)

# Get {reactable} suggestions for showing data
bid_suggest_components(structure_result, package = "reactable") |>
  select(component, description) |>
  head(2)

# Get suggestions from all supported packages
all_suggestions <- bid_suggest_components(validate_result, package = "all")
table(all_suggestions$package)

You can generate a complete report summarizing all stages of your BID process:
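A minimal sketch, assuming bid_report() (referenced later in this vignette) accepts the final stage object; check ?bid_report for the output formats your version supports:

# Generate a BID process report from the final stage
bid_report(validate_result)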
Here’s how to integrate the BID framework into your development process:

- bid_suggest_components() to get package-specific implementation ideas
- bid_report() to maintain comprehensive documentation

The {bidux} package makes it easier to apply behavioral
science and UX best practices to your Shiny dashboards. By following the
5-stage BID framework, you can create applications that are more
intuitive, engaging, and effective for your users.
Future versions of {bidux} will continue to expand the framework’s capabilities.
If you have telemetry data from user interactions (e.g., from the
{shiny.telemetry} package), {bidux} can help
transform it into actionable BID insights by automatically detecting UX
friction patterns.
{bidux} provides two complementary approaches to telemetry analysis:

bid_telemetry() - Modern Tidy API (Recommended)
- Returns a clean tibble of issues for analysis
- Best for new workflows and data exploration
- Integrates seamlessly with dplyr pipelines
- Introduced in version 0.3.1

bid_ingest_telemetry() - Legacy Compatible API
- Returns a hybrid object that works as both a list and an enhanced object
- Maintains backward compatibility with pre-0.3.1 code
- Provides the same analysis as bid_telemetry() with an additional list interface
- Will be soft-deprecated in 0.4.0

Both functions analyze the same telemetry patterns:
- Unused inputs - UI controls rarely or never used
- Delayed interactions - users taking too long to engage
- Error patterns - recurring errors affecting users
- Navigation drop-offs - pages/tabs with low visit rates
- Confusion patterns - rapid repeated changes suggesting uncertainty
The bid_telemetry_presets() function provides three
pre-configured sensitivity levels, making it easy to adjust how
aggressively issues are detected without manually tuning thresholds:
# STRICT: Detects even minor issues - use for critical applications
strict_issues <- bid_telemetry(
  "path/to/telemetry.sqlite",
  thresholds = bid_telemetry_presets("strict")
)
# - Flags inputs used by < 2% of sessions
# - Flags delays > 20 seconds to first action
# - Flags errors in > 5% of sessions
# - Flags pages visited by < 10% of users

# MODERATE: Balanced default - appropriate for most applications
moderate_issues <- bid_telemetry(
  "path/to/telemetry.sqlite",
  thresholds = bid_telemetry_presets("moderate")
)
# - Flags inputs used by < 5% of sessions (default)
# - Flags delays > 30 seconds to first action (default)
# - Flags errors in > 10% of sessions (default)
# - Flags pages visited by < 20% of users (default)

# RELAXED: Only detects major issues - use for mature, stable dashboards
relaxed_issues <- bid_telemetry(
  "path/to/telemetry.sqlite",
  thresholds = bid_telemetry_presets("relaxed")
)
# - Flags inputs used by < 10% of sessions
# - Flags delays > 60 seconds to first action
# - Flags errors in > 20% of sessions
# - Flags pages visited by < 30% of users

Different presets can identify different numbers of issues from the same data:
# Analyze with all three presets
strict <- bid_telemetry(
  "path/to/telemetry.sqlite",
  thresholds = bid_telemetry_presets("strict")
)
moderate <- bid_telemetry(
  "path/to/telemetry.sqlite",
  thresholds = bid_telemetry_presets("moderate")
)
relaxed <- bid_telemetry(
  "path/to/telemetry.sqlite",
  thresholds = bid_telemetry_presets("relaxed")
)

# Compare issue counts
data.frame(
  preset = c("strict", "moderate", "relaxed"),
  total_issues = c(nrow(strict), nrow(moderate), nrow(relaxed)),
  critical_issues = c(
    sum(strict$severity == "critical"),
    sum(moderate$severity == "critical"),
    sum(relaxed$severity == "critical")
  )
)
# Strict preset typically finds 2-3x more issues than relaxed
# Use strict during initial development, relaxed for stable dashboards

The recommended approach for new projects uses bid_telemetry():
# 1. Analyze telemetry with appropriate sensitivity
issues <- bid_telemetry(
  "path/to/telemetry.sqlite",
  thresholds = bid_telemetry_presets("moderate")
)

# 2. Triage and review issues (returns organized summary)
print(issues)

# 3. Filter to high-priority issues using dplyr
library(dplyr)
critical_issues <- issues |>
  filter(severity %in% c("critical", "high")) |>
  arrange(desc(impact_rate))

# 4. Convert top issues to Notice stages for BID workflow
notices <- bid_notices(
  issues = critical_issues,
  previous_stage = interpret_result,
  max_issues = 3
)

# 5. Extract telemetry flags for informed decisions
flags <- bid_flags(issues)
flags$has_critical_issues # TRUE/FALSE
flags$has_navigation_issues # TRUE/FALSE
flags$session_count # Number of sessions analyzed

# 6. Use flags to inform Structure stage
structure_result <- bid_structure(
  previous_stage = anticipate_result,
  telemetry_flags = flags
)

For backward compatibility with existing code, use bid_ingest_telemetry():
# Returns hybrid object that works as both list and enhanced object
legacy_issues <- bid_ingest_telemetry(
  "path/to/telemetry.sqlite",
  thresholds = bid_telemetry_presets("moderate")
)

# Legacy list interface (backward compatible)
length(legacy_issues) # Number of issues as list length
legacy_issues[[1]] # First issue as bid_stage object
names(legacy_issues) # Issue identifiers

# Enhanced features (new in 0.3.1)
as_tibble(legacy_issues) # Get tidy issues view
bid_flags(legacy_issues) # Extract global flags
print(legacy_issues) # Shows organized triage summary
# Both interfaces work on same object

Here’s a full example showing how telemetry analysis integrates with the BID framework:
library(bidux)
library(dplyr)

# Step 1: Analyze telemetry to identify friction points
issues <- bid_telemetry(
  "path/to/telemetry.sqlite",
  thresholds = bid_telemetry_presets("strict") # Catch everything during development
)

# Step 2: Start BID workflow with central question
interpret_result <- bid_interpret(
  central_question = "How can we reduce user friction identified in telemetry?",
  data_story = new_data_story(
    hook = "Telemetry shows multiple UX friction points",
    context = glue::glue("Analysis of {bid_flags(issues)$session_count} user sessions"),
    tension = "Users struggling with specific UI elements and workflows",
    resolution = "Systematically address high-impact issues using BID framework"
  )
)

# Step 3: Address highest-impact issue first
top_issue <- issues |>
  arrange(desc(impact_rate)) |>
  slice(1)

notice_result <- bid_notices(
  issues = top_issue,
  previous_stage = interpret_result
)[[1]]

# Step 4: Anticipate biases related to the issue
anticipate_result <- bid_anticipate(
  previous_stage = notice_result,
  bias_mitigations = list(
    anchoring = "Provide clear default values based on common use cases",
    confirmation_bias = "Show data that challenges user assumptions"
  )
)

# Step 5: Structure with telemetry-informed decisions
structure_result <- bid_structure(
  previous_stage = anticipate_result,
  telemetry_flags = bid_flags(issues) # Informs layout selection
)

# Step 6: Validate with telemetry references
validate_result <- bid_validate(
  previous_stage = structure_result,
  summary_panel = "Dashboard improvements based on analysis of real user behavior patterns",
  next_steps = c(
    "Address remaining high-severity telemetry issues",
    "Re-run telemetry analysis after changes to measure improvement",
    "Monitor key metrics: time-to-first-action, error rates, navigation patterns"
  )
)

Use bid_telemetry() when you:
- Are starting a new project or workflow
- Want clean, tidy data for analysis and visualization
- Prefer working with tibbles and dplyr
- Don’t need backward compatibility

Use bid_ingest_telemetry() when you:
- Have existing code from bidux < 0.3.1
- Need the legacy list interface for compatibility
- Want both list and tibble access in the same object
Note: Both functions perform identical telemetry analysis and support the same presets and thresholds. The only difference is the return format.
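To confirm this yourself, a quick sanity-check sketch (assuming both objects above were built from the same database with the same "moderate" preset; a strict identical() comparison may fail on attributes, so compare the underlying data):

# The tidy view of the legacy object should match the tidy API's output
nrow(as_tibble(legacy_issues)) == nrow(issues)
all.equal(
  as.data.frame(as_tibble(legacy_issues)),
  as.data.frame(issues)
)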
Visit github.com/jrwinget/bidux for updates and to contribute to the package’s development. We welcome feedback and suggestions to help make the BID framework even more effective for Shiny developers.

Remember that good dashboard design is an iterative process that benefits from continuous user feedback. The BID framework gives that process structure while ensuring behavioral science and UX principles are incorporated throughout your development workflow.