chatRater: Rating and Evaluating Texts Using Large Language Models
Generates ratings and psycholinguistic metrics for textual stimuli using large language models.
It enables users to evaluate idioms and other language materials by combining context, prompts, and stimulus inputs.
It supports multiple LLM APIs (such as 'OpenAI', 'DeepSeek', 'Anthropic', 'Cohere', 'Google PaLM', and 'Ollama'),
allowing users to switch models with a single parameter. In addition to generating numeric ratings,
'chatRater' provides functions for obtaining detailed psycholinguistic metrics, including word frequency (with optional corpus input),
lexical coverage (with customizable vocabulary size and test basis), the Zipf metric, Levenshtein distance, and semantic transparency.
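The workflow described above can be sketched as follows. This is a minimal, illustrative example; the function name, argument names, and rating scale are assumptions drawn from the description rather than the authoritative API, so consult the package manual (`help(package = "chatRater")`) before use. A valid API key for the chosen provider is required.

```r
# install.packages("chatRater")
library(chatRater)

# Hypothetical call sketch: obtain repeated familiarity ratings for an
# idiom from an OpenAI model, combining a context prompt, a rating
# question, and the stimulus. Argument names are assumptions -- see the
# package documentation for the exact signature.
ratings <- generate_ratings(
  model        = "gpt-4",                      # switch models via this one parameter
  stim         = "kick the bucket",            # the stimulus to be rated
  prompt       = "You are a native English speaker.",
  question     = "Rate the familiarity of this idiom on a scale from 1 to 7.",
  n_iterations = 5,                            # repeat to average over sampling noise
  api_key      = Sys.getenv("OPENAI_API_KEY")  # key read from the environment
)
```

The same pattern extends to the psycholinguistic metric functions (word frequency, lexical coverage, Zipf metric, Levenshtein distance, semantic transparency), each of which takes the stimulus plus metric-specific options such as corpus or vocabulary size.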
Linking:
Please use the canonical form https://CRAN.R-project.org/package=chatRater to link to this page.