mini007

mini007 provides a lightweight and extensible framework for multi-agent orchestration, capable of decomposing complex tasks and assigning them to specialized agents.

Each agent is an extension of an ellmer chat object. mini007 relies heavily on the excellent ellmer package and aims to make it easy to create a process in which multiple specialized agents help each other sequentially in order to execute a task.

mini007 provides two types of agents: a regular Agent, which executes the prompts it receives, and a LeadAgent, which decomposes a task into subtasks and delegates them to the registered agents.

Highlights

🧠 Memory and identity for each agent via uuid and message history.

⚙️ Built-in task decomposition and delegation via LLM.

🔄 Agent-to-agent orchestration with result chaining.

🌐 Compatible with any chat model supported by ellmer.

Installation

You can install the development version of mini007 like so:

devtools::install_github("feddelegrand7/mini007")
library(mini007)

Creating an Agent

An Agent is built upon an LLM object created by the ellmer package. In the following examples we’ll work with OpenAI models; however, you can use any model, or combination of models, you want:

# no need to provide the system prompt, it will be set when creating the
# agent (see the 'instruction' parameter)

openai_4_1_mini <- ellmer::chat(
  name = "openai/gpt-4.1-mini",
  api_key = Sys.getenv("OPENAI_API_KEY"), 
  echo = "none"
)
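
Any chat model supported by ellmer can back an Agent, so the same pattern can be pointed at another provider. A minimal sketch, assuming you have an Anthropic API key stored in ANTHROPIC_API_KEY; the provider/model string below is purely illustrative:

# hypothetical alternative backend: any provider/model supported by ellmer works
claude_llm <- ellmer::chat(
  name = "anthropic/claude-sonnet-4-20250514", # illustrative model name
  api_key = Sys.getenv("ANTHROPIC_API_KEY"),
  echo = "none"
)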

After initializing the ellmer LLM object, creating the Agent is straightforward:

polar_bear_researcher <- Agent$new(
  name = "POLAR BEAR RESEARCHER",
  instruction = "You are an expert in polar bears, you task is to collect information about polar bears. Answer in 1 sentence max.",
  llm_object = openai_4_1_mini
)

Each created Agent has an agent_id (among other metadata):

polar_bear_researcher$agent_id
#> [1] "ea8564be-e312-4c56-a187-7cdf8927286f"

At any time, you can inspect and tweak the llm_object:

polar_bear_researcher$llm_object
#> <Chat OpenAI/gpt-4.1-mini turns=1 tokens=0/0 $0.00>
#> ── system [0] ──────────────────────────────────────────────────────────────────
#> You are an expert in polar bears, your task is to collect information about polar bears. Answer in 1 sentence max.
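
Because llm_object is a regular ellmer Chat object, you can also call its methods directly. A minimal sketch, assuming the usual ellmer Chat methods get_model() and set_system_prompt(); note that changes made directly on the Chat object are not necessarily mirrored in the Agent's own messages list:

# which model backs this agent?
polar_bear_researcher$llm_object$get_model()

# tweak the underlying system prompt directly (this bypasses the Agent's
# own message bookkeeping, so use with care)
polar_bear_researcher$llm_object$set_system_prompt(
  "You are an expert in polar bears. Answer in 2 sentences max."
)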

An agent can provide the answer to a prompt using the invoke method:

polar_bear_researcher$invoke("Are polar bears dangerous for humans?")
#> Yes, polar bears can be dangerous to humans as they are powerful predators and 
#> may attack if threatened or hungry.

You can also retrieve a list that displays the history of the agent:

polar_bear_researcher$messages
#> [[1]]
#> [[1]]$role
#> [1] "system"
#> 
#> [[1]]$content
#> [1] "You are an expert in polar bears, you task is to collect information about polar bears. Answer in 1 sentence max."
#> 
#> 
#> [[2]]
#> [[2]]$role
#> [1] "user"
#> 
#> [[2]]$content
#> [1] "Are polar bears dangerous for humans?"
#> 
#> 
#> [[3]]
#> [[3]]$role
#> [1] "assistant"
#> 
#> [[3]]$content
#> Yes, polar bears can be dangerous to humans as they are powerful predators and 
#> may attack if threatened or hungry.

Or the ellmer way:

polar_bear_researcher$llm_object
#> <Chat OpenAI/gpt-4.1-mini turns=3 tokens=43/22 $0.00>
#> ── system [0] ──────────────────────────────────────────────────────────────────
#> You are an expert in polar bears, your task is to collect information about polar bears. Answer in 1 sentence max.
#> ── user [43] ───────────────────────────────────────────────────────────────────
#> Are polar bears dangerous for humans?
#> ── assistant [22] ──────────────────────────────────────────────────────────────
#> Yes, polar bears can be dangerous to humans as they are powerful predators and may attack if threatened or hungry.

Creating a multi-agent orchestration

We can create as many Agents as we want; the LeadAgent will dispatch the instructions to the agents and return the final answer. Let’s create three Agents: a researcher, a summarizer and a translator:


researcher <- Agent$new(
  name = "researcher",
  instruction = "You are a research assistant. Your job is to answer factual questions with detailed and accurate information. Do not answer with more than 2 lines",
  llm_object = openai_4_1_mini
)

summarizer <- Agent$new(
  name = "summarizer",
  instruction = "You are agent designed to summarise a give text into 3 distinct bullet points.",
  llm_object = openai_4_1_mini
)

translator <- Agent$new(
  name = "translator",
  instruction = "Your role is to translate a text from English to German",
  llm_object = openai_4_1_mini
)

Now, the most important part is to create a LeadAgent:

lead_agent <- LeadAgent$new(
  name = "Leader", 
  llm_object = openai_4_1_mini
)

Note that the LeadAgent does not take an instruction, as it already comes with the necessary instructions built in.

Next, we need to register the Agents with the LeadAgent. We do it as follows:

lead_agent$register_agents(c(researcher, summarizer, translator))
#> ✔ Agent(s) successfully registered.
lead_agent$agents
#> [[1]]
#> <Agent>
#>   Public:
#>     agent_id: 68f595e3-45d5-4e11-a40f-5c5163393487
#>     broadcast_history: list
#>     clone: function (deep = FALSE) 
#>     initialize: function (name, instruction, llm_object) 
#>     instruction: You are a research assistant. Your job is to answer fact ...
#>     invoke: function (prompt) 
#>     llm_object: Chat, R6
#>     messages: list
#>     model_name: gpt-4.1-mini
#>     model_provider: OpenAI
#>     name: researcher
#>   Private:
#>     .add_assistant_message: function (message, type = "assistant") 
#>     .add_message: function (message, type) 
#>     .add_user_message: function (message, type = "user") 
#> 
#> [[2]]
#> <Agent>
#>   Public:
#>     agent_id: 4df4da63-2e4b-48a7-a6b3-f76dbf649a2b
#>     broadcast_history: list
#>     clone: function (deep = FALSE) 
#>     initialize: function (name, instruction, llm_object) 
#>     instruction: You are an agent designed to summarise a given text in ...
#>     invoke: function (prompt) 
#>     llm_object: Chat, R6
#>     messages: list
#>     model_name: gpt-4.1-mini
#>     model_provider: OpenAI
#>     name: summarizer
#>   Private:
#>     .add_assistant_message: function (message, type = "assistant") 
#>     .add_message: function (message, type) 
#>     .add_user_message: function (message, type = "user") 
#> 
#> [[3]]
#> <Agent>
#>   Public:
#>     agent_id: 5195c82e-3fa7-4922-9516-90a54583f61a
#>     broadcast_history: list
#>     clone: function (deep = FALSE) 
#>     initialize: function (name, instruction, llm_object) 
#>     instruction: Your role is to translate a text from English to German
#>     invoke: function (prompt) 
#>     llm_object: Chat, R6
#>     messages: list
#>     model_name: gpt-4.1-mini
#>     model_provider: OpenAI
#>     name: translator
#>   Private:
#>     .add_assistant_message: function (message, type = "assistant") 
#>     .add_message: function (message, type) 
#>     .add_user_message: function (message, type = "user")

Before executing your prompt, you can ask the LeadAgent to generate a plan so that you can see which Agent will be used for which prompt. You can do it as follows:

prompt_to_execute <- "Tell me about the economic situation in Algeria, summarize it in 3 bullet points, then translate it into German."

plan <- lead_agent$generate_plan(prompt_to_execute)
#> ✔ Plan successfully generated.
plan
#> [[1]]
#> [[1]]$agent_id
#> 68f595e3-45d5-4e11-a40f-5c5163393487
#> 
#> [[1]]$agent_name
#> [1] "researcher"
#> 
#> [[1]]$model_provider
#> [1] "OpenAI"
#> 
#> [[1]]$model_name
#> [1] "gpt-4.1-mini"
#> 
#> [[1]]$prompt
#> [1] "Research the current economic situation in Algeria, including GDP growth, key industries, and challenges."
#> 
#> 
#> [[2]]
#> [[2]]$agent_id
#> 4df4da63-2e4b-48a7-a6b3-f76dbf649a2b
#> 
#> [[2]]$agent_name
#> [1] "summarizer"
#> 
#> [[2]]$model_provider
#> [1] "OpenAI"
#> 
#> [[2]]$model_name
#> [1] "gpt-4.1-mini"
#> 
#> [[2]]$prompt
#> [1] "Summarize the gathered information into 3 clear and concise bullet points in English."
#> 
#> 
#> [[3]]
#> [[3]]$agent_id
#> 5195c82e-3fa7-4922-9516-90a54583f61a
#> 
#> [[3]]$agent_name
#> [1] "translator"
#> 
#> [[3]]$model_provider
#> [1] "OpenAI"
#> 
#> [[3]]$model_name
#> [1] "gpt-4.1-mini"
#> 
#> [[3]]$prompt
#> [1] "Translate the 3 bullet points from English into German."

Now, to execute the workflow, we just need to call the invoke method, which will, behind the scenes, delegate the prompts to the suitable Agents and retrieve the final answer:

response <- lead_agent$invoke("Tell me about the economic situation in Algeria, summarize it in 3 bullet points, then translate it into German.")
response
#> - Die algerische Wirtschaft wächst moderat um 2-3 % jährlich, hauptsächlich 
#> angetrieben durch den Öl- und Gassektor.
#> - Wichtige Industriezweige sind Energie, Landwirtschaft und verarbeitendes 
#> Gewerbe, wobei Kohlenwasserstoffe den Export und die Staatseinnahmen 
#> dominieren.
#> - Wirtschaftliche Herausforderungen umfassen die Abhängigkeit von den volatilen
#> Ölpreisen, den Bedarf an Diversifizierung und hohe Arbeitslosenquoten.

If you want to inspect the multi-agent orchestration, you have access to the agents_interaction object:

lead_agent$agents_interaction
#> [[1]]
#> [[1]]$agent_id
#> 68f595e3-45d5-4e11-a40f-5c5163393487
#> 
#> [[1]]$agent_name
#> [1] "researcher"
#> 
#> [[1]]$model_name
#> [1] "gpt-4.1-mini"
#> 
#> [[1]]$model_provider
#> [1] "OpenAI"
#> 
#> [[1]]$prompt
#> [1] "Research the current economic situation in Algeria, including GDP growth, key industries, and challenges."
#> 
#> [[1]]$response
#> As of 2024, Algeria's GDP growth is moderate, around 2-3% annually, driven by 
#> hydrocarbons (oil and gas) which dominate exports and government revenue. Key 
#> industries include energy, agriculture, and manufacturing; challenges involve 
#> economic diversification, high unemployment, and reliance on volatile oil 
#> prices.
#> 
#> [[1]]$edited_by_hitl
#> [1] FALSE
#> 
#> 
#> [[2]]
#> [[2]]$agent_id
#> 4df4da63-2e4b-48a7-a6b3-f76dbf649a2b
#> 
#> [[2]]$agent_name
#> [1] "summarizer"
#> 
#> [[2]]$model_name
#> [1] "gpt-4.1-mini"
#> 
#> [[2]]$model_provider
#> [1] "OpenAI"
#> 
#> [[2]]$prompt
#> [1] "Summarize the gathered information into 3 clear and concise bullet points in English."
#> 
#> [[2]]$response
#> - Algeria's economy grows moderately at 2-3% annually, primarily fueled by the 
#> oil and gas sector.
#> - Major industries include energy, agriculture, and manufacturing, with 
#> hydrocarbons leading exports and government income.
#> - Economic challenges encompass dependence on volatile oil prices, the need for
#> diversification, and high unemployment rates.
#> 
#> [[2]]$edited_by_hitl
#> [1] FALSE
#> 
#> 
#> [[3]]
#> [[3]]$agent_id
#> 5195c82e-3fa7-4922-9516-90a54583f61a
#> 
#> [[3]]$agent_name
#> [1] "translator"
#> 
#> [[3]]$model_name
#> [1] "gpt-4.1-mini"
#> 
#> [[3]]$model_provider
#> [1] "OpenAI"
#> 
#> [[3]]$prompt
#> [1] "Translate the 3 bullet points from English into German."
#> 
#> [[3]]$response
#> - Die algerische Wirtschaft wächst moderat um 2-3 % jährlich, hauptsächlich 
#> angetrieben durch den Öl- und Gassektor.
#> - Wichtige Industriezweige sind Energie, Landwirtschaft und verarbeitendes 
#> Gewerbe, wobei Kohlenwasserstoffe den Export und die Staatseinnahmen 
#> dominieren.
#> - Wirtschaftliche Herausforderungen umfassen die Abhängigkeit von den volatilen
#> Ölpreisen, den Bedarf an Diversifizierung und hohe Arbeitslosenquoten.
#> 
#> [[3]]$edited_by_hitl
#> [1] FALSE

The above example is deliberately simple; the usefulness of mini007 shines in more complex processes, where a sequential multi-agent orchestration adds real value. A sketch of a longer pipeline, built from exactly the same building blocks, follows below.
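
The sketch reuses the openai_4_1_mini object from above; the agent names and instructions are purely illustrative:

# illustrative pipeline: extract facts -> review them -> draft a short report
fact_extractor <- Agent$new(
  name = "fact_extractor",
  instruction = "Extract the key facts from the provided text as a short bullet list.",
  llm_object = openai_4_1_mini
)

fact_reviewer <- Agent$new(
  name = "fact_reviewer",
  instruction = "Review the provided facts and flag any that look doubtful.",
  llm_object = openai_4_1_mini
)

report_writer <- Agent$new(
  name = "report_writer",
  instruction = "Write a short, well-structured report based on the reviewed facts.",
  llm_object = openai_4_1_mini
)

pipeline_lead <- LeadAgent$new(name = "Leader", llm_object = openai_4_1_mini)
pipeline_lead$register_agents(c(fact_extractor, fact_reviewer, report_writer))

# the LeadAgent decomposes the task and routes each step to a suitable agent
pipeline_lead$invoke(
  "Gather the key facts about the history of the Casbah of Algiers, review them, and write a short report."
)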

Broadcasting

If you want to compare several LLM models, the LeadAgent provides a broadcast method. It sends a prompt to several different agents and returns each agent’s result, so you can compare them and potentially choose the best agent/model for the prompt at hand.

Let’s go through an example:

openai_4_1 <- ellmer::chat(
  name = "openai/gpt-4.1",
  api_key = Sys.getenv("OPENAI_API_KEY"), 
  echo = "none"
)

openai_4_1_agent <- Agent$new(
  name = "openai_4_1_agent", 
  instruction = "You are an AI assistant. Answer in 1 sentence max.", 
  llm_object = openai_4_1
)

openai_4_1_nano <- ellmer::chat(
  name = "openai/gpt-4.1-nano",
  api_key = Sys.getenv("OPENAI_API_KEY"), 
  echo = "none"
)

openai_4_1_nano_agent <- Agent$new(
  name = "openai_4_1_nano_agent", 
  instruction = "You are an AI assistant. Answer in 1 sentence max.", 
  llm_object = openai_4_1_nano
)

lead_agent$clear_agents() # removing previous agents
lead_agent$register_agents(c(openai_4_1_agent, openai_4_1_nano_agent))
#> ✔ Agent(s) successfully registered.
lead_agent$broadcast(prompt = "If I were Algerian, which song would I like to sing when running under the rain? how about a flower?")
#> [[1]]
#> [[1]]$agent_id
#> [1] "2dac58a4-3ea2-4454-8369-0dea0b3785f8"
#> 
#> [[1]]$agent_name
#> [1] "openai_4_1_agent"
#> 
#> [[1]]$model_provider
#> [1] "OpenAI"
#> 
#> [[1]]$model_name
#> [1] "gpt-4.1"
#> 
#> [[1]]$response
#> As an Algerian, you might sing "Ya Rayah" when running under the rain, while a 
#> flower would "sing" by blooming quietly into the fresh droplets.
#> 
#> 
#> [[2]]
#> [[2]]$agent_id
#> [1] "705fa37b-3ba3-4b35-8410-34cd432c162a"
#> 
#> [[2]]$agent_name
#> [1] "openai_4_1_nano_agent"
#> 
#> [[2]]$model_provider
#> [1] "OpenAI"
#> 
#> [[2]]$model_name
#> [1] "gpt-4.1-nano"
#> 
#> [[2]]$response
#> You might enjoy singing "Ayoune" by Cheb Khaled when running under the rain, 
#> and "Oud El Dahab" by Khaled or a traditional Algerian song for a flower.

You can also access the history of broadcasts using the broadcast_history attribute:

lead_agent$broadcast_history
#> [[1]]
#> [[1]]$prompt
#> [1] "If I were Algerian, which song would I like to sing when running under the rain? how about a flower?"
#> 
#> [[1]]$responses
#> [[1]]$responses[[1]]
#> [[1]]$responses[[1]]$agent_id
#> [1] "2dac58a4-3ea2-4454-8369-0dea0b3785f8"
#> 
#> [[1]]$responses[[1]]$agent_name
#> [1] "openai_4_1_agent"
#> 
#> [[1]]$responses[[1]]$model_provider
#> [1] "OpenAI"
#> 
#> [[1]]$responses[[1]]$model_name
#> [1] "gpt-4.1"
#> 
#> [[1]]$responses[[1]]$response
#> As an Algerian, you might sing "Ya Rayah" when running under the rain, while a 
#> flower would "sing" by blooming quietly into the fresh droplets.
#> 
#> 
#> [[1]]$responses[[2]]
#> [[1]]$responses[[2]]$agent_id
#> [1] "705fa37b-3ba3-4b35-8410-34cd432c162a"
#> 
#> [[1]]$responses[[2]]$agent_name
#> [1] "openai_4_1_nano_agent"
#> 
#> [[1]]$responses[[2]]$model_provider
#> [1] "OpenAI"
#> 
#> [[1]]$responses[[2]]$model_name
#> [1] "gpt-4.1-nano"
#> 
#> [[1]]$responses[[2]]$response
#> You might enjoy singing "Ayoune" by Cheb Khaled when running under the rain, 
#> and "Oud El Dahab" by Khaled or a traditional Algerian song for a flower.

Tool specification

As mentioned previously, an Agent is an extension of an ellmer object. As such, you can define a tool exactly the same way as in ellmer. Suppose we want to get the weather in Algiers through a function (a tool). Let’s first create the Agents:

openai_llm_object <- ellmer::chat(
  name = "openai/gpt-4.1-mini",
  api_key = Sys.getenv("OPENAI_API_KEY"), 
  echo = "none"
)

assistant <- Agent$new(
  name = "assistant",
  instruction = "You are an AI assistant that answers question. Do not answer with more than 1 sentence.",
  llm_object = openai_llm_object
)

weather_assistant <- Agent$new(
  name = "weather_assistant",
  instruction = "You role is to provide weather assistance.",
  llm_object = openai_llm_object
)

Now, let’s define the tool that we’ll be using. With ellmer, it’s quite straightforward:

get_weather_in_algiers <- ellmer::tool(
  function() {
    "35 degrees Celcius, it's sunny and there's no precipitation."
  },
  name = "get_weather_in_algiers",
  description = "Provide the current weather in Algiers, Algeria."
)

With our tool defined, the next step is to register it with the suitable Agent, in our case the weather_assistant Agent:

weather_assistant$llm_object$register_tool(get_weather_in_algiers)
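
Tools that take arguments are defined the same way as in ellmer. A hedged sketch, assuming a recent ellmer version where tool() accepts an arguments list of type specifications such as type_string(); the get_weather_in_city tool below is purely illustrative and is not used in the example that follows:

# illustrative: a tool with a typed argument (assumes ellmer's
# `arguments =` / type_string() interface)
get_weather_in_city <- ellmer::tool(
  function(city) {
    paste0("25 degrees Celsius and cloudy in ", city, ".")
  },
  name = "get_weather_in_city",
  description = "Provide the current weather for a given city.",
  arguments = list(
    city = ellmer::type_string("Name of the city to look up.")
  )
)

# register it the same way when you need it:
# weather_assistant$llm_object$register_tool(get_weather_in_city)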

That’s it. The last step is to create the LeadAgent, register the Agents that we need, and call the invoke method:

lead_agent <- LeadAgent$new(
  name = "Leader", 
  llm_object = openai_llm_object
)

lead_agent$register_agents(c(assistant, weather_assistant))
#> ✔ Agent(s) successfully registered.

lead_agent$invoke(
  "Tell me about the economic situation in Algeria, then tell me how's the weather in Algiers?"
)
#> The current weather in Algiers is clear and sunny with a temperature of 35 
#> degrees Celsius. There is no precipitation at the moment, indicating dry 
#> conditions.

Code of Conduct

Please note that the mini007 project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.