
feat(llm): Add tool loop support to LLM.call() with structured LLMResult #5624

Open

alex-clawd wants to merge 7 commits into main from feat/llm-tool-loop

Conversation

@alex-clawd
Contributor

Summary

When LLM.call() is invoked with both tools and available_functions, it now runs a tool loop — calling the model, executing requested tools, and feeding results back — until the model responds with text or max_iterations is reached.

This makes LLM the flexible primitive underneath Agent:

  • LLM — call a model. Optionally with tools. Returns text + metadata.
  • Agent — LLM + identity + memory. Reasons as a character.
  • Crew — multiple Agents collaborating on Tasks.
  • Flow — orchestrate any of the above.
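As a rough illustration of the loop described above (a minimal sketch with hypothetical helper names and message shapes — the PR's actual implementation lives in _call_with_tool_loop(), whose internals aren't shown here):

```python
import json
from typing import Any, Callable

def tool_loop(
    call_model: Callable[[list[dict[str, Any]]], dict[str, Any]],
    available_functions: dict[str, Callable[..., Any]],
    messages: list[dict[str, Any]],
    max_iterations: int = 10,
) -> dict[str, Any]:
    """Call the model, execute requested tools, feed results back,
    until the model answers with text or max_iterations is hit."""
    for iteration in range(1, max_iterations + 1):
        response = call_model(messages)
        tool_calls = response.get("tool_calls") or []
        if not tool_calls:  # plain text answer: we're done
            return {"text": response["text"], "iterations": iteration}
        for call in tool_calls:
            fn = available_functions.get(call["name"])
            if fn is None:
                output: Any = f"Error: unknown function {call['name']!r}"
            else:
                try:
                    output = fn(**call["arguments"])
                except Exception as exc:  # tool errors are captured, not raised
                    output = f"Error: {exc}"
            # Feed the tool result back to the model as a new message.
            messages.append({"role": "tool", "name": call["name"],
                             "content": json.dumps(output, default=str)})
    # Loop budget exhausted without a text answer.
    return {"text": "", "iterations": max_iterations}
```

The key property is the exit condition: the loop ends as soon as a response carries no tool calls, so a model that answers immediately costs exactly one iteration.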

Changes

New: llm_result.py

  • LLMResult — structured result with text, tool_calls, usage, cost_usd, iterations
  • ToolCallRecord — record of each tool call (name, input, output, duration_ms, is_error)
  • Cost estimation based on model name and token counts (covers Claude, GPT-4o, Gemini)

Modified: llm.py

  • LLM.call() gains a max_iterations parameter (default 10)
  • When called without tools: returns str (100% backwards compatible)
  • When called with tools + available_functions: returns LLMResult
  • Internal refactor: original call logic moved to _call_single(), new loop in _call_with_tool_loop()

Modified: __init__.py

  • Exports LLMResult and ToolCallRecord

New: tests/test_llm_tool_loop.py

17 tests covering:

  • LLMResult/ToolCallRecord model defaults and construction
  • Cost estimation (known models, unknown models, provider prefixes, partial matches)
  • Backwards compatibility (call without tools returns str)
  • Single tool call then text response
  • Multiple tool calls in sequence across iterations
  • max_iterations stops the loop
  • Tool error handling (exception captured in record)
  • Unknown function error handling
  • Cost estimation populated in result
  • Immediate text response with tools provided

All tests use mocked LLM calls — no real API traffic.
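One of these tests might be shaped like the following — the test file itself isn't shown, so the driver function and response shapes are illustrative; the point is the pattern of scripting model responses with unittest.mock so no real API traffic occurs:

```python
from unittest.mock import MagicMock

# Scripted model responses: first a tool call, then a final text answer.
mock_model = MagicMock(side_effect=[
    {"tool_calls": [{"name": "lookup", "arguments": {"q": "pi"}}], "text": ""},
    {"tool_calls": [], "text": "pi is about 3.14159"},
])

def drive(model, functions, max_iterations=10):
    """Tiny stand-in for the tool loop under test."""
    records = []
    for i in range(1, max_iterations + 1):
        resp = model()
        if not resp["tool_calls"]:
            return resp["text"], records, i
        for call in resp["tool_calls"]:
            records.append((call["name"], functions[call["name"]](**call["arguments"])))
    return "", records, max_iterations

text, records, iterations = drive(mock_model, {"lookup": lambda q: "3.14159"})
assert text == "pi is about 3.14159"
assert records == [("lookup", "3.14159")]
assert iterations == 2
assert mock_model.call_count == 2  # every model call hit the mock, not an API
```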

⚠️ Do NOT merge — for review only

```python
import json
from types import SimpleNamespace
from typing import Any
from unittest.mock import MagicMock, patch
```
Contributor Author

Fixed in 5837f8e — restored the mock imports (MagicMock + patch). The earlier ruff autofix removed the line too aggressively.

- LLM.call() return type -> str | Any (keeps callers happy)
- Add type: ignore for runtime-compatible dict -> LLMMessage cast
- Add missing typing.Any import to llm_result.py
- Fix dict -> dict[str, Any] for type params
- Restore unittest.mock imports in tests
- All 17 tests passing
…e-newer

litellm 1.83.0 has MCP stdio command injection vuln (CVE-2026-30623).
Fixed in 1.83.7-stable. Also bumps exclude-newer to 2026-04-26 so
the resolver can find the newer version.

Note: GHSA-58qw-9mgm-455v (pip) requires a workflow file change to
add --ignore-vuln, which needs the workflow OAuth scope.