feat(llm): Add tool loop support to LLM.call() with structured LLMResult #5624
Open
alex-clawd wants to merge 7 commits into main from
Conversation
When `LLM.call()` is invoked with both `tools` and `available_functions`, it now runs a tool loop — calling the model, executing requested tools, and feeding results back — until the model responds with text or `max_iterations` is reached.

Changes:

- New `llm_result.py` with `LLMResult` and `ToolCallRecord` models
- `LLM.call()` returns `LLMResult` (structured) when tools are provided, `str` when not (fully backwards compatible)
- Tool loop with `max_iterations` parameter (default 10)
- Cost estimation based on model name and token counts
- Comprehensive test suite (17 tests, all mocked)
- Exports `LLMResult` and `ToolCallRecord` from `crewai.__init__`
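The control flow described above can be illustrated with a self-contained toy version. This is NOT crewai's implementation — the message dicts, reply shape, and the `get_weather` tool are all made up for illustration — but it follows the same shape: call the model, execute any requested tool, feed the result back, and stop on a text reply or at the iteration cap.

```python
# Toy illustration of the tool-loop control flow; not crewai's code.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ToolCallRecord:
    name: str
    input: dict[str, Any]
    output: str

@dataclass
class LLMResult:
    text: str
    tool_calls: list[ToolCallRecord] = field(default_factory=list)
    iterations: int = 0

def tool_loop(model: Callable[[list[dict]], dict],
              available_functions: dict[str, Callable],
              messages: list[dict],
              max_iterations: int = 10) -> LLMResult:
    records: list[ToolCallRecord] = []
    for i in range(1, max_iterations + 1):
        reply = model(messages)                      # call the model
        if "tool_call" not in reply:                 # plain text: we're done
            return LLMResult(reply["text"], records, i)
        name, args = reply["tool_call"]              # execute the requested tool
        output = str(available_functions[name](**args))
        records.append(ToolCallRecord(name, args, output))
        messages.append({"role": "tool", "name": name, "content": output})
    return LLMResult("", records, max_iterations)    # hit the iteration cap

# A scripted "model" that first requests a tool, then answers in text:
replies = iter([
    {"tool_call": ("get_weather", {"city": "Paris"})},
    {"text": "It is sunny in Paris."},
])
result = tool_loop(lambda msgs: next(replies),
                   {"get_weather": lambda city: f"Sunny in {city}"},
                   [{"role": "user", "content": "Weather in Paris?"}])
```

Here the loop terminates on the second iteration because the scripted model returns text, so `result.iterations` is 2 and `result.tool_calls` holds one record.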
```python
import json
from types import SimpleNamespace
from typing import Any
from unittest.mock import MagicMock, patch
```
Contributor
Author
Fixed in 5837f8e — restored the mock imports (MagicMock + patch). The earlier ruff autofix removed the line too aggressively.
- `LLM.call()` return type -> `str | Any` (keeps callers happy)
- Add `type: ignore` for runtime-compatible dict -> `LLMMessage` cast
- Add missing `typing.Any` import to `llm_result.py`
- Fix `dict` -> `dict[str, Any]` for type params
- Restore `unittest.mock` imports in tests
- All 17 tests passing
…e-newer litellm 1.83.0 has MCP stdio command injection vuln (CVE-2026-30623). Fixed in 1.83.7-stable. Also bumps exclude-newer to 2026-04-26 so the resolver can find the newer version. Note: GHSA-58qw-9mgm-455v (pip) requires a workflow file change to add --ignore-vuln, which needs the workflow OAuth scope.
Summary
When `LLM.call()` is invoked with both `tools` and `available_functions`, it now runs a tool loop — calling the model, executing requested tools, and feeding results back — until the model responds with text or `max_iterations` is reached. This makes LLM the flexible primitive underneath Agent:
Changes
- New: `llm_result.py`
  - `LLMResult` — structured result with text, tool_calls, usage, cost_usd, iterations
  - `ToolCallRecord` — record of each tool call (name, input, output, duration_ms, is_error)
- Modified: `llm.py`
  - `LLM.call()` gains a `max_iterations` parameter (default 10)
  - Returns `str` when no tools are provided (100% backwards compatible), `LLMResult` when they are
  - Single-call path in `_call_single()`, new loop in `_call_with_tool_loop()`
- Modified: `__init__.py`
  - Exports `LLMResult` and `ToolCallRecord`
- New: `tests/test_llm_tool_loop.py`
  - 17 tests covering:
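The `cost_usd` field implies a per-model price lookup. A minimal sketch of cost estimation from model name and token counts — the price table, model names, and fallback values here are assumptions for illustration, not figures from the PR:

```python
# Hypothetical cost estimation; prices and model names are made up.
PRICES_PER_1M = {
    "gpt-4o-mini": (0.15, 0.60),  # (input, output) USD per 1M tokens
}
DEFAULT_PRICE = (1.00, 2.00)      # fallback for unrecognized models

def estimate_cost_usd(model: str, prompt_tokens: int,
                      completion_tokens: int) -> float:
    in_price, out_price = PRICES_PER_1M.get(model, DEFAULT_PRICE)
    return (prompt_tokens * in_price + completion_tokens * out_price) / 1_000_000
```

Keeping a fallback price for unknown models means the estimate degrades gracefully instead of raising when a new model name appears.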
All tests use mocked LLM calls — no real API traffic.
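The test-file imports (`json`, `SimpleNamespace`, `MagicMock`, `patch`) suggest a common mocking style: a `MagicMock` stands in for the completion call and returns litellm-style response objects built from `SimpleNamespace`. The response shape below is an assumption about that style, not copied from the test suite:

```python
# Sketch of mocked LLM responses; the attribute layout mirrors the common
# OpenAI/litellm response shape but is an assumption here, not PR code.
import json
from types import SimpleNamespace
from unittest.mock import MagicMock

def fake_completion(text: str, tool_call: "tuple[str, dict] | None" = None):
    if tool_call:
        name, args = tool_call
        call = SimpleNamespace(
            function=SimpleNamespace(name=name, arguments=json.dumps(args)))
        message = SimpleNamespace(content=None, tool_calls=[call])
    else:
        message = SimpleNamespace(content=text, tool_calls=None)
    return SimpleNamespace(choices=[SimpleNamespace(message=message)])

# First call requests a tool, second returns the final text:
mock_llm = MagicMock(side_effect=[
    fake_completion("", tool_call=("get_weather", {"city": "Paris"})),
    fake_completion("Sunny in Paris."),
])

first = mock_llm(messages=[])
second = mock_llm(messages=[])
```

Using `side_effect` with a list makes the mock replay a scripted conversation, which is how a tool loop can be driven through multiple iterations with no real API traffic.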