
Conversation

@jhaynie (Member) commented Oct 4, 2025

Summary by CodeRabbit

  • New Features
    • Added tool execution to the AI workflow, enabling the assistant to call tools when needed.
    • Improved streaming responses to deliver final answers more reliably.
  • Bug Fixes
    • Clearer error messages returned to users for easier troubleshooting.
  • Refactor
    • Standardized prompt structure for more consistent interactions.
    • Unified default model behavior by removing custom temperature settings.

coderabbitai bot (Contributor) commented Oct 4, 2025

Walkthrough

This PR removes explicit temperature settings across JS and Python model clients, modifies prompt templates in JS, updates Python autogen client signature to drop temperature, and substantially extends Python LangGraph with a tool-enabled state graph, conditional routing, streaming finalization, and improved error handling.

Changes

Cohort / File(s) — Summary of Changes

  • JS: ChatOpenAI config and prompt tweaks (common/js/langchain.ts, common/js/langgraph.ts)
    Removed the temperature option from ChatOpenAI initialization. Adjusted prompt templates: refined wording and line breaks in langchain; added an explicit "Human: {input}" line before "Assistant:" in langgraph.
  • Python: Autogen OpenAI client update (common/py/autogen.py)
    Dropped the temperature parameter when constructing OpenAIChatCompletionClient (model="gpt-4o-mini" only). Reformatted message construction and dicts (trailing commas, multiline strings). Notes indicate the underlying client's __init__ changed to def __init__(self, model: str).
  • Python: LangGraph workflow and streaming (common/py/langgraph.py)
    Added a tool-enabled workflow (START → agent → tools → agent) gated by a conditional should_continue. Implemented call_model and tool_node, plus streaming of the final state and selection of the last AIMessage. Improved user input handling and error logging/return strings. Minor formatting updates.
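The agent/tools loop described above can be sketched without the LangGraph dependency. Everything below is illustrative, not code from common/py/langgraph.py: AIMessage is a minimal stand-in for LangChain's message type, and fake_model plays the role of the model node.

```python
from dataclasses import dataclass, field

# Minimal stand-in for a LangChain AIMessage carrying optional tool calls.
@dataclass
class AIMessage:
    content: str
    tool_calls: list = field(default_factory=list)

def should_continue(messages: list) -> str:
    """Route to 'tools' when the last AI message requested a tool, else end."""
    last = messages[-1]
    return "tools" if getattr(last, "tool_calls", []) else "end"

def run_agent_loop(call_model, tools: dict, user_input: str) -> str:
    """Simulate agent -> tools -> agent until the model stops calling tools."""
    messages: list = [("human", user_input)]
    while True:
        ai = call_model(messages)           # the "agent" node
        messages.append(ai)
        if should_continue(messages) == "end":
            return ai.content
        for call in ai.tool_calls:          # the "tools" node
            result = tools[call["name"]](**call["args"])
            messages.append(("tool", str(result)))

# Hypothetical model: requests the add tool once, then answers from its result.
def fake_model(messages):
    if not any(isinstance(m, tuple) and m[0] == "tool" for m in messages):
        return AIMessage("", tool_calls=[{"name": "add", "args": {"a": 2, "b": 3}}])
    return AIMessage(f"The sum is {messages[-1][1]}")

print(run_agent_loop(fake_model, {"add": lambda a, b: a + b}, "add 2 and 3"))
# → The sum is 5
```

In the real graph, the same decision is expressed as a conditional edge keyed on should_continue rather than a while loop.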

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant U as User
  participant R as Runner (run)
  participant G as LangGraph (StateGraph)
  participant A as Agent (Model)
  participant T as Tools

  U->>R: provide input (or fallback default)
  R->>G: start with Human message
  G->>A: call_model(state)
  A-->>G: AI message (may include tool_calls)

  alt tool_calls present
    G->>T: execute tool_calls
    T-->>G: tool results
    G->>A: call_model(updated state)
    A-->>G: AI message
  else no tool_calls
    Note over G: Flow proceeds to END
  end

  loop stream
    G-->>R: chunks (messages/state updates)
    R->>R: track last AIMessage
  end

  R-->>U: return last AI message content or error string
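The stream/track-last-AIMessage step in the diagram reduces to scanning streamed state chunks and remembering the most recent AI message. The chunk shape below is a simplified stand-in for what a LangGraph stream yields, not the PR's actual data structures:

```python
# Each streamed chunk maps a node name to a partial state update containing
# new messages; the final answer is the last AI message seen across chunks.
def last_ai_content(chunks, fallback: str = "Sorry, something went wrong.") -> str:
    last_ai = None
    for chunk in chunks:
        for update in chunk.values():
            for msg in update.get("messages", []):
                if msg.get("role") == "ai":
                    last_ai = msg
    return last_ai["content"] if last_ai else fallback

stream = [
    {"agent": {"messages": [{"role": "ai", "content": "", "tool_calls": ["lookup"]}]}},
    {"tools": {"messages": [{"role": "tool", "content": "42"}]}},
    {"agent": {"messages": [{"role": "ai", "content": "The answer is 42."}]}},
]
print(last_ai_content(stream))
# → The answer is 42.
```

Returning the fallback string when no AI message arrives mirrors the "error string" path in the diagram.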

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

Hops on the graph where the tool-calls play,
I nudge the prompts and whisk temps away.
Streams trickle in, a silver thread,
Until the last AI word is said.
Nose twitch—workflow tidy and bright,
Bugs beware, I lunch on byte.
(_/)> ✅

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning: docstring coverage is 33.33%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the title directly references the removal of the temperature override, which aligns with the core change applied across multiple modules in this pull request.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch temp-fixes

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fa5aa3f and 5bf10af.

📒 Files selected for processing (4)
  • common/js/langchain.ts (1 hunks)
  • common/js/langgraph.ts (1 hunks)
  • common/py/autogen.py (1 hunks)
  • common/py/langgraph.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (7)
common/{js,py}/**/*

📄 CodeRabbit inference engine (AGENT.md)

Place reusable template code under common/js and common/py

Files:

  • common/py/autogen.py
  • common/js/langchain.ts
  • common/js/langgraph.ts
  • common/py/langgraph.py
common/{js,py}/**/*.{js,ts,tsx,py}

📄 CodeRabbit inference engine (AGENT.md)

common/{js,py}/**/*.{js,ts,tsx,py}: Follow existing template patterns: include a welcome() function and an async run() function
Use proper error handling and logging via context.logger.error() in templates

Files:

  • common/py/autogen.py
  • common/js/langchain.ts
  • common/js/langgraph.ts
  • common/py/langgraph.py
common/py/**/*.py

📄 CodeRabbit inference engine (AGENT.md)

Prefer async patterns in Python templates; wrap blocking calls with asyncio.to_thread()

common/py/**/*.py: Prefer importing types from the agentuity package
All code should follow Python best practices and include type hints
Use the provided logger from AgentContext for logging (e.g., context.logger.info("msg: %s", arg)) instead of ad-hoc logging
Use type hints for better IDE support
Import types from agentuity
Use structured error handling with try/except blocks

Files:

  • common/py/autogen.py
  • common/py/langgraph.py
**/*.{ts,tsx,js}

📄 CodeRabbit inference engine (AGENT.md)

**/*.{ts,tsx,js}: Use Biome for linting and formatting for all JavaScript/TypeScript code
Enforce 2-space indentation, single quotes, ES5 trailing commas, and always use semicolons in JS/TS

Files:

  • common/js/langchain.ts
  • common/js/langgraph.ts
common/js/**/*.{js,ts,tsx,jsx}

📄 CodeRabbit inference engine (AGENT.md)

common/js/**/*.{js,ts,tsx,jsx}: Use async/await correctly in JS/TS templates
For text responses, return resp.text(string)
For JSON responses, return resp.json(object)
When handling array results, extract the first element (e.g., results[0]) before using resp.text()
Always provide meaningful fallback responses
Use Vercel AI SDK streaming with streamText and respond via resp.stream() when streaming

Files:

  • common/js/langchain.ts
  • common/js/langgraph.ts
common/js/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENT.md)

Resolve TypeScript import name conflicts using aliases (e.g., Agent as PraisonAgent)

Files:

  • common/js/langchain.ts
  • common/js/langgraph.ts
common/js/**/*.ts

📄 CodeRabbit inference engine (common/js/AGENTS.md)

common/js/**/*.ts: Prefer importing types from the @agentuity/sdk package (from node_modules)
Each agent file should export a default function
Prefer naming the default-exported function Agent or the specific Agent name based on the Agent description
All code should be written in TypeScript
Use the provided logger from AgentContext (e.g., ctx.logger.info(...))
Use TypeScript for better type safety and IDE support
Import types from @agentuity/sdk
Use structured error handling with try/catch blocks
Leverage the provided logger for consistent logging
Use the storage APIs (context.kv and context.vector) for persisting data

Files:

  • common/js/langchain.ts
  • common/js/langgraph.ts
🧬 Code graph analysis (2)
common/py/autogen.py (1)
common/py/langgraph.py (2)
  • welcome (9-22)
  • run (95-126)
common/py/langgraph.py (2)
common/js/langgraph.ts (1)
  • welcome (6-21)
common/py/autogen.py (2)
  • welcome (9-22)
  • run (25-61)
🪛 Ruff (0.13.3)
common/py/autogen.py
  • 57: Do not catch blind exception: Exception (BLE001)
  • 58: Use logging.exception instead of logging.error (TRY400)

common/py/langgraph.py
  • 49: f-string without any placeholders; remove the extraneous f prefix (F541)
  • 108: Unnecessary key check before dictionary access; replace with dict.get (RUF019)
  • 122: Do not catch blind exception: Exception (BLE001)
  • 123: Use logging.exception instead of logging.error (TRY400)
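The RUF019, BLE001, and TRY400 findings all have mechanical fixes. The handler below is a generic sketch of those fixes; lookup_user and its state dict are illustrative names, not code from the PR:

```python
import logging

logger = logging.getLogger(__name__)

def lookup_user(state: dict) -> str:
    # RUF019: use dict.get with a default instead of checking
    # `if "input" in state` before indexing.
    user_input = state.get("input", "Hello!")
    try:
        return user_input.upper()
    except (AttributeError, TypeError):
        # BLE001: catch the specific exceptions you expect, not bare Exception.
        # TRY400: logging.exception records the traceback automatically,
        # unlike logging.error, so prefer it inside except blocks.
        logger.exception("lookup failed")
        return "Sorry, an error occurred."

print(lookup_user({"input": "hi"}))
# → HI
print(lookup_user({}))
# → HELLO!
```

A non-string input (e.g. {"input": 5}) exercises the except path and returns the error string while logging the full traceback.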

🔇 Additional comments (3)
common/js/langchain.ts (1)

39-45: Prompt adjustments look good

Wording and structure remain coherent after removing the temperature override; nothing else looks off.

common/py/autogen.py (1)

32-40: Temperature removal aligns with the client change

OpenAIChatCompletionClient is now created with just the model, matching the updated signature—looks consistent across the flow.

common/js/langgraph.ts (1)

39-45: Prompt restructuring looks solid

The explicit Human: cue matches the new formatting elsewhere; removal of the temperature parameter keeps the client consistent.


Comment @coderabbitai help to get the list of available commands and usage tips.

@jhaynie jhaynie merged commit 62aff06 into main Oct 4, 2025
2 checks passed
@jhaynie jhaynie deleted the temp-fixes branch October 4, 2025 20:48
