[BUG] Error message in chat when try to use local LLM (Ollama) #120

Open
@BalooSP23

Description

First of all, thank you to anyone who can help me solve this.

Every time I start a chat, this error is returned: 'This app has encountered an error. The original error message is redacted to prevent data leaks. Full error details have been recorded in the logs (if you're on Streamlit Cloud, click on 'Manage app' in the lower right of your app).'

For context, my LLM and Archon both run in Docker on Unraid.
For the LLM I use Ollama, which works fine with other apps I run inside Unraid.

I really don't know where to look anymore to fix this issue.
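
For reference, the Ollama endpoint can be tested independently of Archon with a plain, tool-free request. This is only a minimal sketch; the address and model name are the ones that appear in the logs below, and the `api_key` is a dummy value since Ollama does not check it:

```python
# Minimal connectivity check against Ollama's OpenAI-compatible endpoint,
# without any tool definitions. Address and model name are taken from the
# logs below; adjust them to your setup.
import openai

client = openai.OpenAI(
    base_url="http://192.168.0.211:11434/v1",
    api_key="ollama",  # Ollama ignores the key, but the client requires one
)

resp = client.chat.completions.create(
    model="gemma3:4b",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(resp.choices[0].message.content)
```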

Steps to Reproduce

  1. Follow the procedure in the GUI
  2. Click on 'Chat'
  3. Send a message
  4. See error

Expected Behavior

The chat should return a response, but I don't know exactly what it should look like, because it has never worked yet.

Actual Behavior

See the description, plus the log:

```text
INFO:httpx:HTTP Request: GET https://iyvwmgqzmwparumsvmii.supabase.co/rest/v1/site_pages?select=%2A "HTTP/2 200 OK"
INFO:httpx:HTTP Request: GET https://iyvwmgqzmwparumsvmii.supabase.co/rest/v1/site_pages?select=id&limit=1 "HTTP/2 200 OK"
INFO:httpx:HTTP Request: GET https://iyvwmgqzmwparumsvmii.supabase.co/rest/v1/site_pages?select=%2A "HTTP/2 200 OK"
INFO:httpx:HTTP Request: GET https://iyvwmgqzmwparumsvmii.supabase.co/rest/v1/site_pages?select=id&limit=1 "HTTP/2 200 OK"
INFO:httpx:HTTP Request: GET https://iyvwmgqzmwparumsvmii.supabase.co/rest/v1/site_pages?select=%2A "HTTP/2 200 OK"
INFO:httpx:HTTP Request: GET https://iyvwmgqzmwparumsvmii.supabase.co/rest/v1/site_pages?select=count&metadata-%3E%3Esource=eq.pydantic_ai_docs "HTTP/2 206 Partial Content"
INFO:httpx:HTTP Request: GET https://iyvwmgqzmwparumsvmii.supabase.co/rest/v1/site_pages?select=url&metadata-%3E%3Esource=eq.pydantic_ai_docs "HTTP/2 200 OK"
INFO:openai._base_client:Retrying request to /chat/completions in 0.399786 seconds
INFO:httpx:HTTP Request: POST http://192.168.0.211:11434/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-04-19 13:05:42.763 Uncaught app execution
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/streamlit/runtime/scriptrunner/exec_code.py", line 88, in exec_func_with_error_handling
result = func()
^^^^^^
File "/usr/local/lib/python3.12/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 579, in code_to_exec
exec(code, module.__dict__)
File "/app/streamlit_ui.py", line 114, in
asyncio.run(main())
File "/usr/local/lib/python3.12/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/app/streamlit_ui.py", line 93, in main
await chat_tab()
File "/app/streamlit_pages/chat.py", line 81, in chat_tab
async for chunk in run_agent_with_streaming(user_input):
File "/app/streamlit_pages/chat.py", line 36, in run_agent_with_streaming
async for msg in agentic_flow.astream(
File "/usr/local/lib/python3.12/site-packages/langgraph/pregel/init.py", line 2007, in astream
async for _ in runner.atick(
File "/usr/local/lib/python3.12/site-packages/langgraph/pregel/runner.py", line 527, in atick
_panic_or_proceed(
File "/usr/local/lib/python3.12/site-packages/langgraph/pregel/runner.py", line 619, in _panic_or_proceed
raise exc
File "/usr/local/lib/python3.12/site-packages/langgraph/pregel/retry.py", line 128, in arun_with_retry
return await task.proc.ainvoke(task.input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langgraph/utils/runnable.py", line 532, in ainvoke
input = await step.ainvoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langgraph/utils/runnable.py", line 320, in ainvoke
ret = await asyncio.create_task(coro, context=context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/archon/archon_graph.py", line 140, in advisor_with_examples
result = await advisor_agent.run(state['latest_user_message'], deps=deps)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/pydantic_ai/agent.py", line 340, in run
end_result, _ = await graph.run(
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/pydantic_graph/graph.py", line 187, in run
next_node = await self.next(next_node, history, state=state, deps=deps, infer_name=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/pydantic_graph/graph.py", line 263, in next
next_node = await node.run(ctx)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 254, in run
model_response, request_usage = await agent_model.request(ctx.state.message_history, model_settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/pydantic_ai/models/openai.py", line 167, in request
response = await self._completions_create(messages, False, cast(OpenAIModelSettings, model_settings or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/pydantic_ai/models/openai.py", line 203, in _completions_create
return await self.client.chat.completions.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 1720, in create
return await self._post(
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1849, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1543, in request
return await self._request(
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1644, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'registry.ollama.ai/library/gemma3:4b does not support tools', 'type': 'api_error', 'param': None, 'code': None}}
During task with name 'advisor_with_examples' and id '27e3fe46-4b78-e50c-f3f0-0cc50d661436'
13:05:42.703 reasoner run prompt=
User AI Agent Request: Build me an AI agent that can sear...o creating this agent for the user in the scope document.

13:05:42.703 preparing model and tools run_step=1
13:05:42.704 model request
13:05:42.706 advisor_agent run prompt=Build me an AI agent that can search the web with the Brave API.
13:05:42.707 preparing model and tools run_step=1
13:05:42.707 model request
```
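
The key line in the log is the 400 response from Ollama's OpenAI-compatible endpoint: `registry.ollama.ai/library/gemma3:4b does not support tools`. Archon's advisor agent apparently sends tool definitions with the chat request, and Ollama rejects it because gemma3:4b has no tool-calling support. As far as I can tell, the failure can be reproduced outside Archon with something like the sketch below (the endpoint address and model name are taken from the log above; the tool schema is a hypothetical placeholder, since any tool definition should trigger the same rejection):

```python
# Sketch reproducing the 400 outside Archon: send a chat completion with a
# tool definition to a model that does not support tool calling. The
# "search_web" tool below is purely illustrative.
import openai

client = openai.OpenAI(
    base_url="http://192.168.0.211:11434/v1",
    api_key="ollama",
)

try:
    client.chat.completions.create(
        model="gemma3:4b",
        messages=[{"role": "user", "content": "Hello"}],
        tools=[{
            "type": "function",
            "function": {
                "name": "search_web",  # hypothetical placeholder tool
                "description": "Search the web for a query.",
                "parameters": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            },
        }],
    )
except openai.BadRequestError as e:
    print(e)  # expected: Error code: 400 - '... does not support tools'
```

If that reproduces the error, the likely fix is to point Archon at an Ollama model that supports tool calling, rather than gemma3:4b.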

Screenshots

(Eight screenshots attached in the original issue.)

Environment

  • OS: Docker inside Unraid
  • Python Version: Python 3.13 (though the traceback above shows Python 3.12 inside the container)

Additional Context

  • It happens every time, and I have already tried reinstalling Archon, but the issue persists.
