
Fails when processing a longer paper #5

@shenhai-ran

Description


Hello,

I am using AutoPage to generate a summary for a fairly long paper, the CLIP paper.
I use the free-tier Gemini 2.5 Flash for both the text and image LLM.

Could you please help me figure out whether there are any alternatives?

Thanks!

The error message is:

==================================================
STEP 2: Generate project page content
==================================================
2025-11-18 11:33:40,480 - camel.models.model_manager - ERROR - Error processing with model: <camel.models.gemini_model.GeminiModel object at 0x7d99ac99ee60>
2025-11-18 11:33:40,481 - camel.agents.chat_agent - ERROR - An error occurred while running model gemini-2.5-flash, index: 0
Traceback (most recent call last):
  File "/home/devs/autopage/camel/agents/chat_agent.py", line 1100, in _step_model_response
    response = self.model_backend.run(openai_messages)
  File "/home/devs/autopage/camel/models/model_manager.py", line 211, in run
    raise exc
  File "/home/devs/autopage/camel/models/model_manager.py", line 201, in run
    response = self.current_model.run(messages)
  File "/home/devs/autopage/camel/models/gemini_model.py", line 109, in run
    response = self._client.chat.completions.create(
  File "/home/.local/miniforge3/envs/autopage/lib/python3.10/site-packages/openai/_utils/_utils.py", line 279, in wrapper
    return func(*args, **kwargs)
  File "/home/.local/miniforge3/envs/autopage/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 859, in create
    return self._post(
  File "/home/.local/miniforge3/envs/autopage/lib/python3.10/site-packages/openai/_base_client.py", line 1283, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/home/.local/miniforge3/envs/autopage/lib/python3.10/site-packages/openai/_base_client.py", line 960, in request
    return self._request(
  File "/home//.local/miniforge3/envs/autopage/lib/python3.10/site-packages/openai/_base_client.py", line 1005, in _request
    return self._retry_request(
  File "/home/.local/miniforge3/envs/autopage/lib/python3.10/site-packages/openai/_base_client.py", line 1098, in _retry_request
    return self._request(
  File "/home/.local/miniforge3/envs/autopage/lib/python3.10/site-packages/openai/_base_client.py", line 1005, in _request
    return self._retry_request(
  File "/home/.local/miniforge3/envs/autopage/lib/python3.10/site-packages/openai/_base_client.py", line 1098, in _retry_request
    return self._request(
  File "/home/.local/miniforge3/envs/autopage/lib/python3.10/site-packages/openai/_base_client.py", line 1005, in _request
    return self._retry_request(
  File "/home/.local/miniforge3/envs/autopage/lib/python3.10/site-packages/openai/_base_client.py", line 1098, in _retry_request
    return self._request(
  File "/home/.local/miniforge3/envs/autopage/lib/python3.10/site-packages/openai/_base_client.py", line 1064, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 503 - [{'error': {'code': 503, 'message': 'The model is overloaded. Please try again later.', 'status': 'UNAVAILABLE'}}]

❌ Error during generation: Unable to process messages: none of the provided models run succesfully.
Traceback (most recent call last):
  File "/home/.local/miniforge3/envs/autopage/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/.local/miniforge3/envs/autopage/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/devs/autopage/ProjectPageAgent/main_pipline.py", line 379, in <module>
    main()
  File "/home/devs/autopage/ProjectPageAgent/main_pipline.py", line 213, in main
    paper_content, figures, input_token, output_token = planner.filter_raw_content(paper_content, figures)
  File "/home/devs/autopage/ProjectPageAgent/content_planner.py", line 460, in filter_raw_content
    response = self.planner_agent.step(prompt)
  File "/home/devs/autopage/camel/agents/chat_agent.py", line 613, in step
    return self._handle_step(response_format, self.single_iteration)
  File "/home/devs/autopage/camel/agents/chat_agent.py", line 683, in _handle_step
    ) = self._step_model_response(openai_messages, num_tokens)
  File "/home/devs/autopage/camel/agents/chat_agent.py", line 1111, in _step_model_response
    raise ModelProcessingError(
camel.models.model_manager.ModelProcessingError: Unable to process messages: none of the provided models run succesfully.
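The underlying failure is a transient `503 UNAVAILABLE` ("The model is overloaded") from the Gemini free tier, which the client's built-in retries did not outlast. One workaround, until the pipeline handles this itself, is to wrap the failing call in your own retry loop with exponential backoff. The sketch below is not part of AutoPage or CAMEL; `call_with_backoff` and its parameters are hypothetical names for illustration:

```python
import random
import time

def call_with_backoff(fn, max_retries=5, base_delay=2.0, retryable=(Exception,)):
    """Retry fn() with exponential backoff plus jitter on retryable errors,
    e.g. transient 503 'model is overloaded' responses from a free-tier API.

    NOTE: illustrative sketch only -- narrow `retryable` to the concrete
    exception type (e.g. openai.InternalServerError) in real use."""
    for attempt in range(max_retries):
        try:
            return fn()
        except retryable:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the original error
            # double the delay each attempt, add jitter to avoid thundering herd
            delay = base_delay * (2 ** attempt) + random.uniform(0, 1)
            time.sleep(delay)
```

For example, you could wrap the `planner_agent.step(prompt)` call from the traceback as `call_with_backoff(lambda: self.planner_agent.step(prompt), retryable=(openai.InternalServerError,))`. Alternatively, waiting a few minutes and re-running, or switching to a less loaded model/tier, often resolves 503s on its own.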
