(Feature Request) Allow setting a custom OpenAI API endpoint to experiment with locally hosted LLMs #32

@sigmondkukla

Some open-source, locally run LLM servers can emulate key features of the OpenAI API so that they "appear" to an application as GPT. Could you add support for a custom API endpoint so we can experiment with this and see whether something like WizardLM or Falcon is up to the task of generating commands?
I envision an input field in the Settings window where the default OpenAI API endpoint URL could be replaced with a custom one.
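To illustrate, here is a minimal sketch of how such a setting could be consumed, assuming the app uses the official OpenAI Python client; the local URL, model name, and settings key below are placeholders, not actual values from this project:

```python
from openai import OpenAI

# Hypothetical value read from the proposed Settings field.
# Defaults to the official OpenAI endpoint, but can point at any
# OpenAI-compatible server (e.g. a local llama.cpp or LM Studio instance).
api_base = "http://localhost:8000/v1"

client = OpenAI(
    base_url=api_base,              # custom endpoint instead of api.openai.com
    api_key="not-needed-locally",   # many local servers ignore the key entirely
)

response = client.chat.completions.create(
    model="wizardlm",  # whatever model name the local server exposes
    messages=[{"role": "user", "content": "List all files in the current directory."}],
)
print(response.choices[0].message.content)
```

Since most local servers implement the same `/v1/chat/completions` route, swapping only the base URL (and optionally the model name) should be enough to keep the rest of the request code unchanged.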
