
Request for Custom API Endpoint Support #14

Closed
@toshiakit

Description


per toshiakit/MatGPT#30

Hi,

I'm exploring MatGPT's integration capabilities with LLMs and am interested in extending its utility to custom models, particularly those deployed locally.

In Python projects, customizing the base_url (see openai/openai-python#913) is a straightforward way to support custom API endpoints.
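For context, this is roughly what that base_url override looks like with the openai-python client; the endpoint URL, API key, and model name below are placeholders for a locally hosted, OpenAI-compatible server, not anything MatGPT currently provides.

```python
from openai import OpenAI

# Point the client at a local, OpenAI-compatible endpoint.
# The URL, key, and model name are hypothetical placeholders.
client = OpenAI(
    base_url="http://localhost:8000/v1",   # e.g. a self-hosted server
    api_key="not-needed-for-local",        # many local servers ignore the key
)

response = client.chat.completions.create(
    model="my-local-model",                # whatever the local server exposes
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

The suggestion is that MatGPT could expose an equivalent user-defined endpoint setting rather than assuming the official OpenAI base URL.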

Although I'm not familiar with MatGPT's internals, could a similar approach be applied here to enable the use of in-house or custom LLMs via user-defined API endpoints? Your insights or guidance on this possibility would be greatly appreciated.

This relates to the use of custom assistant API endpoints:
https://community.openai.com/t/custom-assistant-api-endpoints/567107


Labels: enhancement (New feature or request)
