Closed
Hi,
I'm exploring MatGPT's integration capabilities with LLMs and am interested in extending its utility to custom models, particularly those deployed locally.
In Python projects, customizing the `base_url`, as in openai/openai-python#913, is a straightforward way to support custom API endpoints.
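For context, here is a minimal Python sketch of that idea: a chat-completions call where the base URL is a user-supplied parameter, so the same client code can target api.openai.com or a locally hosted, OpenAI-compatible server. The local endpoint URL and model name below are illustrative assumptions, not MatGPT code.

```python
import json
import urllib.request

def build_url(base_url):
    """Join a user-defined base URL with the chat-completions path."""
    return f"{base_url.rstrip('/')}/chat/completions"

def chat_completion(base_url, api_key, model, messages):
    """POST a chat request to any OpenAI-compatible endpoint."""
    req = urllib.request.Request(
        build_url(base_url),
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # local servers often ignore this
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Pointing at a hypothetical local server instead of api.openai.com:
# chat_completion("http://localhost:8000/v1", "none", "my-local-model",
#                 [{"role": "user", "content": "Hello"}])
```

If MatGPT exposed the base URL the same way, swapping in an in-house model would just be a configuration change.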
Although I'm not familiar with MatGPT's internals, could a similar approach be applied here to enable the use of in-house or custom LLMs via user-defined API endpoints? Any insight or guidance on this possibility would be greatly appreciated.
This relates to the use of custom assistant API endpoints, discussed here:
https://community.openai.com/t/custom-assistant-api-endpoints/567107