
Add documentation for LLM providers (Anthropic, OpenAI, Ollama) #53


Merged
merged 1 commit into from Mar 12, 2025
8 changes: 8 additions & 0 deletions docs/providers/_category_.json
@@ -0,0 +1,8 @@
{
  "label": "Providers",
  "position": 4,
  "link": {
    "type": "doc",
    "id": "providers/index"
  }
}
70 changes: 70 additions & 0 deletions docs/providers/anthropic.md
@@ -0,0 +1,70 @@
---
sidebar_position: 2
---

# Anthropic (Claude)

[Anthropic](https://www.anthropic.com/) is the company behind the Claude family of large language models, known for their strong reasoning capabilities, long context windows, and robust tool-calling support.

## Setup

To use Claude models with MyCoder, you need an Anthropic API key:

1. Create an account at [Anthropic Console](https://console.anthropic.com/)
2. Navigate to the API Keys section and create a new API key
3. Set the API key as an environment variable or in your configuration file

### Environment Variables

You can set the Anthropic API key as an environment variable:

```bash
export ANTHROPIC_API_KEY=your_api_key_here
```

### Configuration

Configure MyCoder to use Anthropic's Claude in your `mycoder.config.js` file:

```javascript
export default {
  // Provider selection
  provider: 'anthropic',
  model: 'claude-3-7-sonnet-20250219',

  // Optional: Set API key directly (environment variable is preferred)
  // anthropicApiKey: 'your_api_key_here',

  // Other MyCoder settings
  maxTokens: 4096,
  temperature: 0.7,
  // ...
};
```

## Supported Models

Anthropic offers several Claude models with different capabilities and price points:

- `claude-3-7-sonnet-20250219` (recommended) - Strong reasoning and tool-calling capabilities with 200K context
- `claude-3-5-sonnet-20240620` - Balanced performance and cost with 200K context
- `claude-3-opus-20240229` - Most capable model with 200K context
- `claude-3-haiku-20240307` - Fastest and most cost-effective with 200K context

## Best Practices

- Claude models excel at complex reasoning tasks and multi-step planning
- They have strong tool-calling capabilities, making them ideal for MyCoder workflows
- Claude models have a 200K token context window, allowing for large codebases to be processed
- For cost-sensitive applications, consider using Claude Haiku for simpler tasks

## Troubleshooting

If you encounter issues with Anthropic's Claude:

- Verify your API key is correct and has sufficient quota
- Check that you're using a supported model name
- For tool-calling issues, ensure your functions are properly formatted
- Monitor your token usage to avoid unexpected costs
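A quick local sanity check can catch the two most common key problems (missing key, wrong key type) before any API call is made. The sketch below is illustrative — the function name is not part of MyCoder, and the `sk-ant-` prefix check is a heuristic based on the current Anthropic key format:

```shell
# Sketch: classify an Anthropic API key without calling the API.
# "sk-ant-" is the current Anthropic key prefix (heuristic only).
check_anthropic_key() {
  k="$1"
  if [ -z "$k" ]; then
    echo "missing"
  elif [ "${k#sk-ant-}" = "$k" ]; then
    # prefix-strip left the string unchanged, so the prefix is absent
    echo "unexpected-format"
  else
    echo "ok"
  fi
}

# Check the key currently in the environment:
check_anthropic_key "${ANTHROPIC_API_KEY:-}"
```

If this prints `unexpected-format`, you may have pasted an OpenAI key or a truncated value into `ANTHROPIC_API_KEY`.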

For more information, visit the [Anthropic Documentation](https://docs.anthropic.com/).
54 changes: 54 additions & 0 deletions docs/providers/index.mdx
@@ -0,0 +1,54 @@
---
sidebar_position: 1
---

# LLM Providers

MyCoder supports multiple Large Language Model (LLM) providers, giving you the flexibility to choose the best solution for your needs. This section documents how to configure and use each supported provider.

## Supported Providers

MyCoder currently supports the following LLM providers:

- [**Anthropic**](./anthropic.md) - Claude models from Anthropic
- [**OpenAI**](./openai.md) - GPT models from OpenAI
- [**Ollama**](./ollama.md) - Self-hosted open-source models via Ollama

## Configuring Providers

Each provider has its own specific configuration requirements, typically involving:

1. Setting API keys or connection details
2. Selecting a specific model
3. Configuring provider-specific parameters

You can configure the provider in your `mycoder.config.js` file. Here's a basic example:

```javascript
export default {
  // Provider selection
  provider: 'anthropic',
  model: 'claude-3-7-sonnet-20250219',

  // Other MyCoder settings
  // ...
};
```

## Provider Selection Considerations

When choosing which provider to use, consider:

- **Performance**: Different providers have different capabilities and performance characteristics
- **Cost**: Pricing varies significantly between providers
- **Features**: Some models have better support for specific features like tool calling
- **Availability**: Self-hosted options like Ollama provide more control but require setup
- **Privacy**: Self-hosted options may offer better privacy for sensitive work

## Provider-Specific Documentation

For detailed instructions on setting up each provider, see the provider-specific pages:

- [Anthropic Configuration](./anthropic.md)
- [OpenAI Configuration](./openai.md)
- [Ollama Configuration](./ollama.md)
107 changes: 107 additions & 0 deletions docs/providers/ollama.md
@@ -0,0 +1,107 @@
---
sidebar_position: 4
---

# Ollama

[Ollama](https://ollama.ai/) is a platform for running open-source large language models locally. It allows you to run various models on your own hardware, providing privacy and control over your AI interactions.

## Setup

To use Ollama with MyCoder:

1. Install Ollama from [ollama.ai](https://ollama.ai/)
2. Start the Ollama service
3. Pull a model that supports tool calling
4. Configure MyCoder to use Ollama

### Installing Ollama

Follow the installation instructions on the [Ollama website](https://ollama.ai/) for your operating system.

For macOS and Linux:
```bash
curl -fsSL https://ollama.ai/install.sh | sh
```

For Windows, download the installer from the Ollama website.

### Pulling a Model

After installing Ollama, you need to pull a model that supports tool calling. **Important: Most Ollama models do not support tool calling**, which is required for MyCoder.

A recommended model that supports tool calling is:

```bash
ollama pull medragondot/Sky-T1-32B-Preview:latest
```

### Environment Variables

You can set the Ollama base URL as an environment variable (defaults to http://localhost:11434 if not set):

```bash
export OLLAMA_BASE_URL=http://localhost:11434
```
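In your own scripts, the same default-with-override resolution can be expressed as a one-line helper (the helper name is illustrative, not part of MyCoder):

```shell
# Resolve the Ollama base URL the way MyCoder does:
# use OLLAMA_BASE_URL when set, otherwise fall back to the default port.
ollama_base_url() {
  echo "${OLLAMA_BASE_URL:-http://localhost:11434}"
}

# Example: list locally pulled models via the Ollama HTTP API.
# curl -s "$(ollama_base_url)/api/tags"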

### Configuration

Configure MyCoder to use Ollama in your `mycoder.config.js` file:

```javascript
export default {
  // Provider selection
  provider: 'ollama',
  model: 'medragondot/Sky-T1-32B-Preview:latest',

  // Optional: Custom base URL (defaults to http://localhost:11434)
  // ollamaBaseUrl: 'http://localhost:11434',

  // Other MyCoder settings
  maxTokens: 4096,
  temperature: 0.7,
  // ...
};
```

## Tool Calling Support

**Important**: For MyCoder to function properly, the Ollama model must support tool calling (function calling). Most open-source models available through Ollama **do not** support this feature yet.

Confirmed models with tool calling support:

- `medragondot/Sky-T1-32B-Preview:latest` - Recommended for MyCoder

If using other models, verify their tool calling capabilities before attempting to use them with MyCoder.
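Recent Ollama builds print a `Capabilities` section in `ollama show` output; a small filter over that output can serve as a preflight check. This relies on the current output format, so treat it as a heuristic and verify against your Ollama version:

```shell
# Heuristic: read `ollama show <model>` output from stdin and report
# whether a standalone "tools" entry appears (e.g. under Capabilities).
supports_tools() {
  if grep -qiE '^[[:space:]]*tools[[:space:]]*$'; then
    echo "yes"
  else
    echo "no"
  fi
}

# Example usage (requires a running Ollama install):
# ollama show medragondot/Sky-T1-32B-Preview:latest | supports_tools
```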

## Hardware Requirements

Running large language models locally requires significant hardware resources:

- Minimum 16GB RAM (32GB+ recommended)
- GPU with at least 8GB VRAM for optimal performance
- SSD storage for model files (models can be 5-20GB each)

## Best Practices

- Start with smaller models if you have limited hardware
- Ensure your model supports tool calling before using with MyCoder
- Run on a machine with a dedicated GPU for better performance
- Consider using a cloud provider's API for resource-intensive tasks if local hardware is insufficient

## Troubleshooting

If you encounter issues with Ollama:

- Verify the Ollama service is running (`ollama serve`)
- Check that you've pulled the correct model
- Ensure the model supports tool calling
- Verify your hardware meets the minimum requirements
- Check Ollama logs for specific error messages

For more information, visit the [Ollama Documentation](https://github.com/ollama/ollama/tree/main/docs).
77 changes: 77 additions & 0 deletions docs/providers/openai.md
@@ -0,0 +1,77 @@
---
sidebar_position: 3
---

# OpenAI

[OpenAI](https://openai.com/) provides a suite of powerful language models, including the GPT family, which offer strong capabilities for code generation, analysis, and tool use.

## Setup

To use OpenAI models with MyCoder, you need an OpenAI API key:

1. Create an account at [OpenAI Platform](https://platform.openai.com/)
2. Navigate to the API Keys section and create a new API key
3. Set the API key as an environment variable or in your configuration file

### Environment Variables

You can set the OpenAI API key as an environment variable:

```bash
export OPENAI_API_KEY=your_api_key_here
```

Optionally, if you're using an organization-based account:

```bash
export OPENAI_ORGANIZATION=your_organization_id
```

### Configuration

Configure MyCoder to use OpenAI in your `mycoder.config.js` file:

```javascript
export default {
  // Provider selection
  provider: 'openai',
  model: 'gpt-4o',

  // Optional: Set API key directly (environment variable is preferred)
  // openaiApiKey: 'your_api_key_here',
  // openaiOrganization: 'your_organization_id',

  // Other MyCoder settings
  maxTokens: 4096,
  temperature: 0.7,
  // ...
};
```

## Supported Models

OpenAI offers several models with different capabilities:

- `gpt-4o` (recommended) - Latest model with strong reasoning and tool-calling capabilities
- `gpt-4-turbo` - Strong performance with 128K context window
- `gpt-4` - Original GPT-4 model with 8K context window
- `gpt-3.5-turbo` - More affordable option for simpler tasks

## Best Practices

- GPT-4o provides the best balance of performance and cost for most MyCoder tasks
- For complex programming tasks, use GPT-4 models rather than GPT-3.5
- The tool-calling capabilities in GPT-4o are particularly strong for MyCoder workflows
- Use the JSON response format for structured outputs when needed

## Troubleshooting

If you encounter issues with OpenAI:

- Verify your API key is correct and has sufficient quota
- Check that you're using a supported model name
- For rate limit issues, implement exponential backoff in your requests
- Monitor your token usage to avoid unexpected costs
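The backoff suggestion above can be sketched as a small shell wrapper that doubles the delay after each failed attempt (the function name and retry counts are illustrative):

```shell
# Retry a command up to $1 times, doubling the delay between attempts
# (1s, 2s, 4s, ...). Returns non-zero if every attempt fails.
retry_backoff() {
  max="$1"; shift
  attempt=1
  delay=1
  until "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
}

# Example: retry a rate-limited API call up to 5 times.
# retry_backoff 5 curl -sf https://api.openai.com/v1/models \
#   -H "Authorization: Bearer $OPENAI_API_KEY"
```

A production client would also honor the `Retry-After` header and add jitter, but this captures the core pattern.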

For more information, visit the [OpenAI Documentation](https://platform.openai.com/docs/).