
@Cartine-dev

Refs: #522

📝 Description

This pull request introduces support for GitHub Models as a new LLM provider in Void. It addresses the request to use models provided by GitHub, letting users on free or student plans authenticate with their GitHub account and experiment with a variety of powerful AI models.

The implementation leverages the existing OpenAI-compatible framework, making it a seamless and maintainable addition to the current provider architecture.
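For reviewers unfamiliar with the provider architecture, the new entry can be pictured roughly like this. This is an illustrative sketch only — the type and field names below are assumptions, not Void's actual settings shape; only the endpoint, the PAT scope, and the model names come from this PR:

```typescript
// Hypothetical shape of an OpenAI-compatible provider entry.
// `ProviderSettings` and its fields are illustrative, not Void's real API.
type ProviderSettings = {
  endpoint: string;
  apiKeyInstructions: string;
  models: string[];
};

const githubModels: ProviderSettings = {
  // Official GitHub Models inference endpoint used by this PR.
  endpoint: 'https://models.github.ai/inference',
  apiKeyInstructions:
    'Create a GitHub Personal Access Token (PAT) with models:read permissions.',
  // Predefined models listed in the description, with provider-prefixed IDs.
  models: ['openai/gpt-4.1', 'deepseek/deepseek-r1', 'xai/grok-3'],
};
```

Because the endpoint is OpenAI-compatible, the rest of the request/response plumbing reuses the existing framework unchanged.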

✨ Features Implemented

  • New githubModels Provider: A new provider has been added, configured to use the official https://models.github.ai/inference endpoint.
  • PAT Authentication: Users can authenticate by providing a GitHub Personal Access Token (PAT) with models:read permissions in the settings. Clear instructions are provided in the UI.
  • Predefined Model List: A list of available GitHub Models (e.g., openai/gpt-4.1, deepseek/deepseek-r1, xai/grok-3) is included, along with their respective capabilities.
  • Custom Rate Limit Handling: Specific error handling has been implemented for GitHub's 429 Too Many Requests status code. The system now provides user-friendly messages for different rate-limiting scenarios (per minute, per day, concurrent requests), guiding the user on how to proceed.
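The custom 429 handling described above can be sketched as a small mapping from the error detail GitHub returns to a user-facing message. This is a minimal illustration under assumptions — the detail strings matched on and the exact messages are hypothetical, not the PR's actual code:

```typescript
// Illustrative sketch of per-scenario 429 handling; the matched substrings
// ('per minute', 'per day', 'concurrent') are assumed, not GitHub's exact wording.
function rateLimitMessage(status: number, detail: string): string {
  if (status !== 429) {
    return 'Unexpected error from GitHub Models.';
  }
  const d = detail.toLowerCase();
  if (d.includes('per minute')) {
    return 'Rate limit reached: too many requests per minute. Wait a moment and try again.';
  }
  if (d.includes('per day')) {
    return 'Daily rate limit reached for your GitHub plan. Try again tomorrow or upgrade your plan.';
  }
  if (d.includes('concurrent')) {
    return 'Too many concurrent requests. Wait for in-flight requests to finish before retrying.';
  }
  return 'GitHub Models rate limit exceeded. Please try again later.';
}
```

The point is simply that each rate-limiting scenario gets its own actionable message instead of a generic "429 Too Many Requests".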

🧪 How to Test

  1. Pull this branch and run the application.
  2. Navigate to Void Settings > Providers.
  3. Click "Add Provider" and select "GitHub Models".
  4. Follow the instructions to create a GitHub Personal Access Token (PAT) with models:read scope.
  5. Enter the generated PAT into the API Key field.
  6. Navigate to a feature, such as Chat, and select one of the newly available models (e.g., openai/gpt-4.1 (githubmodels)).
  7. Send a prompt and verify that a valid response is received.
  8. (Optional) To test the rate-limiting, you can make rapid successive requests to trigger a 429 error and verify that the specific error message is displayed.

✅ Contributor Checklist

  • I have read and followed the project's contribution guidelines.
  • My code follows the code style of this project.
  • I have tested my changes locally.
  • My commit messages are formatted according to the Conventional Commits specification.

Hey @animeshlego5 and @andrewpareles, I've implemented the support for GitHub Models as discussed in issue #522. This approach integrates with the existing OpenAI-compatible infrastructure and should be straightforward to review. Let me know your thoughts!

