Prerequisites
Access to a Custom AI Model: You need a custom AI model that exposes an OpenAI-compatible API, either through a cloud service such as DeepInfra or a self-hosted solution.
API Endpoint URL: Obtain the API endpoint URL for your custom model; WatchWolf uses this URL to communicate with it.
API Key: If your model requires authentication, have your API key ready.
Step 1: Set Up Your Custom AI Model
For Cloud Providers (e.g., DeepInfra)
Create an Account
Sign up with a cloud provider that offers OpenAI-compatible models, such as DeepInfra.
Select and Deploy a Model
Choose the AI model you wish to use from the provider's offerings.
Deploy the model following the provider's instructions to obtain the API endpoint.
Obtain API Credentials
Generate an API key or token required to access the model.
Note down the API endpoint URL and API key provided by the service.
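Before adding the credentials to WatchWolf, you can verify them with a quick test request. The endpoint path and model name below are examples for DeepInfra; substitute the values your provider gave you:
# Example only: replace the endpoint, key, and model name with your own values
curl https://api.deepinfra.com/v1/openai/chat/completions \
  -H "Authorization: Bearer $YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "meta-llama/Meta-Llama-3.1-8B-Instruct", "messages": [{"role": "user", "content": "Hello"}]}'
A JSON response containing a "choices" array confirms that the endpoint and key work.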
For Self-Hosted Solutions (e.g., Ollama, vLLM)
Install the Model
Follow the tool's installation instructions to set up the AI model on your server or local machine.
Start the API Server
Launch the API server that exposes an OpenAI-compatible endpoint.
Ollama Example:
ollama serve
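If the model has not been downloaded yet, pull it first (llama3.1 is an example model name):
ollama pull llama3.1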
vLLM Example:
python -m vllm.entrypoints.openai.api_server --model your_model_name
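Recent vLLM releases also ship a shorter CLI entry point that starts the same OpenAI-compatible server, if your installed version supports it:
vllm serve your_model_name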
Obtain API Endpoint
Note the URL where your API server is running (e.g., http://localhost:8000 for vLLM or http://localhost:11434 for Ollama by default).
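To confirm the server is reachable and to see the exact model names it reports, you can query the standard models route (adjust the host and port to your setup; current vLLM and Ollama builds both expose it):
curl http://localhost:8000/v1/models     # vLLM default port
curl http://localhost:11434/v1/models    # Ollama default port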
Step 2: Add the Custom Model to WatchWolf
Open Models Management
In the WatchWolf app, navigate to Manage > Models.
Add a New Model
Click the "+" icon at the top of the screen to add a new model.
Enter API Endpoint and Model Details
API Endpoint: Enter the endpoint URL for your custom AI model's API.
For Cloud Providers: Use the endpoint provided by the service (e.g., https://api.deepinfra.com/v1/openai for DeepInfra's OpenAI-compatible API).
For Self-Hosted Models: Use your server's URL (e.g., http://localhost:8000).
API Key: Enter the API key or token provided by your cloud service or self-hosted setup. If no key is required, you may leave this field blank.
Model Name: Enter the model identifier exactly as your API expects it (e.g., llama3.1).
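Together, these three fields describe a standard OpenAI-style chat request. As a rough sketch of what a client sends with this configuration (placeholder values taken from the examples above; the Authorization header only applies if your model requires a key):
# Sketch only: substitute your own endpoint, key, and model name
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $YOUR_API_KEY" \
  -d '{"model": "llama3.1", "messages": [{"role": "user", "content": "Say hello"}]}'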
Step 3: Set the Model as Default
Enable Default Option
Toggle the "Default" option to set this custom model as the default for AI features within WatchWolf.
Save the Model Configuration
Click "Save" or "Add" to save your new model settings.
Note: Only the model set as Default is used for AI features such as terminal command generation, snippet generation, and metrics analysis. Ensure you enable the Default option for the model you wish to use.
Additional Tips
API Endpoint Configuration: Ensure that the API server for your custom model is running and accessible at the endpoint you specified in WatchWolf.
Model Compatibility: Verify that the custom model you are using is compatible with the OpenAI API schema to ensure smooth integration.
Security Considerations:
Cloud Providers: Keep your API keys and credentials secure. Do not share them publicly or store them in unsecured locations.
Self-Hosted Models: Since the model is running on your infrastructure, your data remains within your controlled environment, enhancing privacy and security.
Performance Monitoring: Keep an eye on CPU, GPU, and memory usage, especially with self-hosted models, since large models can consume significant resources.
Switching Models: You can add multiple custom models and switch between them by changing the default setting in the Models Management section.