HexProperty's Automated Dialog Response System with AI Integration
- Automatically respond to common VS Code dialogs
- AI-powered responses using multiple LLM providers
- Configurable provider selection and model choices
- Cost tracking for AI usage
- Custom provider support through JSON configuration
- Secure API key management through environment variables
To get started:

- Install the extension
- Copy `.env.example` to `.env` and add your API keys:

```
OPENROUTER_API_KEY=your_openrouter_api_key_here
TOGETHER_API_KEY=your_together_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here
```
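For reference, here is a minimal sketch of how the extension might pick these keys up at runtime. It assumes the `dotenv` package and the `<PROVIDER>_API_KEY` naming convention shown above; both are assumptions rather than details confirmed by the extension source.

```typescript
import * as dotenv from "dotenv";
import * as path from "path";

// Sketch only: assumes keys live in a .env file at the workspace root and
// follow the <PROVIDER>_API_KEY naming convention.
export function loadApiKey(workspaceRoot: string, providerId: string): string | undefined {
  // Load the .env file into process.env (existing variables are not overwritten).
  dotenv.config({ path: path.join(workspaceRoot, ".env") });

  // e.g. "openrouter" -> "OPENROUTER_API_KEY"
  return process.env[`${providerId.toUpperCase()}_API_KEY`];
}
```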
The extension contributes the following settings:

- `hexQuickResponder.autoRespond`: Enable/disable automatic dialog responses
- `hexQuickResponder.useAi`: Enable/disable AI processing for unknown dialogs
- `hexQuickResponder.selectedProvider`: Selected LLM provider for AI processing
- `hexQuickResponder.selectedModel`: Selected model for the current provider
- `hexQuickResponder.responses`: Mapping of questions to their automatic responses
- `hexQuickResponder.customProviders`: List of custom LLM provider configurations
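These settings can be read through the standard VS Code configuration API. The snippet below only illustrates the namespace and keys listed above; the default values are assumptions.

```typescript
import * as vscode from "vscode";

// Sketch only: reads the hexQuickResponder settings; defaults are illustrative.
const config = vscode.workspace.getConfiguration("hexQuickResponder");

const autoRespond = config.get<boolean>("autoRespond", true);
const useAi = config.get<boolean>("useAi", false);
const selectedProvider = config.get<string>("selectedProvider", "openrouter");
const selectedModel = config.get<string>("selectedModel", "");
const responses = config.get<Record<string, string>>("responses", {});
const customProviders = config.get<object[]>("customProviders", []);
```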
The following LLM providers are supported out of the box:

- OpenRouter.ai
  - Models: Qwen 32B, Claude 2
  - Features: Wide model selection, competitive pricing
- Together.ai
  - Models: Llama 2 70B
  - Features: Open source models, cost-effective
- Anthropic Direct
  - Models: Claude 2, Claude Instant
  - Features: High performance, large context window
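The hosted providers differ mainly in their base URL and authentication header. As a rough illustration, a request to OpenRouter's OpenAI-compatible chat completions endpoint could look like the sketch below; the model id and response handling are assumptions, not code from the extension.

```typescript
// Sketch only: the model id is a placeholder and error handling is omitted.
async function askOpenRouter(question: string, apiKey: string): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "qwen/qwen-2.5-32b-instruct", // placeholder: use the selected model id
      messages: [{ role: "user", content: question }],
    }),
  });
  const data = (await res.json()) as any;
  return data.choices[0].message.content;
}
```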
To add a custom LLM provider:

- Open the Command Palette > "Hex: Add Custom LLM Provider"
- Enter the provider configuration in JSON format:

```json
{
  "id": "custom",
  "name": "Custom Provider",
  "baseUrl": "https://api.custom-provider.com",
  "headerTemplate": {
    "Content-Type": "application/json"
  },
  "models": [
    {
      "id": "model-1",
      "name": "Model One",
      "contextLength": 4096,
      "costPer1kTokens": 0.001,
      "description": "Description of the model"
    }
  ],
  "defaultModel": "model-1"
}
```

- Add the corresponding API key to the `.env` file:

```
CUSTOM_API_KEY=your_api_key_here
```
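For clarity, the JSON above corresponds roughly to the following TypeScript shape. Only the field names and types come from the example configuration; the interface names themselves are illustrative.

```typescript
// Sketch only: field names mirror the example JSON above; interface names are invented.
interface CustomModel {
  id: string;
  name: string;
  contextLength: number;   // maximum context size in tokens
  costPer1kTokens: number; // price per 1,000 tokens
  description: string;
}

interface CustomProvider {
  id: string;
  name: string;
  baseUrl: string;                        // e.g. "https://api.custom-provider.com"
  headerTemplate: Record<string, string>; // headers sent with every request
  models: CustomModel[];
  defaultModel: string;                   // must match one of the model ids
}
```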
The extension provides the following commands:

- `Hex: Quick Respond to Dialog` (Ctrl+Alt+H): Manually trigger a dialog response
- `Hex: Add Quick Response Mapping`: Add a new question/response mapping
- `Hex: Add Custom LLM Provider`: Add a new provider configuration
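Commands can also be triggered programmatically, for example from another extension or a task. The command id below is hypothetical; the real ids are declared in the extension's package.json.

```typescript
import * as vscode from "vscode";

// Sketch only: "hexQuickResponder.quickRespond" is a hypothetical command id.
export async function triggerQuickRespond(): Promise<void> {
  await vscode.commands.executeCommand("hexQuickResponder.quickRespond");
}
```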
The AI responses follow a structured approach:
- Understand the core problem
- Break it down into actionable steps
- Execute with clear direction
This ensures responses are focused on getting things done efficiently.
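One way this approach could be expressed is as a fixed system prompt sent with every AI request; the wording below is an illustration, not the extension's actual prompt.

```typescript
// Sketch only: illustrative system prompt encoding the three-step approach.
const SYSTEM_PROMPT = [
  "You are answering a VS Code dialog on the user's behalf.",
  "1. Understand the core problem the dialog raises.",
  "2. Break it down into actionable steps.",
  "3. Reply with a single, clear, actionable response.",
].join("\n");
```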
- API keys are stored securely in the .env file
- Environment variables are used to prevent key exposure
- Custom provider configurations are stored in VS Code settings
- Cost per request is displayed after each AI response
- Token usage tracking helps monitor API consumption
- Model selection allows choosing cost-effective options
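As an illustration of how the displayed cost could be derived from the `costPer1kTokens` field in the provider configuration, assuming a single blended rate per model (providers that price prompt and completion tokens separately would need two rates):

```typescript
// Sketch only: estimates request cost from reported token usage and a single
// blended costPer1kTokens rate.
function estimateCostUsd(
  promptTokens: number,
  completionTokens: number,
  costPer1kTokens: number
): number {
  return ((promptTokens + completionTokens) / 1000) * costPer1kTokens;
}

// Example: 1,200 prompt + 300 completion tokens at 0.001 per 1k tokens
// => (1500 / 1000) * 0.001 = 0.0015
console.log(estimateCostUsd(1200, 300, 0.001)); // 0.0015
```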