Vercel AI SDK Integration
Overview
The NextAI starter kit leverages the Vercel AI SDK to provide a powerful and flexible foundation for building AI-powered applications. The SDK enables seamless integration with various AI models and provides built-in support for streaming responses, error handling, and more.
Available Models
The starter kit supports multiple AI providers out of the box:
- OpenAI
- Anthropic
- Cohere
- Google AI
- Mistral AI
- And more...
You can configure which models to use by setting the appropriate API keys in your `.env` file.
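For example, entries for two providers might look like this (the variable names are illustrative; check your kit's environment template for the exact keys):

```
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=...
```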
Core Features
Streaming Responses
The starter kit implements streaming responses using the Vercel AI SDK's `createDataStreamResponse` function. This enables real-time message updates and a better user experience.
```typescript
return createDataStreamResponse({
  execute: (dataStream) => {
    const result = streamText({
      model: myProvider.languageModel(selectedChatModel),
      system: systemPrompt({ selectedChatModel }),
      messages,
      // ... other configuration
    });

    // Pipe the model's token stream into the response's data stream.
    result.mergeIntoDataStream(dataStream);
  },
});
```
AI Tools Integration
The starter kit includes several built-in AI tools that can be used in your chat interactions:
- Weather information retrieval
- Document creation and management
- Image generation
- Suggestion system
- N8N workflow integration
These tools are automatically available when using the chat model with reasoning capabilities.
Message Management
All chat messages are automatically saved to the database and can be retrieved for future reference. The system supports:
- Message persistence
- Chat history
- Message voting
- Privacy controls (public/private chats)
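As a rough sketch, persistence, history, and per-chat privacy can be modeled like this (the types and in-memory store are illustrative stand-ins for the kit's actual database layer):

```typescript
// Illustrative sketch of the persistence model; the types and in-memory
// store below stand in for the starter kit's actual database layer.
type Role = "user" | "assistant";

interface ChatMessage {
  chatId: string;
  role: Role;
  content: string;
  visibility: "public" | "private"; // per-chat privacy control
}

const store: ChatMessage[] = [];

// Persist a message (the kit saves to its database instead).
function saveMessage(message: ChatMessage): void {
  store.push(message);
}

// Retrieve the chat history for a given chat, in insertion order.
function getChatHistory(chatId: string): ChatMessage[] {
  return store.filter((m) => m.chatId === chatId);
}
```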
Customization
Adding New Models
You can add support for new AI models by extending the `myProvider` configuration in `lib/ai/models.ts`:
```typescript
import { openai } from "@ai-sdk/openai"; // example provider import

export const myProvider = {
  languageModel: (modelId: string) => {
    // Map your own model ids to concrete provider models (ids are illustrative).
    return modelId === "chat-model-large" ? openai("gpt-4o") : openai("gpt-4o-mini");
  },
};
```
Creating Custom Tools
To add new AI tools:
- Create a new tool file in `lib/ai/tools/`
- Implement the tool's logic
- Add the tool to the `experimental_activeTools` array in the chat configuration
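The steps above can be sketched as follows. In the kit itself you would define the tool with the SDK's `tool()` helper and a zod parameter schema; the time-zone example here is an illustrative assumption, not one of the built-in tools:

```typescript
// Pure helper holding the tool's logic, kept separate so it is easy to test.
function formatTime(date: Date, timeZone: string): string {
  return new Intl.DateTimeFormat("en-US", {
    timeZone,
    hour: "2-digit",
    minute: "2-digit",
  }).format(date);
}

// Mirrors the shape the SDK expects for a tool: a description plus an
// execute function (with the SDK, parameters would be a zod schema,
// e.g. z.object({ timeZone: z.string() })).
const getCurrentTime = {
  description: "Get the current time in a given IANA time zone",
  execute: async ({ timeZone }: { timeZone: string }) => ({
    timeZone,
    time: formatTime(new Date(), timeZone),
  }),
};
```

Once defined, the tool is enabled by adding its name to `experimental_activeTools` in the chat configuration.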
System Prompts
Customize the AI's behavior by modifying the system prompts in `lib/ai/prompts.ts`. This allows you to:
- Define the AI's personality
- Set specific instructions
- Configure tool usage parameters
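A minimal sketch of such a helper, in the spirit of `lib/ai/prompts.ts` (the prompt text and the "reasoning" model-id check are assumptions, not the kit's actual prompts):

```typescript
// Base personality shared by all models; illustrative wording.
const basePrompt = "You are a friendly assistant. Keep responses concise.";

function systemPrompt({ selectedChatModel }: { selectedChatModel: string }): string {
  // Reasoning-capable models get extra instructions about tool usage.
  if (selectedChatModel.includes("reasoning")) {
    return `${basePrompt}\nUse the available tools when they help answer the user.`;
  }
  return basePrompt;
}
```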
Best Practices
- Always handle errors gracefully using the provided error handling mechanisms
- Implement rate limiting for API calls
- Use environment variables for sensitive configuration
- Test new models and tools thoroughly before deployment
- Monitor usage and costs for different AI providers
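For example, rate limiting can be sketched as a simple in-memory sliding window (the window and limit are illustrative; a production deployment would typically use a shared store such as Redis instead):

```typescript
// A minimal in-memory sliding-window rate limiter; the window and limit
// below are illustrative, not the starter kit's defaults.
const WINDOW_MS = 60_000; // 1-minute window
const MAX_REQUESTS = 20; // max requests per user per window

const hits = new Map<string, number[]>();

function allowRequest(userId: string, now: number = Date.now()): boolean {
  // Keep only timestamps that are still inside the window.
  const recent = (hits.get(userId) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_REQUESTS) {
    return false; // over the limit: reject
  }
  recent.push(now);
  hits.set(userId, recent);
  return true;
}
```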
Security Considerations
- API keys are protected through environment variables
- Authentication is required for accessing chat features
- User data is properly sanitized before storage
- Rate limiting prevents abuse
For more detailed information about specific features or advanced usage, check out the other sections of the documentation.