diff --git a/.cursor/rules/always-applied/core-principles.mdc b/.cursor/rules/always-applied/core-principles.mdc
new file mode 100644
index 00000000..1e3f7180
--- /dev/null
+++ b/.cursor/rules/always-applied/core-principles.mdc
@@ -0,0 +1,36 @@
+---
+description: Core documentation principles and writing standards
+globs:
+alwaysApply: true
+---
+
+# Core Documentation Principles
+
+Provide developers with documentation that is quick to read, easy to follow, and immediately actionable.
+
+## Writing Style & Tone
+
+| Guideline | Why it matters |
+|-----------|---------------|
+| **Active voice** | "Connect the SDK" is clearer than "The SDK should be connected." |
+| **Present tense** | Keeps instructions straightforward (e.g., "Run" not "You will run"). |
+| **Second‑person ("you")** | Speaks directly to the reader. Reserve "we" for collaborative tutorials. |
+| **Explain intent before action** | Briefly state *why* a step is needed, then show *how*. |
+| **Concrete examples over theory** | Code snippets and visuals anchor concepts. |
+| **Consistent terminology** | Define a term once; reuse it exactly the same everywhere. |
+| **Parallel structure** | Lists and headings should follow consistent grammatical patterns. |
+| **Descriptive link text** | Use "view the guide" rather than "click here." |
+| **Comment code sparsely** | Only where intent isn't obvious from variable/function names. |
+
+> **Rule of thumb**: every sentence should either clarify *why* or *how*—if it does neither, remove or rewrite it.
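+
+A hypothetical before/after applying several of these guidelines at once (the page and link target are illustrative, not real):
+
+```mdx
+{/* Before: passive voice, future tense, vague link text */}
+The SDK will need to be installed before the widget can be used. More information can be found [here](/docs/widget).
+
+{/* After: active voice, present tense, second person, descriptive link */}
+Install the SDK so you can load the widget. For configuration options, [view the widget guide](/docs/widget).
+```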
+
+## Style Rules
+
+- **Tone**: Direct, professional, friendly
+- Break up large blocks of text with line‑breaks
+- Avoid marketing or promotional wording
+- Link to related pages when helpful, especially the **API reference** at `/fern/api-reference`
+- Use **bold** text to emphasize key names or concepts
+- **Titles**: Capitalize only the first word unless a proper noun is used
+- **Subtitles**: Begin with *Learn to …* for guides; otherwise keep them concise and factual
+- **Emojis / decorative icons**: Use only when essential for comprehension
diff --git a/.cursor/rules/always-applied/fern-components.mdc b/.cursor/rules/always-applied/fern-components.mdc
new file mode 100644
index 00000000..33b8cb82
--- /dev/null
+++ b/.cursor/rules/always-applied/fern-components.mdc
@@ -0,0 +1,224 @@
+---
+description: Fern documentation framework components and features
+globs:
+alwaysApply: true
+---
+
+# Fern Components & Framework Features
+
+Fern is our documentation framework. Use Fern-specific components and features for enhanced functionality.
+
+## Code Blocks & Syntax Highlighting
+
+### Multi-language Code Blocks with Tabs
+Use `<CodeBlocks>` for multiple language examples that automatically synchronize:
+
+```mdx
+<CodeBlocks>
+```typescript title="TypeScript SDK"
+import { VapiClient } from "@vapi-ai/server-sdk";
+
+const client = new VapiClient({ token: process.env.VAPI_API_KEY });
+```
+```python title="Python SDK"
+from vapi import Vapi
+
+client = Vapi(token=os.getenv("VAPI_API_KEY"))
+```
+```bash title="cURL"
+curl -X POST "https://api.vapi.ai/assistant" \
+  -H "Authorization: Bearer $VAPI_API_KEY"
+```
+</CodeBlocks>
+```
+
+### Code Block Features
+Enhance code blocks with these attributes:
+- `title="filename.ext"` - Add file title
+- `{2-4}` - Highlight specific lines
+- `focus` - Focus on specific lines
+- `maxLines=10` - Limit visible lines (default: 20)
+- `wordWrap` - Wrap long lines instead of scrolling
+
+```mdx
+```typescript title="example.ts" {2-3} maxLines=15 wordWrap
+const config = {
+  apiKey: process.env.VAPI_API_KEY, // highlighted
+  timeout: 30000 // highlighted
+};
+```
+```
+
+## Callouts & Alerts
+
+Use semantic callouts to highlight important information:
+
+```mdx
+<Tip>Helpful tips and best practices</Tip>
+<Note>Important information to remember</Note>
+<Warning>Cautions and potential issues</Warning>
+<Error>Critical errors and troubleshooting</Error>
+<Info>Additional context and explanations</Info>
+<Check>Success confirmations and completed tasks</Check>
+```
+
+## Interactive Components
+
+### Accordions for Collapsible Content
+Perfect for FAQs and optional details:
+
+```mdx
+<AccordionGroup>
+  <Accordion title="[Question 1]">
+    Detailed answer with searchable content (Cmd+F works even when collapsed)
+  </Accordion>
+  <Accordion title="[Question 2]">
+    More detailed explanations
+  </Accordion>
+</AccordionGroup>
+```
+
+### Tabs for Related Content
+Use for different approaches or languages:
+
+```mdx
+<Tabs>
+  <Tab title="Dashboard">
+    Visual, no-code approach with screenshots
+  </Tab>
+  <Tab title="SDK">
+    Programmatic implementation
+  </Tab>
+</Tabs>
+```
+
+### Steps for Sequential Processes
+Automatically numbered with anchor links:

+```mdx
+<Steps>
+  <Step title="Sign up for Vapi">
+    Create your Vapi account and get your API key
+  </Step>
+  <Step title="Install the SDK">
+    Install using your preferred package manager
+  </Step>
+  <Step title="Create an assistant">
+    Create your first assistant
+  </Step>
+</Steps>
+```
+
+## Cards & Navigation
+
+### Individual Cards
+```mdx
+<Card title="Python SDK" icon="[icon]" href="[link]">
+  Get started with Vapi's Python SDK
+</Card>
+```
+
+### Card Groups for Options
+```mdx
+<CardGroup cols={2}>
+  <Card title="[Quickstart]" href="[link]">
+    **Best for:** First-time users
+
+    Get up and running in 5 minutes
+  </Card>
+  <Card title="[Advanced setup]" href="[link]">
+    **Best for:** Production deployments
+
+    Configure advanced features
+  </Card>
+</CardGroup>
+```
+
+## Content Layout
+
+### Aside for Sticky Content
+Push content to the right in a sticky container:
+
+```mdx
+<Aside>[sticky content]</Aside>
+```
+
+### Frames for Images
+Wrap images in a styled container:
+
+```mdx
+<Frame caption="Dashboard screenshot">
+  <img src="[image path]" alt="Dashboard screenshot" />
+</Frame>
+```
+
+## API Reference Components
+
+### Endpoint Snippets
+Reference API endpoints directly:
+
+```mdx
+<EndpointRequestSnippet endpoint="POST /assistant" />
+
+<EndpointResponseSnippet endpoint="POST /assistant" />
+```
+
+### Parameter Documentation
+Use structured parameter tables:
+
+```mdx
+<ParamField path="name" type="string" required>
+  The name of your assistant
+</ParamField>
+<ParamField path="model" type="string">
+  The LLM model to use
+</ParamField>
+```
+
+## Advanced Features
+
+### Embeds for Rich Media
+```mdx
+<Embed src="[media URL]" />
+```
+
+### Icons from Font Awesome
+```mdx
+<Icon icon="[font awesome icon]" />
+<Icon icon="[font awesome icon]" />
+```
+
+### Tooltips for Contextual Help
+```mdx
+<Tooltip tip="[explanation shown on hover]">
+  API Key
+</Tooltip>
+```
+
+## Best Practices
+
+### Component Selection
+- **CodeBlocks** - For multi-language examples that should sync
+- **Tabs** - For different approaches to the same task
+- **Steps** - For sequential procedures
+- **Cards** - For navigation and option selection
+- **Accordions** - For optional details and FAQs
+- **Callouts** - For important information that needs attention
+
+### Content Organization
+- Use **Aside** for complementary content that shouldn't interrupt the main flow
+- Use **Frames** for important screenshots and diagrams
+- Use **CardGroups** to present multiple options clearly
+- Use **AccordionGroups** for comprehensive FAQ sections
+
+### Accessibility & Search
+- All accordion content is searchable even when collapsed
+- Components are built with accessibility in mind
+- Proper semantic HTML is generated for SEO
+
+---
+
+**Framework Note:** Fern automatically handles syntax highlighting, responsive design, and search indexing for all components.
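+
+## Putting components together
+
+Components can be composed. A sketch of a guide section combining `<Steps>`, `<CodeBlocks>`, and a callout (step titles are illustrative; the snippets reuse the SDK examples above):
+
+```mdx
+<Steps>
+  <Step title="Create a client">
+    Initialize the SDK with your API key.
+
+    <CodeBlocks>
+    ```typescript title="TypeScript SDK"
+    import { VapiClient } from "@vapi-ai/server-sdk";
+
+    const client = new VapiClient({ token: process.env.VAPI_API_KEY });
+    ```
+    ```python title="Python SDK"
+    import os
+    from vapi import Vapi
+
+    client = Vapi(token=os.getenv("VAPI_API_KEY"))
+    ```
+    </CodeBlocks>
+  </Step>
+  <Step title="Verify the setup">
+    <Check>The client authenticates without errors.</Check>
+  </Step>
+</Steps>
+```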
diff --git a/.cursor/rules/code-standards.mdc b/.cursor/rules/code-standards.mdc
new file mode 100644
index 00000000..fcf2b0a0
--- /dev/null
+++ b/.cursor/rules/code-standards.mdc
@@ -0,0 +1,143 @@
+---
+description: Code quality standards and best practices for documentation examples. Should be used whenever a code snippet needs to be included in the document.
+globs:
+alwaysApply: false
+---
+
+# Code Quality Standards
+
+## General Principles
+
+### Code Documentation
+- All code examples must be **tested and functional**
+- Include all necessary imports and dependencies
+- Use realistic placeholder values (e.g., `YOUR_API_KEY`, `your-assistant-id`)
+- Follow language-specific conventions and best practices
+
+### Error Handling
+- Include proper error handling in all examples
+- Show both success and failure scenarios
+- Provide meaningful error messages and debugging guidance
+- Use try-catch blocks where appropriate
+
+### Security Best Practices
+- Never hardcode API keys or sensitive data
+- Use environment variables for configuration
+- Include security warnings where relevant
+- Follow OAuth/API key best practices
+
+## Language-Specific Standards
+
+### TypeScript/JavaScript
+```typescript
+// ✅ Good - Proper imports and error handling
+import { VapiClient } from "@vapi-ai/server-sdk";
+
+const vapi = new VapiClient({
+  token: process.env.VAPI_API_KEY
+});
+
+try {
+  const assistant = await vapi.assistants.create({
+    name: "Customer Support",
+    // ... configuration
+  });
+  console.log(`Assistant created: ${assistant.id}`);
+} catch (error) {
+  console.error("Failed to create assistant:", error);
+}
+```
+
+### Python
+```python
+# ✅ Good - Proper imports and error handling
+import os
+from vapi import Vapi
+
+client = Vapi(token=os.getenv("VAPI_API_KEY"))
+
+try:
+    assistant = client.assistants.create(
+        name="Customer Support",
+        # ... configuration
+    )
+    print(f"Assistant created: {assistant.id}")
+except Exception as error:
+    print(f"Failed to create assistant: {error}")
+```
+
+### cURL
+```bash
+# ✅ Good - Proper headers and error codes
+curl -X POST "https://api.vapi.ai/assistant" \
+  -H "Authorization: Bearer $VAPI_API_KEY" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "name": "Customer Support"
+  }' \
+  --fail-with-body
+```
+
+## Code Block Formatting
+
+### Multi-language Examples
+Always provide multiple implementation options using Fern's `<CodeBlocks>`:
+
+```mdx
+<CodeBlocks>
+```typescript title="TypeScript SDK"
+// Complete working example
+```
+```python title="Python SDK"
+# Complete working example
+```
+```bash title="cURL"
+# Complete working example
+```
+</CodeBlocks>
+```
+
+### Code Attributes
+Use appropriate attributes for code blocks:
+- `maxLines=10` for long examples
+- `wordWrap` for wide content
+- `title="filename.ext"` for file examples
+- `{2-4}` for line highlighting
+
+### Placeholder Standards
+- `YOUR_API_KEY` for API keys
+- `YOUR_ASSISTANT_ID` for resource IDs
+- `your-phone-number` for phone numbers
+- `your-webhook-url` for URLs
+
+## Production Readiness
+
+### Environment Configuration
+```typescript
+// ✅ Good - Environment-based configuration
+const config = {
+  apiKey: process.env.VAPI_API_KEY,
+  baseUrl: process.env.VAPI_BASE_URL || 'https://api.vapi.ai',
+  timeout: parseInt(process.env.VAPI_TIMEOUT || '30000')
+};
+```
+
+### Rate Limiting
+```typescript
+// ✅ Good - Include rate limiting considerations
+async function bulkCreateAssistants(configs: AssistantConfig[]) {
+  const results = [];
+  for (const config of configs) {
+    try {
+      const assistant = await vapi.assistants.create(config);
+      results.push(assistant);
+
+      // Rate limiting - wait between requests
+      await new Promise(resolve => setTimeout(resolve, 1000));
+    } catch (error) {
+      console.error(`Failed to create assistant: ${error}`);
+    }
+  }
+  return results;
+}
+```
diff --git a/.cursor/rules/content-templates.mdc b/.cursor/rules/content-templates.mdc
new file mode 100644
index 00000000..a92a28c6
--- /dev/null
+++ b/.cursor/rules/content-templates.mdc
@@ -0,0 +1,186 @@
+---
+description: Content templates and page skeletons for common documentation patterns. Use this when creating new documents, creating feature overviews, etc.
+globs:
+alwaysApply: false
+---
+
+# Content Templates
+
+## Page Templates
+
+### Standard Documentation Page
+```mdx
+---
+title: [Page title]
+subtitle: [Brief description]
+slug: [category]/[page-name]
+description: [Short description for preview link]
+---
+
+## Overview
+
+[Brief description of what this page covers and who it's for]
+
+- [Key point or capability 1]
+- [Key point or capability 2]
+- [Key point or capability 3]
+
+For details, see **[Related Section]**.
+
+## [Main Content Section]
+
+[Core content with examples, steps, or explanations]
+
+## FAQ
+
+<AccordionGroup>
+  <Accordion title="[Common question]">
+    [Clear, helpful answer]
+  </Accordion>
+</AccordionGroup>
+```
+
+### Feature Overview Page
+```mdx
+---
+title: [Feature name]
+subtitle: Learn [what users will accomplish]
+---
+
+## Overview
+
+[Feature name] enables you to [main capability]. This [type of solution] helps you [business outcome].
+
+**[Feature] allows you to:**
+- [Specific capability 1]
+- [Specific capability 2]
+- [Specific capability 3]
+
+## How [feature] works
+
+[Brief explanation of the underlying process or technology]
+
+<Steps>
+  <Step title="[Step 1]">
+    [Brief description of first step]
+  </Step>
+  <Step title="[Step 2]">
+    [Brief description of second step]
+  </Step>
+  <Step title="[Step 3]">
+    [Brief description of third step]
+  </Step>
+</Steps>
+
+## Key capabilities
+
+- **[Capability 1]:** [Description with benefits]
+- **[Capability 2]:** [Description with benefits]
+- **[Capability 3]:** [Description with benefits]
+
+## [Implementation paths or next steps]
+
+<CardGroup cols={2}>
+  <Card title="[Option 1]" href="[link]">
+    [Description and use case]
+  </Card>
+  <Card title="[Option 2]" href="[link]">
+    [Description and use case]
+  </Card>
+</CardGroup>
+```
+
+## Content Patterns
+
+### Introduction Patterns
+**For overviews:**
+> "[Product/Feature] is [brief definition]. We handle [complex part] so you can focus on [user value]."
+
+**For tutorials:**
+> "Build [specific outcome] step by step. Choose between using the Dashboard interface or programmatic APIs to suit your workflow."
+
+**For examples:**
+> "Build a [use case] with [key technologies]. The [agent/workflow] handles [business scenario] using [technical approach]."
+
+### Step Introduction Patterns
+**For setup steps:**
+> "Configure [component] to [achieve specific outcome]."
+
+**For implementation steps:**
+> "Create [thing] that [does what] for [user benefit]."
+
+**For testing steps:**
+> "Validate [thing] works correctly with [test scenario]."
+
+### Closing Patterns
+**For tutorials:**
+> "Now that you have [accomplished goal], consider [next steps or enhancements]:"
+
+**For examples:**
+> "Just like that, you've built [outcome]. Consider reading the following guides to further enhance your [solution]:"
+
+**For overviews:**
+> "Ready to get started? Check out [most relevant next step] or explore [alternative path]."
+
+## Component Usage Patterns
+
+### Card Groups for Options
+```mdx
+<CardGroup cols={2}>
+  <Card title="[Option 1]" href="[link]">
+    **Best for:** [use case]
+
+    [Brief description]
+  </Card>
+  <Card title="[Option 2]" href="[link]">
+    **Best for:** [use case]
+
+    [Brief description]
+  </Card>
+</CardGroup>
+```
+
+### Step Lists for Procedures
+```mdx
+<Steps>
+  <Step title="[Step name]">
+    [Brief explanation of purpose]
+
+    [Implementation details or sub-steps]
+  </Step>
+  <Step title="[Next step]">
+    [Continue with logical flow]
+  </Step>
+</Steps>
+```
+
+### Tabs for Multi-modal Implementation
+```mdx
+<Tabs>
+  <Tab title="Dashboard">
+    [Visual, no-code approach]
+  </Tab>
+  <Tab title="TypeScript (Server SDK)">
+    [Programmatic implementation]
+  </Tab>
+  <Tab title="Python (Server SDK)">
+    [Alternative SDK implementation]
+  </Tab>
+</Tabs>
+```
diff --git a/.cursor/rules/examples-documentation.mdc b/.cursor/rules/examples-documentation.mdc
new file mode 100644
index 00000000..bb576b56
--- /dev/null
+++ b/.cursor/rules/examples-documentation.mdc
@@ -0,0 +1,123 @@
+---
+description: Guidelines for example documentation and use case implementations
+globs: **/examples/*.mdx,**/**/examples/*.mdx
+alwaysApply: false
+---
+
+# Example Documentation Standards
+
+Examples should be **small, focused demos** that show how to implement one feature or use case.
+
+## Required Sections
+
+1. **Overview** - What the example builds and demonstrates
+2. **Prerequisites** - Account requirements and setup needed
+3. **Step-by-step implementation** - Detailed walkthrough
+4. **Testing/validation** - How to verify it works
+5. **Next steps** - Links to related examples or advanced topics
+
+## Content Guidelines
+
+### Opening Structure
+```mdx
+## Overview
+
+[1-2 sentence description of what this example demonstrates]
+
+**[Agent/Workflow] Capabilities:**
+* [Specific capability 1]
+* [Specific capability 2]
+
+**What You'll Build:**
+* [Concrete deliverable 1]
+* [Concrete deliverable 2]
+* [Concrete deliverable 3]
+```
+
+### Implementation Approach
+- **Multi-modal examples**: Always provide both Dashboard and SDK approaches
+- **Complete code**: Include all imports, error handling, and setup
+- **Real-world context**: Use realistic data and scenarios
+- **Production-ready**: Follow best practices and include security considerations
+
+### Code Organization
+Use `<Tabs>` or `<CodeBlocks>` for multiple implementation approaches:
+- Dashboard (visual, no-code)
+- TypeScript (Server SDK)
+- Python (Server SDK)
+- Additional languages as relevant
+
+### Data and Assets
+- Include downloadable sample data (CSVs, JSON files)
+- Provide realistic test scenarios
+- Use placeholder data that reflects real use cases
+
+## Quality Standards
+
+### Code Quality
+- All code examples must be tested and functional
+- Include proper error handling
+- Use environment variables for sensitive data
+- Follow language-specific best practices
+
+### Documentation Quality
+- Explain the reasoning behind implementation choices
+- Include common gotchas and troubleshooting
+- Provide context for business use cases
+- Link to relevant API documentation
+
+### User Experience
+- Clear success criteria for each step
+- Visual confirmation (screenshots, videos)
+- Downloadable resources when helpful
+- Progressive complexity (simple → advanced)
+
+## Templates
+
+### Example Overview
+```mdx
+---
+title: [Use case name]
+subtitle: [Brief description of what users will build]
+slug: [category]/examples/[example-name]
+---
+
+## Overview
+
+[Detailed description of the use case and what the example demonstrates]
+
+**[Type] Capabilities:**
+* [Key capability 1]
+* [Key capability 2]
+
+**What You'll Build:**
+* [Deliverable 1 with tools/integrations]
+* [Deliverable 2 with specific features]
+* [Deliverable 3 with validation/testing]
+
+## Prerequisites
+
+* [Account requirement]
+* [Tool/service requirement if applicable]
+```
+
+### Step Implementation
+```mdx
+<Steps>
+  <Step title="[Step name]">
+    [Brief explanation of the step's purpose]
+
+    <Tabs>
+      <Tab title="Dashboard">
+        [Visual step-by-step with screenshots]
+      </Tab>
+      <Tab title="TypeScript (Server SDK)">
+        [Complete code example]
+      </Tab>
+      <Tab title="Python (Server SDK)">
+        [Complete code example]
+      </Tab>
+    </Tabs>
+  </Step>
+</Steps>
+```
diff --git a/.cursor/rules/glob-based/examples-documentation.mdc b/.cursor/rules/glob-based/examples-documentation.mdc
new file mode 100644
index 00000000..bb576b56
--- /dev/null
+++ b/.cursor/rules/glob-based/examples-documentation.mdc
@@ -0,0 +1,123 @@
+---
+description: Guidelines for example documentation and use case implementations
+globs: **/examples/*.mdx,**/**/examples/*.mdx
+alwaysApply: false
+---
+
+# Example Documentation Standards
+
+Examples should be **small, focused demos** that show how to implement one feature or use case.
+
+## Required Sections
+
+1. **Overview** - What the example builds and demonstrates
+2. **Prerequisites** - Account requirements and setup needed
+3. **Step-by-step implementation** - Detailed walkthrough
+4. **Testing/validation** - How to verify it works
+5. **Next steps** - Links to related examples or advanced topics
+
+## Content Guidelines
+
+### Opening Structure
+```mdx
+## Overview
+
+[1-2 sentence description of what this example demonstrates]
+
+**[Agent/Workflow] Capabilities:**
+* [Specific capability 1]
+* [Specific capability 2]
+
+**What You'll Build:**
+* [Concrete deliverable 1]
+* [Concrete deliverable 2]
+* [Concrete deliverable 3]
+```
+
+### Implementation Approach
+- **Multi-modal examples**: Always provide both Dashboard and SDK approaches
+- **Complete code**: Include all imports, error handling, and setup
+- **Real-world context**: Use realistic data and scenarios
+- **Production-ready**: Follow best practices and include security considerations
+
+### Code Organization
+Use `<Tabs>` or `<CodeBlocks>` for multiple implementation approaches:
+- Dashboard (visual, no-code)
+- TypeScript (Server SDK)
+- Python (Server SDK)
+- Additional languages as relevant
+
+### Data and Assets
+- Include downloadable sample data (CSVs, JSON files)
+- Provide realistic test scenarios
+- Use placeholder data that reflects real use cases
+
+## Quality Standards
+
+### Code Quality
+- All code examples must be tested and functional
+- Include proper error handling
+- Use environment variables for sensitive data
+- Follow language-specific best practices
+
+### Documentation Quality
+- Explain the reasoning behind implementation choices
+- Include common gotchas and troubleshooting
+- Provide context for business use cases
+- Link to relevant API documentation
+
+### User Experience
+- Clear success criteria for each step
+- Visual confirmation (screenshots, videos)
+- Downloadable resources when helpful
+- Progressive complexity (simple → advanced)
+
+## Templates
+
+### Example Overview
+```mdx
+---
+title: [Use case name]
+subtitle: [Brief description of what users will build]
+slug: [category]/examples/[example-name]
+---
+
+## Overview
+
+[Detailed description of the use case and what the example demonstrates]
+
+**[Type] Capabilities:**
+* [Key capability 1]
+* [Key capability 2]
+
+**What You'll Build:**
+* [Deliverable 1 with tools/integrations]
+* [Deliverable 2 with specific features]
+* [Deliverable 3 with validation/testing]
+
+## Prerequisites
+
+* [Account requirement]
+* [Tool/service requirement if applicable]
+```
+
+### Step Implementation
+```mdx
+<Steps>
+  <Step title="[Step name]">
+    [Brief explanation of the step's purpose]
+
+    <Tabs>
+      <Tab title="Dashboard">
+        [Visual step-by-step with screenshots]
+      </Tab>
+      <Tab title="TypeScript (Server SDK)">
+        [Complete code example]
+      </Tab>
+      <Tab title="Python (Server SDK)">
+        [Complete code example]
+      </Tab>
+    </Tabs>
+  </Step>
+</Steps>
+```
diff --git a/.cursor/rules/glob-based/mdx-components.mdc b/.cursor/rules/glob-based/mdx-components.mdc
new file mode 100644
index 00000000..9cff6ec7
--- /dev/null
+++ b/.cursor/rules/glob-based/mdx-components.mdc
@@ -0,0 +1,60 @@
+---
+description: MDX front-matter, components, and formatting guidelines
+globs: **/*.mdx
+alwaysApply: false
+---
+
+# MDX Components & Formatting
+
+## Front‑matter Template
+
+```mdx
+---
+title: <title>
+subtitle: <subtitle>
+slug: path/to/page
+---
+```
+
+## Asset Conventions
+
+All images are stored in `/fern/static/images` (top‑level, not nested).
+Reference images with:
+
+```mdx
+![alt‑text](mdc:assets/images/<filename>.<extension>)
+```
+
+## Content Structure
+
+### Standard Page Layout
+1. **Overview section** - What users will accomplish
+2. **Prerequisites** - What users need before starting
+3. **Main content** - Steps, explanations, or examples
+4. **Next steps** - Where users should go next
+
+### Cross-References
+Always link to related content:
+- Use full page titles in links: `[Getting started with assistants](mdc:docs/assistants)`
+- Reference API docs: `[API reference](mdc:fern/api-reference/assistants)`
+- Link to examples: `[Voice widget example](mdc:docs/assistants/examples/voice-widget)`
+
+## Component Guidelines
+
+Prefer Fern's native components over basic Markdown when available:
+- Use `<Steps>` instead of numbered lists for procedures
+- Use `<CodeBlocks>` instead of separate code blocks for multi-language examples
+- Use `<Note>` for important information instead of blockquotes
+- Use `<Card>` for navigation and option selection
+
+## File Organization
+
+### Slugs and Paths
+- Use kebab-case for file names: `voice-assistant-setup.mdx`
+- Match directory structure to URL structure
+- Keep slugs short but descriptive
+
+### Front-matter Best Practices
+- **title**: Should match the main heading but can be shorter for navigation
+- **subtitle**: One sentence describing what users will learn or build
+- **slug**: Override only when needed for better URLs
diff --git a/.cursor/rules/glob-based/quickstart-guide.mdc b/.cursor/rules/glob-based/quickstart-guide.mdc
new file mode 100644
index 00000000..62d0884f
--- /dev/null
+++ b/.cursor/rules/glob-based/quickstart-guide.mdc
@@ -0,0 +1,101 @@
+---
+description: Guidelines for quickstart guides and tutorials
+globs: **/quickstart/*.mdx,**/**/quickstart.mdx
+alwaysApply: false
+---
+
+# Quickstart Guide Standards
+
+## Objectives
+
+Get users to a "Hello World" moment fast, with as few steps as possible.
+
+## Structure Requirements
+
+### Prerequisites Section
+Always include:
+- Account requirements (e.g., "A Vapi account")
+- API key access instructions
+- Any required downloads or installations
+
+### Implementation Paths
+Provide multiple implementation options using `<Tabs>` or `<CodeBlocks>`:
+- Dashboard (no-code approach)
+- TypeScript/JavaScript SDK
+- Python SDK
+- cURL (for API examples)
+
+### Step-by-Step Format
+Use the `<Steps>` component for all tutorials:
+
+```mdx
+<Steps>
+  <Step title="[First step]">
+    Brief explanation of what this step accomplishes.
+
+    [Implementation details with code examples]
+  </Step>
+  <Step title="[Next step]">
+    Continue with logical progression...
+  </Step>
+</Steps>
+```
+
+## Content Guidelines
+
+### Code Examples
+- Always provide working, copy-pastable code
+- Include all necessary imports and setup
+- Replace placeholder values clearly (e.g., `YOUR_API_KEY`)
+- Test all code examples before publishing
+
+### Visual Elements
+- Include screenshots or videos for Dashboard workflows
+- Use `<Frame>` components for important visual guidance
+- Keep videos short and focused (< 30 seconds)
+
+### Language & Tone
+- Start with "In this quickstart, you'll learn to:"
+- Use active voice and present tense
+- Keep explanations concise—save deep dives for other docs
+- End with clear "Next steps" pointing to relevant guides
+
+### Success Validation
+Each quickstart should include:
+- Clear success criteria ("You should see...")
+- Troubleshooting for common issues
+- Testing instructions to verify implementation
+
+## Templates
+
+### Standard Opening
+```mdx
+## Overview
+
+[Brief description of what users will build and accomplish]
+
+**In this quickstart, you'll learn to:**
+- [Specific actionable outcome 1]
+- [Specific actionable outcome 2]
+- [Specific actionable outcome 3]
+
+## Prerequisites
+
+- [Required account or service]
+- [Required tools or access]
+```
+
+### Standard Closing
+```mdx
+## Next steps
+
+Now that you have [accomplished goal]:
+
+- **[Related advanced topic]:** [Brief description with link]
+- **[Integration option]:** [Brief description with link]
+- **[Scaling guidance]:** [Brief description with link]
+
+<Tip>
+[Helpful tip or link to related quickstart]
+</Tip>
+```
diff --git a/.cursor/rules/glob-based/workflows-documentation.mdc b/.cursor/rules/glob-based/workflows-documentation.mdc
new file mode 100644
index 00000000..8d1e9d6e
--- /dev/null
+++ b/.cursor/rules/glob-based/workflows-documentation.mdc
@@ -0,0 +1,109 @@
+---
+description: Guidelines for workflow documentation and complex multi-step processes
+globs: **/workflows/*.mdx
+alwaysApply: false
+---
+
+# Workflow Documentation Standards
+
+Workflows use visual decision trees and conditional logic for complex multi-step processes.
+
+## Workflow Purpose
+
+Perfect for:
+- Appointment scheduling with availability checks
+- Lead qualification with branching questions
+- Complex customer service flows with escalation
+- Multi-step data collection and validation
+
+## Documentation Structure
+
+### Core Sections Required
+1. **Overview** - Workflow purpose and business context
+2. **Flow diagram** - Visual representation of the decision tree
+3. **Configuration** - Step-by-step setup instructions
+4. **Variables and data** - Input/output data structures
+5. **Testing scenarios** - Comprehensive test cases
+6. **Integration points** - External systems and APIs
+
+## Content Guidelines
+
+### Flow Visualization
+- Include visual flow diagrams showing decision paths
+- Use clear node labels and condition descriptions
+- Highlight error handling and edge case paths
+- Show data flow between steps
+
+### Business Context
+- Explain the real-world problem being solved
+- Provide specific use case scenarios
+- Include success metrics and KPIs
+- Reference industry best practices
+
+### Technical Implementation
+- Detail all configuration steps
+- Include variable definitions and schemas
+- Provide API integration examples
+- Cover error handling strategies
+
+## Workflow Components
+
+### Decision Nodes
+Document:
+- Condition logic and evaluation criteria
+- Branch paths and outcomes
+- Fallback behaviors
+- Variable dependencies
+
+### Data Collection
+Document:
+- Input validation rules
+- Required vs optional fields
+- Data transformation logic
+- Storage and retrieval patterns
+
+### Integrations
+Document:
+- External API endpoints
+- Authentication requirements
+- Rate limiting considerations
+- Error response handling
+
+## Templates
+
+### Workflow Overview
+```mdx
+## Overview
+
+Build [workflow type] with [key capabilities]. This workflow handles [business scenario] using [decision logic approach].
+
+**Business Use Case:**
+[Describe the real-world problem this solves]
+
+**Workflow Capabilities:**
+- [Primary capability with decision logic]
+- [Secondary capability with data handling]
+- [Integration capability with external systems]
+
+**Flow Overview:**
+[High-level description of the workflow path]
+```
+
+### Testing Template
+```mdx
+## Test the Workflow
+
+### Test Scenarios
+
+| Scenario | Input | Expected Path | Expected Outcome |
+|----------|-------|---------------|------------------|
+| [Happy path] | [Sample input] | [Main flow] | [Success result] |
+| [Edge case 1] | [Edge input] | [Alternative path] | [Handled result] |
+| [Error case] | [Invalid input] | [Error handling] | [Error resolution] |
+
+### Validation Steps
+1. Test each decision branch independently
+2. Verify data persistence across steps
+3. Confirm integration endpoints respond correctly
+4. Validate error handling and recovery
+```
diff --git a/.cursor/rules/index.mdc b/.cursor/rules/index.mdc
new file mode 100644
index 00000000..8ea8789e
--- /dev/null
+++ b/.cursor/rules/index.mdc
@@ -0,0 +1,97 @@
+---
+description: Main documentation rules index and system overview
+globs:
+alwaysApply: true
+---
+
+# Vapi Documentation Rules System
+
+This is the main entry point for Vapi documentation rules. All documentation should follow these core principles and leverage specific rules based on content type.
+ +## Core Documentation Standards + +Every page must be: + +- **Clear** - Use plain language, avoid jargon +- **Brief** - Keep sentences and paragraphs short +- **Task-oriented** - Present steps in logical order +- **Scannable** - Use headings, spacing, and components effectively +- **Outcome-focused** - Ensure every section supports user success + +## Active Rules + +### Always Applied + +- **This index** - System overview and rule navigation +- **Core principles** ([core-principles.mdc](mdc:.cursor/rules/always-applied/core-principles.mdc)) - Writing style, tone, and fundamental standards +- **Fern components** ([fern-components.mdc](mdc:.cursor/rules/always-applied/fern-components.mdc)) - Framework-specific component usage + +### Content-Type Rules + +These apply automatically based on file paths: + +- **MDX Components** ([mdx-components.mdc](mdc:.cursor/rules/glob-based/mdx-components.mdc)) - For all `.mdx` files - front-matter, components, formatting +- **Quickstart Guides** ([quickstart-guide.mdc](mdc:.cursor/rules/glob-based/quickstart-guide.mdc)) - For `/quickstart/` paths - tutorial structure and flow +- **Examples** ([examples-documentation.mdc](mdc:.cursor/rules/glob-based/examples-documentation.mdc)) - For `/examples/` paths - use case implementations +- **Workflows** ([workflows-documentation.mdc](mdc:.cursor/rules/glob-based/workflows-documentation.mdc)) - For `/workflows/` paths - complex multi-step processes + +### Agent-requested or Manually applied rules (applied via @rule-name when needed) + +- **Code Standards** ([code-standards.mdc](mdc:.cursor/rules/code-standards.mdc)) - Code quality, testing standards +- **Content Templates** ([content-templates.mdc](mdc:.cursor/rules/content-templates.mdc)) - Page templates and content patterns + +## When to Consult Specific Rules + +| Working on... | Consult Rule | For guidance on... 
| +|---------------|--------------|-------------------| +| Any `.mdx` file | [mdx-components.mdc](mdc:.cursor/rules/glob-based/mdx-components.mdc) + [fern-components.mdc](mdc:.cursor/rules/always-applied/fern-components.mdc) | Components, front-matter, formatting | +| Getting started guides | [quickstart-guide.mdc](mdc:.cursor/rules/glob-based/quickstart-guide.mdc) | Tutorial structure, step flow, prerequisites | +| Use case examples | [examples-documentation.mdc](mdc:.cursor/rules/glob-based/examples-documentation.mdc) | Implementation patterns, multi-modal examples | +| Complex workflows | [workflows-documentation.mdc](mdc:.cursor/rules/glob-based/workflows-documentation.mdc) | Decision trees, data flow, business context | +| Code examples | [code-standards.mdc](mdc:.cursor/rules/code-standards.mdc) | Quality, security, best practices | +| New page types | [content-templates.mdc](mdc:.cursor/rules/content-templates.mdc) | Templates, patterns, structure | +| Fern components | [fern-components.mdc](mdc:.cursor/rules/always-applied/fern-components.mdc) | Framework-specific components and features | + +## Quick Reference + +### Standard Opening +```mdx +## Overview + +[Brief description of what users will build/accomplish] + +**In this [guide/example], you'll learn to:** +- [Specific actionable outcome 1] +- [Specific actionable outcome 2] +``` + +### Implementation / User Journey Tabs (Fern) +```mdx + +```txt title="Dashboard" +// Complete working example +``` +```typescript title="TypeScript (Server SDK)" +// Complete working example +``` +```python title="Python (Server SDK)" +# Complete working example +``` +```bash title="cURL" +# Complete working example +``` + +``` + +### Standard Closing +```mdx +## Next steps + +Now that you have [accomplished goal]: +- **[Advanced topic]:** [Description with link] +- **[Related feature]:** [Description with link] +``` + +--- + +**Rule of thumb:** Every sentence should clarify *why* or *how*—if it does neither, remove or 
rewrite it. diff --git a/.cursorignore b/.cursorignore new file mode 100644 index 00000000..014e65e1 --- /dev/null +++ b/.cursorignore @@ -0,0 +1,6 @@ +**/.definition +**/.preview/** +node_modules/ +dist/ +.env +.DS_Store diff --git a/.cursorrules b/.cursorrules deleted file mode 100644 index 161cfb08..00000000 --- a/.cursorrules +++ /dev/null @@ -1,250 +0,0 @@ -## Purpose - -Provide developers with documentation that is quick to read, easy to follow, and immediately actionable. -Each page should meet the following principles: - -| Principle | Description | -| -------------------- | ------------------------------------------------------------- | -| **Clarity** | Use plain language—avoid jargon or unnecessary complexity. | -| **Brevity** | Keep sentences and paragraphs short. | -| **Task‑orientation** | Present steps in a logical order that help the reader proceed.| -| **Scannability** | Apply headings, spacing, and components that aid quick review.| -| **Outcome focus** | Ensure every section directly supports the user’s success. | - ---- - -## Style rules - -- **Titles**: Capitalize only the first word unless a proper noun is used. - *Examples*: `Getting started`, `Voice AI`, `API reference` -- **Subtitles**: Begin with *Learn to …* for guides; otherwise keep them concise and factual. -- **Emojis / decorative icons**: Use only when essential for comprehension. -- Tone: Direct, professional, friendly. -- Break up large blocks of text with line‑breaks. -- Avoid marketing or promotional wording. -- Link to related pages when helpful, especially the **API reference** at `/fern/api-reference`. -- Use **bold** text to emphasize key names or concepts. - -### Writing style & tone - -| Guideline | Why it matters | -|-----------|---------------| -| **Active voice** | “Connect the SDK” is clearer than “The SDK should be connected.” | -| **Present tense** | Keeps instructions straightforward (e.g., “Run” not “You will run”). 
-| **Second‑person (“you”)** | Speaks directly to the reader. Reserve “we” for collaborative tutorials. |
-| **Explain intent before action** | Briefly state *why* a step is needed, then show *how*. |
-| **Concrete examples over theory** | Code snippets and visuals anchor concepts. |
-| **Consistent terminology** | Define a term once; reuse it exactly the same everywhere. |
-| **Parallel structure** | Lists and headings should follow consistent grammatical patterns. |
-| **Descriptive link text** | Use “view the guide” rather than “click here.” |
-| **Comment code sparsely** | Only where intent isn’t obvious from variable/function names. |
-
-> **Rule of thumb**: every sentence should either clarify *why* or *how*—if it does neither, remove or rewrite it.
-
----
-
-## MDX front‑matter template
-
-```mdx
----
-title: 
-subtitle: 
----
-```
-
-### Sample titles
-
-- Getting started
-- Assistants
-- Variables
-
-### Sample subtitles
-
-- Build a voice assistant that answers questions about your docs
-- Personalize assistant messages with dynamic and default variables
-
----
-
-## Asset conventions
-
-All images are stored in `/fern/static/images` (top‑level, not nested).
-Reference images with:
-
-```mdx
-![alt‑text](/assets/images/<filename>.<extension>)
-```
-
----
-
-## Recommended components
-
-### Accordions *(FAQ sections only)*
-
-```mdx
-<Accordion title="Question">
-  Answer
-</Accordion>
-```
-
-### Callouts
-
-```mdx
-<Tip>Helpful tip</Tip>
-<Note>Important note</Note>
-<Warning>Important caution</Warning>
-<Error>Possible error</Error>
-<Info>Additional information</Info>
-<Success>Successful outcome</Success>
-```
-
-### Cards & Card groups
-
-```mdx
-<Card>
-  View Vapi’s Python server SDK.
-</Card>
-```
-
-### Code snippets
-
-```javascript maxLines=10 wordWrap
-console.log('Hello, world');
-```
-
-### Multi‑language code blocks
-
-````mdx
-<CodeBlocks>
-```python title="hello.py"
-print("Hello")
-```
-```javascript title="hello.js"
-console.log("Hello")
-```
-</CodeBlocks>
-````
-
-### Step lists
-
-```mdx
-<Steps>
-  <Step>Do this.</Step>
-  <Step>Do that.</Step>
-  <Step>Finished.</Step>
-</Steps>
-```
-
-### Frames for images
-
-```mdx
-<Frame>
-  Mountains
-</Frame>
-```
-
-### Tabs
-
-```mdx
-<Tabs>
-  <Tab title="A">Content A</Tab>
-  <Tab title="B">Content B</Tab>
-</Tabs>
-```
-
----
-
-## Documentation sections & best practices
-
-Drawing from Chris Nicholas’ *How to Write Exceptional Documentation* (Mar 2025), structure docs into purposeful sections so every developer quickly finds the right depth of information.
-
-### Quickstart
-
-| Objective | Get a new user to a “Hello World” moment fast |
-|-----------|----------------------------------------------|
-| **Scope** | Only the minimal steps required; one guide per supported tech |
-| **Tips** | • Use numbered steps and visuals • Pre‑fill API keys where possible • Test end‑to‑end after every edit |
-
-### Tutorials
-
-Teach broader concepts by **building something tangible** together.
-- Progressively increase complexity.
-- Add interactive elements (live code, mini‑quizzes).
-- Highlight best practices and link to deeper docs.
-
-### How‑to Guides
-
-Solve a **specific problem** for existing users.
-- State the goal up front and list prerequisites.
-- Derive topics from recurring support questions.
-- Link to sample repos when helpful.
-
-### Explanations
-
-Explain concepts, architecture, or reasoning.
-- Use diagrams and succinct prose—keep marketing out.
-- Include basic code when it clarifies the concept.
-
-### API Reference
-
-Exhaustive, factual details for each endpoint / method.
-- Lead with the simplest usage pattern.
-- Use props / args / returns tables.
-- Anticipate errors and include handling guidance.
-- Cross‑link abundantly and follow familiar REST / OpenAPI layouts where applicable.
-
-### Examples
-
-Small, focused demos that show how to implement one feature.
-- Display copy‑pasteable code.
-- Provide live, interactive previews when feasible.
-
-### Templates
-
-Full, production‑ready starter projects.
-- Follow industry best practices and heavy inline commenting.
-- Offer one‑click deploy or CLI installers.
-- Use templates as reference material and marketing demos.
-
-> **Iterate continuously.** Listen to user feedback and refine each section; great docs emerge through constant improvement.
-
----
-
-## Example page skeleton
-
-```mdx
----
-title: Voice AI
-subtitle: Learn how to build and deploy voice agents with Vapi.
----
-
-## Overview
-
-Vapi [Voice AI](/docs/assistants) enables you to build conversational agents for phone, web, and other platforms.
-
-- Automate outbound support and sales
-- Integrate with your CRM
-- Deploy on phone, web, or mobile
-
-For details, see **Assistants**.
-
-## Parameters
-
-| Name | Purpose |
-| ---------------- | --------------------------------------- |
-| `model` | LLM used for conversations |
-| `voice` | Voice profile for the agent |
-| `knowledge_base` | Documents and data for context |
-| `tools` | Integrations and actions the agent uses |
-
-## FAQ
-
-<AccordionGroup>
-  <Accordion title="…">
-    Visit the [Assistants](/docs/assistants) page and follow the guide.
-  </Accordion>
-</AccordionGroup>
-```
-
----
\ No newline at end of file
diff --git a/fern/assistants/examples/multilingual-agent.mdx b/fern/assistants/examples/multilingual-agent.mdx
new file mode 100644
index 00000000..2f6cd211
--- /dev/null
+++ b/fern/assistants/examples/multilingual-agent.mdx
@@ -0,0 +1,733 @@
+---
+title: Multilingual customer support agent
+subtitle: Build a global support agent with automatic language detection
+slug: assistants/examples/multilingual-agent
+description: Complete implementation of a multilingual voice agent that handles English, Spanish, and French with automatic language switching
+---
+
+## Overview
+
+Build a customer support agent that handles inquiries in English, Spanish, and French with automatic language detection, seamless language switching, and culturally appropriate responses.
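Taken together, the steps in this guide assemble an assistant configuration roughly like the sketch below. Treat it as an illustrative outline only: field names such as `languageBehaviour` and the `multilingual-auto` voice ID follow the examples in this guide and should be verified against the current API reference before use.

```typescript
// Illustrative sketch of the configuration this guide builds step by step.
// Field names follow the examples in this guide; verify against the API reference.
const assistantConfig = {
  name: "GlobalTech Multilingual Support",
  transcriber: {
    provider: "deepgram",
    model: "nova-2-multilingual",
    languageBehaviour: "automatic multiple languages",
    languages: ["en", "es", "fr"], // optional hints that narrow detection
  },
  voice: {
    provider: "azure",
    voiceId: "multilingual-auto", // selects a voice matching the detected language
  },
  model: {
    provider: "openai",
    model: "gpt-4",
    maxTokens: 500,
  },
  firstMessage:
    "Hello! I can assist you in English, Spanish, or French. How can I help you today?",
};

// Sanity check: every supported language appears in the transcriber hints.
for (const lang of ["en", "es", "fr"]) {
  if (!assistantConfig.transcriber.languages.includes(lang)) {
    throw new Error(`missing language hint: ${lang}`);
  }
}
```

Each of the following steps fills in one of these blocks (transcriber, voice, model, greeting) and finishes by creating the assistant through the dashboard, the SDKs, or the REST API.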
+ +**Agent Capabilities:** +* Automatic language detection from 99+ supported languages +* Seamless mid-conversation language switching +* Native voice quality for each language +* Cultural context awareness + +**What You'll Build:** +* Multilingual transcriber with automatic detection +* Language-adaptive voice selection +* Cultural context-aware system prompts +* Comprehensive testing and monitoring setup + +## Prerequisites + +- [Vapi account](https://dashboard.vapi.ai/) with API access +- Basic understanding of assistant configuration + +--- + + + + Set up automatic language detection for speech-to-text processing. + + + + 1. Navigate to **Assistants** in your [Vapi Dashboard](https://dashboard.vapi.ai/) + 2. Click **Create Assistant** or edit an existing one + 3. In the **Transcriber** section: + - **Provider**: Select `Deepgram` + - **Model**: Choose `nova-2-multilingual` + - **Language Detection**: Set to `Automatic multiple languages` + 4. **Optional**: Configure language hints if you know the expected languages + + + ```typescript + import { VapiClient } from "@vapi-ai/server-sdk"; + + const vapi = new VapiClient({ token: "YOUR_VAPI_API_KEY" }); + + const transcriber = { + provider: "deepgram", + model: "nova-2-multilingual", + languageBehaviour: "automatic multiple languages", + // Optional: Provide language hints for better accuracy + languages: ["en", "es", "fr"] + }; + ``` + + + ```python + from vapi import Vapi + + client = Vapi(token=os.getenv("VAPI_API_KEY")) + + transcriber = { + "provider": "deepgram", + "model": "nova-2-multilingual", + "languageBehaviour": "automatic multiple languages", + # Optional: Provide language hints for better accuracy + "languages": ["en", "es", "fr"] + } + ``` + + + ```bash + curl -X POST "https://api.vapi.ai/assistant" \ + -H "Authorization: Bearer $VAPI_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "transcriber": { + "provider": "deepgram", + "model": "nova-2-multilingual", + "languageBehaviour": 
"automatic multiple languages", + "languages": ["en", "es", "fr"] + } + }' + ``` + + + + + **Language hints** improve accuracy when you know the expected languages. Without hints, the model detects from all 99+ supported languages. + + + + + Configure native voices for each target language. + + + + 1. In the **Voice** section of your assistant: + - **Provider**: Select `Azure` (best multilingual support) + - **Voice**: Choose `multilingual-auto` or configure specific voices: + - **English**: `en-US-AriaNeural` + - **Spanish**: `es-ES-ElviraNeural` + - **French**: `fr-FR-DeniseNeural` + 2. **Optional**: Set up voice fallback plans for reliability + + + ```typescript + // Option 1: Automatic voice selection based on detected language + const voice = { + provider: "azure", + voiceId: "multilingual-auto" // Automatically selects appropriate voice + }; + + // Option 2: Specific voice configuration with fallbacks + const voiceWithFallbacks = { + provider: "azure", + voiceId: "en-US-AriaNeural", // Primary voice + fallbackPlan: { + voices: [ + { + provider: "azure", + voiceId: "es-ES-ElviraNeural" + }, + { + provider: "azure", + voiceId: "fr-FR-DeniseNeural" + } + ] + } + }; + ``` + + + ```python + # Option 1: Automatic voice selection based on detected language + voice = { + "provider": "azure", + "voiceId": "multilingual-auto" # Automatically selects appropriate voice + } + + # Option 2: Specific voice configuration with fallbacks + voice_with_fallbacks = { + "provider": "azure", + "voiceId": "en-US-AriaNeural", # Primary voice + "fallbackPlan": { + "voices": [ + { + "provider": "azure", + "voiceId": "es-ES-ElviraNeural" + }, + { + "provider": "azure", + "voiceId": "fr-FR-DeniseNeural" + } + ] + } + } + ``` + + + ```bash + # Add voice configuration to your assistant + curl -X PATCH "https://api.vapi.ai/assistant/YOUR_ASSISTANT_ID" \ + -H "Authorization: Bearer $VAPI_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "voice": { + "provider": "azure", + 
"voiceId": "multilingual-auto" + } + }' + ``` + + + + + **Azure** offers the best multilingual voice support with 400+ voices across 140+ languages. **ElevenLabs** provides excellent quality for major languages but with fewer options. + + + + + Create a system prompt that handles multiple languages gracefully. + + + + 1. In the **Model** section, add this system prompt: + ``` + You are a helpful customer support representative for GlobalTech Inc. + + Language Instructions: + - Automatically detect and respond in the user's language (English, Spanish, or French) + - If the user switches languages, switch with them seamlessly + - Always maintain a professional, friendly tone + - For complex technical terms, provide brief explanations in the user's language + + Capabilities: + - Answer questions about our products and services + - Help with account issues and billing inquiries + - Provide technical support guidance + - Escalate to human agents when needed + + If you're unsure about the language, default to English and ask the user to confirm their preferred language. + ``` + + + ```typescript + const systemPrompt = `You are a helpful customer support representative for GlobalTech Inc. 
+ +Language Instructions: +- Automatically detect and respond in the user's language (English, Spanish, or French) +- If the user switches languages, switch with them seamlessly +- Always maintain a professional, friendly tone +- For complex technical terms, provide brief explanations in the user's language + +Capabilities: +- Answer questions about our products and services +- Help with account issues and billing inquiries +- Provide technical support guidance +- Escalate to human agents when needed + +If you're unsure about the language, default to English and ask the user to confirm their preferred language.`; + + const model = { + provider: "openai", + model: "gpt-4", + maxTokens: 500, + messages: [ + { + role: "system", + content: systemPrompt + } + ] + }; + ``` + + + ```python + system_prompt = """You are a helpful customer support representative for GlobalTech Inc. + +Language Instructions: +- Automatically detect and respond in the user's language (English, Spanish, or French) +- If the user switches languages, switch with them seamlessly +- Always maintain a professional, friendly tone +- For complex technical terms, provide brief explanations in the user's language + +Capabilities: +- Answer questions about our products and services +- Help with account issues and billing inquiries +- Provide technical support guidance +- Escalate to human agents when needed + +If you're unsure about the language, default to English and ask the user to confirm their preferred language.""" + + model = { + "provider": "openai", + "model": "gpt-4", + "maxTokens": 500, + "messages": [ + { + "role": "system", + "content": system_prompt + } + ] + } + ``` + + + ```bash + curl -X PATCH "https://api.vapi.ai/assistant/YOUR_ASSISTANT_ID" \ + -H "Authorization: Bearer $VAPI_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "model": { + "provider": "openai", + "model": "gpt-4", + "maxTokens": 500, + "messages": [ + { + "role": "system", + "content": "You are a helpful 
customer support representative for GlobalTech Inc..." + } + ] + } + }' + ``` + + + + + + Configure greeting messages that work across languages. + + + + 1. In the **First Message** field, use this multilingual approach: + ``` + Hello! I'm here to help you with any questions about GlobalTech. I can assist you in English, Spanish, or French. How can I help you today? + ``` + 2. **Optional**: Set up language-specific greetings using the `contents` array for more personalized experiences + + + ```typescript + // Option 1: Universal greeting that works in all languages + const firstMessage = "Hello! I'm here to help you with any questions about GlobalTech. I can assist you in English, Spanish, or French. How can I help you today?"; + + // Option 2: Language-specific greetings using contents array + const multilingualGreeting = { + contents: [ + { + type: "text", + text: "Hello! How can I assist you today?", + language: "en" + }, + { + type: "text", + text: "¡Hola! ¿Cómo puedo ayudarte hoy?", + language: "es" + }, + { + type: "text", + text: "Bonjour! Comment puis-je vous aider aujourd'hui?", + language: "fr" + } + ] + }; + + const assistant = await vapi.assistants.create({ + name: "Multilingual Support Agent", + firstMessage: firstMessage, + // ... other configuration + }); + ``` + + + ```python + # Option 1: Universal greeting that works in all languages + first_message = "Hello! I'm here to help you with any questions about GlobalTech. I can assist you in English, Spanish, or French. How can I help you today?" + + # Option 2: Language-specific greetings using contents array + multilingual_greeting = { + "contents": [ + { + "type": "text", + "text": "Hello! How can I assist you today?", + "language": "en" + }, + { + "type": "text", + "text": "¡Hola! ¿Cómo puedo ayudarte hoy?", + "language": "es" + }, + { + "type": "text", + "text": "Bonjour! 
Comment puis-je vous aider aujourd'hui?", + "language": "fr" + } + ] + } + + assistant = client.assistants.create( + name="Multilingual Support Agent", + first_message=first_message, + # ... other configuration + ) + ``` + + + ```bash + curl -X POST "https://api.vapi.ai/assistant" \ + -H "Authorization: Bearer $VAPI_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "name": "Multilingual Support Agent", + "firstMessage": "Hello! I'\''m here to help you with any questions about GlobalTech. I can assist you in English, Spanish, or French. How can I help you today?" + }' + ``` + + + + + + Combine all components into a production-ready multilingual assistant. + + + + 1. **Complete the assistant configuration** with all previous settings + 2. **Test the assistant** using the built-in testing tools + 3. **Deploy** by copying the assistant ID for integration + 4. **Monitor** language detection accuracy in the analytics dashboard + + + ```typescript + import { VapiClient } from "@vapi-ai/server-sdk"; + + const vapi = new VapiClient({ token: "YOUR_VAPI_API_KEY" }); + + const multilingualAssistant = await vapi.assistants.create({ + name: "GlobalTech Multilingual Support", + + // Multilingual transcription + transcriber: { + provider: "deepgram", + model: "nova-2-multilingual", + languageBehaviour: "automatic multiple languages", + languages: ["en", "es", "fr"] + }, + + // Adaptive voice selection + voice: { + provider: "azure", + voiceId: "multilingual-auto" + }, + + // Language-aware AI model + model: { + provider: "openai", + model: "gpt-4", + maxTokens: 500, + messages: [ + { + role: "system", + content: `You are a helpful customer support representative for GlobalTech Inc. 
+ + Language Instructions: + - Automatically detect and respond in the user's language (English, Spanish, or French) + - If the user switches languages, switch with them seamlessly + - Always maintain a professional, friendly tone + - For complex technical terms, provide brief explanations in the user's language + + Capabilities: + - Answer questions about our products and services + - Help with account issues and billing inquiries + - Provide technical support guidance + - Escalate to human agents when needed + + If you're unsure about the language, default to English and ask the user to confirm their preferred language.` + } + ] + }, + + // Multilingual greeting + firstMessage: "Hello! I'm here to help you with any questions about GlobalTech. I can assist you in English, Spanish, or French. How can I help you today?", + + // Analytics for language tracking + analysisPlan: { + summaryPlan: { + enabled: true, + prompt: "Summarize this multilingual support call, noting which languages were used and how effectively the language switching was handled." + }, + successEvaluationPlan: { + enabled: true, + prompt: "Evaluate the success of this multilingual support interaction. 
Consider language detection accuracy, response appropriateness, and overall user satisfaction.", + rubric: "NumericScale" + } + } + }); + + console.log(`Multilingual assistant created with ID: ${multilingualAssistant.id}`); + ``` + + + ```python + from vapi import Vapi + import os + + client = Vapi(token=os.getenv("VAPI_API_KEY")) + + multilingual_assistant = client.assistants.create( + name="GlobalTech Multilingual Support", + + # Multilingual transcription + transcriber={ + "provider": "deepgram", + "model": "nova-2-multilingual", + "languageBehaviour": "automatic multiple languages", + "languages": ["en", "es", "fr"] + }, + + # Adaptive voice selection + voice={ + "provider": "azure", + "voiceId": "multilingual-auto" + }, + + # Language-aware AI model + model={ + "provider": "openai", + "model": "gpt-4", + "maxTokens": 500, + "messages": [ + { + "role": "system", + "content": """You are a helpful customer support representative for GlobalTech Inc. + + Language Instructions: + - Automatically detect and respond in the user's language (English, Spanish, or French) + - If the user switches languages, switch with them seamlessly + - Always maintain a professional, friendly tone + - For complex technical terms, provide brief explanations in the user's language + + Capabilities: + - Answer questions about our products and services + - Help with account issues and billing inquiries + - Provide technical support guidance + - Escalate to human agents when needed + + If you're unsure about the language, default to English and ask the user to confirm their preferred language.""" + } + ] + }, + + # Multilingual greeting + first_message="Hello! I'm here to help you with any questions about GlobalTech. I can assist you in English, Spanish, or French. 
How can I help you today?", + + # Analytics for language tracking + analysis_plan={ + "summaryPlan": { + "enabled": True, + "prompt": "Summarize this multilingual support call, noting which languages were used and how effectively the language switching was handled." + }, + "successEvaluationPlan": { + "enabled": True, + "prompt": "Evaluate the success of this multilingual support interaction. Consider language detection accuracy, response appropriateness, and overall user satisfaction.", + "rubric": "NumericScale" + } + } + ) + + print(f"Multilingual assistant created with ID: {multilingual_assistant.id}") + ``` + + + ```bash + curl -X POST "https://api.vapi.ai/assistant" \ + -H "Authorization: Bearer $VAPI_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "name": "GlobalTech Multilingual Support", + "transcriber": { + "provider": "deepgram", + "model": "nova-2-multilingual", + "languageBehaviour": "automatic multiple languages", + "languages": ["en", "es", "fr"] + }, + "voice": { + "provider": "azure", + "voiceId": "multilingual-auto" + }, + "model": { + "provider": "openai", + "model": "gpt-4", + "maxTokens": 500, + "messages": [ + { + "role": "system", + "content": "You are a helpful customer support representative for GlobalTech Inc..." + } + ] + }, + "firstMessage": "Hello! I'\''m here to help you with any questions about GlobalTech...", + "analysisPlan": { + "summaryPlan": { + "enabled": true, + "prompt": "Summarize this multilingual support call..." + } + } + }' + ``` + + + + + + Validate your multilingual assistant with comprehensive test scenarios. + + **Test Scenarios:** + 1. **Language Detection**: Start calls in different languages + 2. **Mid-conversation Switching**: Switch languages during the call + 3. **Mixed Language Input**: Use multiple languages in a single sentence + 4. **Technical Terms**: Test complex vocabulary handling + 5. 
**Cultural Context**: Verify appropriate cultural responses + + **Sample Test Script:** + ```javascript + // Test language detection and switching + const testCases = [ + { + scenario: "English to Spanish switch", + input: "Hello, I need help... Necesito ayuda con mi cuenta" + }, + { + scenario: "French start", + input: "Bonjour, j'ai un problème avec ma commande" + }, + { + scenario: "Mixed language technical", + input: "My password reset no funciona, ¿puedes ayudarme?" + } + ]; + ``` + + Monitor the call analytics to track: + - Language detection accuracy + - Response appropriateness + - Voice quality per language + - User satisfaction scores + + + +## Advanced multilingual features + +### Language-specific tool messages + +Configure different tool messages for each language: + +```typescript +const toolMessages = { + requestStart: { + contents: [ + { + type: "text", + text: "Let me look that up for you...", + language: "en" + }, + { + type: "text", + text: "Déjame revisar eso para ti...", + language: "es" + }, + { + type: "text", + text: "Laissez-moi vérifier cela pour vous...", + language: "fr" + } + ] + } +}; +``` + +### Cultural context awareness + +Add cultural context to your system prompt: + +``` +Cultural Guidelines: +- English: Direct, friendly, professional tone +- Spanish: Warm, respectful, use formal "usted" initially +- French: Polite, formal, use proper greeting conventions +- Adapt response style to match cultural expectations +``` + +### Regional voice variations + +Configure region-specific voices for better localization: + +```typescript +const regionalVoices = { + "en-US": "en-US-AriaNeural", + "en-GB": "en-GB-SoniaNeural", + "es-ES": "es-ES-ElviraNeural", + "es-MX": "es-MX-DaliaNeural", + "fr-FR": "fr-FR-DeniseNeural", + "fr-CA": "fr-CA-SylvieNeural" +}; +``` + +## Troubleshooting + + + + **Common causes:** + - Background noise affecting audio quality + - Mixed languages confusing the model + - Regional accents not recognized + + 
**Solutions:** + - Use noise reduction preprocessing + - Provide language hints in your configuration + - Test with native speakers of target languages + - Consider manual language specification for known users + + + + **Common causes:** + - Different voice providers have varying quality per language + - Some languages have fewer high-quality voice options + + **Solutions:** + - Test different voice providers for each language + - Use Azure for maximum language coverage + - Configure fallback voices as backup options + - Consider ElevenLabs for premium languages + + + + **Common causes:** + - Transcription errors triggering language switches + - Ambiguous input being misclassified + - System prompt not handling edge cases + + **Solutions:** + - Add language persistence instructions to your prompt + - Use confidence thresholds for language switching + - Implement confirmation for language changes + - Monitor transcription accuracy in analytics + + + + **Common causes:** + - Generic responses not adapted to cultural norms + - Missing cultural context in training data + - Inappropriate formality levels + + **Solutions:** + - Add cultural guidelines to your system prompt + - Test with native speakers from target cultures + - Include cultural context in your training examples + - Adjust response style per language/region + + + +## Next steps + +Now that you have a multilingual customer support agent: + +- **[Multilingual feature overview](../../../customization/multilingual):** Learn about all multilingual capabilities +- **[Phone Integration](../../../phone-calling):** Connect your multilingual agent to phone systems +- **[Analytics & Monitoring](../../../call-analysis):** Track language usage and performance metrics +- **[Advanced Workflows](../../../workflows/overview):** Build complex multilingual business processes diff --git a/fern/customization/multilingual.mdx b/fern/customization/multilingual.mdx index ab396480..1ac04206 100644 --- 
a/fern/customization/multilingual.mdx +++ b/fern/customization/multilingual.mdx @@ -1,32 +1,524 @@ --- -title: Multilingual -subtitle: Set up multilingual support for your assistant +title: Multilingual support +subtitle: Enable voice assistants to speak multiple languages fluently slug: customization/multilingual +description: Configure multilingual voice AI agents with automatic language detection, cross-language conversation, and localized voices --- ## Overview -We support dozens of providers, giving you access to their available models for multilingual support. +Configure your voice assistant to communicate in multiple languages with automatic language detection, native voice quality, and cultural context awareness. -Certain providers, like google and deepgram, have multilingual transcriber models that can transcribe audio in any language. +**In this guide, you'll learn to:** +- Set up automatic language detection for speech recognition +- Configure multilingual voice synthesis +- Design language-aware system prompts +- Test and optimize multilingual performance -## Transcribers (Speech-to-Text) + +**Provider Limitations:** Most transcription providers don't support true multilingual mode. Only **Google STT** currently supports automatic language detection and switching within a single conversation. + -In the dashboard's assistant tab, click on "transcriber" to view all of the available providers, languages and models for each. Each model offers different language options. +## Configure automatic language detection -## Voice (Text-to-Speech) +Set up your transcriber to automatically detect and process multiple languages. -Each provider includes a voice tag in the name of their voice. For example, Azure offers the `es-ES-ElviraNeural` voice for Spanish. Go to voice tab in the assistants page to see all of the available models. + + + 1. Navigate to **Assistants** in your [Vapi Dashboard](https://dashboard.vapi.ai/) + 2. 
Create a new assistant or edit an existing one + 3. In the **Transcriber** section: + - **Provider**: Select `Google` (only provider with true multilingual support) + - **Model**: Choose Flash or Flash Lite (recommended for speed) + - **Language Detection**: Set to `Automatic multiple languages` + 4. **Alternative for limited multilingual**: + - **Deepgram**: Only supports English and Spanish in "Multi" mode + - **Other providers**: Single language only, no auto-detection + 5. Click **Save** to apply the configuration + + + ```typescript + import { VapiClient } from "@vapi-ai/server-sdk"; -### Example: Setting Up a Spanish Voice Assistant + const vapi = new VapiClient({ token: "YOUR_VAPI_API_KEY" }); -```json -{ - "voice": { - "provider": "azure", - "voiceId": "es-ES-ElviraNeural" - } -} -``` + // Recommended: Google for true multilingual support + const assistant = await vapi.assistants.create({ + name: "Multilingual Assistant", + transcriber: { + provider: "google", + model: "gemini-2.0-flash-lite", // or "gemini-2.0-flash" for better accuracy + languageBehaviour: "automatic multiple languages" + } + }); -In this example, the voice `es-ES-ElviraNeural` from Azure supports Spanish. Replace `es-ES-ElviraNeural` with any other voice ID that supports your desired language. 
+ // Alternative: Deepgram for English + Spanish only + const limitedMultilingual = { + provider: "deepgram", + model: "nova-2", + languageBehaviour: "Multi" // Only en + es supported + }; + ``` + + + ```python + from vapi import Vapi + import os + + client = Vapi(token=os.getenv("VAPI_API_KEY")) + + # Recommended: Google for true multilingual support + assistant = client.assistants.create( + name="Multilingual Assistant", + transcriber={ + "provider": "google", + "model": "gemini-2.0-flash-lite", # or "gemini-2.0-flash" for better accuracy + "languageBehaviour": "automatic multiple languages" + } + ) + + # Alternative: Deepgram for English + Spanish only + limited_multilingual = { + "provider": "deepgram", + "model": "nova-2", + "languageBehaviour": "Multi" # Only en + es supported + } + ``` + + + ```bash + # Recommended: Google for true multilingual support + curl -X POST "https://api.vapi.ai/assistant" \ + -H "Authorization: Bearer $VAPI_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "name": "Multilingual Assistant", + "transcriber": { + "provider": "google", + "model": "gemini-2.0-flash-lite", + "languageBehaviour": "automatic multiple languages" + } + }' + + # Alternative: Deepgram for English + Spanish only + curl -X POST "https://api.vapi.ai/assistant" \ + -H "Authorization: Bearer $VAPI_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "transcriber": { + "provider": "deepgram", + "model": "nova-2", + "languageBehaviour": "Multi" + } + }' + ``` + + + + +**Google STT Performance:** While Google is slower than other providers, use the `gemini-2.0-flash-lite` or `gemini-2.0-flash` models for better speed. These lighter models provide the best balance of multilingual support and performance. + + +## Set up multilingual voices + +Configure your assistant to use appropriate voices for each detected language. + + + + 1. 
In the **Voice** section of your assistant: + - **Provider**: Select `Azure` (best multilingual coverage) + - **Voice**: Choose `multilingual-auto` for automatic voice selection + 2. **Alternative**: Configure specific voices for each language: + - Select a primary voice (e.g., `en-US-AriaNeural`) + - Click **Add Fallback Voices** + - Add voices for other languages: + - Spanish: `es-ES-ElviraNeural` + - French: `fr-FR-DeniseNeural` + - German: `de-DE-KatjaNeural` + 3. Click **Save** to apply the voice configuration + + + ```typescript + // Option 1: Automatic voice selection (recommended) + const voice = { + provider: "azure", + voiceId: "multilingual-auto" + }; + + // Option 2: Specific voices with fallbacks + const voiceWithFallbacks = { + provider: "azure", + voiceId: "en-US-AriaNeural", // Primary voice + fallbackPlan: { + voices: [ + { provider: "azure", voiceId: "es-ES-ElviraNeural" }, + { provider: "azure", voiceId: "fr-FR-DeniseNeural" }, + { provider: "azure", voiceId: "de-DE-KatjaNeural" } + ] + } + }; + + await vapi.assistants.update(assistantId, { voice }); + ``` + + + ```python + # Option 1: Automatic voice selection (recommended) + voice = { + "provider": "azure", + "voiceId": "multilingual-auto" + } + + # Option 2: Specific voices with fallbacks + voice_with_fallbacks = { + "provider": "azure", + "voiceId": "en-US-AriaNeural", # Primary voice + "fallbackPlan": { + "voices": [ + {"provider": "azure", "voiceId": "es-ES-ElviraNeural"}, + {"provider": "azure", "voiceId": "fr-FR-DeniseNeural"}, + {"provider": "azure", "voiceId": "de-DE-KatjaNeural"} + ] + } + } + + client.assistants.update(assistant_id, voice=voice) + ``` + + + ```bash + curl -X PATCH "https://api.vapi.ai/assistant/YOUR_ASSISTANT_ID" \ + -H "Authorization: Bearer $VAPI_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "voice": { + "provider": "azure", + "voiceId": "multilingual-auto" + } + }' + ``` + + + + +**Voice Provider Support:** Unlike transcription, all major voice 
providers (Azure, ElevenLabs, OpenAI, etc.) support multiple languages. Azure offers the most comprehensive coverage with 400+ voices across 140+ languages. + + +## Configure language-aware prompts + +Create system prompts that explicitly list supported languages and handle multiple languages gracefully. + + + + 1. In the **Model** section, update your system prompt to explicitly list supported languages: + ``` + You are a helpful assistant that can communicate in English, Spanish, and French. + + Language Instructions: + - You can speak and understand: English, Spanish, and French + - Automatically detect and respond in the user's language + - Switch languages seamlessly when the user changes languages + - Maintain consistent personality across all languages + - Use culturally appropriate greetings and formality levels + + If a user speaks a language other than English, Spanish, or French, politely explain that you only support these three languages and ask them to continue in one of them. + ``` + 2. Click **Save** to apply the prompt changes + + + ```typescript + const systemPrompt = `You are a helpful assistant that can communicate in English, Spanish, and French. 
+ +Language Instructions: +- You can speak and understand: English, Spanish, and French +- Automatically detect and respond in the user's language +- Switch languages seamlessly when the user changes languages +- Maintain consistent personality across all languages +- Use culturally appropriate greetings and formality levels + +If a user speaks a language other than English, Spanish, or French, politely explain that you only support these three languages and ask them to continue in one of them.`; + + const model = { + provider: "openai", + model: "gpt-4", + messages: [ + { + role: "system", + content: systemPrompt + } + ] + }; + + await vapi.assistants.update(assistantId, { model }); + ``` + + + ```python + system_prompt = """You are a helpful assistant that can communicate in English, Spanish, and French. + +Language Instructions: +- You can speak and understand: English, Spanish, and French +- Automatically detect and respond in the user's language +- Switch languages seamlessly when the user changes languages +- Maintain consistent personality across all languages +- Use culturally appropriate greetings and formality levels + +If a user speaks a language other than English, Spanish, or French, politely explain that you only support these three languages and ask them to continue in one of them.""" + + model = { + "provider": "openai", + "model": "gpt-4", + "messages": [ + { + "role": "system", + "content": system_prompt + } + ] + } + + client.assistants.update(assistant_id, model=model) + ``` + + + ```bash + curl -X PATCH "https://api.vapi.ai/assistant/YOUR_ASSISTANT_ID" \ + -H "Authorization: Bearer $VAPI_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "model": { + "provider": "openai", + "model": "gpt-4", + "messages": [ + { + "role": "system", + "content": "You are a helpful assistant that can communicate in English, Spanish, and French..." 
+ } + ] + } + }' + ``` + + + + +**Critical for Multilingual Success:** You must explicitly list the supported languages in your system prompt. Assistants struggle to understand they can speak multiple languages without this explicit instruction. + + +## Add multilingual greetings + +Configure greeting messages that work across multiple languages. + + + + 1. In the **First Message** field, enter a multilingual greeting: + ``` + Hello! I can assist you in English, Spanish, or French. How can I help you today? + ``` + 2. **Optional**: For more personalized greetings, use the **Advanced Message Configuration**: + - Enable **Language-Specific Messages** + - Add greetings for each target language + 3. Click **Save** to apply the greeting + + + ```typescript + // Simple multilingual greeting + const firstMessage = "Hello! I can assist you in English, Spanish, or French. How can I help you today?"; + + // Language-specific greetings (advanced) + const multilingualGreeting = { + contents: [ + { + type: "text", + text: "Hello! How can I help you today?", + language: "en" + }, + { + type: "text", + text: "¡Hola! ¿Cómo puedo ayudarte hoy?", + language: "es" + }, + { + type: "text", + text: "Bonjour! Comment puis-je vous aider?", + language: "fr" + } + ] + }; + + await vapi.assistants.update(assistantId, { firstMessage }); + ``` + + + ```python + # Simple multilingual greeting + first_message = "Hello! I can assist you in English, Spanish, or French. How can I help you today?" + + # Language-specific greetings (advanced) + multilingual_greeting = { + "contents": [ + { + "type": "text", + "text": "Hello! How can I help you today?", + "language": "en" + }, + { + "type": "text", + "text": "¡Hola! ¿Cómo puedo ayudarte hoy?", + "language": "es" + }, + { + "type": "text", + "text": "Bonjour! 
Comment puis-je vous aider?", + "language": "fr" + } + ] + } + + client.assistants.update(assistant_id, first_message=first_message) + ``` + + + ```bash + curl -X PATCH "https://api.vapi.ai/assistant/YOUR_ASSISTANT_ID" \ + -H "Authorization: Bearer $VAPI_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "firstMessage": "Hello! I can assist you in English, Spanish, or French. How can I help you today?" + }' + ``` + + + +## Test your multilingual assistant + +Validate your configuration with different languages and scenarios. + + + + 1. Use the **Test Assistant** feature in your dashboard + 2. Test these scenarios: + - Start conversations in different languages + - Switch languages mid-conversation + - Use mixed-language input + 3. Monitor the **Call Analytics** for: + - Language detection accuracy + - Voice quality consistency + - Response appropriateness + 4. Adjust configuration based on test results + + + ```typescript + // Create test call + const testCall = await vapi.calls.create({ + assistantId: "your-multilingual-assistant-id", + customer: { + number: "+1234567890" + } + }); + + // Monitor call events + vapi.on('call-end', (event) => { + console.log('Language detection results:', event.transcript); + console.log('Call summary:', event.summary); + }); + ``` + + + ```python + # Create test call + test_call = client.calls.create( + assistant_id="your-multilingual-assistant-id", + customer={ + "number": "+1234567890" + } + ) + + # Retrieve call details for analysis + call_details = client.calls.get(test_call.id) + print(f"Language detection: {call_details.transcript}") + ``` + + + ```bash + # Create test call + curl -X POST "https://api.vapi.ai/call" \ + -H "Authorization: Bearer $VAPI_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "assistantId": "your-multilingual-assistant-id", + "customer": { + "number": "+1234567890" + } + }' + ``` + + + +## Provider capabilities (Accurate as of testing) + +### Speech Recognition (Transcription) + +| 
Provider | Multilingual Support | Languages | Notes | +|----------|---------------------|-----------|-------| +| **Google STT** | ✅ Full auto-detection | 125+ | **Only provider with true multilingual mode**. Use Flash or Flash Lite models for speed. | +| **Deepgram** | ⚠️ Limited | English + Spanish only | "Multi" mode supports only 2 languages | +| **Assembly AI** | ❌ English only | English | No multilingual support | +| **Azure STT** | ❌ Single language | 100+ | Many languages, but no auto-detection | +| **OpenAI Whisper** | ❌ Single language | 90+ | Many languages, but no auto-detection | +| **Gladia** | ❌ Single language | 80+ | Many languages, but no auto-detection | +| **Speechmatics** | ❌ Single language | 50+ | Many languages, but no auto-detection | +| **Talkscriber** | ❌ Single language | 40+ | Many languages, but no auto-detection | + +### Voice Synthesis (Text-to-Speech) + +| Provider | Languages | Multilingual Voice Selection | Best For | +|----------|-----------|------------------------------|----------| +| **Azure** | 140+ | ✅ Automatic | Maximum language coverage | +| **ElevenLabs** | 30+ | ✅ Automatic | Premium voice quality | +| **OpenAI TTS** | 50+ | ✅ Automatic | Consistent quality across languages | +| **PlayHT** | 80+ | ✅ Automatic | Cost-effective scaling | + +## Common challenges and solutions + + + + **Solutions:** + - Use Google STT (only provider with reliable multilingual support) + - For Deepgram, stick to English + Spanish combinations only + - Use higher-quality audio preprocessing + - Test with native speakers of target languages + + + + **Solutions:** + - **Explicitly list all supported languages** in your system prompt + - Include language capabilities in the assistant's instructions + - Test the prompt with multilingual conversations + - Avoid generic "multilingual" statements without specifics + + + + **Solutions:** + - Use `gemini-2.0-flash-lite` or `gemini-2.0-flash` models instead of standard models + - Consider the speed vs 
accuracy tradeoff + - For English + Spanish only, use Deepgram's "Multi" mode + - Optimize your audio quality to improve processing speed + + + + **Solutions:** + - Test different voice providers for each language + - Use Azure for maximum language coverage + - Configure fallback voices as backup options + - Consider premium providers for key languages + + + +## Next steps + +Now that you have multilingual support configured: + +- **[Build a complete multilingual agent](../assistants/examples/multilingual-agent):** Follow our step-by-step implementation guide +- **[Custom voices](custom-voices/custom-voice):** Set up region-specific custom voices +- **[System prompting](../prompting-guide):** Design effective multilingual prompts +- **[Call analysis](../call-analysis):** Monitor language performance and usage diff --git a/fern/docs.yml b/fern/docs.yml index b900b683..0c615231 100644 --- a/fern/docs.yml +++ b/fern/docs.yml @@ -208,6 +208,9 @@ navigation: - page: Documentation agent path: assistants/examples/docs-agent.mdx icon: fa-light fa-microphone + - page: Multilingual agent + path: assistants/examples/multilingual-agent.mdx + icon: fa-light fa-globe - section: Workflows contents: diff --git a/static/videos/quickstart.mp4 b/static/videos/quickstart.mp4 deleted file mode 100644 index ab30bea9..00000000 Binary files a/static/videos/quickstart.mp4 and /dev/null differ