[DOC] Add documentation about MCP token limits and large model handling #6

@ivnvxd

Description

Document MCP Token Limits

Documentation Need

Users need clear guidance on handling MCP's 25,000 token limit, especially when working with large Odoo models that have many fields. Without this documentation, users encounter confusing token limit errors.

Current Problem

When users request all fields from large models:

```python
# This can exceed 25,000 tokens and fail
get_record(model="res.partner", record_id=10, fields=["__all__"])
```

which fails with:

```
Error: Token limit exceeded (127,342 tokens vs 25,000 limit)
```

Before Implementation

⚠️ CRITICAL: Verify current token limit behavior before documenting

1. **Test Current Behavior**
   - Test large model queries with various field counts
   - Measure actual token usage for different operations
   - Identify which models actually exceed limits
   - Verify where token limits are enforced
2. **Review Existing Documentation**
   - Check if token limits are already documented
   - Review current troubleshooting guides
   - Check README and API docs for warnings
   - Identify documentation gaps
3. **Analyze User Impact**
   - Review user reports of token limit issues
   - Test common use cases that might hit limits
   - Understand which operations are most affected
   - Prioritize documentation based on actual problems
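One way to start step 1 — a minimal sketch, assuming the common rule of thumb of roughly 4 characters per token; the `record` dict here stands in for a payload returned by `get_record`, and real counts depend on the client's actual tokenizer:

```python
import json

def estimate_tokens(payload) -> int:
    """Rough token estimate: ~4 characters per token for
    JSON-serialized text (a heuristic, not an exact count)."""
    return len(json.dumps(payload, default=str)) // 4

# Stand-in for a payload returned by get_record(...)
record = {
    "id": 10,
    "name": "Deco Addict",
    "email": "deco.addict82@example.com",
    "phone": "(603)-996-3829",
}

print(estimate_tokens(record))
```

Running this against real responses for each candidate model would show which ones actually cross the 25,000-token line before anything is documented.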

Documentation Requirements

1. Token Limit Overview

- Explain what the 25,000 token limit means
- Why it exists (MCP protocol limitation)
- How to calculate approximate token usage
- Impact on Odoo operations

2. Large Model Reference

Create a table of commonly used large models:

| Model | Total Fields | Recommended Fields | Token Estimate (all fields) |
|---|---|---|---|
| res.partner | 199 | 10-20 | ~127,000 |
| sale.order | 150+ | 15-25 | ~95,000 |
| account.move | 180+ | 10-20 | ~110,000 |
| product.product | 170+ | 15-20 | ~105,000 |
| stock.picking | 140+ | 10-15 | ~85,000 |

3. Best Practices Guide

## Handling Large Models

### DO ✅
- Always specify fields when working with large models
- Use smart defaults (omit fields parameter for automatic selection)
- Paginate large result sets
- Request only the fields you need

### DON'T ❌
- Use fields=["__all__"] on large models
- Request all records without pagination
- Fetch unnecessary relational data

### Examples

#### Good: Specific Fields
```python
# Request only needed fields
result = get_record(
    model="res.partner",
    record_id=10,
    fields=["name", "email", "phone", "is_company", "city"]
)
```

#### Good: Smart Defaults
```python
# Let the system choose optimal fields
result = get_record(
    model="res.partner",
    record_id=10
    # No fields parameter - uses smart defaults
)
```

#### Bad: All Fields
```python
# DON'T DO THIS - exceeds token limit
result = get_record(
    model="res.partner",
    record_id=10,
    fields=["__all__"]  # ❌ 127,000+ tokens!
)
```

4. Token Estimation Formula

```markdown
## Estimating Token Usage

Rough formula:
- Field name: ~2-3 tokens
- Field value (average): ~10-20 tokens
- Structure overhead: ~5 tokens per field

Total ≈ (number_of_fields × 30) + base_overhead

Example for res.partner:
- 199 fields × 30 tokens = 5,970 tokens (minimum)
- With actual data: ~127,000 tokens (varies by content)
```
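The formula above can be sketched as a small helper; the 30-tokens-per-field figure and the `base_overhead` parameter come from the rough numbers in this section, not from measurements:

```python
def estimate_min_tokens(num_fields: int, base_overhead: int = 0) -> int:
    """Lower-bound estimate: ~30 tokens per field (name +
    average value + structure) plus any fixed response overhead."""
    return num_fields * 30 + base_overhead

# res.partner: 199 fields -> 5,970 tokens minimum; payloads
# with actual data run far higher (~127,000 observed).
print(estimate_min_tokens(199))  # 5970
```

Treat the result as a floor: if even the minimum is a meaningful fraction of 25,000, the real response will almost certainly exceed the limit.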

5. Troubleshooting Section

## Token Limit Errors

### Symptoms
- Error: "Token limit exceeded"
- Partial responses
- Timeouts on large requests

### Solutions

1. **Reduce fields**
   ```python
   # Instead of all fields, select specific ones
   fields = ["name", "email", "phone", "website", "vat"]
   ```

2. **Use smart defaults**
   ```python
   # Omit fields parameter entirely
   result = get_record(model="res.partner", record_id=10)
   ```

3. **Paginate large searches**
   ```python
   # Fetch in batches
   for offset in range(0, total, 100):
       result = search_records(
           model="res.partner",
           limit=100,
           offset=offset,
           fields=["name", "email"]  # Minimal fields
       )
   ```

4. **Check field count first**
   ```python
   # Use list_models to see field counts
   models = list_models()
   # Check if model has >100 fields
   ```
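The fixes above can be combined into one defensive pattern. A hypothetical sketch — `SAFE_FIELDS` and the 100-field threshold are illustrative choices, not values enforced by the server:

```python
SAFE_FIELDS = ["name", "email", "phone"]

def choose_fields(field_count: int, requested=None):
    """Pick a field list that stays within the token budget.
    Returns None when smart defaults (no fields parameter) are fine."""
    if requested and requested != ["__all__"]:
        return requested      # caller asked for specific fields
    if field_count > 100:
        return SAFE_FIELDS    # large model: use a minimal explicit set
    return None               # small model: omit the fields parameter

print(choose_fields(199))               # ['name', 'email', 'phone']
print(choose_fields(199, ["__all__"]))  # ['name', 'email', 'phone']
print(choose_fields(40))                # None
```

The field count would come from `list_models` as in step 4; the point is to intercept `["__all__"]` before it reaches a large model.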

## Documentation Locations

Add this documentation to:
1. **README.md** - Add "Token Limits" section
2. **docs/troubleshooting.md** - Create comprehensive guide
3. **API Reference** - Add warnings to relevant methods
4. **Examples** - Show proper field selection patterns

## Success Criteria

- [ ] Users understand the 25,000 token limit
- [ ] Table of large models with field counts
- [ ] Clear examples of good vs bad practices
- [ ] Token estimation guidance
- [ ] Troubleshooting steps for token errors
- [ ] Integration with existing documentation

## Additional Context

This limitation has caused confusion in testing where users attempt to fetch all fields and encounter unexpected errors. Clear documentation will prevent this common pitfall and improve user experience.
