
Conversation

@danny-avila (Owner)

Summary

I added comprehensive support for Zhipu AI's GLM model family, including context window configurations and pricing multipliers for all GLM variants.

  • Added context window limits for all GLM models (glm-4, glm-4-32b, glm-4.5, glm-4.5-air, glm-4.5v, glm-4.6) ranging from 66K to 200K tokens
  • Implemented pricing multipliers for prompt and completion tokens across all GLM model variants
  • Enhanced model name matching to handle provider prefixes (z-ai/, zai/, zai-org/) and suffixes (-fp8, -FP8) with case-insensitive matching
  • Refactored the findMatchingPattern function to use case-insensitive comparison for more reliable pattern matching
  • Added comprehensive test coverage for GLM models including pattern matching, token limits, and pricing calculations across various naming conventions
  • Updated @librechat/agents to v2.4.83 to address a reasoning edge case encountered with GLM models
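
The prefix/suffix normalization and case-insensitive matching described above can be sketched as follows. The helper names (normalizeGlmModelName, findMatchingPattern's shape) and the token/pricing values are illustrative assumptions, not the actual LibreChat implementation:

```javascript
// Context window and pricing entries keyed by canonical GLM model name.
// Values here are placeholders for the sketch, not real config.
const tokenConfig = {
  'glm-4.6': { context: 200000, prompt: 0.6, completion: 2.2 },
  'glm-4.5': { context: 131072, prompt: 0.6, completion: 2.2 },
  'glm-4': { context: 128000, prompt: 0.1, completion: 0.1 },
};

function normalizeGlmModelName(model) {
  return model
    .toLowerCase() // case-insensitive matching handles GLM / glm, FP8 / fp8
    .replace(/^(z-ai|zai|zai-org)\//, '') // strip provider prefixes
    .replace(/-fp8$/, ''); // strip quantization suffix
}

function findMatchingPattern(model, config) {
  const normalized = normalizeGlmModelName(model);
  // Longest key first, so 'glm-4.5' doesn't fall through to 'glm-4'
  const keys = Object.keys(config).sort((a, b) => b.length - a.length);
  return keys.find((key) => normalized.startsWith(key)) ?? null;
}

console.log(findMatchingPattern('zai-org/GLM-4.5-FP8', tokenConfig)); // 'glm-4.5'
```

With this shape, 'z-ai/glm-4.6', 'ZAI/GLM-4.6', and 'glm-4.6' all resolve to the same config entry.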

Change Type

  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)

Testing

Tested GLM model support including context window limits, pricing calculations, and model name matching with various provider prefixes and suffixes. Verified case-insensitive matching works correctly and that all model variants return expected token limits and pricing multipliers.
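
A minimal sketch of how the pricing multipliers feed a cost calculation once a model pattern has matched; the multiplier values and the estimateCost helper are assumptions for illustration, not LibreChat's actual tokenValues:

```javascript
// Multipliers expressed as USD per 1M tokens (assumed units for the sketch).
const tokenValues = {
  'glm-4.6': { prompt: 0.6, completion: 2.2 },
};

function estimateCost(model, promptTokens, completionTokens) {
  const rates = tokenValues[model];
  if (!rates) throw new Error(`No pricing configured for ${model}`);
  // Separate prompt and completion multipliers, as in the PR's pricing entries
  return (promptTokens * rates.prompt + completionTokens * rates.completion) / 1e6;
}

console.log(estimateCost('glm-4.6', 10000, 2000).toFixed(4)); // '0.0104'
```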

Test Configuration

  • Node.js environment with Jest test runner
  • Test cases cover exact model names, provider-prefixed names, case variations, and suffix variations
  • Validated against tokenValues and maxTokensMap configurations

Checklist

  • My code adheres to this project's style guidelines
  • I have performed a self-review of my own code
  • I have commented my code, particularly in complex areas
  • My changes do not introduce new warnings
  • I have written tests demonstrating that my changes are effective or that my feature works
  • Local unit tests pass with my changes
  • Any changes dependent on mine have been merged and published in downstream modules

@danny-avila danny-avila merged commit c9103a1 into dev Oct 5, 2025
5 checks passed
@danny-avila danny-avila deleted the feat/glm-support branch October 5, 2025 13:08
arbreton pushed a commit to arbreton/LibreChat that referenced this pull request Oct 9, 2025
* fix: update @librechat/agents to v2.4.83 to handle reasoning edge case encountered with GLM models

* feat: GLM Context Window & Pricing Support

* feat: Add support for glm4 model in token values and tests