feat(tool call): Enhance Llama32Detector for improved JSON parsing in non-stream #6784


Merged
merged 3 commits into main from chang/llama32-tool-call on May 31, 2025

Conversation

CatherineSue
Collaborator

@CatherineSue CatherineSue commented May 30, 2025

Motivation

Resolves: #6589 (comment)

Modifications

  • Introduced error handling for invalid JSON parts mixed with plain text
  • Added a new unit test class for Llama32Detector


@gemini-code-assist gemini-code-assist bot left a comment


Hello @CatherineSue, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello! Gemini here, providing a summary of this pull request by @CatherineSue.

This PR aims to enhance the Llama32Detector's ability to parse tool calls, specifically addressing issues where the model's output might contain malformed JSON or a mix of text and JSON, particularly in non-streaming scenarios. The core change involves replacing the previous simple split-and-parse logic with a more robust iterative parsing approach using json.JSONDecoder().raw_decode. This new method allows the detector to find and parse multiple valid JSON objects even if they are interspersed with or preceded by invalid text, and correctly handle any trailing text after the last valid JSON.

Additionally, a new dedicated test class has been added to thoroughly test the Llama32Detector under various conditions, including those with invalid JSON parts and mixed text, ensuring the improved parsing logic functions as expected.
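
To make the iterative approach concrete, here is a minimal standalone sketch (plain Python, not the sglang implementation; the sample text and variable names are purely illustrative) of how json.JSONDecoder().raw_decode can pull successive JSON objects out of a string that also contains plain text:

    import json

    # Standalone illustration: scan a string that interleaves JSON objects with
    # plain text, collecting every object that raw_decode can parse.
    decoder = json.JSONDecoder()
    text = '{"name": "a", "parameters": {}} some text {"name": "b", "parameters": {}} tail'

    objects, idx = [], 0
    while idx < len(text):
        try:
            obj, end = decoder.raw_decode(text, idx)  # parse one JSON value starting at idx
            objects.append(obj)
            idx = end
        except json.JSONDecodeError:
            idx += 1  # not JSON at this position; advance one character and retry

    print(objects)  # [{'name': 'a', 'parameters': {}}, {'name': 'b', 'parameters': {}}]

The detector's actual loop is more targeted than this character-by-character scan: after a parse failure it jumps straight to the next occurrence of '{"name":', as shown in the diff further down.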

Highlights

  • Improved JSON Parsing: The Llama32Detector's detect_and_parse method has been refactored to use json.JSONDecoder().raw_decode in a loop. This allows it to parse multiple tool calls and gracefully handle invalid JSON segments or mixed text within the output, skipping malformed parts and continuing to search for valid JSON objects.
  • Robust Error Handling: The new parsing logic includes error handling for json.JSONDecodeError, enabling the detector to recover from parsing failures for individual JSON parts and attempt to find the next valid tool call object within the text.
  • Handling Trailing Text: The updated parser now correctly identifies and includes any text that appears after the last successfully parsed JSON object in the normal_text output.
  • New Unit Tests: A dedicated test class TestLlama32Detector has been added with several test cases covering various scenarios, including single/multiple tool calls, mixed valid/invalid JSON, trailing text, and the presence of the <|python_tag|>.

Changelog

  • python/sglang/srt/function_call/llama32_detector.py
    • Modified detect_and_parse to use text.split("<|python_tag|>", maxsplit=1) for the initial split.
    • Replaced the simple split-by-separator and json.loads loop with an iterative parsing approach using json.JSONDecoder().raw_decode.
    • Added error handling (try...except json.JSONDecodeError) to skip invalid JSON parts and search for the next potential JSON object start ({"name":).
    • Introduced safe_idx to track the end of the last successfully parsed JSON object.
    • Ensured trailing text after the last valid JSON object is appended to the normal_text result.
  • test/srt/test_function_call_parser.py
    • Added a new test class TestLlama32Detector.
    • Added setUp method to initialize tools and the detector.
    • Added test cases for test_single_json, test_multiple_json_with_separator, test_multiple_json_with_separator_customized, test_json_with_trailing_text, test_invalid_then_valid_json, test_plain_text_only, and test_with_python_tag_prefix to cover various parsing scenarios for the Llama32Detector.
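
As a rough sketch of the shape of one such test (assumptions: the import path is inferred from the file name above, the detect_and_parse(text, tools) signature is taken from the diff below, and the class name here is deliberately different from the real TestLlama32Detector; none of this is verified against the repository):

    import unittest

    # Import path assumed from python/sglang/srt/function_call/llama32_detector.py above.
    from sglang.srt.function_call.llama32_detector import Llama32Detector


    class TestLlama32DetectorSketch(unittest.TestCase):
        def setUp(self):
            self.detector = Llama32Detector()
            self.tools = []  # the real test builds proper tool definitions here

        def test_plain_text_only(self):
            # Plain text with no tool-call JSON should come back untouched, with no calls.
            text = "The weather in Paris is sunny today."
            result = self.detector.detect_and_parse(text, self.tools)
            self.assertEqual(result.calls, [])
            self.assertIn("sunny", result.normal_text)


    if __name__ == "__main__":
        unittest.main()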

@CatherineSue
Collaborator Author

Test: warnings are expected.
[Screenshot: 2025-05-30 at 4:03:03 PM]


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces valuable enhancements to the Llama32Detector for parsing JSON tool calls, especially in handling malformed or mixed content by using raw_decode and an error recovery mechanism. The added unit tests are helpful for verifying these changes.

I've identified a high-severity issue concerning potential loss of non-JSON text situated between valid tool calls or before a parse error, and a medium-severity issue related to the handling of the tool call separator. Addressing these will significantly improve the robustness and correctness of the parser.

Summary of Findings

  • Potential Data Loss: Intermediate non-JSON text between tool calls or before a recovered parsing error might be lost. The current logic for constructing normal_text primarily relies on trailing_text, which captures only the final unparsed segment after the last successful JSON parse. (Severity: high)
  • Separator Handling Rigidity: The parser advances its index by a fixed len(self.tool_call_separator) after each successful JSON parse. This assumes the separator is always present and identical, which could lead to misparsing if the actual output varies (e.g., missing separator, different separator, variable whitespace). (Severity: medium)
  • Test Case Clarity (Low Severity - Not Commented): The test test_multiple_json_with_separator_customized passes due to the error recovery mechanism rather than flexible separator handling. Its name might be slightly misleading. This is a minor point and not a functional bug in the main code.

Merge Readiness

The pull request makes good progress in improving JSON parsing for Llama32Detector. However, due to the identified high-severity issue regarding potential data loss of intermediate text, I recommend addressing this before merging. The medium-severity issue concerning separator handling should also be considered for improved robustness. Once these points are addressed, the PR should be in a much stronger position. As an AI, I am not authorized to approve pull requests; please ensure further review and approval from team members.

Comment on lines 51 to 78
+        safe_idx = idx  # the index of the last valid JSON object
         all_actions = []
-        for part in json_parts:
+        while idx < len(action_text):
             try:
-                # Parse each individual JSON object
-                action = json.loads(part)
-                all_actions.append(action)
+                obj, end = decoder.raw_decode(action_text[idx:])
+                all_actions.append(obj)
+                idx += end + len(self.tool_call_separator)
+                safe_idx = idx
             except json.JSONDecodeError as e:
-                logger.warning(f"Failed to parse JSON part: {part}")
-                logger.warning(f"JSON parse error: {str(e)}")
+                # Find where next `{"name"` appears and try again
+                logger.warning(
+                    f"Failed to parse JSON part: {action_text[idx:]}, JSON parse error: {str(e)}"
+                )
+                next_obj_start = action_text.find('{"name":', idx + 1)
+                if next_obj_start == -1:
+                    break
+                idx = next_obj_start
+                continue
-        calls = []
-
-        # Only process if we found valid JSON objects
-        if all_actions:
-            calls = self.parse_base_json(all_actions, tools)
-        return StreamingParseResult(normal_text=normal_text, calls=calls)
+        calls = self.parse_base_json(all_actions, tools) if all_actions else []
+        # Use safe_idx to avoid idx containing the last part of an invalid JSON object
+        trailing_text = (
+            action_text[safe_idx:].strip() if safe_idx < len(action_text) else ""
+        )
+        return StreamingParseResult(
+            normal_text=normal_text + trailing_text, calls=calls
+        )


high

There's a potential for data loss of non-JSON text segments that appear between valid JSON tool calls or before a JSON parsing error that's subsequently recovered from.

Currently, safe_idx is updated only after a successful JSON parse and the assumed separator. If a parsing error occurs, idx might jump to the next potential JSON start (next_obj_start), but the text between the previous safe_idx and the point of error, or between the point of error and next_obj_start, might not be fully incorporated into the normal_text.

The trailing_text (calculated as action_text[safe_idx:].strip()) captures text after the last position safe_idx was updated (i.e., after the last successfully parsed JSON and its separator). This means intermediate unparseable segments could be lost.

For example, with action_text = '{"name":"call1"}; intermediate text {"name":"call2"}; final text':

  1. call1 is parsed. safe_idx is updated to point after call1; (i.e., start of intermediate text).
  2. An attempt to parse from intermediate text fails. idx jumps to the start of call2.
  3. call2 is parsed. safe_idx is updated to point after call2; (i.e., start of final text).
  4. Loop ends or next parse fails.
  5. trailing_text becomes final text.
    In this scenario, intermediate text is lost.

Could we adjust the logic to accumulate all non-JSON segments into normal_text? This would likely involve collecting parts of action_text that are skipped or determined to be unparseable into a list and then joining them.

A conceptual approach:

  1. Initialize collected_normal_text_parts = [normal_text_prefix] (where normal_text_prefix is from the <|python_tag|> split).
  2. In the parsing loop, when raw_decode fails:
    a. The text from the previous safe_idx to the current idx (where the error occurred) is an interstitial segment. Add action_text[safe_idx:idx] to collected_normal_text_parts if idx > safe_idx.
    b. If next_obj_start is found, the segment action_text[idx:next_obj_start] is the unparseable JSON. Add this to collected_normal_text_parts.
    c. Update idx = next_obj_start and, crucially, safe_idx = idx so the next segment starts correctly.
    d. If next_obj_start is not found, the remainder action_text[idx:] is unparseable. Add it to collected_normal_text_parts and break.
  3. After the loop, if it exited normally, any remaining text from safe_idx to len(action_text) should be added.
  4. Join collected_normal_text_parts to form the final normal_text.

This would ensure all parts of action_text are accounted for, either as a parsed tool call or as part of the normal text output.

        collected_normal_text_parts = [normal_text]  # `normal_text` here is from the initial split
        decoder = json.JSONDecoder()
        current_idx = 0
        # safe_idx tracks the start of the current segment being processed.
        # It's updated to mark the beginning of text that will either be parsed as JSON
        # or appended to collected_normal_text_parts.
        safe_idx = 0 
        all_actions = []
        action_text_len = len(action_text)

        while current_idx < action_text_len:
            try:
                # Append text between the last safe_idx and current parsing position to normal_text_parts.
                # This captures text that was not part of a successful JSON parse in the previous step.
                if current_idx > safe_idx:
                    collected_normal_text_parts.append(action_text[safe_idx:current_idx])
                
                obj, end_offset = decoder.raw_decode(action_text[current_idx:])
                all_actions.append(obj)
                current_idx += end_offset
                safe_idx = current_idx # Mark the end of the successfully parsed JSON object.

                # Attempt to skip the tool_call_separator. The original code advanced idx by len(separator).
                # This assumes the separator is present. A more robust handling might be needed here (see other comment),
                # but for this suggestion, we'll stick to the PR's implicit consumption.
                # If the separator is considered part of the scannable action_text:
                if current_idx < action_text_len and action_text.startswith(self.tool_call_separator, current_idx):
                    current_idx += len(self.tool_call_separator)
                # If separator is not strictly required or can vary, this part needs more thought.
                # For now, this mimics the PR's advancement past an assumed separator.
                safe_idx = current_idx # Update safe_idx to be after the JSON and its assumed separator.

            except json.JSONDecodeError as e:
                logger.warning(
                    f"Failed to parse JSON part starting at index {current_idx}: {action_text[current_idx:]}, JSON parse error: {str(e)}"
                )
                # Append text from last safe_idx up to where the error occurred.
                if current_idx > safe_idx:
                    collected_normal_text_parts.append(action_text[safe_idx:current_idx])

                next_obj_start = action_text.find('{"name":', current_idx + 1)
                if next_obj_start != -1:
                    # The text from current_idx (where error occurred) to next_obj_start is the unparseable segment.
                    collected_normal_text_parts.append(action_text[current_idx:next_obj_start])
                    current_idx = next_obj_start
                    safe_idx = current_idx # Next attempt starts here; this becomes the new safe point.
                else:
                    # No more '{"name":' found. The rest of the string from current_idx is normal text.
                    collected_normal_text_parts.append(action_text[current_idx:])
                    safe_idx = action_text_len # All remaining text accounted for.
                    break # Exit loop
                continue

        # After the loop, if there's any remaining text from the last safe_idx to the end.
        if safe_idx < action_text_len:
            collected_normal_text_parts.append(action_text[safe_idx:])

        calls = self.parse_base_json(all_actions, tools) if all_actions else []
        final_normal_text = "".join(collected_normal_text_parts)
        return StreamingParseResult(
            normal_text=final_normal_text.strip(), calls=calls
        )

Collaborator Author


action_text = '{"name":"call1"}; intermediate text {"name":"call2"}; final text'
This is an extreme case; the model usually doesn't produce this kind of response. We can deal with it if such a case ever comes up; otherwise the logic becomes too complicated.

Even with the logic you suggested, collected_normal_text_parts may still miss text when the intermediate text itself contains malformed JSON. For instance, action_text = '{"name":"call1"}; {"malperformed, {"name":"call2"}; final text'
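
For reference, a standalone re-implementation of the loop from the diff above (plain Python, outside sglang, with a hypothetical separator of "; ") reproduces the behavior being discussed:

    import json

    # Re-implementation of the parsing loop from the diff above, only to
    # illustrate this edge case; the "; " separator is a hypothetical value.
    def split_calls(action_text, separator="; "):
        decoder = json.JSONDecoder()
        idx = 0
        safe_idx = idx
        all_actions = []
        while idx < len(action_text):
            try:
                obj, end = decoder.raw_decode(action_text[idx:])
                all_actions.append(obj)
                idx += end + len(separator)
                safe_idx = idx
            except json.JSONDecodeError:
                next_obj_start = action_text.find('{"name":', idx + 1)
                if next_obj_start == -1:
                    break
                idx = next_obj_start
                continue
        trailing_text = action_text[safe_idx:].strip() if safe_idx < len(action_text) else ""
        return all_actions, trailing_text

    calls, normal = split_calls('{"name":"call1"}; intermediate text {"name":"call2"}; final text')
    print(calls)   # [{'name': 'call1'}, {'name': 'call2'}]
    print(normal)  # 'final text' -- the 'intermediate text' segment is dropped, as discussed above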

@zhyncs zhyncs merged commit f18b068 into main May 31, 2025
1 of 20 checks passed
@zhyncs zhyncs deleted the chang/llama32-tool-call branch May 31, 2025 00:05
Edenzzzz pushed a commit to Edenzzzz/sglang that referenced this pull request Jun 2, 2025
Layssy pushed a commit to Layssy/sglang-iaas that referenced this pull request Jun 9, 2025
xwu-intel pushed a commit to xwu-intel/sglang that referenced this pull request Jun 17, 2025
walker-ai pushed a commit to walker-ai/sglang that referenced this pull request Jul 8, 2025
Successfully merging this pull request may close these issues.

[Feature] Tool Call Roadmap