
debugpy raises TypeError: an integer is required in _pydevd_sys_monitoring\_pydevd_sys_monitoring_cython.pyx", line 1367 #1733

Closed
@xiezhipeng-git

Description



Environment data

  • debugpy version: 1.8.8
  • OS and version: windows
  • Python version (& distribution if applicable, e.g. Anaconda): 3.12
  • Using VS Code or Visual Studio: VS Code

Actual behavior

Exception occurred: TypeError (note: full exception trace is shown but execution is paused at: _run_module_as_main)
an integer is required
File "/root/anaconda3/lib/python3.12/site-packages/_pydevd_sys_monitoring\_pydevd_sys_monitoring_cython.pyx", line 1367, in _pydevd_sys_monitoring_cython._jump_event
File "", line 69, in cfunc.to_py.__Pyx_CFunc_7f6725__29_pydevd_sys_monitoring_cython_object__lParen__etc_to_py_4code_11from_offset_9to_offset.wrap
File "/root/anaconda3/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 190, in build_async_engine_client_from_engine_args
try:
File "/root/anaconda3/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 108, in build_async_engine_client
async with build_async_engine_client_from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 556, in run_server
async with build_async_engine_client(args) as engine_client:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/lib/python3.12/site-packages/uvloop/__init__.py", line 61, in wrapper
return await main
^^^^^^^^^^
File "/root/anaconda3/lib/python3.12/site-packages/uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
return future.result()
File "/root/anaconda3/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/root/anaconda3/lib/python3.12/site-packages/uvloop/__init__.py", line 109, in run
return __asyncio.run(
^^^^^^^^^^^^^^
File "/root/anaconda3/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 593, in <module>
uvloop.run(run_server(args))
File "/root/anaconda3/lib/python3.12/runpy.py", line 88, in _run_code
exec(code, run_globals)
File "/root/anaconda3/lib/python3.12/runpy.py", line 198, in _run_module_as_main (Current frame)
return _run_code(code, main_globals, None,
TypeError: an integer is required

Expected behavior

Debugging the vllm api_server should work.

Steps to reproduce:

{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Start VLLM API Server",
            "type": "debugpy",
            "request": "launch",
            "program": "/root/anaconda3/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py",
            "args": [
                "--model", "/mnt/d/Users/Admin/.cache/modelscope/hub/Qwen/Qwen2___5-0___5B-Instruct",
                "--served-model-name", "/mnt/d/Users/Admin/.cache/modelscope/hub/Qwen/Qwen2___5-0___5B-Instruct",
                // "--model", "/mnt/d/my/work/LLM/AI-MO-prize_1/model_8bit",
                // "--served-model-name", "/mnt/d/my/work/LLM/AI-MO-prize_1/model_8bit",
                "--trust-remote-code",
                "--host", "0.0.0.0",
                "--port", "45001",
                "--tensor-parallel-size", "1",
                "--gpu-memory-utilization", "0.95",
                "--max-num-seqs", "256",
                "--enforce-eager",
                "--disable-log-requests",
                "--disable-log-stats",
                "--max-model-len", "4096",
            ],
            // "env": {
            //     "CUDA_VISIBLE_DEVICES": "0"
            // },
            "console": "integratedTerminal"
        }
    ]
}
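For reference, a roughly equivalent reproduction from the command line is to run the same entry point as a module under debugpy's CLI and attach from VS Code instead of using a "launch" configuration. This is a sketch, not part of the original report: the debug port 5678 is an arbitrary choice, and the argument list is abbreviated from the launch.json above.

```shell
# Sketch: start the vLLM OpenAI API server under debugpy, waiting for a
# VS Code "attach" session on port 5678 before executing the module.
python -m debugpy --listen 5678 --wait-for-client \
    -m vllm.entrypoints.openai.api_server \
    --model /mnt/d/Users/Admin/.cache/modelscope/hub/Qwen/Qwen2___5-0___5B-Instruct \
    --trust-remote-code \
    --host 0.0.0.0 --port 45001 \
    --enforce-eager \
    --max-model-len 4096
```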

In this issue
vllm-project/vllm#10116
when using a 7B model I cannot get any information, so I wanted to debug it, but debugging fails.

Labels

needs repro: Issue has not been reproduced yet
