
fix(memory): serialize concurrent add() calls with asyncio.Lock#1428

Open
octo-patch wants to merge 1 commit into agentscope-ai:main from octo-patch:fix/issue-1381-sqlalchemy-memory-concurrent-add

Conversation

@octo-patch

Fixes #1381

Problem

When ReActAgent runs with parallel_tool_calls=True, multiple _acting coroutines invoke AsyncSQLAlchemyMemory.add() concurrently. Without serialization:

  1. Coroutine A checks: "Does message X exist?" → No (not yet committed)
  2. Coroutine B checks: "Does message X exist?" → No (same DB state)
  3. Both decide to insert message X
  4. One succeeds; the other fails with:
sqlalchemy.exc.IntegrityError: (pymysql.err.IntegrityError) (1062,
"Duplicate entry '...' for key 'PRIMARY'")

The same race condition affects _get_next_index(): two concurrent calls can read the same max index and both attempt inserts with the same index value.
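The race can be reproduced without a database. The sketch below is a toy stand-in (all names are illustrative, not the library's API): the `await` between the duplicate check and the insert is the yield point where a second coroutine observes the same stale state, so both pass the check and one hits the "primary key" constraint.

```python
import asyncio

class FakeMemory:
    """Toy stand-in for AsyncSQLAlchemyMemory; names are illustrative."""
    def __init__(self):
        self.rows = set()
        self.duplicate_errors = 0

    async def add(self, msg_id: str):
        # 1. Check-then-act: read the "DB" state...
        exists = msg_id in self.rows
        await asyncio.sleep(0)        # yield point: another coroutine runs here
        # 2. ...then write based on the now-stale check.
        if not exists:
            if msg_id in self.rows:   # stands in for the PRIMARY KEY constraint firing
                self.duplicate_errors += 1
                return
            self.rows.add(msg_id)

async def main():
    mem = FakeMemory()
    await asyncio.gather(mem.add("x"), mem.add("x"))
    return mem.duplicate_errors

print(asyncio.run(main()))  # 1 — both coroutines passed the check, one "insert" failed
```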

Solution

Add an asyncio.Lock to AsyncSQLAlchemyMemory and wrap the entire check-then-write section of add() (duplicate check → index read → message insert → mark insert → commit) in async with self._lock:. This serializes concurrent add() calls per memory instance, eliminating the TOCTOU race condition.
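A minimal sketch of the fix, again with a toy in-memory store rather than the real SQLAlchemy session (method and attribute names are simplified): holding a per-instance `asyncio.Lock` across the whole check-then-write section makes the same yield point harmless, because no other `add()` can start its check until the lock is released.

```python
import asyncio

class LockedMemory:
    """Sketch of the fix: per-instance asyncio.Lock around check-then-write."""
    def __init__(self):
        self.rows = set()
        self._lock = asyncio.Lock()

    async def add(self, msg_id: str):
        async with self._lock:        # serializes concurrent add() calls
            if msg_id in self.rows:   # duplicate check
                return
            await asyncio.sleep(0)    # same yield point as before, now harmless
            self.rows.add(msg_id)     # "insert" and "commit" stay inside the lock

async def main():
    mem = LockedMemory()
    await asyncio.gather(*(mem.add("x") for _ in range(10)))
    return len(mem.rows)

print(asyncio.run(main()))  # 1 — a single row despite 10 concurrent calls
```

Note the lock lives on the instance, matching the "per memory instance" scope described above: two separate memory objects never contend with each other.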

Testing

  • Root cause confirmed: asyncio.Lock prevents concurrent coroutines from interleaving their read-check-insert-commit sequences.
  • Only per-instance serialization is needed: callers that use separate AsyncSQLAlchemyMemory instances are not affected.
  • The lock does not affect the common single-coroutine case (no contention).
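The index race mentioned above can be exercised the same way. This hedged sketch mimics the `_get_next_index()`-style "read max, then insert" logic under the lock (the class and the yield point are hypothetical, not the library's implementation); without the lock, concurrent callers would read the same max and collide.

```python
import asyncio

class IndexedMemory:
    """Toy model of index assignment under the per-instance lock."""
    def __init__(self):
        self.indices: list[int] = []
        self._lock = asyncio.Lock()

    async def add(self, _msg: str):
        async with self._lock:
            # Without the lock, a concurrent caller could read the same max here.
            next_index = (max(self.indices) + 1) if self.indices else 0
            await asyncio.sleep(0)    # yield point between "read max" and "insert"
            self.indices.append(next_index)

async def main():
    mem = IndexedMemory()
    await asyncio.gather(*(mem.add(f"m{i}") for i in range(5)))
    return sorted(mem.indices)

print(asyncio.run(main()))  # [0, 1, 2, 3, 4] — no duplicate indices
```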

…agentscope-ai#1381)

When ReActAgent runs with parallel_tool_calls=True, multiple _acting
coroutines can invoke AsyncSQLAlchemyMemory.add() concurrently. Without
serialization, two coroutines can both pass the skip_duplicated check
(both observe the same DB state before either has committed), then both
attempt to insert the same message, causing an IntegrityError on the
primary key.

Add an asyncio.Lock around the read-check-then-write section of add()
so that concurrent calls are serialized per memory instance. The lock
wraps the duplicate check, index assignment, message insert, mark insert,
and commit as one atomic critical section.


Development

Successfully merging this pull request may close these issues.

[Bug]: Duplicate Entry Error in AsyncSQLAlchemyMemory with parallel_tool_calls=True
