Describe the bug
The Knowledge Base integration in the RAG pipeline is not working. The AIConversation component only streams back the model's chat output, with no sign that embedding retrieval was performed.
To Reproduce
Steps to reproduce the behavior:
- Using the Next.js App Router, configure the Knowledge Base as described in https://docs.amplify.aws/react/ai/conversation/knowledge-base/ (the data-source wiring and resolver follow the documented pattern; sketches are included after these steps).
import { a } from "@aws-amplify/backend";

const schema = a.schema({
  knowledgeBase: a
    .query()
    .arguments({ input: a.string() })
    .handler(
      a.handler.custom({
        dataSource: "KnowledgeBaseDataSource",
        entry: "./kbResolver.js",
      }),
    )
    .returns(a.string())
    .authorization((allow) => allow.authenticated()),

  chat: a.conversation({
    aiModel: a.ai.model("Claude 3 Sonnet"),
    systemPrompt:
      "You are a knowledgeable pharmaceutical assistant specialized in medication regulations.",
    tools: [
      a.ai.dataTool({
        name: "searchDocumentation",
        description: "Searches pharmaceutical documentation.",
        query: a.ref("knowledgeBase"),
      }),
    ],
  }).authorization((allow) => allow.owner()),
});
- Generate a simple UI, similar to: Build a Travel Planner with React Native, AWS Amplify, and Amazon B...
- When the sendMessage function is called, the model response is streamed back without performing the database retrieval specified in the backend tool.
const [
  {
    data: { messages, conversation },
    isLoading,
  },
  sendMessage,
] = useAIConversation("chat", { id });
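For reference, the HTTP data source named "KnowledgeBaseDataSource" is wired up following the pattern in the linked docs; the sketch below is approximate (names and the broad policy scope are taken from the docs, not copied verbatim from the project):

// amplify/backend.ts — data-source wiring, following the documented Knowledge Base pattern
import { defineBackend } from "@aws-amplify/backend";
import { PolicyStatement } from "aws-cdk-lib/aws-iam";
import { auth } from "./auth/resource";
import { data } from "./data/resource";

const backend = defineBackend({ auth, data });

// HTTP data source pointing at the Bedrock Agent Runtime endpoint,
// signed with the data source's service role credentials.
const kbDataSource = backend.data.resources.graphqlApi.addHttpDataSource(
  "KnowledgeBaseDataSource",
  `https://bedrock-agent-runtime.${backend.data.stack.region}.amazonaws.com`,
  {
    authorizationConfig: {
      signingRegion: backend.data.stack.region,
      signingServiceName: "bedrock",
    },
  }
);

// Allow the data source role to call the Bedrock Retrieve API.
kbDataSource.grantPrincipal.addToPrincipalPolicy(
  new PolicyStatement({
    resources: ["*"],
    actions: ["bedrock:Retrieve"],
  })
);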
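The referenced ./kbResolver.js is likewise based on the documented example: an AppSync JS resolver that calls the Bedrock Retrieve API. The knowledge base ID below is a placeholder, not the real one:

// kbResolver.js — resolver for the knowledgeBase query, based on the documented example.
// KNOWLEDGE_BASE_ID is a placeholder.
export function request(ctx) {
  const { input } = ctx.args;
  return {
    resourcePath: "/knowledgebases/KNOWLEDGE_BASE_ID/retrieve",
    method: "POST",
    params: {
      headers: {
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        retrievalQuery: {
          text: input,
        },
      }),
    },
  };
}

export function response(ctx) {
  // Return the raw retrieval response body to the conversation tool.
  return JSON.stringify(ctx.result.body);
}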
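On the client, the hook output is passed straight to the stock AIConversation component with nothing custom around sendMessage; a minimal sketch (not the exact project code):

// Minimal client sketch (Next.js App Router); assumes hooks generated via createAIHooks.
"use client";

import { generateClient } from "aws-amplify/api";
import { AIConversation, createAIHooks } from "@aws-amplify/ui-react-ai";
import type { Schema } from "@/amplify/data/resource";

const client = generateClient<Schema>({ authMode: "userPool" });
const { useAIConversation } = createAIHooks(client);

export default function Chat({ id }: { id: string }) {
  const [
    {
      data: { messages },
      isLoading,
    },
    sendMessage,
  ] = useAIConversation("chat", { id });

  return (
    <AIConversation
      messages={messages}
      isLoading={isLoading}
      handleSendMessage={sendMessage}
    />
  );
}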
Response received:
{
  "data": {
    "listConversationMessageChats": {
      "items": [
        {
          "id": "8b1f9709-3776-48f2-ab93-c40386d66e10",
          "conversationId": "48c6b6f6-66fc-43ef-9f14-15596388bf7e",
          "role": "user",
          "content": [
            {
              "text": "Europe regulations for pharma?",
              "document": null,
              "image": null,
              "toolResult": null,
              "toolUse": null
            }
          ],
          "aiContext": null,
          "toolConfiguration": null,
          "createdAt": "2025-03-28T07:48:39.535Z",
          "updatedAt": "2025-03-28T07:48:39.535Z",
          "owner": "720564b4-b0c1-70d8-960b-25becf38c7d5"
        }
      ],
      "nextToken": null,
      "__typename": "ModelConversationMessageChatConnection"
    }
  }
}
The following error is logged:
"The model returned the following errors: Your API request included an assistant message in the final position, which would pre-fill the assistant response. When using tools, pre-filling the assistant response is not supported."
Expected behavior
Expected the retrieval step to be invoked before model inference. However, only the model inference step is performed, without any prior embedding similarity search.
Desktop (please complete the following information):
- OS: iOS
- Browser: Firefox
Additional context
Region: eu-west-1
Discord chat: https://discord.com/channels/705853757799399426/1353857342868947054