Need way to add proprietary aiContext for knowledgeBase chats via lambda or backend #602

@lets-getitnow

Description

I have proprietary information that needs to be passed into the context of an AIConversation that calls the knowledgeBase query via an AppSync resolver.

The proprietary information (PROPVAR) does not need to be sent to the knowledgeBase resolver itself, but it does need to be in the context for the main prompt.

I would strongly prefer for that information to be private and not publicly accessible.

The only technical solution I have found is to pass the proprietary information via the aiContext argument of the AIConversation on the front end, which means PROPVAR has to be exposed to the public.

This is far from a desired solution.
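For reference, the front-end workaround looks roughly like this (a minimal sketch using the Amplify UI AIConversation component; the propvar prop and its client-side fetch are my assumptions, and fetching PROPVAR on the client is exactly what exposes it):

import { generateClient } from "aws-amplify/api";
import { createAIHooks, AIConversation } from "@aws-amplify/ui-react-ai";
import type { Schema } from "../amplify/data/resource";

const client = generateClient<Schema>();
const { useAIConversation } = createAIHooks(client);

export function Chat({ propvar }: { propvar: string }) {
  const [
    {
      data: { messages },
      isLoading,
    },
    sendMessage,
  ] = useAIConversation("chat");

  return (
    <AIConversation
      messages={messages}
      isLoading={isLoading}
      handleSendMessage={sendMessage}
      // aiContext is assembled on the client and sent with each message,
      // so PROPVAR is necessarily visible to the browser
      aiContext={() => ({ propvar })}
    />
  );
}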

I tried driving the AIConversation from a backend Lambda via client.conversations.chat.create(), but it warned about a lack of JWT tokens.
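The attempt looked roughly like this (a sketch following the documented pattern for calling Amplify Data from a function; the function name my-function is a placeholder, and the failure reported above happens at the conversation call):

import { Amplify } from "aws-amplify";
import { generateClient } from "aws-amplify/data";
import { getAmplifyDataClientConfig } from "@aws-amplify/backend/function/runtime";
import { env } from "$amplify/env/my-function";
import type { Schema } from "../data/resource";

const { resourceConfig, libraryOptions } = await getAmplifyDataClientConfig(env);
Amplify.configure(resourceConfig, libraryOptions);

const client = generateClient<Schema>();

export const handler = async () => {
  // Model/query calls work with the function's IAM credentials, but the
  // conversation route warns about missing (user pool) JWT tokens here
  const { data: chat, errors } = await client.conversations.chat.create();
  if (errors) {
    console.error(errors);
  }
};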

Here is the general shape of my setup:

const schema = a.schema({
  // ********** AI **********
  knowledgeBase: a
    .query()
    .arguments({
      input: a.string(),
    })
    .handler(
      a.handler.custom({
        dataSource: "KnowledgeBaseDataSource",
        entry: "./resolvers/kbResolver.js",
      }),
    )
    .returns(a.string())
    .authorization((allow) => [allow.authenticated()]),

  chat: a.conversation({
    aiModel: a.ai.model("Claude 3.5 Haiku"),
    systemPrompt: systemPrompt,
    tools: [
      a.ai.dataTool({
        name: "backendSearchDocumentation",
        description: "Searches the knowledge base documentation",
        query: a.ref("knowledgeBase"),
      }),
    ],
  }).authorization((allow) => allow.owner()),
});
I can see some potential solutions (unless I've completely misunderstood your API/system, in which case I apologize, but after the number of days I've spent researching this, I'm slightly confident in this request).

1. Allow execution to occur inside the conversation definition:

chat: a.conversation({
    aiModel: a.ai.model("Claude 3.5 Haiku"),
    systemPrompt: systemPrompt,
    // ... arbitrary backend execution here ...

1a. That execution could be limited to the model calls, and specifically only to loading aiContext. Maybe something like:

chat: a.conversation({
    aiModel: a.ai.model("Claude 3.5 Haiku"),
    systemPrompt: systemPrompt,
    aiContext: client.models.PROPTABLE.list({
      filter: { userId: { eq: event.user.userId } },
    }).propvar,
  }),
This probably breaks the simplicity of chat being defined inside the schema, which leads to:

1b. Maybe expose a little more of chat: instead of (or in addition to) having it defined in the schema, upgrade it to a resolver so that finer-grained control is possible? See the hypothetical sketch below.
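To illustrate 1b, a purely hypothetical shape (the contextHandler option and the enrich-chat-context function do not exist in the current API; they are invented here only to show the idea):

import { defineFunction } from "@aws-amplify/backend";

// Hypothetical: a backend function that looks up PROPVAR for the calling user
const enrichChatContext = defineFunction({
  entry: "./enrich-chat-context.ts",
});

// Hypothetical schema usage: aiContext resolved server-side before each
// model call, instead of being accepted from the client
chat: a.conversation({
    aiModel: a.ai.model("Claude 3.5 Haiku"),
    systemPrompt: systemPrompt,
    contextHandler: enrichChatContext, // invented option, not a real API
  }),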

2. Allow the AI conversation to be run from a backend Lambda (as in the sketch above), since the Lambda already has IAM permission, without the strict JWT check.

I think this request makes a lot of sense. AIConversation + knowledge base is one of the only end-to-end RAG solutions out there, and it's a reason I want to stick with this architecture rather than rolling my own RAG.

Given this, any of the solutions proposed above (or any other that satisfies my feature request) would be huge in unlocking the true potential of this very powerful system. Thank you!

Labels: AI Kit (issues related to AI Kit), feature-request (new feature or request)