The 42 AI Peer Assistant is an open-source project aimed at providing 24/7 assistance to students at 42. This AI-powered assistant is designed to complement the support provided by human peers by offering instant and reliable help whenever students need it. With a wide range of features and a comprehensive knowledge base, the assistant aims to enhance the learning experience and provide valuable guidance to students.
- Chat Interface: The project includes a user-friendly chat interface where students can interact with the AI Peer Assistant.
- Knowledge Base: The assistant is equipped with an extensive knowledge base that covers various topics, including 42 rules (Norminette, Moulinette), UNIX knowledge, Bash scripting, and more.
- User Profiles: Each student can create a personalized profile to customize their experience and track their interactions with the assistant.
- Authentication: The assistant provides secure user registration and login functionality using JWT tokens. Additionally, students can use single sign-on (SSO) login options via the 42 API and Google.
- LLM Chatbot Integration: The project integrates a Large Language Model (LLM) chatbot, built from schema, models, prompts, indexes, memory, chains, and agents. This enables the assistant to handle complex queries and provide more accurate responses.
- Mobile App (Beyond MVP): The project aims to develop a mobile app using Flutter for iOS and Android platforms, allowing students to access the assistant on their smartphones.
- Blockchain NFT Gated Content and Rewards (Beyond MVP): The assistant will leverage blockchain technology to gate certain content and provide rewards to users, enhancing engagement and incentivizing interactions within the app.
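The JWT-based authentication mentioned above can be illustrated with a minimal HS256 sign/verify sketch using Node's built-in `crypto` module. This is not the project's actual implementation — the payload fields and secret below are hypothetical, and a real deployment would typically use a maintained library such as `jsonwebtoken`:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// JWTs use base64url encoding without padding; Node supports this natively.
const b64url = (buf: Buffer): string => buf.toString("base64url");

// Sign a payload as an HS256 JWT (secret is illustrative; real code loads it from config).
function signToken(payload: object, secret: string): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const sig = b64url(createHmac("sha256", secret).update(`${header}.${body}`).digest());
  return `${header}.${body}.${sig}`;
}

// Verify the signature and return the decoded payload, or null if invalid.
function verifyToken(token: string, secret: string): Record<string, unknown> | null {
  const [header, body, sig] = token.split(".");
  if (!header || !body || !sig) return null;
  const expected = b64url(createHmac("sha256", secret).update(`${header}.${body}`).digest());
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  // Constant-time comparison to avoid timing side channels.
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString());
}
```

A token signed with one secret will fail verification under any other, which is what lets the server trust the login claims it issued.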
First, create a new `.env` file from `.env.example` and add your OpenAI API key (found here):
cp .env.example .env
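A `.env` file is just a list of `KEY=value` lines. As a rough illustration of what the app reads at startup (the project presumably relies on a library such as `dotenv` rather than this hand-rolled parser):

```typescript
// Parse the KEY=value lines of a .env file into a plain object.
// Comment lines (#) and blank lines are skipped; surrounding quotes are stripped.
function parseEnv(contents: string): Record<string, string> {
  const result: Record<string, string> = {};
  for (const line of contents.split(/\r?\n/)) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue;
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue;
    const key = trimmed.slice(0, eq).trim();
    let value = trimmed.slice(eq + 1).trim();
    value = value.replace(/^["']|["']$/g, ""); // strip surrounding quotes
    result[key] = value;
  }
  return result;
}
```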
- Node.js (v16 or higher)
- Yarn
- tsx
- wget (on macOS, you can install this with `brew install wget`)
- [Docker](https://www.docker.com/products/docker-desktop)
- Ollama
yarn
yarn dockerSetup
Next, we'll need to load our data source. Data ingestion happens in two steps: first, the data source is parsed into plain-text files in the `./data` directory for rapid LLM processing; then the data is ingested into our vector store, Weaviate.
Run `./initialize.sh`.

Note: If on Node v16, use `NODE_OPTIONS='--experimental-fetch' yarn ingest`.
This will parse the data, split the text, create embeddings, store them in a vectorstore, and then save it to the `data/` directory. We save it to a directory because we only want to run the (expensive) data ingestion process once. The Next.js server relies on the presence of the `data/` directory, so please make sure to run this before moving on to the next step.
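The "split text" step above breaks each document into overlapping chunks so every chunk fits in the embedding model's context window. A simplified sketch of such a splitter — the chunk size and overlap values are illustrative, and the actual pipeline would use a library splitter (e.g. LangChain's) rather than this:

```typescript
// Split a document into overlapping chunks for embedding.
// Overlap preserves context across chunk boundaries. Sizes are illustrative.
function splitText(text: string, chunkSize = 1000, overlap = 200): string[] {
  if (overlap >= chunkSize) throw new Error("overlap must be smaller than chunkSize");
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
    start += chunkSize - overlap; // step forward, keeping `overlap` chars of context
  }
  return chunks;
}
```

Each chunk is then embedded and stored with its vector, so the chatbot can retrieve only the most relevant passages at query time.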
Then, run the development server:
yarn dev
Open http://localhost:3000 with your browser to see the result.
- Persistent Chats: The chat interface allows users to save a limited number of conversations for future reference, reducing the need for repetitive inquiries and minimizing database costs.
- Multilingual Chat: The chatbot will answer in various languages according to the user's choice.
- Better UI / UX Experience
- VS Code / CLion Extension
Contributions to the 42 AI Peer Assistant project are welcome! If you have any ideas, bug fixes, or improvements, feel free to open an issue or submit a pull request.
This project is licensed under the MIT License. You are free to use, modify, and distribute the code in accordance with the terms of the license.
For any questions or inquiries regarding the 42 AI Peer Assistant project, please contact @juansimmendinger @mdabir1203 @jnspr @nachoGonz