# ChangeLog

## [2025-06-07]

### `llama-index-core` [0.12.41]

- feat: Add MutableMappingKVStore for easier caching (#18893)
- fix: async functions in tool specs (#19000)
- fix: properly apply file limit to SimpleDirectoryReader (#18983)
- fix: overwriting of LLM callback manager from Settings (#18951)
- fix: Add a warning to the JsonPickleSerializer docstring advising users to deserialize only trusted data; rename to PickleSerializer (#18943)
- fix: ImageDocument path and URL checking to ensure that the input is really an image (#18947)
- chore: remove some unused utils from core (#18985)

### `llama-index-embeddings-azure-openai` [0.3.8]

- fix: Azure api-key and azure-endpoint resolution fixes (#18975)
- fix: api_base vs azure_endpoint resolution fixes (#19002)

### `llama-index-graph-stores-ApertureDB` [0.1.0]

- feat: ApertureDB property graph store (#18749)

### `llama-index-indices-managed-llama-cloud` [0.7.4]

- fix: resolve retriever for LlamaCloud index (#18949)
- chore: add ReRankConfig to composite retrieval (#18973)

### `llama-index-llms-azure-openai` [0.3.4]

- fix: api_base vs azure_endpoint resolution fixes (#19002)

### `llama-index-llms-bedrock-converse` [0.7.1]

- fix: handle empty message content to prevent ValidationError (#18914)

### `llama-index-llms-litellm` [0.5.1]

- feat: Add DocumentBlock support to LiteLLM integration (#18955)

### `llama-index-llms-ollama` [0.6.2]

- feat: Add support for the new think feature in Ollama (#18993)

### `llama-index-llms-openai` [0.4.4]

- feat: add OpenAI JSON Schema structured output support (#18897)
- fix: skip tool description length check in OpenAI Responses API (#18956)

### `llama-index-packs-searchain` [0.1.0]

- feat: Add searchain package (#18929)

### `llama-index-readers-docugami` [0.3.1]

- fix: Avoid hash collisions in XML parsing (#18986)

### `llama-index-readers-file` [0.4.9]

- fix: pin pandas in llama-index-readers-file for now (#18976)

### `llama-index-readers-gcs` [0.4.1]

- feat: Allow newer versions of gcsfs (#18987)

### `llama-index-readers-obsidian` [0.5.2]

- fix: Obsidian reader checks for and skips hardlinks (#18950)

### `llama-index-readers-web` [0.4.2]

- fix: Use httpx instead of urllib in llama-index-readers-web (#18945)

### `llama-index-storage-kvstore-postgres` [0.3.5]

- fix: Remove unnecessary psycopg2 dependency from llama-index-storage-kvstore-postgres (#18964)

### `llama-index-tools-mcp` [0.2.5]

- fix: actually format the workflow args into a start event instance (#19001)
- feat: Add support for log recording during MCP tool calls (#18927)

### `llama-index-vector-stores-chroma` [0.4.2]

- fix: Update ChromaVectorStore port field and argument types (#18977)

### `llama-index-vector-stores-milvus` [0.8.4]

- feat: Support upserting entities in Milvus (#18962)

### `llama-index-vector-stores-redis` [0.5.2]

- fix: Correct Redis URL/client handling (#18982)

### `llama-index-voice-agents-elevenlabs` [0.1.0-beta]

- feat: ElevenLabs beta integration (#18967)

## [2025-06-02]

### `llama-index-core` [0.12.40]