
Commit 8441371

Merge branch 'main' into feature-generate-nl-outlines

2 parents: 5c992f2 + de7e8a4

505 files changed: +11132 additions, -4810 deletions

.changes/unreleased/Features-20240805-162901.yaml

Lines changed: 0 additions & 3 deletions
This file was deleted.

.changes/unreleased/Fixed and Improvements-20240810-221045.yaml

Lines changed: 0 additions & 4 deletions
This file was deleted.

.changes/unreleased/Fixed and Improvements-20240811-124728.yaml

Lines changed: 0 additions & 3 deletions
This file was deleted.

.changes/unreleased/Fixed and Improvements-20240812-125536.yaml

Lines changed: 0 additions & 3 deletions
This file was deleted.

.changes/v0.16.1.md

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
+## v0.16.1 (2024-08-27)
+
+### Notice
+* Starting from this version, we are utilizing websockets for features that require streaming (e.g., Answer Engine and Chat Side Panel). If you are deploying tabby behind a reverse proxy, you may need to configure the proxy to support websockets.
+
+### Features
+
+* Discussion threads in the Answer Engine are now persisted, allowing users to share threads with others.
+
+### Fixed and Improvements
+
+* Fixed an issue where the llama-server subprocess was not being reused when reusing a model for Chat / Completion together (e.g., Codestral-22B) with the local model backend.
+* Updated llama.cpp to version b3571 to support the jina series embedding models.
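
A note on the websockets notice above: when Tabby sits behind a reverse proxy, the proxy must forward websocket upgrade requests. A minimal sketch for nginx, assuming Tabby listens on `localhost:8080` (the address and the block shown are illustrative assumptions, not part of this commit):

```nginx
# Minimal sketch, assuming Tabby listens on localhost:8080 behind nginx;
# address and location are illustrative, not taken from this commit.
location / {
    proxy_pass http://localhost:8080;
    # Websocket upgrade headers, required for streaming features
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```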

.changes/v0.17.0.md

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
+## v0.17.0 (2024-09-10)
+
+### Notice
+
+* We've reworked the `Web` (a beta feature) context provider into the `Developer Docs` context provider. Previously added context in the `Web` tab has been cleared and needs to be manually migrated to `Developer Docs`.
+
+### Features
+
+* Extensive rework has been done in the answer engine search box.
+  - Developer Docs / Web search is now triggered by `@`.
+  - Repository Context is now selected using `#`.
+
+* Supports OCaml

.github/workflows/docker.yml

Lines changed: 1 addition & 0 deletions
@@ -8,6 +8,7 @@ on:
     tags:
       - "v*"
       - "!*-dev.*"
+      - "!vscode@*"
 
 concurrency:
   group: ${{ github.workflow }}-${{ github.head_ref || github.ref_name }}

.github/workflows/release-vscode.yml

Lines changed: 54 additions & 0 deletions
@@ -0,0 +1,54 @@
+name: Release vscode extension
+
+on:
+  workflow_dispatch:
+  push:
+    tags:
+      - 'vscode@*'
+
+concurrency:
+  group: ${{ github.workflow_ref }}-${{ github.head_ref || github.ref_name }}
+
+  # If this is enabled it will cancel current running and start latest
+  cancel-in-progress: true
+
+jobs:
+  publish-vscode:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v4
+        with:
+          lfs: true
+
+      - name: Install Node.js
+        uses: actions/setup-node@v4
+        with:
+          node-version: 18
+
+      - uses: pnpm/action-setup@v4
+        name: Install pnpm
+        with:
+          version: 9
+          run_install: false
+
+      - name: Get pnpm store directory
+        shell: bash
+        run: |
+          echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
+
+      - uses: actions/cache@v4
+        name: Setup pnpm cache
+        with:
+          path: ${{ env.STORE_PATH }}
+          key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
+          restore-keys: |
+            ${{ runner.os }}-pnpm-store-
+
+      - name: Install dependencies
+        run: pnpm install
+
+      - name: Publish
+        run: cd clients/vscode && pnpm run $(node scripts/publish.cjs)
+        env:
+          VSCE_PAT: ${{ secrets.VSCE_PAT }}
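
This workflow runs when a tag matching `vscode@*` is pushed (or on manual dispatch). As a usage sketch, a release could be cut like this; the version in the tag name is made up for illustration:

```sh
# Illustrative only: the tag version is an assumption, not from this commit.
git tag vscode@1.10.0
git push origin vscode@1.10.0
```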

.github/workflows/release.yml

Lines changed: 1 addition & 0 deletions
@@ -7,6 +7,7 @@ on:
       - 'v*'
       - 'nightly'
       - "!*-dev.*"
+      - '!vscode@*'
   pull_request:
     branches: [ "main" ]
     paths:

CHANGELOG.md

Lines changed: 29 additions & 0 deletions
@@ -5,6 +5,34 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html),
 and is generated by [Changie](https://github.com/miniscruff/changie).
 
+## v0.17.0 (2024-09-10)
+
+### Notice
+
+* We've reworked the `Web` (a beta feature) context provider into the `Developer Docs` context provider. Previously added context in the `Web` tab has been cleared and needs to be manually migrated to `Developer Docs`.
+
+### Features
+
+* Extensive rework has been done in the answer engine search box.
+  - Developer Docs / Web search is now triggered by `@`.
+  - Repository Context is now selected using `#`.
+
+* Supports OCaml
+
+## v0.16.1 (2024-08-27)
+
+### Notice
+* Starting from this version, we are utilizing websockets for features that require streaming (e.g., Answer Engine and Chat Side Panel). If you are deploying tabby behind a reverse proxy, you may need to configure the proxy to support websockets.
+
+### Features
+
+* Discussion threads in the Answer Engine are now persisted, allowing users to share threads with others.
+
+### Fixed and Improvements
+
+* Fixed an issue where the llama-server subprocess was not being reused when reusing a model for Chat / Completion together (e.g., Codestral-22B) with the local model backend.
+* Updated llama.cpp to version b3571 to support the jina series embedding models.
+
 ## v0.15.0 (2024-08-08)
 
 ### Features
@@ -17,6 +45,7 @@ and is generated by [Changie](https://github.com/miniscruff/changie).
 * For linked GitHub repositories, issues and PRs are now only returned when the repository is selected.
 * Fixed GitLab issues/MRs indexing - no longer panics if the description field is null.
 * When connecting to localhost model servers, proxy settings are now skipped.
+* Allow set code completion's `max_input_length` and `max_output_tokens` in config.toml
 
 ## v0.14.0 (2024-07-23)
 
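The added v0.15.0 changelog line above mentions making code completion's `max_input_length` and `max_output_tokens` configurable via config.toml. A minimal hedged sketch of what such a configuration could look like; the `[completion]` table name and the values are assumptions, not taken from this commit:

```toml
# Hypothetical config.toml excerpt; the table name and values shown
# are assumptions for illustration, not taken from this commit.
[completion]
max_input_length = 1536   # cap on prompt context sent to the completion model
max_output_tokens = 64    # cap on tokens generated per completion request
```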

Cargo.lock

Lines changed: 29 additions & 18 deletions
Some generated files are not rendered by default.

Cargo.toml

Lines changed: 5 additions & 2 deletions
@@ -13,15 +13,17 @@ members = [
     "crates/http-api-bindings",
     "crates/llama-cpp-server",
     "crates/ollama-api-bindings",
+    "crates/tabby-index-cli",
 
     "ee/tabby-webserver",
     "ee/tabby-db",
     "ee/tabby-db-macros",
     "ee/tabby-schema",
+    "crates/hash-ids",
 ]
 
 [workspace.package]
-version = "0.17.0-dev.0"
+version = "0.18.0-dev.0"
 edition = "2021"
 authors = ["TabbyML Team"]
 homepage = "https://github.com/TabbyML/tabby"
@@ -54,7 +56,7 @@ juniper = "0.16"
 chrono = "0.4"
 reqwest-eventsource = "0.6"
 serial_test = "3.0.0"
-hash-ids = "0.2.1"
+hash-ids = { path = "./crates/hash-ids" }
 ignore = "0.4.20"
 nucleo = "0.5.0"
 url = "2.5.0"
@@ -67,6 +69,7 @@ insta = "1.34.0"
 logkit = "0.3"
 async-openai = "0.20"
 tracing-test = "0.2"
+clap = "4.3.0"
 
 [workspace.dependencies.uuid]
 version = "1.3.3"

MODEL_SPEC.md

Lines changed: 3 additions & 3 deletions
@@ -1,10 +1,10 @@
-# Tabby Model Specification (Unstable)
+# Tabby Model Specification
 
 Tabby organizes the model within a directory. This document provides an explanation of the necessary contents for supporting model serving.
 The minimal Tabby model directory should include the following contents:
 
 ```
-ggml/
+ggml/model.gguf
 tabby.json
 ```
 
@@ -29,4 +29,4 @@ The **chat_template** field is optional. When it is present, it is assumed that
 
 This directory contains binary files used by the [llama.cpp](https://github.com/ggerganov/llama.cpp) inference engine. Tabby utilizes ggml for inference on `cpu`, `cuda` and `metal` devices.
 
-Currently, only `q8_0.v2.gguf` (or, starting with 0.11, `model.gguf`) in this directory is in use. You can refer to the instructions in llama.cpp to learn how to acquire it.
+Currently, only `model.gguf` in this directory is in use. You can refer to the instructions in llama.cpp to learn how to acquire it.
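
For reference, the `tabby.json` file required by the spec above carries the model's metadata, such as a template for fill-in-the-middle completion and the optional `chat_template` field discussed in the document. A hypothetical minimal example, using a CodeLlama-style FIM template purely as an illustration (the field value is an assumption, not taken from this commit):

```json
{
  "prompt_template": "<PRE> {prefix} <SUF>{suffix} <MID>"
}
```

Here `{prefix}` and `{suffix}` would be substituted with the code before and after the cursor.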
