AGILAB is a reproducible AI/ML workbench for engineering teams. It turns notebooks and scripts into controlled, executable apps with:
- one-command setup
- controlled environments
- local or distributed execution
- visible experiment evidence
- optional MLflow integration
AGILAB complements MLflow and production MLOps platforms. It owns the reproducible execution and analysis layer around them. In short: MLflow tracks experiments; AGILAB transforms notebooks and scripts into reproducible executable AI applications.
Notebook/script → AGILAB app → execution (local/distributed) → MLflow → Streamlit UI
Start with the public browser preview or the demo chooser:
```bash
uv --preview-features extra-build-dependencies tool install --upgrade "agilab[ui]"
agilab
```

For a zero-install browser preview, open the public
AGILAB Space. It opens the
lightweight flight_telemetry_project path by default and exposes the
weather_forecast_project notebook-migration demo with forecast analysis views.
Advanced scenarios such as mission_decision_project,
execution_pandas_project, execution_polars_project, and
uav_relay_queue_project are collected in the
Advanced Proof Pack.
The default hosted flight journey covers PROJECT, ORCHESTRATE, WORKFLOW,
and ANALYSIS, including bundled flight analysis views.
If startup fails, run a progressive fallback:
```bash
agilab dry-run
agilab first-proof --json --with-ui
```

`agilab dry-run` is the fast alias for `agilab first-proof --dry-run`; it
verifies CLI/core readiness only. `agilab first-proof --json --with-ui` runs
the local onboarding contract, including manifest generation for the UI path.
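A progressive fallback like this can be scripted; below is a minimal sketch where each runner stands in for one of the CLI invocations above (the real `subprocess` calls appear only in the docstring, since they require an installed `agilab`):

```python
import subprocess
from typing import Callable, Sequence

def progressive_fallback(steps: Sequence[tuple[str, Callable[[], int]]]) -> str:
    """Run each diagnostic in order; return the label of the first success.

    Each step pairs a label with a runner returning a process exit code, e.g.
    ("dry-run", lambda: subprocess.call(["agilab", "dry-run"])).
    Returns "none" when every step fails.
    """
    for label, runner in steps:
        if runner() == 0:
            return label
    return "none"
```

The ordering matters: the cheap readiness check runs first, and the heavier onboarding contract only runs when the fast path fails or when you need UI evidence.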
| Capability | Status |
|---|---|
| Local run | Stable |
| Distributed (Dask) | Stable |
| Streamlit UI | Beta |
| MLflow | Beta |
| Production | Experimental |
| RL examples | Example available |
AGILAB is most mature in the bridge between notebook experimentation and reproducible AI applications: local execution, environment control, and analysis. Distributed execution is mature in the core runtime; remote cluster mounts, credentials, and hardware stacks remain environment-dependent. Production-grade MLOps features are delivered through integrations and are not yet a packaged platform claim.
AGILAB should be adopted as an experimentation and validation workbench first. Apply these boundaries before deploying it in sensitive environments:
| Boundary | Covered scenarios | Required controls |
|---|---|---|
| Safe for production-like use | Local research sandboxes, internal demos, notebook-to-app migration, reproducible validation with non-sensitive data. | Normal repository hygiene and local proof evidence. |
| Conditional use only | Shared team workspaces, SSH/Dask clusters, external apps, LLM connectors, or sensitive datasets. | Per-user isolation, explicit secrets management, TLS/auth for exposed services, SBOM plus vulnerability scan evidence, and a deployment threat model. |
| Not safe as-is | Sole production MLOps control plane, public Streamlit exposure, regulated production model serving, enterprise governance, online monitoring, drift detection, or audit-trail ownership. | Pair AGILAB with a hardened production stack such as MLflow/Kubeflow/SageMaker/Dagster/Airflow or an internal platform. |
For shared adoption, run:

```bash
agilab security-check --profile shared --json
```

Use `--strict` or `AGILAB_SECURITY_CHECK_STRICT=1` when missing controls should
block the gate. The stricter profiles check app-repository allowlists, public UI
bind controls, cluster-share isolation, generated-code execution boundaries,
plaintext local secrets, and profile-specific SBOM / `pip-audit` evidence.
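A minimal sketch of strict gating on such a report; the JSON shape used here (`checks` entries with `name`/`ok` fields) is an assumption for illustration, not the CLI's actual schema:

```python
import os

def gate(report: dict, strict: bool = False) -> int:
    """Return an exit code for a parsed security-check report.

    NOTE: the {"checks": [{"name": ..., "ok": ...}]} shape is hypothetical;
    inspect the real `agilab security-check --json` output for field names.
    """
    failing = [c["name"] for c in report.get("checks", []) if not c.get("ok")]
    if not failing:
        return 0
    # Honor the documented environment toggle alongside the explicit flag.
    if strict or os.environ.get("AGILAB_SECURITY_CHECK_STRICT") == "1":
        return 1
    return 0
```

In CI, something like `sys.exit(gate(json.load(report_file), strict=True))` would turn missing controls into a blocking pipeline failure.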
The public package is intentionally profile-based so operators can install only what they need:
| Profile | Dependency scope | Use when |
|---|---|---|
| Base package | `agilab` plus `agi-core`, which wires `agi-env`, `agi-node`, and `agi-cluster`. This includes the core local/distributed runtime dependencies but not the built-in app or page-bundle payload. | CLI/core tooling, source-checkout validation, and worker-runtime development. |
| `ui` extra | Streamlit UI, page helpers, pandas/network graph utilities, `agi-apps`, and the `agi-pages` provider. Promoted app payload packages are on PyPI; page bundles and unpromoted payloads remain release artifacts until their publishers are configured. | Running the local product UI with the packaged runtime and optional public demo assets. |
| `examples` extra | `agi-apps` app catalog/examples plus notebook/demo helper dependencies such as JupyterLab and optional plotting packages. | Running packaged notebooks, demos, learning examples, and package first-proof routes. |
| `pages` extra | `agi-pages` page-provider helpers without the full UI profile. | Installing or validating sidecar page-bundle discovery separately from built-in app projects. |
| `agents` extra | API client dependency boundary for packaged agent workflow helpers. | Reproducible coding-agent and assistant-backed workflows. |
| `mlflow` extra | MLflow tracking integration. | Recording runs, metrics, artifacts, or model registry handoff evidence. |
| `ai` and `viz` extras | API LLM clients and optional plotting packages. | Assistant-backed workflows or richer visual analysis. |
| `local-llm` / `offline` extras | Local/offline model stacks such as Torch, Transformers, GPT-OSS, and MLX where supported. | Isolated local-model experiments; expect a larger supply-chain and hardware footprint. |
| `dev` extra | Contributor test/build/audit tooling only. | Validating a source checkout or release candidate; avoid it for runtime installs. |
Agent workflows can now produce AGILAB evidence directly:

```bash
agilab agent-run --agent codex --label "Review current diff" -- codex review
```

This executes a local coding-agent command and writes a redacted
`agilab.agent_run.v1` manifest plus local stdout/stderr artifacts under
`~/log/agents/`. Command arguments are redacted by default and represented by
an argv hash; pass `--include-command-args` only when the prompt/arguments are
safe to store.
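The argv-hash redaction idea can be illustrated with a small sketch; the sha256-over-JSON digest and the field names are assumptions for illustration, not the actual `agilab.agent_run.v1` schema:

```python
import hashlib
import json

def redacted_entry(argv: list[str], include_args: bool = False) -> dict:
    """Sketch of argv redaction: store only a digest unless the caller opts in
    (mirroring the --include-command-args flag described above).

    The digest scheme and key names here are hypothetical; the real manifest
    format may differ.
    """
    digest = hashlib.sha256(json.dumps(argv).encode()).hexdigest()
    entry = {"argv_sha256": digest, "argc": len(argv)}
    if include_args:
        entry["argv"] = argv  # only when the prompt/arguments are safe to store
    return entry
```

The digest lets two manifests be compared for "same command" without ever persisting the prompt text itself.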
Cluster/Dask dependencies are currently part of the base package through
`agi-core`; a smaller cluster-specific package split is a packaging roadmap item,
not a current release claim.
Release and adoption supply-chain evidence is explicit: Dependabot watches
Python and GitHub Actions manifests, release workflows publish per-profile
pip-audit JSON and CycloneDX SBOM artifacts, and
`tools/profile_supply_chain_scan.py` can regenerate the same profile evidence
locally. PyPI publication uses Trusted Publishing/OIDC and the release workflow
runs `tools/pypi_provenance_check.py` after upload so missing PyPI attestations
fail before GitHub release assets are published.
AGILAB separates public claims by evidence type:
| Evidence type | What it proves | What it does not prove |
|---|---|---|
| Automated proof | Commands such as `agilab first-proof --json`, workflow parity checks, coverage, release proof, and UI robot evidence ran successfully. | Independent certification or coverage of every deployment topology. |
| Integration tests | A specific source path, package route, app, or workflow contract is exercised by tests. | Production SLA, security certification, or external operator acceptance. |
| Benchmarks | Timings for declared hardware, datasets, modes, and benchmark scripts. | General performance across arbitrary hardware, networks, or datasets. |
| Self-assessment | KPI scores such as production readiness and strategic potential are maintained from repository evidence. | External validation or third-party certification. |
| External validation | Only claimed when a named external artifact, reviewer, CI provider, or hosted demo proof is linked. | Implied endorsement beyond the linked evidence. |
AGILAB is a monorepo, but it is not a single stability surface:
| Area | Role | Stability contract |
|---|---|---|
| `src/agilab/core/agi-env`, `agi-node`, `agi-cluster`, `agi-core` | Runtime packages for environment setup, worker packaging, distributed execution, and the compact API. | Stable where documented; changes require focused regression evidence. |
| `src/agilab/lib/agi-gui`, `src/agilab/pages` | Streamlit UI and page helpers. | Beta product surface; useful for operators, still evolving. |
| `src/agilab/lib/agi-apps` | PyPI umbrella that carries app catalog/example assets and exact-pins the app payload packages already promoted to PyPI. | Packaged asset surface for the `ui` and `examples` extras. |
| `src/agilab/lib/agi-pages` | PyPI provider package for public analysis page discovery. Page payload packages are built as release artifacts until their PyPI publishers are configured. | Packaged page-provider surface for the `ui` and `pages` extras. |
| `src/agilab/apps/builtin` | Public built-in apps used for first proof, demos, workflow examples, and regression coverage. | Packaged examples, not enterprise deployment templates. |
| `src/agilab/examples` | Learning scripts, notebooks, and preview examples. | Educational material; optional helper dependencies live behind extras. |
| `tools`, `.github`, `pycharm`, `.codex`, `.claude`, `dev` | Contributor, release, agent, and IDE automation. | Maintainer tooling, not runtime API. |
| `docs/source` | Public documentation mirror. | Published docs source; canonical docs are synchronized before release. |
This split is intentional. Treat AGILAB as an AI engineering reproducibility workbench first: stable runtime contracts, beta UI, packaged examples, and maintainer automation live together so release proof can validate the same source tree users install from.
Local source checkouts can grow after runs because built-in apps can create
`.venv` directories, build outputs, caches, datasets, and local logs.
Those local artifacts are not the package contract. Public wheels are bounded
by `pyproject.toml` package-data rules and exclude virtual environments,
tests, `docs/html`, build directories, generated C files,
`__pycache__`, `.pyc`, and `.egg-info` artifacts.
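To spot-check a locally built wheel against that exclusion list, a small sketch (nothing AGILAB-specific is assumed beyond the artifact names quoted above; wheels are plain zip archives, so the standard `zipfile` module suffices):

```python
import io
import zipfile

# Artifact markers quoted from the packaging contract above.
EXCLUDED = ("__pycache__", ".pyc", ".egg-info")

def wheel_violations(wheel_file) -> list[str]:
    """Return wheel entries that match the exclusion list.

    `wheel_file` is a path or a file-like object holding a .whl archive.
    An empty result means the wheel respects these exclusions.
    """
    with zipfile.ZipFile(wheel_file) as wheel:
        return [name for name in wheel.namelist()
                if any(marker in name for marker in EXCLUDED)]
```

Running this over `dist/*.whl` after a local build gives a quick pass/fail before comparing against the published artifacts.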
Current packaging policy is conservative:
- Base `agilab` keeps CLI/core proof dependencies separate from UI, page bundles, examples, agents, MLflow, visualization, local-LLM, offline, and dev profiles.
- Promoted app payloads live in per-app packages such as `agi-app-mission-decision`, `agi-app-pandas-execution`, `agi-app-polars-execution`, `agi-app-flight-telemetry`, `agi-app-global-dag`, `agi-app-weather-forecast`, and `agi-app-uav-relay-queue`; `agi-apps` is the umbrella catalog/example package pulled in by the `ui` and `examples` extras.
- Public analysis page bundles use decoupled `agi-page-*` package names such as `agi-page-feature-attribution`; `agi-pages` is the provider package pulled in by the `ui` and `pages` extras.
- Larger optional stacks must stay behind extras, and release evidence must include SBOM / `pip-audit` data for the actual enabled profile.
- Further cluster/runtime splitting is a roadmap item; it is not claimed as complete in the current release.
- Preview the product quickly: AGILAB Space
- Understand notebook-to-app migration: Notebook Migration Demo
- Prove the full source-checkout flow: Source Checkout
- Verify a CLI-only package install: Published Package
- Contribute safely: Contributor onboarding
- Audit external apps and evidence: App Repository Updates and Release Proof
For a single-page adoption checklist, use ADOPTION.md.
AGILAB publishes from this repository, but each public surface has a distinct role:
| Surface | Meaning | Source of truth |
|---|---|---|
| `main` branch and root `pyproject.toml` | Active source checkout and next release candidate. It can move after a package has already been published. | GitHub source tree |
| Release tag | Immutable source snapshot used for a public release. Use this for reproducible source installs. | GitHub tag and GitHub Release |
| PyPI package | Latest installable public wheel/sdist for `agilab` and the `agi-*` packages. | PyPI project and PyPI version badge |
| Release proof | Public evidence tying the release tag, PyPI package version, docs, CI, coverage, and demo proof together. | Release Proof |
For development, use `main`. For reproducible release validation, use the
release tag or the PyPI package version recorded in the release proof.
Run the installable product path with the built-in flight_telemetry_project:
```bash
CHECKOUT="${AGILAB_CHECKOUT:-$HOME/agilab-src}"
git clone https://github.com/ThalesGroup/agilab.git "$CHECKOUT"
cd "$CHECKOUT"
./install.sh --install-apps
uv --preview-features extra-build-dependencies run streamlit run src/agilab/main_page.py
```

Follow the in-app pages from PROJECT to ORCHESTRATE, WORKFLOW, and
ANALYSIS. To collect the same check as JSON:

```bash
uv --preview-features extra-build-dependencies run agilab first-proof --json
```

The JSON proof writes `run_manifest.json` under `~/log/execute/flight_telemetry/`. For
installer flags, IDE run configs, and troubleshooting, use the Quick Start docs.
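A minimal sketch for consuming that proof manifest programmatically; only the `run_manifest.json` filename comes from this README, so treat any keys inside the returned dict as deployment-specific:

```python
import json
from pathlib import Path

def load_proof_manifest(log_root) -> dict:
    """Load run_manifest.json from an execute-log directory.

    Pass the directory that holds the manifest, e.g.
    ~/log/execute/flight_telemetry/. Raises FileNotFoundError with the
    resolved path when no proof has been written yet.
    """
    path = Path(log_root) / "run_manifest.json"
    if not path.is_file():
        raise FileNotFoundError(f"no proof manifest at {path}")
    return json.loads(path.read_text())
```

This is useful in CI: a missing manifest fails loudly instead of silently passing an empty proof step.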
For a CLI-only package smoke without Streamlit:
```bash
uv --preview-features extra-build-dependencies tool install --upgrade "agilab[examples]"
agilab first-proof --json --max-seconds 60
```

When `APPS_REPOSITORY` points at an external apps repository, rerun the
installer after app changes:

```bash
./install.sh --non-interactive --apps-repository /path/to/apps-repository --install-apps all
```

During an update, the apps repository is treated as the source of truth. If the
target app/page already exists as a real directory instead of a symlink, AGILAB
backs it up as `<name>.previous.<timestamp>`, then links the repository copy in
its place. After the update, AGILAB runs the repository version; the
`.previous` directory is kept only for manual recovery. See
Service mode and paths
for the full path contract.
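The backup-then-link rule can be sketched as follows; this mirrors the documented behavior but is not the installer's actual code, and the timestamp format is an assumption:

```python
import time
from pathlib import Path

def link_repository_copy(target: Path, repo_copy: Path):
    """Apply the documented update rule: back up a real directory as
    <name>.previous.<timestamp>, then symlink the repository copy in place.

    Returns the backup path when a backup was made, else None. Existing
    symlinks are simply re-pointed; the repository copy is always the
    source of truth after the update.
    """
    backup = None
    if target.is_dir() and not target.is_symlink():
        backup = target.with_name(f"{target.name}.previous.{int(time.time())}")
        target.rename(backup)  # keep the old real directory for manual recovery
    elif target.is_symlink():
        target.unlink()
    target.symlink_to(repo_copy, target_is_directory=True)
    return backup
```

Note the asymmetry: real directories are preserved as `.previous` backups, while stale symlinks are discarded, since their targets still exist elsewhere.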
The README is only the entry page. Detailed capability evidence, compatibility status, and roadmap scope live in:
Current public evaluation summary, refreshed from the public KPI bundle:
- `4.0 / 5` for ease of adoption, research experimentation, and engineering prototyping.
- `3.0 / 5` for production readiness.
- `4.2 / 5` for strategic potential.
- Overall public evaluation, rounded category average: `3.8 / 5`.
These are public experimentation-workbench scores, not production MLOps claims. They cover project setup, environment management, execution, and result analysis. The evidence and limits are maintained in the compatibility matrix and MLOps positioning. The strategic score movement rule is tracked in the strategic scorecard.