
Integration: Bytez as a model provider #12121


Open · wants to merge 9 commits into main

Conversation


@inf3rnus inf3rnus commented Jun 28, 2025

Bytez as a chat model provider

Title says it all!

Other changes include a typing fix in one file; I also included documentation on how to integrate as a model provider 😉
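
For context, here's roughly how the provider would get called through litellm's standard completion API once this lands. The bytez/ prefix, the BYTEZ_API_KEY variable, and the model id below are illustrative, following the usual litellm provider conventions rather than exact names from this PR:

import os
import litellm

# Illustrative only: the "bytez/" prefix, BYTEZ_API_KEY, and the model id
# are assumptions based on how other litellm providers are typically wired up.
os.environ["BYTEZ_API_KEY"] = "your-api-key"

response = litellm.completion(
    model="bytez/microsoft/Phi-3-mini-4k-instruct",  # hypothetical model id
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)

print(response.choices[0].message.content)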

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

There is a unit test in the tests directory. I have also added integration tests inside the chat directory for Bytez, since they do not belong in the mocked tests directory. These are mainly for our convenience if/when we update the code.

  • [x] I have added a screenshot of my new test passing locally
  • [x] My PR passes all unit tests on make test-unit
  • [x] My PR's scope is as isolated as possible, it only solves 1 specific problem
    I can break this apart into smaller PRs if necessary, pls LMK 🙏

[image: screenshot of tests passing locally]

Type

🆕 New Feature -> Bytez as a chat model provider
🐛 Bug Fix -> Minor fix for the litellm proxy, whose logger would throw because of an incorrect type annotation in one file
📖 Documentation -> Bytez docs + a guide on how to integrate as a model provider


vercel bot commented Jun 28, 2025

The latest updates on your projects.

litellm: ✅ Ready (preview updated Jun 30, 2025 7:18pm UTC)


CLAassistant commented Jun 28, 2025

CLA assistant check
All committers have signed the CLA.


inf3rnus commented Jun 28, 2025

Also, I mentioned in your feedback channel that I'd share some of my thinking on how the code could be cleaned up...

I do think that the directory structure's naming convention needs to be locked to specific names like /chat/config.py.

config.py is very clear; when you have a file named transformation.py, it gets confusing on a first visit.

The main idea is to create a registry of providers (see the code below) that's automatically populated by discovering the project's directory structure.
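
To make the discovery idea concrete, here's the kind of layout it would key off of (the provider names are just examples):

llms/
    openai/
        chat/
            config.py    # defines OpenAIWrapper(LLMWrapper)
    bytez/
        chat/
            config.py    # defines BytezWrapper(LLMWrapper)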

You would then want to declare every single configuration option on the base class and keep everything there, so the full surface area can be understood in one place.

(The goal here is to provide a mental model of what I'm thinking, the names of things etc. can be changed.)

Here's a mini example of what I think would make things immensely simpler for model providers to integrate! 😄

# ========================== llm_wrapper.py ==========================

import abc
import importlib.util
import pathlib
from typing import Any, Dict, List, Type

# Registry to track subclasses of LLMWrapper
LLM_REGISTRY: Dict[str, Type["LLMWrapper"]] = {}


class LLMWrapper(abc.ABC):
    """
    Abstract base class for LLM wrappers.
    """
    name: str = "base"
    supports_streaming: bool = False
    has_custom_stream_wrapper: bool = False

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        if hasattr(cls, "name") and cls.name != "base":
            LLM_REGISTRY[cls.name] = cls

    @abc.abstractmethod
    def transform_request(self, input_data: Dict[str, Any]) -> Dict[str, Any]:
        pass

    @abc.abstractmethod
    def transform_response(self, llm_output: Dict[str, Any]) -> Dict[str, Any]:
        pass


def discover_llm_wrappers(base_path: str = "llms") -> None:
    """
    Dynamically imports all llms/{provider}/chat/config.py modules.
    """
    base = pathlib.Path(base_path)
    for config_file in base.glob("*/chat/config.py"):
        module_name = ".".join(config_file.with_suffix("").parts)
        spec = importlib.util.spec_from_file_location(module_name, config_file)
        if spec is None or spec.loader is None:
            continue  # skip files that cannot be loaded as modules
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
    # Classes will auto-register through __init_subclass__


def list_available_providers(base_path: str = "llms") -> List[str]:
    """
    Returns provider names that successfully registered wrappers.
    """
    discover_llm_wrappers(base_path)
    return list(LLM_REGISTRY.keys())


def find_all_provider_folders(base_path: str = "llms") -> List[str]:
    """
    Lists all folders under llms/ that contain chat/config.py, regardless of registration success.
    """
    base = pathlib.Path(base_path)
    return [
        path.parent.parent.name
        for path in base.glob("*/chat/config.py")
    ]

# ========================== llms/openai/chat/config.py ==========================

# This would normally live in: llms/openai/chat/config.py
# Make sure this file lives in the correct folder so discover_llm_wrappers can find it.
# When split into its own file, it would also need: from llm_wrapper import LLMWrapper

class OpenAIWrapper(LLMWrapper):
    name = "openai"
    supports_streaming = True
    has_custom_stream_wrapper = False

    def transform_request(self, input_data):
        return {
            "prompt": input_data["message"],
            "temperature": 0.7
        }

    def transform_response(self, llm_output):
        return {
            "text": llm_output.get("choices", [{}])[0].get("text", "")
        }

# ========================== main.py ==========================

# Example usage (would normally be in main.py or test script)
if __name__ == "__main__":
    print("== Discovering and Registering Wrappers ==")
    discover_llm_wrappers()

    print("\n== Registered Providers ==")
    for name, cls in LLM_REGISTRY.items():
        print(f"- {name} (streaming={cls.supports_streaming}, custom_stream={cls.has_custom_stream_wrapper})")

    print("\n== Available Providers ==")
    print(list_available_providers())

    print("\n== Provider Folders with config.py ==")
    print(find_all_provider_folders())

    print("\n== Example Usage ==")
    wrapper_cls = LLM_REGISTRY.get("openai")
    if wrapper_cls:
        wrapper = wrapper_cls()
        req = wrapper.transform_request({"message": "Hello world"})
        print("Transformed Request:", req)

        resp = wrapper.transform_response({"choices": [{"text": "Hi there!"}]})
        print("Transformed Response:", resp)

inf3rnus commented

But yeah, the thing that added friction to this integration was having to figure out where everything was. Lots of debugger runs there! So, there are some updated docs for you and a proposal for an architecture revision 😄
