Model Context Standard – Specification

Status: Draft v0.2 (July 12, 2025) | Audience: Driver authors & SDK maintainers

This document is language-agnostic. Code blocks illustrate the contract in pseudocode so it can be mapped 1-to-1 to Python, TypeScript, Go, ...


1 · MCS Motivation Overview

LLMs are token predictors. They consume and produce sequences of tokens that are rendered as text via a tokenizer. This means text is the only interface an LLM has to interact with its environment. It cannot execute anything directly.

To enable execution, you need a parser. The parser takes the LLM’s output, identifies structured instructions, and translates them into real-world actions.

Function Calling Recap

Function calling enables LLMs to interact with external systems by generating structured outputs (typically JSON) that describe function invocations.

The process involves:

  1. Providing the LLM with function schemas/descriptions
  2. The LLM generates a structured call in response to user queries
  3. A parser extracts and validates the function call
  4. The system executes the call against the target API/service
  5. Results are returned to continue the conversation

This pattern allows LLMs to act as intelligent orchestrators using text alone. The exact format of the function description doesn't matter. What matters is that the LLM understands when and how to call a tool and how to format the output for the parser.
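
A minimal sketch of this loop in Python, with a hypothetical get_weather tool and a stubbed execution step standing in for a real API:

```python
import json

# 1. Function schema shown to the LLM (hypothetical example tool).
schema = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {"city": {"type": "string", "required": True}},
}

# 2. The LLM answers a user query with a structured call, emitted as text.
llm_output = '{"tool": "get_weather", "arguments": {"city": "Berlin"}}'

# 3. A parser extracts and validates the call.
call = json.loads(llm_output)
assert call["tool"] == schema["name"]

# 4. The system executes it against the target API (stubbed here).
result = f"Sunny, 24 °C in {call['arguments']['city']}"

# 5. The result is returned to continue the conversation.
print(result)
```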

This concept was formalized in TALM: Tool Augmented Language Models [1], which showed how LLMs can be extended with non-differentiable tools to solve real-world tasks.

Modern AI frameworks often provide parsers, but not standardized descriptions or callable functions. This leads to a fragmented landscape of custom tooling where developers repeatedly reinvent the same logic for each application.

The Model Context Protocol (MCP) addressed this by introducing the first open standard to connect LLMs with external systems. OpenAI had pursued a similar idea earlier with "Actions" for Custom GPTs, but never published it as a general standard.

However, MCP added a full protocol stack, along with new complexity and security implications. As of 2025, recent updates to MCP include OAuth Resource Servers, mandatory Resource Indicators (RFC 8707), and streamable HTTP as a new transport mechanism (released March 2025). Despite these advancements, critiques highlight ongoing issues like prompt injection weaknesses (reported May 2025) and vulnerabilities such as CVE-2025-49596 (RCE in MCP Inspector, June 2025). Much of the effort that followed focused on building wrappers around APIs that could already be used directly by LLMs, as demonstrated in the MCS proof of concept.

Critically, MCP often reimplements features the web solved decades ago. Take authentication: instead of relying on proven standards like Basic Auth, OAuth2, or API keys over HTTPS, MCP introduces its own mechanism, all while using JSON-RPC under the hood. This adds layers of complexity with little gain.

Despite this, MCP succeeded, not because of elegance, but because it is the first real standard in this space. And a standard was needed.

Useful features like autostarting MCP servers were not design decisions, but emerged from practical needs when using the STDIO transport layer. What some developers consider a core benefit is thus a side effect, not a deliberate part of MCP's design.

MCS now distills all of this down to the essentials: what is actually required to connect an LLM to external systems in a standardized and reusable way.

At the core, it's a driver problem.

An MCS driver must expose a function specification that LLMs can consume. Most modern LLMs can use these out of the box. But to ensure precision, the driver should also provide usage instructions and formatting hints so the output can be correctly parsed.

The parser is the other half of the equation. It bridges the LLM’s output to real-world execution by scanning for and dispatching structured calls.

Previously, function implementations were written from scratch for every use case. With MCS, generalization is key: if a REST call works for one service, it can be reused for all REST-over-HTTP services.

An MCS Driver does exactly that. It generalizes function calling for a given protocol over a specific transport layer.


2 · Core Idea

MCS defines a thin glue layer between a Language Model and any existing interface such as REST, GraphQL, CAN-Bus, EDI, or even filesystems.

Every MCS driver must implement two mandatory capabilities:

  1. Expose: Provide a machine-readable function description via get_function_description() and usage instructions (including function description) via get_driver_system_message(). This allows the LLM to discover available tools and learn how to call them effectively. While get_function_description() is optional for LLMs that natively handle specs like OpenAPI (e.g., ChatGPT), get_driver_system_message() is preferred. It delivers a complete, pre-optimized system prompt tailored by the driver author, freeing clients from prompt engineering and ensuring high-quality, reusable prompts that evolve over time.
  2. Execute: Handle structured LLM-emitted calls via process_llm_response(). The driver parses the request, routes it through a bridge (HTTP, serial, message bus, etc.), and returns the raw result.

The complexity of an MCS Driver is mostly concentrated in the execution phase. Everything related to authentication, rate limiting, retries, logging, or protocol-specific quirks is handled internally by the driver, using existing transports and, where available, machine-readable standard specs like OpenAPI.

Drivers are initialized with configuration parameters through the constructor. This makes it easy to inject dependencies or load configuration dynamically at runtime.

Optional or advanced functionality can be added modularly via capabilities, allowing drivers to remain lightweight by default.

The client acts as a coordinator. It retrieves the function specification from the driver, injects it into the LLM system message, and later passes the LLM’s output back to the driver for inspection whenever the LLM requests a function execution.

Importantly, the client does not need to know how the driver works internally, which technology stack it uses or what prompts should be used.

To handle multiple drivers efficiently and avoid format and tool name conflicts, MCS introduces the concept of an Orchestrator. The Orchestrator aggregates tools from multiple ToolDrivers (a specialized driver type that lists tools and executes them without direct LLM interaction), unifies their descriptions into a consistent format, and presents them as a single MCSDriver to the client.

The Orchestrator is an MCS Driver itself, so it is transparent to the client.

For the client, it is irrelevant whether it interacts with one or several MCS Drivers or Orchestrators, as all adhere to the same interface. This allows mixing and matching components arbitrarily without requiring adjustments to the client's logic.

This separation allows ToolDrivers to focus on technical bridging, while Orchestrators handle LLM-specific optimizations such as prompt formatting across a variety of LLMs.


Phase A – Spec exposure

 Client ─── request spec ───▶  Driver                
  ▲                              │
  └─── Spec (OpenAPI …) ◄────────┘

The client first calls get_function_description() to retrieve a machine-readable function specification. How the spec is generated or retrieved (from a local file, an HTTP endpoint, or dynamically) is left to the driver implementation.

The client may embed the spec into the LLM's system prompt or use it in other prompt injection strategies.

To simplify this process, drivers must implement get_driver_system_message(), which returns a complete, ready-to-use system prompt. This includes both the tool description and formatting guidance, tailored either for LLMs in general or for a specific model.

This is crucial because different LLMs may respond better to differently phrased prompts.

Phase B – Call execution

LLM ──► JSON call ──► Driver/Parser ──► External API
 ▲                           │
 └─────────── Result ◄───────┘

Once the LLM emits a structured function call (typically as a JSON object in the text output), the client passes this to the driver’s process_llm_response() method.

The driver parses the call, dispatches it over its bridge (e.g. HTTP, CAN-Bus, AS2), and returns the raw result. The result can then be forwarded back into the conversation, either directly or via formatting logic handled elsewhere.
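
Putting both phases together, a client round trip could look like this sketch, where `driver` follows the contract in section 3 and `llm.complete()` is a hypothetical chat call:

```python
def run_turn(driver, llm, user_message: str) -> str:
    # Phase A: the driver supplies a complete, pre-optimized system prompt.
    system_message = driver.get_driver_system_message(model_name=llm.name)

    # The client stays agnostic; it only forwards messages.
    llm_response = llm.complete(system=system_message, user=user_message)

    # Phase B: the driver inspects the output. If it contains a structured
    # call, it executes it and returns the raw result; otherwise the
    # response passes through unchanged.
    return driver.process_llm_response(llm_response)
```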

A Proof of Concept with existing ChatModels can be found here.


3 · Minimal Driver Contract

See MCS Driver Contract for the detailed pseudocode.

The core MCSDriver interface is minimal:

  • meta: Driver metadata (ID, version, protocol, transport, etc.)
  • get_function_description(model_name?): Returns LLM-readable function spec.
  • get_driver_system_message(model_name?): Returns full system prompt.
  • process_llm_response(llm_response): Parses and executes calls, returns result.

If no call is detected, process_llm_response() returns the response unchanged so that drivers can be chained.

The exact signatures are up to each SDK. The semantics must match.
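
A minimal Python rendering of this contract, as a non-normative sketch (the DriverMeta fields are illustrative):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class DriverMeta:
    id: str                  # unique driver ID, e.g. "mcs-driver-rest-http"
    version: str             # driver semver
    protocol: str            # e.g. "rest"
    transport: str           # e.g. "http"
    capabilities: list[str] = field(default_factory=list)  # optional flags

class MCSDriver(ABC):
    meta: DriverMeta

    @abstractmethod
    def get_function_description(self, model_name: Optional[str] = None) -> str:
        """Return an LLM-readable function spec (e.g. OpenAPI JSON/YAML)."""

    @abstractmethod
    def get_driver_system_message(self, model_name: Optional[str] = None) -> str:
        """Return a complete, ready-to-use system prompt."""

    @abstractmethod
    def process_llm_response(self, llm_response: str) -> Any:
        """Execute a detected call; otherwise return the input unchanged."""
```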

3.1 get_function_description(model_name?)

Returns a static artifact that describes the available functions in an LLM-readable format. The approach follows a standard-first principle: if an established specification format exists, it should be used.

Standard formats like:

  • OpenAPI (JSON/YAML) – for RESTful APIs
  • JSON Schema – for structured input/output validation, CLI tools, or message formats
  • GraphQL SDL – for GraphQL-based APIs
  • WSDL – for SOAP and legacy enterprise services
  • gRPC / Protocol Buffers (proto) – for high-performance binary APIs
  • OpenRPC – for JSON-RPC APIs
  • EDIFACT/X12 schemas – for EDI-based B2B interfaces

If no standard is available, a custom function description has to be written.

Drivers may implement dynamic descriptions to tailor the spec to the LLM’s capabilities. For example, instead of exposing a raw OpenAPI schema, the driver may generate a simplified and LLM-friendly representation that retains full fidelity but improves comprehension.

What matters is that the driver can accept a standard spec; how that spec is treated internally is up to the driver.

3.2 get_driver_system_message(model_name?)

Returns a complete system message containing or referencing the function description, crafted specifically for an LLM family.

This message guides the model to make valid and parseable tool calls. While the default behavior may simply inline get_function_description(), advanced drivers can define custom prompts tailored to different LLMs (e.g. OpenAI, Claude, Mistral), including:

  • Format hints
  • JSON schema constraints
  • Few-shot examples
  • Token budget control

3.3 process_llm_response(llm_response)

Consumes the output message generated by the LLM and executes the described operation if a call is detected. The method should:

  1. Validate and parse the input (typically JSON or structured text)
  2. If a call is detected: Map the request to a bridge-compatible operation (e.g. HTTP call, MQTT message)
  3. Return the raw result without postprocessing (the orchestrator or client will handle formatting or retries)

This separation ensures that drivers focus on interfacing with the external system, while clients and orchestrators remain agnostic of internal logic and implementation.
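
As an illustration, these three steps for a hypothetical REST-over-HTTP driver might look like this (the tool-to-endpoint table is an assumption of the sketch, not part of the spec):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

class RestHttpDriver:
    """Hypothetical REST-over-HTTP driver; only the execution path is shown."""

    def __init__(self, endpoints: dict[str, str]):
        self.endpoints = endpoints  # tool name -> endpoint URL

    def process_llm_response(self, llm_response: str):
        # 1. Validate and parse the input (typically JSON in the LLM output).
        try:
            call = json.loads(llm_response)
        except (json.JSONDecodeError, TypeError):
            return llm_response  # no call detected: pass through for chaining

        # 2. Map the request to a bridge-compatible operation (an HTTP GET here).
        endpoint = self.endpoints.get(call.get("tool")) if isinstance(call, dict) else None
        if endpoint is None:
            return llm_response  # call is not meant for this driver

        # 3. Return the raw result without postprocessing.
        with urlopen(endpoint + "?" + urlencode(call.get("arguments", {}))) as resp:
            return resp.read().decode()
```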


4 · Orchestrator & ToolDriver

When using multiple MCS drivers in a chain, each driver must autonomously determine if a function call in the LLM response is intended for it, assuming drivers maintain an internal representation of their tools (implemented in varying ways without a strict convention). The LLM's output is passed sequentially through all process_llm_response methods, and the system prompts from all drivers are combined and fed to the LLM. This can lead to side effects, especially with numerous drivers offering different formats. The LLM may struggle to cleanly separate and handle them, resulting in parsing errors or unreliable calls.

To achieve higher integration and mitigate these issues, MCS recommends a middleware like an Orchestrator. It serves as a central coordinator for multiple drivers, offering several advantages when dealing with diverse or numerous tools. For instance, it prevents function collisions (e.g., when multiple drivers provide tools with the same name) by unifying and renaming them if needed. Instead of blindly routing the LLM response through every driver, the Orchestrator intelligently dispatches to the specific tool the model intended. A key benefit is tool unification. It converts varied formats and descriptions from individual drivers into a consistent target format, possibly optimized for particular LLMs, reducing context bloat and improving reliability.

The Orchestrator's usage depends on the application scenario. In simple setups with one or two drivers, direct chaining suffices. In complex environments with many drivers, an Orchestrator can manage all tasks or target specific areas. Hybrid forms are also possible, allowing flexible scaling without client changes.

To enable this, MCS introduces the ToolDriver, a specialized driver type focused solely on technical bridging, without direct LLM interaction. This lets authors concentrate purely on execution and bridging logic, without needing any knowledge of LLMs, prompts, or model-specific quirks, shifting that responsibility to the Orchestrator for better modularity and division of expertise.

The core MCSToolDriver interface is minimal:

  • meta: Driver metadata (ID, version, protocol, transport, etc.)
  • list_tools(): Returns list of Tool objects (name, description, parameters).
  • execute_tool(tool_name, arguments): Executes and returns result.

The exact signatures are up to each SDK. The semantics must match.
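
A non-normative Python sketch of this interface, with Tool and ToolParameter shapes following 4.1:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class ToolParameter:
    name: str
    description: str = ""
    required: bool = False
    schema: Optional[dict] = None  # optional JSON-Schema fragment

@dataclass
class Tool:
    name: str
    description: str
    parameters: list[ToolParameter] = field(default_factory=list)

class MCSToolDriver(ABC):
    meta: Any  # DriverMeta, as for MCSDriver

    @abstractmethod
    def list_tools(self) -> list[Tool]:
        """Return the tools this driver provides, in a standardized format."""

    @abstractmethod
    def execute_tool(self, tool_name: str, arguments: dict) -> Any:
        """Execute the named tool and return the raw result."""
```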

4.1 list_tools()

Returns a list of Tool objects representing the functions or tools provided by the driver. Each Tool includes a name, description, and a list of parameters (with name, description, required flag, and optional schema).

This method allows the Orchestrator to discover and aggregate tools from multiple ToolDrivers without needing to know their internal details. The returned tools should be in a standardized format to facilitate unification.

ToolDrivers may dynamically generate this list based on a standard spec (e.g., fetching from an OpenAPI endpoint) or return a static set for unstandardized tools, such as local filesystem access, where no real spec is available.

4.2 execute_tool(tool_name, arguments)

Executes the specified tool with the given arguments and returns the raw result. The method should:

  1. Validate the tool_name and arguments against the driver's internal capabilities.
  2. Map the call to the underlying bridge operation (e.g., HTTP request, file operation).
  3. Handle errors gracefully, potentially raising exceptions for invalid calls.

This keeps execution isolated and focused on the technical bridge, allowing Orchestrators to handle LLM-specific parsing and routing.
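
For example, a toy local-filesystem ToolDriver (building on the sketch above; the tool set and names are invented for illustration):

```python
from pathlib import Path

class LocalFsToolDriver(MCSToolDriver):
    """Toy example: a static tool set and direct code, no extra server."""

    def list_tools(self) -> list[Tool]:
        # Static set: no standard spec exists for local file access.
        return [Tool(
            name="read_file",
            description="Read a UTF-8 text file from the local filesystem",
            parameters=[ToolParameter(name="path", description="File path", required=True)],
        )]

    def execute_tool(self, tool_name: str, arguments: dict):
        # 1. Validate tool_name and arguments against internal capabilities.
        if tool_name != "read_file" or "path" not in arguments:
            raise ValueError(f"Unknown tool or missing arguments: {tool_name}")
        # 2./3. Map to the bridge operation; errors surface as exceptions.
        return Path(arguments["path"]).read_text(encoding="utf-8")
```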

BasicOrchestrator implements MCSDriver by collecting tools from ToolDrivers, formatting prompts, and dispatching calls, as sketched below.

This separation keeps ToolDrivers lean and execution-focused (e.g., handling HTTP calls or file access), while Orchestrators (which implement the MCSDriver interface) aggregate these tools, build unified prompts, and route executions efficiently. For the client, an Orchestrator appears as a single MCSDriver, enabling seamless mixing without logic adjustments.

Hybrid drivers that implement both interfaces are also possible and can be used with or without an Orchestrator.

Determining whether a call is meant for a given driver also becomes easier, because with this approach the driver already has an internal representation of its tools.
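
One possible shape of such an orchestrator, sketched under the interfaces above (collision renaming and model-specific formatting are omitted for brevity):

```python
import json

class BasicOrchestrator:  # implements the MCSDriver interface
    def __init__(self, tool_drivers: list):
        # Aggregate tools from all ToolDrivers; a real implementation would
        # also detect and rename name collisions here.
        self._routes = {t.name: (d, t) for d in tool_drivers for t in d.list_tools()}

    def get_driver_system_message(self, model_name=None) -> str:
        # Unify all tool descriptions into one consistent prompt format.
        lines = [f"- {t.name}: {t.description}" for _, t in self._routes.values()]
        return ("You have access to the following tools:\n" + "\n".join(lines)
                + '\nCall a tool with: {"tool": "<name>", "arguments": {...}}')

    def process_llm_response(self, llm_response: str):
        try:
            call = json.loads(llm_response)
        except json.JSONDecodeError:
            return llm_response  # no call detected: pass through unchanged
        route = self._routes.get(call.get("tool")) if isinstance(call, dict) else None
        if route is None:
            return llm_response
        driver, _tool = route
        # Dispatch directly to the driver that owns the intended tool.
        return driver.execute_tool(call["tool"], call.get("arguments", {}))
```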


5 · Configuration & Instantiation

Drivers are configured via constructors, e.g., URLs for specs, auth tokens, proxies. Use libraries like Pydantic for validation.
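
For example, a REST driver's configuration might be modeled like this (a sketch; the field names are illustrative, not specified):

```python
from pydantic import BaseModel, HttpUrl

class RestDriverConfig(BaseModel):
    spec_url: HttpUrl            # where the OpenAPI spec lives
    api_key: str | None = None   # optional auth token
    proxy: str | None = None     # optional proxy URL

class RestHttpDriver:
    def __init__(self, config: RestDriverConfig):
        self.config = config  # validated configuration, injected at runtime

# Configuration can come from code, env vars, or a JSON file:
driver = RestHttpDriver(RestDriverConfig(spec_url="https://api.example.com/openapi.json"))
```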

This decouples configuration from functionality, enabling dynamic setups from env vars or JSON. The motivation is to allow drivers to be fully automatically loaded and configured in the future, without requiring the client to implement special functions for particular tools.

Ideally, driver-specific settings can be handled through a configuration file or dependency injection, keeping the client agnostic.

With tools like Pydantic, the client could even dynamically generate configuration interfaces for the user without needing to know the details (though this can be challenging, as auto-generated forms have their own complexities).

This mirrors traditional drivers that bring their own configuration interfaces, without involving the operating system.


6 · Optional Capabilities

MCS keeps the base contract tiny. Optional behavior is signaled via capability flags in DriverMeta. Consumers must feature-detect before invoking an optional method (i.e., check if the flag exists in meta.capabilities and then dynamically call the corresponding method).

Extend via capabilities, e.g.:

| Capability | Flag | Suggested Mix-in / Interface | Description |
|---|---|---|---|
| Health check | healthcheck | abstract class SupportsHealthcheck { abstract healthcheck() -> dict } | Returns status info, e.g., {"status": "OK"}. |
| Resource preload | cache | abstract class SupportsCache { abstract warmup() -> void } | Preloads resources for faster execution. |
| Status & metrics | status | abstract class SupportsStatus { abstract get_status() -> dict } | Provides runtime metrics or detailed status. |
| Autostart | autostart | abstract class SupportsAutostart { abstract autostart(kwargs: dict) -> void } | Launches required infrastructure (e.g., containers). |

Rule of Thumb: For ease of use, name the mixin class Supports<CapabilityName> with the method named <capabilityName>. This convention simplifies dynamic invocation but is not mandatory. SDKs may define their own standards for common capabilities.
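
Feature detection might then look like this sketch, assuming the naming rule of thumb above:

```python
def call_capability(driver, flag: str, *args, **kwargs):
    # Consumers must feature-detect before invoking an optional method.
    if flag not in (driver.meta.capabilities or []):
        raise NotImplementedError(f"Driver does not support '{flag}'")
    # Rule of thumb: flag "healthcheck" maps to method healthcheck().
    return getattr(driver, flag)(*args, **kwargs)

# Usage (hypothetical): status = call_capability(driver, "healthcheck")
```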


7 · Autostart Convention

Autostart, a feature key to MCP's popularity, is largely unnecessary in MCS because existing interfaces are used directly. You'd never think to start a FastAPI server for local filesystem access, yet that's common in MCP. MCS focuses on established interfaces: HTTP endpoints already run on servers, and local operations use direct code (e.g., mcs-driver-filesystem-localfs), eliminating extra processes.

This makes integration more intuitive and secure, no extra server logic to wrap existing APIs. Anything accessible via HTTP, local methods, or standards can be used directly.

Example: The Context7 MCP server [5] provides access to LLM developer docs. It's already an API, so instead of starting a local MCP server, bind it via an MCS driver. Context7 lacks an OpenAPI spec, but one can be created from its MCP tool description and hosted on a CDN [6]. Context7 becomes MCS-compatible without extra infrastructure.

This highlights OpenAPI's advantage: A simple alternative description suffices—extend, simplify, or LLM-optimize it. Clients swap a URL; no development, hosting, or maintenance for additional servers, especially thin wrappers.

Thus, local autostart is fully eliminated. However, if a driver needs to spin up local infrastructure, it's optional via the SupportsAutostart mixin:

```python
from abc import ABC, abstractmethod

class SupportsAutostart(ABC):
    @abstractmethod
    def autostart(self, **kwargs) -> None:
        pass
```

No concrete reference implementation exists yet, but the concept is clear: the driver defines start parameters (e.g., for a Docker container), and a framework or Orchestrator launches it automatically on demand. Users might just provide the Docker image name, while execution and binding happen in the background, resulting in a better plug-and-play experience.

If autostart is used, it must be virtualized for safety. Driver authors should specify how systems start containers, including guidelines for container developers to ensure uniform startup. For a REST-HTTP driver with autostart, include management to launch containers, isolate ports, and pass URLs back to the driver for populating get_function_description() or list_tools().

This is safer than MCP's implicit STDIO autostart, avoiding privilege risks: startup becomes controlled, reproducible, and secure through process virtualization. Local environments need no manual configuration; they are orchestrated.

In most cases, however, you will not need this anymore.


8 · Security, Usability and Integration

A central goal of the Model Context Standard (MCS) is to connect LLMs to external systems directly and minimally. Unlike MCP, MCS avoids a custom protocol, relying on mature technologies like HTTP, OpenAPI, or Docker that are already proven secure. This eliminates many potential attack surfaces typical of new standards.

MCP's STDIO transport enforces an autostart mechanism, running server processes in the background with user privileges. This creates a high entry barrier for less technical users (who must provide and secure the execution environment) and poses security risks. Protection depends entirely on the server implementation. A malicious MCP server could execute arbitrary actions in the user's context, especially without virtualization or safeguards.

MCS carries a similar risk of malicious drivers from untrusted sources, but lacks MCP's additional protocol vulnerabilities, since no such protocol exists. MCS addresses trust more directly. Each driver has a unique ID, is distributed via established package managers like PyPI, and can be secured prospectively with signed checksums or central registries. This makes origin and integrity verifiable, similar to package checksums in Linux or artifact checks in Maven environments. Starting with default repositories and naming conventions defined by MCS SDKs for each language ensures consistent, auditable distribution. Later such a central registry could be used to verify the integrity of the driver and support autoloading.

Usability benefits directly. For REST over HTTP, users simply provide an OpenAPI URL; the driver auto-detects functions, generates prompts, and provides execution mechanisms. Neither client developers nor end users need to handle technical details or prompt knowledge. Developers can reuse existing REST endpoints and integrate new interfaces into projects or toolchains without extra effort.

An optional autostart remains possible but is not mandatory. The reference architecture uses a mixin model for Docker container starts: The driver defines container setup, while users ideally just specify the image name. This is safer than starting local server processes and simpler to handle. Users also gain from central Docker repositories for verifiable image origin and integrity.

MCS complements MCP rather than replacing it, approaching LLM-external system binding from a different angle. Both concepts can be used together: Existing MCP servers integrate seamlessly via MCS drivers (e.g., MCP-over-STDIO or MCP-over-SSE), reusing solutions without client changes. Conversely, MCP servers can use MCS drivers—especially ToolDrivers—with a single MCS-compatible MCP implementation bridging all MCS drivers into the MCP ecosystem.

MCS combines modular drivers with established standards, enhancing security, reducing unnecessary complexity, and creating an open foundation for seamless LLM integration across diverse system landscapes.


9 · LLM Prompt Patterns (Informative)

While outside the formal spec, MCS emphasizes prompt patterns as a key driver benefit. They encapsulate optimized instructions within the driver, making prompt engineering a one-time, highly rewarding investment. Once refined (e.g., using DSPy [7] for iterative optimization via input-output comparisons), a prompt becomes reusable across all compatible apps, models, and scenarios, eliminating the need for scattered prompt hubs or collections. Prompts evolve into specialized, model-agnostic assets, integrated exactly where needed and supporting multiple LLMs out of the box without client-side tweaks.

This clear separation of tool (execution) and prompt (communication) opens a strategic perspective on model optimization. A perfected prompt in the driver is instantly available to every compatible application and model, turning intensive prompt work into a sustainable, reusable building block for long-term use across projects. No more per-app reinvention. Specialized prompts handle diverse models seamlessly.

Inline-spec and schema-only are two common approaches (or "styles") for how drivers in MCS structure the prompts they provide to LLMs via get_driver_system_message(). These styles determine how the function description (from get_function_description()) is incorporated into the prompt to guide the LLM on tool usage. A ToolDriver will always follow a schema-only style, as it reduces the spec to a standardized Tools description (via list_tools()), focusing on lightweight aggregation for Orchestrators. In contrast, an MCSDriver can choose dynamically, e.g. inline-spec for powerful models (to provide full context upfront) or schema-only for weaker ones (to conserve tokens). The pros and cons balance comprehensiveness vs. efficiency, allowing adaptation per use case.

  1. Inline-spec Style

    What it means: The entire function description (e.g., a full OpenAPI spec or JSON schema) is directly embedded into the prompt as plain text, often formatted as a Markdown code block for readability. For example:

    You have access to the following tools:

    ```json
    { "name": "get_weather", "description": "Get current weather", "parameters": { "city": { "type": "string" } } }
    ```

    Use this format to call: { "tool": "get_weather", "arguments": { "city": "Berlin" } }

    Pros: Simple to implement. No extra logic needed. The LLM gets everything in one go, making it reliable for models that can handle the spec out of the box.
    Cons: Token-heavy, especially for large or complex specs (e.g., a big OpenAPI file could eat up thousands of tokens, limiting context for long conversations).
    When to use: For small, static toolsets where token cost isn't an issue.
    
    
  2. Schema-only Style (optionally with Fetch on Demand)

    What it means: The prompt includes only a condensed summary or schema of the arguments (e.g., just the required params and types), without the full description. The LLM can be instructed to "fetch" more details if needed, meaning it generates a call to retrieve the full spec dynamically (e.g., via another tool or URL). Example prompt:

    You have access to tools. For details, fetch the schema first if unsure.
    
    Tool: get_weather
    Schema: { "city": { "type": "string", "required": true } }
    
    To fetch full description: Use { "tool": "fetch_spec", "arguments": { "tool_name": "get_weather" } }
    

    Pros: Saves tokens by keeping the initial prompt lightweight, ideal for large or dynamic toolsets.
    Cons: Requires a retrieval mechanism (e.g., the LLM must be able to call a "fetch" tool), which adds complexity and potential for errors if the model doesn't follow instructions.
    When to use: For expansive APIs or when token limits are tight, assuming the LLM supports multi-step reasoning.

Driver authors need not explicitly document their approach, as they are responsible for optimization. What matters to users is seamless performance, not the internals. In the driver's __init__ method (or equivalent), allow external prompt overrides (or via a mixin for broader availability). This enables flexible updates without redeploying code. In the future, a dedicated Prompt Provider could allow dynamic loading of improved prompts, shared across clients and drivers, updating the behavior in drivers on the fly, but for now, focus on extensibility to see what patterns emerge.
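
A minimal sketch of such an override hook, assuming a constructor parameter named system_message_override (the parameter and helper names are hypothetical):

```python
class SomeDriver:
    def __init__(self, config, system_message_override: str | None = None):
        self.config = config
        # Lets integrators swap in an improved prompt (e.g. one optimized
        # with DSPy) without redeploying driver code.
        self._override = system_message_override

    def get_driver_system_message(self, model_name=None) -> str:
        if self._override is not None:
            return self._override
        # Fall back to the author-tuned default prompt (hypothetical helper).
        return self._build_default_prompt(model_name)
```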


10 · Relation to MCP

MCS does not compete with MCP directly. It generalizes the same idea of standardizing LLM-to-external-system connections without imposing a new wire protocol or stack. MCP updates in 2025 (e.g., streamable HTTP transport, OAuth Resource Servers with mandatory Resource Indicators per RFC 8707) have improved its robustness, but vulnerabilities persist, mostly because of the new protocol stack. MCP relies on it; MCS shows that it is not needed.

MCS approaches the goal from a different angle, focusing on minimalism and reuse of established standards, but the concepts are complementary. Existing MCP servers can be seamlessly integrated via MCS drivers (e.g., MCP-over-STDIO or MCP-over-SSE). The driver treats the MCP server as its bridge and exposes MCP's tool list as its spec. Conversely, MCP servers can leverage MCS drivers—especially ToolDrivers, with a single MCS-compatible MCP implementation bridging all MCS drivers into the MCP ecosystem.

This mutual compatibility allows reusing solutions without client changes, fostering an open foundation for diverse integrations.


11 · Versioning & Compatibility

MCS follows Semantic Versioning (MAJOR.MINOR.PATCH) for the specification document itself, ensuring clear evolution while maintaining backward compatibility where possible.

  • MAJOR: Breaking changes (e.g., interface redesigns) that require updates to drivers or clients.
  • MINOR: Additive features (e.g., new optional capabilities) or deprecations. Deprecations always trigger a MINOR bump and include detailed migration notes to guide transitions.
  • PATCH: Bug fixes or clarifications with no functional impact.

Versions in the 0.x range (as with the current Draft) are intended as proof-of-work and represent alpha status, not yet production-ready. They focus on exploration, feedback, and refinement, with potential for significant changes before 1.0.

Drivers and SDKs also adhere to Semantic Versioning. However, drivers do not need to explicitly declare the supported spec version in code. SDKs handle version translation and compatibility. Drivers are managed via SDK packages (e.g., installed/updated through Pip for Python), so their version aligns with the SDK they depend on. This simplifies maintenance: Update the SDK, and compatible drivers follow automatically, without manual declarations or conflicts.


12 · Next Steps

Core Standard Enhancements (Define in Spec for Consistency Across SDKs)

  • Finalize JSON-Schema for DriverMeta and capability flags: This ensures metadata is machine-validatable, promoting interoperability. Include schemas for Bindings, Tools, and ToolParameters to standardize serialization/deserialization.
  • Decide if sync/async drivers are needed: Specify optional async semantics in the standard (e.g., for I/O-heavy bridges); define when to use (e.g., streaming responses). SDKs handle language idioms (e.g., async/await in Python).
  • Clear signaling in process_llm_response for call occurrence: Standardize return types (e.g., a tuple like (result, was_called: bool)) or flags to indicate execution; include error conventions (e.g., raise vs. return error objects). This avoids ambiguity in chaining.
  • Exception handling and error reporting vs. return values in error cases: Define what clients expect (e.g., structured error objects with codes/messages for failures, raw results on success); SDKs adapt to language-specific exceptions/logging.
  • Should the output from process_llm_response() really be ANY or a structured object?: Mandate a minimal envelope (e.g., JSON with { "result": any, "status": int, "error": string? }) for metadata like status codes; keeps flexibility while ensuring parseability (see the sketch after this list). SDKs can add type safety.
  • Driver versioning, registries, dynamic loading for true Plug & Play and max security: Outline guidelines in the spec (e.g., semver rules, registry discovery protocols, checksum requirements); include autoloading via names/checksums. SDKs implement loading mechanics (e.g., Pip integration).
  • Autostart recommendation (container labels, health endpoints): Expand with virtualization mandates (e.g., Docker params, sandboxing rules) and startup guidelines; define how drivers signal needs (e.g., via metadata). SDKs provide reference frameworks for launching.
  • Explore a Prompt Provider for dynamic loading: Add informative section on future extensions for external prompt overrides/loading (e.g., via URLs/registries); define override semantics in init. Prototype in SDKs before standardizing.
  • Hybrid driver-orchestrator patterns: Clarify in spec how drivers can implement both MCSDriver and MCSToolDriver interfaces for versatility; provide guidelines on when to use hybrids. SDKs offer examples.
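
The result envelope floated above could be sketched as follows; the field names are the proposed ones, and nothing here is standardized yet:

```python
from typing import Any, TypedDict

class MCSResult(TypedDict, total=False):
    result: Any   # raw bridge output on success
    status: int   # e.g. an HTTP-style status code
    error: str    # present only on failure

# Example success envelope: {"result": "Sunny, 24 °C", "status": 200}
# Example failure envelope: {"status": 404, "error": "tool not found"}
```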

Defer to SDKs (Implementation Details, Not Core Spec)

  • Provide language-specific reference interfaces in python-sdk / typescript-sdk: Fully shift to SDKs as the standard is agnostic; use them to bootstrap other languages (e.g., Go, Rust) with code samples.
  • Checksums and autoloading in clients via names alone: Spec recommends security practices (e.g., mandatory verification); SDKs build autoload logic, handling platform-specific package management and resolution.
  • Exception handling nuances and type hints: Spec defines semantics (e.g., what to return/raise); SDKs adapt with language features (e.g., typed errors in TypeScript).
  • Collect community feedback: Ongoing for the standard (e.g., via GitHub issues); SDKs incorporate into best practices/examples, feeding back to spec revisions.

13 · Contributing

We're building and optimizing MCS as a community. Your ideas and contributions are welcome! Join the discussion on GitHub to share clarifications, proposals, or refinements.

Feel free to open a Discussion, Issue, or PR. We're excited to collaborate.

Proof-of-Work Notice: These repos are shared as-is in their early stages. PRs and issues will be evaluated based on alignment with the project's goals and available time. No guarantees on reviews or merges, but outstanding contributions will shine and get prioritized!


References

[1] A. Parisi, Y. Zhao, and N. Fiedel, "TALM: Tool Augmented Language Models," May 24, 2022, arXiv:2205.12255. doi: 10.48550/arXiv.2205.12255.
[2] https://github.com/modelcontextstandard
[3] https://github.com/modelcontextstandard/python-sdk
[4] https://github.com/modelcontextprotocol
[5] https://github.com/upstash/context7
[6] https://gist.githubusercontent.com/bizrockman/7fbca8d1c3d30ef9c54db6f7190c6166/raw/4236a47e555552bea0c00e1384964a1ea0d568ae/context7_openapi_llm_friendly.json
[7] https://dspy.ai/
