Lobe = Brain
The Cognitive Engine & Verifiable Execution Consensus
The Lobe is the core processing unit of the Agent.

1. Lobe Definition
Agent Creators primarily need to specify the Lobe's Uniform Resource Identifier (URI). This URI tells Executors how to access and run the Lobe.
Built-in Lobes: The URI may reference optimized, native implementations shipped with the Executor.
Remote/Community Lobes: The URI might point to a standardized service endpoint or package developed by the community.
Lobe Inputs:
Memory Context: Relevant portions of the Agent's Memory (e.g., Hot Memory, relevant Longterm Memory retrieved via RAG).
User Query: The current interaction payload from the user.
Tool Manifest: List of available Tools and their descriptions/APIs.
Lobe Outputs:
Memory Directives: Instructions for updating the Agent's Memory (e.g., add to Longterm Memory).
Response: The output generated for the User.
Tool Usage Directives: Instructions specifying which Tools to use and with what parameters.
Execution Proof: Verifiable evidence of the Lobe's execution process.
Essentially, a Lobe wraps one or more foundation models (such as LLMs). It preprocesses inputs (combining query, memory, and tool info), invokes the model(s), post-processes the model output, and formats the final Lobe Output structure.
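A minimal sketch of these input/output structures in Python; the field names and types are illustrative assumptions, not a normative schema:

```python
from dataclasses import dataclass

@dataclass
class LobeInput:
    memory_context: dict       # Hot Memory + Longterm Memory retrieved via RAG
    user_query: str            # current interaction payload from the user
    tool_manifest: list[dict]  # available Tools and their descriptions/APIs

@dataclass
class LobeOutput:
    memory_directives: list[dict]  # e.g. "add to Longterm Memory"
    response: str                  # output generated for the User
    tool_directives: list[dict]    # which Tools to call, with what parameters
    execution_proof: bytes         # verifiable evidence of the execution process
```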
2. Lobe Consensus
Ensuring the Lobe's execution is verifiable and deterministic (or has agreed-upon non-determinism) is crucial for overall DeAgent consensus. This requires proofs for two aspects:
Model Invocation/Execution Proof: Proving the underlying AI model was called correctly and produced a specific output. This is challenging.
Non-Model Logic Proof: Proving the correctness of the surrounding business logic within the Lobe (preprocessing, post-processing, tool handling). This is more amenable to standard techniques like Zero-Knowledge Proofs (ZKPs).
(1) Model Invocation Proof Mechanisms
Closed-Source Models (Whitelisting + TLS Proofs): For proprietary models from trusted vendors (e.g., OpenAI, Anthropic, Google, Cloudflare AI), full execution proof is impossible. Instead, we rely on a whitelist approach. Executors provide proof (e.g., via ZK-TLS techniques like DECO or TLSNotary) demonstrating they securely connected to the vendor's official API endpoint and relayed the specific query and response without tampering.
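A hedged sketch of the acceptance check this implies, in Python; the TLSProof structure, the endpoint list, and verify_zk_tls are hypothetical placeholders, since the actual verifier comes from the chosen ZK-TLS scheme (e.g., DECO or TLSNotary):

```python
from dataclasses import dataclass

# Hypothetical whitelist of vendor endpoints trusted by the network.
WHITELISTED_ENDPOINTS = {
    "https://api.openai.com/v1/chat/completions",
    "https://api.anthropic.com/v1/messages",
}

@dataclass
class TLSProof:
    """Hypothetical attestation of one TLS session, produced by a ZK-TLS scheme."""
    endpoint: str         # the API endpoint the Executor connected to
    query_hash: bytes     # commitment to the relayed query
    response_hash: bytes  # commitment to the relayed response
    attestation: bytes    # the ZK-TLS proof bytes themselves

def verify_zk_tls(proof: TLSProof) -> bool:
    """Placeholder: a real deployment runs the DECO/TLSNotary verifier here."""
    return len(proof.attestation) > 0  # stand-in check only

def accept_model_invocation(proof: TLSProof) -> bool:
    # Accept only proofs that verify AND target a whitelisted vendor endpoint.
    return proof.endpoint in WHITELISTED_ENDPOINTS and verify_zk_tls(proof)
```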
Open-Source Models (Challenges and Hybrid Approach):
zkML: Techniques like zkSNARKs applied to machine learning models (zkML) could theoretically provide full execution proofs. However, current zkML approaches (even promising ones such as Polyhedra's zkPyTorch, whose practical availability was limited at the time of writing) often incur prohibitive computational overhead and latency.
Replication Issues: Simply having multiple nodes run the same open model is not foolproof. Differences in hardware (e.g., CUDA SM versions), software libraries, and floating-point arithmetic can produce slightly different outputs, breaking naive bit-exact consensus (see the short numerical sketch after this list).
Proposed Hybrid Approach (Active + PoW-like Entropy Minimization): We propose a specialized compute network for open models. Participants run the models. When an interaction requires an open model, multiple participants can execute and submit their results. Instead of requiring identical outputs, we use an entropy function to select the "best" result. The participant submitting the result with the lowest calculated entropy is chosen. This incentivizes participants to produce high-quality, relevant outputs. Penalty and reward mechanisms should be integrated into this network.
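To illustrate the replication issue above: floating-point addition is not associative, so the same values reduced in a different order (as happens across GPU kernels, CUDA SM versions, or library versions) can differ bit-for-bit. A minimal, self-contained demonstration:

```python
import numpy as np

# One million float32 values; the same data, summed in two different orders.
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000).astype(np.float32)

forward = np.sum(x)        # one reduction order
reverse = np.sum(x[::-1])  # the reversed order

# Floating-point addition is not associative, so these frequently disagree
# in the last bits -- exactly the divergence that breaks naive replication.
print(forward == reverse)
print(abs(float(forward) - float(reverse)))
```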
(2) Entropy Function for Result Selection
To objectively select among potentially valid but different model outputs (especially from open models), we use an entropy-based selection mechanism:
Encoding Model: Utilize a separate, highly capable encoder model (potentially a RAG-enhanced QA model), denoted E. This model maps text sequences to dense vector embeddings: vector = E(sequence).
Entropy Calculation: For a given input and a candidate output, compute the negative dot product of their embeddings: Entropy = -(E(input) ⋅ E(output)). (Note: using the negative dot product means higher similarity/relevance corresponds to lower "entropy" in this context; cosine similarity could also be used.)
Selection: Among all candidate outputs submitted by Executors for a given interaction, the Committers select the one yielding the lowest entropy value.
Security Assumption: This mechanism relies on the assumption that it is computationally difficult for an attacker to craft a malicious or arbitrary response that also achieves the minimum entropy score for a given input under the chosen encoder model E. This assumption requires ongoing research and validation.
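A minimal sketch of the entropy calculation and Committer-side selection, assuming a sentence-embedding model stands in for the encoder E (the specific model below is an illustrative choice, not one the network would necessarily pin):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Stand-in for the encoder model E; a production network would pin an agreed,
# versioned model so every Committer computes identical embeddings.
E = SentenceTransformer("all-MiniLM-L6-v2")

def entropy(input_text: str, output_text: str) -> float:
    """Entropy = -(E(input) . E(output)); lower means more relevant."""
    e_in = E.encode(input_text)
    e_out = E.encode(output_text)
    return -float(np.dot(e_in, e_out))

def select_lowest_entropy(input_text: str, candidates: list[str]) -> str:
    """Committers pick the candidate output with the lowest entropy value."""
    return min(candidates, key=lambda c: entropy(input_text, c))

# Usage: choose among outputs submitted by different Executors.
winner = select_lowest_entropy(
    "What is the capital of France?",
    ["Paris is the capital of France.", "Bananas are yellow."],
)
print(winner)  # expected: the relevant answer, i.e. the lower-entropy candidate
```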
3. Lobe Summary
Through ZK proofs for non-model logic, TLS proofs for whitelisted APIs, and an entropy-based selection for open models, the DeAgent framework aims to establish robust consensus around the Lobe's execution results.