Human Dividend Protocol: A Decentralized Proof of Compute Network for the AI Economy

Version 2.0.0 | February 2026 | Classification: Public

Developed by Laurelin Labs LLC


Abstract

The Human Dividend Protocol (HDP) is a Decentralized Physical Infrastructure Network (DePIN) that measures, verifies, and compensates AI compute contributions from personal machines through the $HDPT token on Base L2. As artificial intelligence becomes the dominant engine of economic productivity, the machines that power this transformation --- owned by billions of individuals --- generate immense value that flows exclusively to centralized AI providers. Every token produced during inference represents economic output created on someone's hardware, yet machine owners receive nothing for it. Local model operators bear the full cost of inference even as the tokens they generate hold measurable value. Cloud API consumers subsidize infrastructure without recognition. No major AI provider signs or attests to API responses, leaving all client-side usage data fundamentally unverifiable.

HDP addresses this asymmetry by introducing Proof of Compute (PoC), a multi-layered verification protocol that combines hardware attestation, GPU telemetry, model identity verification, computation timing profiles, energy correlation, kernel audit logs, and zero-knowledge proofs into a composite trust score. Rather than requiring every individual compute claim to be cryptographically proven --- an impractical constraint given the current state of zkML --- HDP employs a tiered trust model where approximately 30% of fully verified contributions from Compute Providers and Orchestrators calibrate statistical models that make fabrication economically irrational across the entire network.

The protocol distributes $HDPT tokens --- an ERC-20 utility token with a fixed supply of 1,000,000,000 --- according to verified compute work. Rewards follow a halving emission schedule governed by on-chain parameters, with a dynamic rate controller that maintains sustainable issuance over a multi-year horizon. Nine smart contracts are deployed and verified on Base Sepolia testnet. Over 885 automated tests validate the protocol across ten application packages. Desktop applications, browser extensions, TypeScript and Python SDKs, a Go-based aggregation engine, and four operational dashboards constitute the working implementation.

The Human Dividend thesis is simple: machines power AI, and their owners deserve the dividend.


Table of Contents

  1. Introduction
  2. Problem Statement
  3. Background and Prior Art
  4. Proof of Compute Protocol
  5. Network Architecture
  6. Token Economics
  7. Security Model
  8. Governance
  9. Use Cases
  10. Implementation Status and Roadmap
  11. Team
  12. Conclusion
  13. References

1. Introduction

1.1 The AI Compute Revolution

The rapid proliferation of large language models, generative AI systems, and autonomous agents has created an unprecedented category of economic activity: AI-mediated compute. Global AI API usage is growing at exponential rates, with major providers processing billions of inference requests daily. Each interaction requires substantial client-side resources --- request preparation, response parsing, context management, local storage, and network bandwidth --- yet the economic value generated by this distributed compute infrastructure flows almost entirely to centralized service providers.

The emergence of locally deployable AI models has amplified this dynamic. Frameworks such as Ollama, vLLM, llama.cpp, and LM Studio enable individuals to run large language models on personal hardware, bearing the full cost of electricity, GPU depreciation, memory allocation, and thermal management. These operators produce real economic output --- every token generated during local inference is a unit of value created entirely by their hardware --- yet they have no mechanism for economic participation in the value they produce.

Meanwhile, the global installed base of AI-capable devices continues to expand. Consumer GPUs capable of running large language models locally are now available for under $1,000. Apple Silicon machines ship with unified memory architectures that can load 70-billion-parameter models. Enterprise workstations increasingly feature AI-optimized accelerators as standard components. Collectively, these devices represent a distributed compute network of extraordinary scale --- potentially hundreds of millions of AI-capable machines worldwide. This infrastructure operates daily in service of AI workloads, yet its contribution remains unmeasured, unverified, and uncompensated.

The economic magnitude of this uncompensated contribution is significant. Consider a single developer making 200 AI API calls per day: each call requires local CPU cycles for request preparation and response parsing, memory for context management, network bandwidth for data transfer, and storage for conversation history. For local model operators, the economics are even starker --- their machines perform the actual inference, generating thousands of tokens per session through billions of floating-point operations, yet no mechanism exists to recognize or compensate this output. Across millions of such users, the aggregate resource commitment and token output is substantial, yet it generates zero economic return for the machine owners who bear these costs and produce this value.
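As an illustration, the order of magnitude of these costs and outputs can be estimated with a few assumed figures. Every number below (GPU power draw, electricity rate, daily duty cycle, decode throughput) is a hypothetical placeholder for demonstration, not a protocol parameter:

```python
# Illustrative (hypothetical) estimate of the uncompensated cost borne by a
# local model operator. All figures are assumptions, not protocol parameters.

GPU_POWER_WATTS = 350           # assumed sustained draw during inference
ELECTRICITY_USD_PER_KWH = 0.15  # assumed residential electricity rate
HOURS_PER_DAY = 2.0             # assumed daily inference time
TOKENS_PER_SECOND = 30          # assumed decode throughput

daily_energy_kwh = GPU_POWER_WATTS / 1000 * HOURS_PER_DAY
daily_electricity_usd = daily_energy_kwh * ELECTRICITY_USD_PER_KWH
daily_tokens = TOKENS_PER_SECOND * HOURS_PER_DAY * 3600

print(f"energy: {daily_energy_kwh:.2f} kWh/day")
print(f"electricity: ${daily_electricity_usd:.3f}/day")
print(f"tokens generated: {daily_tokens:,.0f}/day")
```

Even under these modest assumptions, a single machine produces on the order of two hundred thousand tokens per day at a measurable electricity cost, none of which is currently compensated.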

1.2 The Human Dividend Thesis

HDP proposes a fundamental reframing of the relationship between machine owners and the AI economy: AI compute is a distributed infrastructure resource, and those who contribute hardware to AI workloads deserve recurring compensation proportional to their verified contribution.

This thesis draws from the established DePIN model, where blockchain-coordinated incentives align distributed physical resource providers with network demand. Helium demonstrated this pattern for wireless coverage. Filecoin proved it for data storage. Render applied it to GPU rendering. HDP extends the model to AI compute metering --- the measurement and compensation of client-side contributions to AI inference, training, and orchestration workloads.

We call this compensation the Human Dividend: a recurring economic return to machine owners based on the verified AI work their hardware performs. The dividend is not speculative. It is earned through measurable, validated compute contributions and paid in a token whose issuance is governed by transparent on-chain rules.

1.3 Design Principles

The Human Dividend Protocol is built on four core principles:

Verifiability. Every claimed compute contribution must be substantiated through measurable evidence. The protocol employs a multi-dimensional verification system that combines hardware attestation, real-time telemetry, timing analysis, and cryptographic proofs to establish confidence in each claim. Where individual cryptographic proof is impractical, statistical methods applied across the network provide collective assurance.

Fairness. Rewards must be proportional to actual work performed. The protocol distinguishes between contribution types --- local GPU inference, API-mediated orchestration, and browser-based consumption --- and assigns reward multipliers that reflect the verifiability and economic cost of each category. Machines that bear higher compute costs and produce stronger proof earn commensurately higher rewards.

Accessibility. Participation must not require specialized hardware, deep technical expertise, or significant capital outlay. A browser extension installation, a one-line SDK wrapper, or a desktop application should be sufficient to begin earning. The protocol provides multiple entry points at varying levels of friction and sophistication.

Sustainability. The economic model must remain viable as the network scales from thousands to millions of participants. Emission schedules, dynamic rate controllers, staking requirements, and halving mechanisms work in concert to ensure long-term token value and network health.

1.4 Document Purpose

This whitepaper serves as the technical and economic specification for the Human Dividend Protocol. It describes the protocol's verification mechanisms, network architecture, token economics, security model, and governance framework at an abstract level suitable for developers evaluating technical integration, validators considering network participation, machine owners interested in earning rewards, researchers studying DePIN economics, and investors conducting due diligence.

This document does not constitute investment advice or a solicitation to purchase tokens.


2. Problem Statement

The AI economy has created a new form of value extraction that operates at global scale, affects billions of device owners, and has no existing mechanism for redress. This section articulates the three dimensions of the problem that HDP addresses.

2.1 The Invisible Infrastructure Subsidy

When a knowledge worker drafts a document with Claude, a developer debugs code with GPT-4, or an analyst queries Gemini, a chain of compute activity occurs across multiple systems:

+-------------------------------------------------------------------+
|               AI INTERACTION VALUE FLOW                           |
+-------------------------------------------------------------------+
|                                                                   |
|   User Machine           AI Provider           Value Created      |
|        |                      |                      |            |
|        +---> Request Prep     |                      |            |
|        +---> Context Mgmt     |                      |            |
|        +---> API Call ------->|                      |            |
|        |                      +---> Inference        |            |
|        |<------- Response ----|                      |            |
|        +---> Parse Response   |                      |            |
|        +---> Local Storage    |                      |            |
|        +---> Display/Render   |                      |            |
|        |                      |                      |            |
|        |   UNCOMPENSATED      |   REVENUE            |            |
|        |   Hardware costs     |   API fees           |  Tokens    |
|        |   Electricity        |   Subscriptions      |  Generated |
|        |   Bandwidth          |   Enterprise deals   |  Economic  |
|        |   Depreciation       |   Data insights      |  Output    |
|        |   Tokens generated   |   Model licensing    |            |
|        |                      |                      |            |
+-------------------------------------------------------------------+

Users pay for API access. Providers capture revenue. But machine owners receive no compensation for the electricity consumed during AI interactions, the hardware depreciation from sustained compute loads, the bandwidth utilized for data transfer, or the processing cycles dedicated to AI workloads. Critically, they receive no share of the value created by the tokens their machines generate --- every inference output, every completed response, every generated embedding represents economic value produced on or through their hardware and captured entirely by providers. This constitutes an invisible infrastructure subsidy --- billions of devices contributing resources and producing valuable output for the AI economy without recognition or reward.

2.2 The Local Model Challenge

The rise of locally deployed AI models --- Llama, Mistral, Phi, Qwen, Gemma, and others --- creates an additional dimension to this problem. Users who run local inference bear the entirety of compute costs: electricity for sustained GPU utilization, thermal management, VRAM allocation, and accelerated hardware wear. Every token generated on their hardware --- every response, every embedding, every chain-of-thought completion --- is a unit of economic output produced entirely by their machine. Yet they participate in no economic value chain beyond their own immediate productivity.

A user running a 70-billion-parameter model on a high-end GPU consumes hundreds of watts of power during inference, generates significant heat that accelerates component degradation, and allocates tens of gigabytes of VRAM exclusively to the model. Their machine may produce thousands of tokens per session, each representing real computational work --- matrix multiplications, attention calculations, and memory operations executed on their silicon. This is real, measurable economic cost and real, measurable economic output, with no corresponding compensation mechanism.

2.3 The Verification Gap

A fundamental challenge underlies any attempt to compensate AI compute: no major AI provider cryptographically signs or attests to API responses. When a user calls the Anthropic, OpenAI, Google, or xAI APIs, the response includes metadata such as token counts, request identifiers, and model names, but none of this data carries a cryptographic signature from the provider. All client-side usage data can therefore, in principle, be fabricated.

This verification gap means that any naive reward system --- one that simply pays tokens per reported API call --- would be trivially exploitable. An attacker could generate millions of fake usage reports without performing any actual compute.

This is not a theoretical concern. The absence of provider-signed responses is the defining constraint of the AI compute verification problem. Unlike storage networks (where the data itself serves as proof) or rendering networks (where the output image serves as proof), AI inference produces outputs that could have been generated by any means. The token count claimed by a client cannot be independently verified without re-executing the inference or obtaining a cryptographic attestation from the provider.

HDP addresses this gap not by requiring provider cooperation (which is unlikely in the near term) but through a multi-layered verification architecture that makes fabrication economically irrational. Hardware attestation binds claims to physical devices. GPU telemetry validates that real computation occurred. Timing profiles enforce physics-constrained performance bounds. Statistical validation detects anomalous patterns across the network. And staking requirements ensure that the cost of attempted fraud exceeds the potential reward.

The protocol is designed to be robust even if individual verification layers are imperfect. The composite scoring approach means that an adversary must defeat multiple independent verification mechanisms simultaneously --- hardware attestation AND telemetry AND timing AND statistical analysis --- to extract rewards. The probability of evading all layers simultaneously is dramatically lower than evading any single layer.
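The compounding effect of layered verification can be sketched numerically. Assuming, as an idealization, that the layers fail independently, and using purely hypothetical per-layer evasion probabilities:

```python
# Sketch of the composite-evasion argument under an (idealized) independence
# assumption. The per-layer evasion probabilities are hypothetical.

evasion_prob = {
    "hardware_attestation": 0.05,
    "gpu_telemetry": 0.10,
    "timing_profile": 0.15,
    "statistical_analysis": 0.20,
}

joint = 1.0
for p in evasion_prob.values():
    joint *= p  # the attacker must defeat every layer simultaneously

print(f"easiest single layer to evade: {max(evasion_prob.values()):.2f}")
print(f"joint evasion of all layers:   {joint:.6f}")
```

Under these illustrative figures, the joint evasion probability is over a thousand times smaller than evading the weakest single layer. Real layers are not fully independent, but the qualitative effect --- multiplicative difficulty --- is the design intent.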


3. Background and Prior Art

3.1 Proof of Work and Useful Work

Bitcoin's Proof of Work (PoW) established the foundational insight that cryptoeconomic systems can incentivize distributed resource contribution through verifiable computation [1]. However, Bitcoin's PoW is deliberately useless: the hash puzzles that miners solve serve only to demonstrate expended computational effort, consuming enormous energy while producing nothing beyond consensus itself.

Subsequent protocols sought to redirect this computational effort toward productive ends. Filecoin introduced Proof of Replication (PoRep) and Proof of Spacetime (PoSt), which verify that storage providers are maintaining unique copies of client data over time [2]. PoRep requires providers to demonstrate that they have encoded a unique copy of the data into their storage, while PoSt requires ongoing proof that the data continues to be stored over time. These proof systems transform the abstract concept of "useful work" into a concrete, cryptographically verifiable property: the prover demonstrates ongoing stewardship of specific data. The key innovation is that the proof mechanism itself --- the process of encoding and verifying data --- serves the network's purpose rather than being wasted.

HDP extends this lineage to AI compute. Where Filecoin proves that data is being stored, HDP proves that computation is being performed --- that a specific model was executed on specific hardware, consuming measurable resources, to produce inference results consistent with the claimed workload. The challenge is fundamentally different: storage can be verified by reading back data, but computation can only be verified by either re-executing it (expensive), proving it cryptographically (currently impractical for large models), or establishing multi-dimensional evidence that it occurred (HDP's approach).

3.2 Decentralized Compute Networks

Several projects have established the viability of decentralized compute coordination:

Render Network pioneered decentralized GPU rendering, allowing node operators to contribute GPU capacity for 3D rendering tasks and earn RENDER tokens [3]. Render's verification model relies on deterministic rendering outputs --- the same scene rendered on different GPUs should produce identical pixel data --- a property that AI inference does not share due to floating-point non-determinism.

io.net created a distributed GPU pooling platform that aggregates underutilized GPUs into virtual clusters for AI and machine learning workloads [4]. io.net focuses on the supply side of GPU compute, connecting GPU owners with demand, rather than on verifying the compute itself.

Akash Network provides a decentralized cloud compute marketplace where providers bid on containerized workloads [5]. Akash addresses general-purpose compute rather than AI-specific verification challenges.

Gensyn introduced Proof of Learning (PoL), a verification mechanism for machine learning training that uses probabilistic checking of gradient computations [6]. Gensyn's approach is the closest precursor to HDP's Proof of Compute, though Gensyn focuses specifically on training verification while HDP addresses the broader spectrum of AI compute including inference, orchestration, and consumption.

The following table summarizes the landscape of decentralized compute networks and HDP's positioning:

| Project | Resource Type | Token  | Verification Method               | HDP Differentiation                                      |
|---------|---------------|--------|-----------------------------------|----------------------------------------------------------|
| Render  | GPU rendering | RENDER | Deterministic output comparison   | HDP verifies non-deterministic AI inference              |
| io.net  | GPU pooling   | IO     | Task assignment and completion    | HDP verifies client-side compute, not pooled tasks       |
| Akash   | Cloud compute | AKT    | Container deployment verification | HDP operates at the inference level, not container level |
| Gensyn  | ML training   | -      | Probabilistic gradient checking   | HDP covers inference + orchestration + consumption       |
| HDP     | AI compute    | HDPT   | 7-layer composite scoring         | First AI compute metering DePIN                          |

3.3 AI Inference Verification

The challenge of verifying that a specific AI inference actually occurred has attracted significant research attention:

zkML frameworks such as EZKL convert machine learning models into zero-knowledge circuits, enabling cryptographic proof that a model produced a specific output from a specific input [7]. Current zkML technology can handle models up to approximately 100 million parameters in practical timeframes, but remains impractical for full verification of large language models with billions of parameters.

Trusted Execution Environments (TEEs) such as Intel SGX, Intel TDX, and AWS Nitro Enclaves provide hardware-isolated computation environments that can attest to the code and data they process [8]. TEEs offer strong guarantees but require specific hardware and introduce latency overhead.

TLSNotary uses multi-party computation over TLS sessions to prove that specific data was received from a specific server, without requiring server cooperation [9]. TLSNotary can prove that an API response came from a genuine provider, but is currently limited to TLS 1.2 and adds significant overhead per session.

Opacity Network combines TLS attestation with Cloudflare AI Gateway routing to produce verifiable proofs of AI API usage [10]. Opacity demonstrates that provider-agnostic verification is possible, though it requires routing traffic through specific infrastructure.

HDP draws from all of these approaches, employing each where appropriate within a layered verification architecture rather than depending on any single mechanism. The key design insight is that no single verification technology is both practical and sufficient for all AI compute scenarios. zkML is cryptographically strong but computationally limited. TEEs are hardware-bound but not universally available. TLS attestation proves network provenance but not computational effort. By combining these techniques into a composite scoring system, HDP achieves practical verification coverage across the full spectrum of AI compute activities.

3.4 DePIN Architecture Patterns

Analysis of successful DePIN protocols reveals a common five-layer architecture model that HDP adopts and extends:

+-------------------------------------------------------------------+
|                    DePIN 5-LAYER MODEL                            |
+-------------------------------------------------------------------+
|                                                                   |
|  +-------------------------------------------------------------+ |
|  |  APPLICATION LAYER                                           | |
|  |  User interfaces, developer tools, dashboards, SDKs         | |
|  +-------------------------------------------------------------+ |
|  |  GOVERNANCE LAYER                                            | |
|  |  Parameter management, upgrade mechanisms, DAO transition    | |
|  +-------------------------------------------------------------+ |
|  |  DATA LAYER                                                  | |
|  |  Off-chain aggregation, indexing, content-addressed storage  | |
|  +-------------------------------------------------------------+ |
|  |  BLOCKCHAIN LAYER                                            | |
|  |  Smart contracts, token logic, on-chain state                | |
|  +-------------------------------------------------------------+ |
|  |  INFRASTRUCTURE LAYER                                        | |
|  |  Physical machines, hardware attestation, compute resources  | |
|  +-------------------------------------------------------------+ |
|                                                                   |
+-------------------------------------------------------------------+

This layered separation allows each concern to evolve independently. Infrastructure participants need not understand blockchain mechanics. Application developers need not understand hardware attestation. Governance changes propagate through well-defined interfaces rather than requiring coordinated updates across the entire stack.

Successful DePIN protocols share several architectural properties that HDP adopts: token-incentivized resource contribution (participants earn tokens proportional to verified contributions), progressive decentralization (beginning with foundation-led coordination and transitioning to community governance), verifiable proof systems (cryptographic or economic mechanisms that prevent fraudulent resource claims), and scalable settlement (off-chain aggregation with periodic on-chain finalization to manage blockchain throughput constraints).


4. Proof of Compute Protocol

The Proof of Compute (PoC) protocol is the core innovation of the Human Dividend Protocol. It defines how AI compute contributions are measured, verified, scored, and rewarded.

4.1 Design Goals

The design of a compute verification protocol for AI workloads must navigate fundamental trade-offs that do not exist in storage or rendering verification. AI inference is non-deterministic (the same prompt may produce different outputs), the largest models exceed current zkML capabilities by orders of magnitude, and the most common form of AI compute --- cloud API calls --- leaves no local computational trace beyond network traffic. The PoC protocol is designed to satisfy four goals that exist in tension with one another:

Verifiability without universal cryptographic proof. Full zero-knowledge proof of every AI inference is computationally infeasible for large models with current technology. The protocol must achieve high confidence in aggregate network honesty without requiring each individual claim to be independently provable.

Low onboarding friction. A protocol that requires TPM attestation, GPU telemetry drivers, and zkML setup before any rewards are earned will fail to achieve adoption. Participants must be able to begin contributing and earning immediately, with proof quality improving progressively over time.

Sybil resistance. A single adversary must not be able to simulate thousands of machines to extract disproportionate rewards. The protocol must bind claims to physical hardware and impose economic costs on fabrication attempts.

Economic security. The cost of successfully gaming the protocol must exceed the potential reward. This property must hold not just at the individual claim level but across sustained attack strategies.

4.2 Compute Receipt Model

Every unit of AI work produces a Compute Receipt --- a structured, signed document that captures evidence of the computation performed. A receipt is the atomic unit of proof in the HDP network.

Conceptually, a compute receipt contains four categories of evidence:

Hardware context. The identity of the physical machine, established through hardware attestation mechanisms such as TPM or Secure Enclave key binding. This includes a hardware-bound public key, a fingerprint of the machine's compute capabilities, and attestation that the HDP client binary matches a known-good release.

Computation metadata. The nature of the AI work performed: the model identity (via file hash for local models, or provider and model name for API calls), token counts, compute unit calculation, and the type of workload (local inference, API call, fine-tuning, embedding generation). The computation type determines which verification layers are applicable and what calibration data the scoring engine uses for validation.

Telemetry evidence. Real-time measurements captured during computation: GPU utilization samples, VRAM allocation, power consumption, temperature readings, kernel dispatch traces, and energy consumption figures. These measurements are physics-constrained --- they reflect actual hardware behavior that cannot be replicated without performing real computation. Telemetry is sampled at high frequency during the computation window and summarized into the receipt as statistical aggregates (means, distributions, correlations) that characterize the workload signature.

Timing data. Precise timestamps for the computation lifecycle: time-to-first-token, per-token generation timestamps, total latency, and streaming chunk arrival times. Timing profiles are determined by the physical properties of the hardware and the mathematical structure of the model, creating a signature that is difficult to fabricate without performing the actual inference.

Each receipt is signed by the machine's hardware-bound key, binding the claim to a specific physical device. Receipts include a monotonic nonce to prevent replay attacks and a reference to the proof rules version under which they were generated.
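A minimal sketch of this receipt structure follows. Field names are illustrative, not the protocol's wire format, and a plain SHA-256 digest stands in for the hardware-bound signature, which is out of scope here:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

# Minimal sketch of a compute receipt. Field names are illustrative; the
# actual HDP wire format is not specified here. A plain SHA-256 digest over
# the canonical serialization stands in for the hardware-bound signature.

@dataclass
class ComputeReceipt:
    # Hardware context
    machine_pubkey: str
    capability_fingerprint: str
    # Computation metadata
    model_id: str            # file hash for local models, provider/model for APIs
    workload_type: str       # e.g. "local_inference", "api_call"
    token_count: int
    # Telemetry evidence (statistical aggregates, not raw samples)
    mean_gpu_util: float
    mean_power_watts: float
    # Timing data
    time_to_first_token_ms: float
    total_latency_ms: float
    # Replay protection and rules binding
    nonce: int = 0
    proof_rules_version: str = "v1"

    def digest(self) -> str:
        # Canonical serialization: stable key order so the digest is deterministic.
        canonical = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()

receipt = ComputeReceipt(
    machine_pubkey="ed25519:abc...",
    capability_fingerprint="rtx4090/24GB",
    model_id="sha256:deadbeef",
    workload_type="local_inference",
    token_count=1842,
    mean_gpu_util=0.92,
    mean_power_watts=331.5,
    time_to_first_token_ms=412.0,
    total_latency_ms=61350.0,
    nonce=17,
)
print(receipt.digest())
```

In the actual protocol, the digest would be signed by the machine's hardware-bound key (TPM or Secure Enclave) rather than emitted bare, and the monotonic nonce would be enforced by the verifier to reject replays.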

4.3 Multi-Layer Verification

HDP employs a multi-dimensional scoring system that evaluates compute receipts across seven verification layers. Each layer provides independent evidence of legitimate computation, and their composite score determines reward eligibility and multiplier.

Layer 1: Hardware Attestation. The foundation of the trust model. Each machine establishes identity through platform-specific hardware attestation --- TPM 2.0 on Windows and Linux, Secure Enclave and App Attest on macOS, NVML attestation for NVIDIA GPUs. Hardware attestation provides a unique, hardware-bound identity that cannot be duplicated in software, proves that the running client binary matches a known-good release, verifies the machine's claimed compute capabilities (CPU cores, GPU model, VRAM capacity, RAM), and binds every subsequent compute receipt to a specific physical device through cryptographic signing.

Layer 2: GPU Telemetry. AI inference produces an unmistakable hardware signature. The HDP client captures real-time telemetry during computation: GPU utilization percentage, VRAM allocation, power draw in watts, temperature progression, streaming multiprocessor occupancy, and clock speeds. These measurements must be internally consistent (power consumption correlates with utilization, temperature rises during sustained computation) and externally consistent (VRAM allocation matches the claimed model's requirements for the reported quantization level).

Layer 3: Model Identity Verification. When a model is loaded for local inference, the client captures the SHA-256 hash of the weights file and matches it against the HDP Model Registry --- a versioned, on-chain-governed catalog of known models stored on IPFS. Verified model identity enables calibration-based validation: the protocol knows what performance characteristics to expect from a given model on given hardware. Unrecognized models are accepted but receive reduced proof scores.

Layer 4: Computation Timing Profiles. Every model architecture has physics-constrained inference characteristics. The prefill phase (processing the input prompt) exhibits high GPU parallelism and duration proportional to input length. The decode phase (generating output tokens) is sequential and autoregressive, with tokens-per-second determined by the model architecture, quantization, and hardware capabilities. Time-to-first-token is predictable for a given model and hardware combination. The protocol maintains calibration data that maps expected performance ranges to specific model-hardware pairs, allowing it to detect claims that violate physical constraints.
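This calibration check amounts to a bounds test on the claimed decode rate. The calibration table and tolerance factors below are hypothetical placeholders:

```python
# Sketch of Layer 4: check a claimed decode rate against calibration bounds
# for a model-hardware pair. Calibration values and tolerances are hypothetical.

# (model_id, hardware_id) -> expected tokens-per-second range during decode
CALIBRATION = {
    ("llama-3-70b-q4", "rtx4090"): (8.0, 16.0),
    ("mistral-7b-q4", "rtx4090"): (60.0, 120.0),
}

def timing_plausible(model_id: str, hardware_id: str,
                     tokens_generated: int, decode_seconds: float) -> bool:
    bounds = CALIBRATION.get((model_id, hardware_id))
    if bounds is None:
        return True  # no calibration data: defer to the other layers
    lo, hi = bounds
    rate = tokens_generated / decode_seconds
    # Claims faster than the hardware ceiling violate physics; claims far
    # below the floor suggest padding or an unrelated workload.
    return lo * 0.5 <= rate <= hi * 1.1

print(timing_plausible("llama-3-70b-q4", "rtx4090", 1200, 100.0))  # 12 tok/s
print(timing_plausible("llama-3-70b-q4", "rtx4090", 1200, 10.0))   # 120 tok/s
```

The asymmetric tolerances reflect the asymmetric threat: overstated throughput is physically impossible, while understated throughput merely wastes the claimant's own time.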

Layer 5: Energy Correlation. Power consumption is a hardware-measured quantity that correlates with computational work. Intel RAPL provides CPU power measurements. NVIDIA power sensors and Apple Silicon power metrics provide GPU and accelerator measurements. The total energy consumed during a computation window must be consistent with the floating-point operations implied by the claimed workload. This measurement is rooted in hardware sensors and cannot be spoofed through software manipulation.
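A minimal consistency check might compare measured joules against the work implied by the claim, using the common rule-of-thumb approximation of roughly 2 FLOPs per parameter per generated token during decode. The efficiency range below is a hypothetical placeholder, not a protocol constant:

```python
# Sketch of Layer 5: sanity-check measured energy against the claimed work.
# The ~2 FLOPs/parameter/token decode approximation is a standard rule of
# thumb; the plausible efficiency range is a hypothetical placeholder.

def energy_consistent(params_billion: float, tokens: int,
                      measured_joules: float,
                      gflops_per_joule: tuple = (10.0, 200.0)) -> bool:
    # Approximate decode work: 2 FLOPs per parameter per generated token.
    # (1e9 params and the 1e-9 "giga" prefix cancel, leaving GFLOPs.)
    gflops = 2.0 * params_billion * tokens
    lo_eff, hi_eff = gflops_per_joule
    # The efficiency implied by the claim must fall within a plausible
    # range for real hardware; far outside it, the claim is inconsistent.
    implied = gflops / measured_joules
    return lo_eff <= implied <= hi_eff

# A 70B model generating 1,000 tokens implies ~140,000 GFLOPs of decode work.
print(energy_consistent(70.0, 1000, measured_joules=2000.0))
```

Too little measured energy implies an impossibly efficient machine; too much implies the energy was spent on something other than the claimed inference.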

Layer 6: Kernel Audit Logs. GPU computation proceeds through a sequence of kernel dispatches --- GEMM operations, softmax calculations, layer normalization, and attention computations --- that follow a characteristic pattern for transformer-based inference. This pattern differs from gaming, rendering, and cryptocurrency mining workloads. The client captures a hash of the kernel dispatch sequence during computation, providing evidence that the GPU executed operations consistent with AI inference rather than an unrelated workload.

Layer 7: Zero-Knowledge Proofs. zkML provides the strongest possible verification: cryptographic proof that specific model weights were applied to specific inputs to produce specific outputs. Current technology makes full-model proofs impractical for large language models, but partial proofs --- verifying randomly selected layers or embedding consistency --- are feasible today and provide a strong additional signal. zkML proofs are treated as a bonus enhancement rather than a requirement, with a reward multiplier that exceeds the base maximum for contributors who provide them.

HDP's approach to zkML is progressive. In Phase 1, no zkML is required --- hardware attestation, telemetry, and timing provide sufficient verification for honest-majority networks. In Phase 2, optional zkML partial proofs earn a bonus multiplier, incentivizing adoption without mandating it. In Phase 3 and beyond, as zkML technology matures and can handle larger models, increasing proof coverage becomes expected, with the scoring weights shifting to assign greater importance to cryptographic verification.

Each layer contributes to a composite score. The scoring weights are governed on-chain through the ProofRules contract, allowing the protocol to adjust the relative importance of each layer as verification technology matures. The composite score determines both eligibility (receipts below a minimum threshold are rejected) and the reward multiplier applied to the contributor's compute units.

4.4 Tiered Trust Model

Not all AI compute contributions are equal in their verifiability or economic cost. HDP recognizes three tiers of participation, each with distinct proof characteristics and reward structures:

Tier 1: Compute Providers. Users who run AI models locally on their own hardware --- through Ollama, vLLM, llama.cpp, LM Studio, or similar frameworks. Compute Providers produce the strongest proofs because all seven verification layers are available: hardware attestation, GPU telemetry, model file hashing, timing profiles, energy correlation, kernel audit logs, and optionally zkML partial proofs. These contributors bear the highest economic cost (electricity, hardware wear, VRAM allocation) and produce the most verifiable evidence. They earn the highest reward multiplier range.

Tier 2: Orchestrators. Developers and agents who call cloud AI APIs from code, using SDK wrappers that capture timing, token counts, and network metadata. Orchestrators activate a subset of verification layers --- hardware attestation (binding the claim to a physical development machine), computation timing profiles (provider-specific latency characteristics), and network traffic fingerprinting (TLS certificate verification, traffic pattern analysis, destination IP validation). The proof is weaker than Tier 1 because no GPU telemetry or model file hash is available, but the SDK wrapper requires minimal integration effort. Orchestrators earn a medium reward multiplier range.

Tier 3: Consumers. End users who interact with AI through web interfaces --- ChatGPT, Claude, Gemini, Grok --- via the HDP browser extension. The extension captures interaction metadata from the browser environment: estimated token counts, timing data, and provider identification. Consumers activate minimal verification layers, and their proofs are the weakest in the network. They earn the lowest reward multiplier range. However, Tier 3 serves a critical strategic function as the lowest-friction entry point to the network.

The calibration insight. The tiered model enables a powerful statistical property: approximately 30% of network data originating from fully verified Tier 1 and Tier 2 sources provides a ground-truth calibration dataset. This dataset trains statistical models of expected network behavior --- compute distributions, timing patterns, geographic correlations, hardware performance baselines --- against which all claims, including Tier 3 claims, are evaluated. Because fabricated data systematically deviates from real compute patterns in ways that are detectable across populations (even if not individually provable), the presence of a substantial verified dataset makes large-scale fabrication statistically visible and therefore economically irrational.

The multiplier gap between tiers creates a natural migration path. A Tier 3 consumer observes that Tier 1 providers earn significantly higher rewards for comparable compute units, creating economic incentive to transition to local inference. This progression aligns individual incentives with the protocol's goals: more local compute means stronger proofs, greater decentralization, and a more robust calibration dataset.

Why include Tier 3 despite weak proofs? Tier 3 serves as an onboarding funnel. A user starts as a Tier 3 consumer (install the browser extension, earn modest rewards), observes the multiplier gap to higher tiers, and is economically motivated to move to Tier 2 (SDK wrapper, one line of code) or Tier 1 (local inference, highest rewards). The protocol bootstraps adoption through the lowest-friction entry point and then incentivizes organic progression toward stronger verification. Additionally, even weakly verified Tier 3 data contributes to network-level statistical models when correlated against strongly verified Tier 1 and Tier 2 data, providing marginal improvement to aggregate fraud detection.

4.5 Statistical Validation

Beyond per-receipt scoring, the aggregation layer runs continuous network-wide statistical validation:

Anomaly detection. Each machine builds a behavioral history --- characteristic tokens-per-second, typical daily compute volume, usage time patterns, model preferences. Sudden deviations from established patterns (a machine that historically produces modest compute volume suddenly claiming orders-of-magnitude increases) trigger elevated scrutiny and probationary scoring until the new pattern is confirmed or rejected.
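A toy version of the per-machine anomaly check, scoring a new daily claim against the machine's own behavioral history. The z-score threshold is an illustrative choice, not a protocol constant:

```python
import statistics

# Sketch: flag a day's compute volume that deviates sharply from the
# machine's established baseline. Threshold is illustrative only.

def is_anomalous(history: list[float], new_value: float,
                 z_threshold: float = 4.0) -> bool:
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_value != mean
    return abs(new_value - mean) / stdev > z_threshold

daily_compute_units = [98, 105, 101, 97, 110, 103, 99]
print(is_anomalous(daily_compute_units, 104))    # in-pattern -> False
print(is_anomalous(daily_compute_units, 5_000))  # orders-of-magnitude jump -> True
```

An anomalous claim would not be rejected outright; per the text above, it triggers elevated scrutiny and probationary scoring until the new pattern is confirmed or rejected.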

Temporal consistency. Compute claims must exhibit realistic temporal properties. A machine cannot simultaneously claim local inference on one model and API orchestration of another if the hardware profile indicates insufficient resources for concurrent execution. Claims must also exhibit realistic idle periods --- no physical machine runs at 100% utilization continuously.

Cross-machine correlation. The network monitors aggregate statistical properties: compute claim distributions, model popularity patterns, geographic clustering, hardware performance distributions. Fabricated data tends to exhibit unrealistic uniformity or systematic biases that diverge from the natural variance observed in genuine compute populations.

Economic plausibility. API-based claims carry implicit economic constraints. Sustained high-volume claims from a cloud API imply substantial subscription costs. The protocol cross-references claimed usage volumes against economically plausible thresholds, flagging claims that would imply unreasonable API expenditure.

Probationary periods. Newly enrolled machines enter a probationary period during which their claims are subject to heightened scrutiny and reduced reward multipliers. This prevents an attacker from registering many machines, extracting a brief burst of rewards, and abandoning them before statistical detection converges. The probationary period also serves a calibration function: the network uses early receipts from a new machine to establish its behavioral baseline, which is then used for ongoing anomaly detection.

Network-level detection properties. The statistical validation system exhibits an important scaling property: detection accuracy improves with network size. As more machines contribute genuine compute data, the statistical models of expected behavior become more refined, making anomalous patterns more visible. An adversary faces a paradox: the larger the network they attempt to exploit, the better the network becomes at detecting exploitation. This is the inverse of the Sybil problem --- rather than attackers gaining advantage through scale, the network gains advantage.

4.6 Anti-Forgery Analysis

The central question for any compute verification system is: can an adversary profitably fabricate claims? HDP is designed so that the answer is no, through multiple reinforcing mechanisms:

Staking requirements. Every machine must stake HDPT tokens to be eligible for rewards. The staked amount is governed on-chain and represents a capital commitment that is at risk of forfeiture.

Slashing exceeds maximum reward. If a machine fails a validator audit --- a challenge-response verification where the validator requests a specific computation and compares the result against the machine's receipt --- the resulting slash amount exceeds the maximum reward the machine could have earned during the disputed period. This ensures that the expected value of cheating is negative even if detection probability is less than 100%.
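The expected-value argument can be made concrete with a small sketch. The reward and slash figures are illustrative; the key quantity is the break-even detection probability p* = R / (R + S), which falls below 50% whenever the slash S exceeds the maximum reward R:

```python
# Back-of-envelope sketch of why slash > max reward makes cheating
# negative-EV well below 100% detection. Numbers are illustrative.

def cheating_ev(reward: float, slash: float, p_detect: float) -> float:
    """Expected value of submitting a fraudulent claim."""
    return (1 - p_detect) * reward - p_detect * slash

# With slash = 2x the maximum reward, break-even is p* = 100/300 = 1/3:
# audits only need to catch cheaters one time in three for fraud to be
# unprofitable at any detection rate above that.
reward, slash = 100.0, 200.0
print(cheating_ev(reward, slash, p_detect=1 / 3))  # ~0 at break-even
print(cheating_ev(reward, slash, p_detect=0.5))    # negative
```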

Statistical detection at scale. An adversary operating a small number of fraudulent machines might avoid individual detection, but the rewards would be correspondingly small. An adversary operating at scale --- enough machines to extract meaningful value --- creates statistical signatures that diverge from legitimate compute populations, triggering network-level anomaly detection.

Hardware binding. Each machine identity is rooted in hardware attestation. An attacker cannot create virtual machines or software simulations that pass TPM or Secure Enclave verification. Physical hardware must be procured and operated, imposing real costs that approach or exceed the cost of simply performing legitimate computation.

Probationary earnings reduction. New machines earn at reduced multipliers during their probationary period, limiting the profitability of register-extract-abandon strategies.

The combined effect of these mechanisms is that an adversary seeking to extract more value through fabrication than through legitimate computation faces higher costs, lower expected rewards, and significant risk of stake loss. Honest participation is the dominant economic strategy.

4.7 Comparison to Related Proof Systems

Property            | HDP PoC                        | Filecoin PoRep/PoSt       | Gensyn PoL               | Render GPU Verify
--------------------+--------------------------------+---------------------------+--------------------------+-------------------------
What is proven      | AI compute occurred            | Data is stored            | ML training is correct   | GPU rendering completed
Proof granularity   | Per-inference receipt          | Per-sector, ongoing       | Per-training step        | Per-render task
Hardware binding    | TPM / Secure Enclave           | Sector sealing            | Gradient checkpoints     | Task assignment
zkML integration    | Partial proofs, progressive    | N/A                       | Probabilistic checking   | N/A
Tiered verification | 3 tiers by contribution type   | Single proof type         | Single proof type        | Single proof type
Statistical layer   | Network-wide anomaly detection | Sector challenge sampling | Cross-node verification  | Deterministic comparison
Resource verified   | Compute (GPU, CPU, memory)     | Storage (disk space)      | Compute (training FLOPs) | Compute (rendering)

HDP's key architectural distinction is the tiered verification model. Where other proof systems require a single, uniform verification mechanism, HDP accommodates the reality that different types of AI compute leave different types of evidence. Local inference leaves rich hardware traces. API orchestration leaves timing and network traces. Browser consumption leaves minimal traces. Rather than excluding lower-evidence contributions, HDP assigns them proportionally lower rewards while using the higher-evidence contributions to calibrate network-wide statistical models. This approach maximizes network participation while maintaining aggregate integrity.


5. Network Architecture

5.1 Protocol Stack

HDP implements the five-layer DePIN architecture described in Section 3.4, with specific components at each layer:

+===================================================================+
|                      APPLICATION LAYER                             |
|                                                                   |
|  Desktop App    Browser Extensions    SDKs (TS, Python)           |
|  (macOS)        (Chrome, Firefox)     CLI Tools                   |
|  Dashboards     AI Agent              Developer APIs              |
|  (Admin, Ops,                                                    |
|   Web, Public)                                                    |
+===================================================================+
|                      GOVERNANCE LAYER                              |
|                                                                   |
|  Multi-Sig Administration (3-of-5)    Timelock (2-day delay)      |
|  On-Chain Parameter Updates           Emergency Mechanisms        |
|  ProofRules Versioning                Grace Period Transitions    |
+===================================================================+
|                         DATA LAYER                                 |
|                                                                   |
|  Off-Chain Aggregation    Merkle Tree Construction                |
|  IPFS / Pinata Storage    The Graph Subgraph Indexing             |
|  Compute Receipt Queues   Statistical Validation Engine           |
+===================================================================+
|                      BLOCKCHAIN LAYER                              |
|                                                                   |
|  HDPToken           ProofRegistry       MintingLogic              |
|  ValidatorRegistry  EpochRegistry       RewardClaimer             |
|  ValidatorRegistryV2  ProofRules        MachineRegistry           |
|                                                                   |
|  Base L2 (OP Stack) --- Low-cost settlement                       |
+===================================================================+
|                    INFRASTRUCTURE LAYER                             |
|                                                                   |
|  Contributor Machines: Desktop app, browser extension,            |
|  SDK-integrated applications, local inference servers,            |
|  developer workstations, enterprise compute farms                 |
+===================================================================+

Infrastructure Layer. The physical machines that perform AI compute and run HDP client software. Contributors install the desktop application (for local inference monitoring), browser extensions (for web-based AI usage capture), or integrate SDKs into their applications (for API call tracking). Each machine establishes a hardware-attested identity and generates compute receipts for verified AI work.

Blockchain Layer. Nine smart contracts deployed on Base L2 manage the protocol's on-chain state: HDPToken, ProofRegistry, MintingLogic, ValidatorRegistry, ValidatorRegistryV2, EpochRegistry, RewardClaimer, ProofRules, and MachineRegistry.

Data Layer. Off-chain infrastructure handles the high-throughput data operations that would be prohibitively expensive on-chain. The Go-based aggregator receives compute receipts, scores them against proof rules, builds Merkle trees for epoch batches, and stores receipt data on IPFS. The Graph subgraph indexes on-chain events for efficient querying by applications and dashboards.

Governance Layer. Protocol parameters are managed through multi-signature administration with timelock delays. The ProofRules contract enables on-chain governance of scoring weights, model registry hashes, minimum client versions, and tier multipliers. Emergency mechanisms allow rapid restriction of parameters (increasing minimum scores) without timelock, while parameter relaxation requires the full governance process.

Application Layer. User-facing software provides interfaces for all participant types. Four dashboards serve different audiences: an admin dashboard for protocol operators, an ops dashboard for real-time network monitoring, a web dashboard for individual contributors, and a public stats dashboard for network transparency. SDKs in TypeScript and Python, a CLI client, and an AI agent round out the application ecosystem.

5.2 Participant Roles

Compute Providers. Machine owners who contribute hardware resources to AI workloads. Providers run client software that captures compute evidence, generates receipts, and submits them to the aggregation layer. Providers stake HDPT tokens and earn rewards proportional to their verified compute contribution and tier.

Validators. Network participants who stake HDPT tokens and perform verification duties. Validators review epoch Merkle roots, conduct random deep audits of individual machines (challenge-response verification), participate in epoch finalization through BLS signature aggregation, and earn a share of minting rewards. Validators who approve fraudulent claims or fail to perform duties face stake slashing.

Aggregators. Infrastructure operators who receive compute receipts from providers, run the scoring engine against proof rules, build epoch Merkle trees, submit epoch roots on-chain, and coordinate validator finalization. Aggregators are authorized through on-chain role grants and must maintain high availability. In the current architecture, aggregation is centralized for simplicity and rapid iteration; the roadmap includes multi-aggregator decentralization with geographic distribution and redundancy.

Protocol Governance. The multi-signature administrators who manage protocol upgrades, parameter changes, emergency responses, and treasury operations. Initially a foundation-led council, governance transitions progressively toward community control and ultimately full DAO operation.

SDK Integrators. Application developers who embed HDP SDKs into their software, enabling their end users to earn HDPT rewards for AI compute performed through the application. Integrators do not directly earn rewards but benefit from improved user retention and ecosystem participation.

5.3 Epoch Model

HDP uses a time-based epoch model to batch compute receipts for efficient on-chain settlement. The epoch model addresses the fundamental scalability constraint of blockchain systems: individual on-chain transactions for millions of daily compute receipts would be prohibitively expensive.

Each epoch represents a configurable time window during which the aggregator collects and scores compute receipts. At the end of an epoch, the aggregator constructs a Merkle tree over all eligible receipts, computing each contributor's reward based on their scored compute units and tier multiplier.

The epoch Merkle root, along with summary statistics, is submitted on-chain to the EpochRegistry contract. Validators review the epoch data and signal approval through BLS signature aggregation. When a sufficient quorum of validators has approved (represented as a bitmap in the finalization transaction), the epoch is finalized and rewards become claimable.
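The bitmap quorum check can be sketched as follows, assuming a hypothetical two-thirds quorum fraction (the actual quorum is a governance parameter). Each bit marks whether the validator at that index contributed to the BLS aggregate:

```python
# Sketch of the finalization quorum check. The 2/3 fraction is an
# assumed value for illustration, not the governed parameter.

def quorum_reached(bitmap: int, validator_count: int,
                   quorum_num: int = 2, quorum_den: int = 3) -> bool:
    signed = bin(bitmap).count("1")  # population count of set bits
    # Integer cross-multiplication avoids floating-point comparison.
    return signed * quorum_den >= validator_count * quorum_num

# 7 of 9 validators signed (indices 0-5 and 8):
print(quorum_reached(0b100111111, 9))  # 7/9 >= 2/3 -> True
print(quorum_reached(0b11, 9))         # 2/9 -> False
```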

Contributors claim their rewards by submitting a Merkle proof demonstrating their inclusion in a finalized epoch. The RewardClaimer contract verifies the Merkle proof, confirms the epoch is finalized, mints the appropriate HDPT amount (minus the treasury fee), and marks the claim as processed. Each claim requires only the Merkle proof data --- a logarithmic-size inclusion proof --- making the gas cost independent of the number of contributors in the epoch.
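Merkle inclusion verification of the kind the RewardClaimer performs can be sketched as follows. Sorted-pair hashing (the convention used by OpenZeppelin's MerkleProof library) removes the need for left/right direction flags; SHA-256 stands in here for whatever hash the contract actually uses:

```python
import hashlib

# Minimal sketch of Merkle inclusion verification. Hash function and
# leaf encoding are illustrative stand-ins for the contract's choices.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_proof(leaf: bytes, proof: list[bytes], root: bytes) -> bool:
    node = leaf
    for sibling in proof:
        # Hash each pair in sorted order, walking up to the root.
        node = h(min(node, sibling) + max(node, sibling))
    return node == root

# Build a tiny 4-leaf tree and verify leaf 0's inclusion.
leaves = [h(f"claim-{i}".encode()) for i in range(4)]
n01 = h(min(leaves[0], leaves[1]) + max(leaves[0], leaves[1]))
n23 = h(min(leaves[2], leaves[3]) + max(leaves[2], leaves[3]))
root = h(min(n01, n23) + max(n01, n23))

print(verify_proof(leaves[0], [leaves[1], n23], root))  # True
```

The proof length grows with tree depth, not leaf count, which is what makes claim gas costs independent of the number of contributors in the epoch.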

The epoch model introduces a natural settlement latency (contributors must wait for epoch finalization before claiming) but provides dramatic efficiency gains that make the protocol economically viable at scale. The latency is bounded by the epoch duration plus the validator finalization window, both of which are configurable governance parameters.

5.4 Data Flow

The complete data flow from compute activity to token reward follows this sequence:

+-------------------------------------------------------------------+
|                    END-TO-END DATA FLOW                            |
+-------------------------------------------------------------------+

  Contributor                Aggregator              Blockchain
  Machine                   (Off-Chain)             (Base L2)
       |                         |                       |
  1. AI work performed           |                       |
       |                         |                       |
  2. Client captures evidence    |                       |
     (telemetry, timing,         |                       |
      model hash, etc.)          |                       |
       |                         |                       |
  3. Compute receipt built       |                       |
     and hardware-signed         |                       |
       |                         |                       |
  4. Receipt submitted -------->|                       |
       |                         |                       |
       |                    5. Receipt scored             |
       |                       (7-layer evaluation)      |
       |                         |                       |
       |                    6. Statistical validation     |
       |                       (anomaly detection,       |
       |                        plausibility checks)     |
       |                         |                       |
       |                    7. Receipt added to          |
       |                       epoch Merkle tree         |
       |                         |                       |
       |                    8. Epoch closed, root ------>|
       |                       submitted on-chain        |
       |                         |                       |
       |                    9. Validators verify          |
       |                       and sign (BLS) --------->|
       |                         |                       |
       |                         |                  10. Epoch finalized
       |                         |                      (quorum reached)
       |                         |                       |
  11. Contributor submits   <----|                       |
      Merkle proof claim         |                       |
       |                         |                       |
  12. Claim tx sent ---------------------------------->|
       |                         |                       |
       |                         |                  13. Proof verified,
       |                         |                      HDPT minted,
       |                         |                      treasury fee
       |                         |                      deducted
       |                         |                       |
  14. HDPT tokens received  <-----------------------------|
       |                         |                       |
+-------------------------------------------------------------------+

5.5 V2 Scalability

The V2 architecture achieves dramatic scalability improvements over a naive per-proof on-chain model:

Epoch aggregation. Rather than submitting each compute receipt as an individual on-chain transaction (which would require millions of transactions per day at network scale), the epoch model batches all receipts within a time window into a single Merkle tree. One on-chain transaction per epoch --- approximately 24 per day with hourly epochs --- replaces millions of individual submissions.

Merkle proof claims. Contributors claim rewards by submitting a logarithmic-size Merkle inclusion proof. For an epoch containing one million contributors, the proof requires only 20 hash values. Gas cost is O(log n) rather than O(n), making claims affordable regardless of network size.
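The 20-hash figure follows directly from the tree depth, which a quick check confirms:

```python
import math

# An n-leaf Merkle tree has depth ceil(log2(n)), and an inclusion
# proof needs one sibling hash per level.
for n in (1_000, 1_000_000, 1_000_000_000):
    print(n, math.ceil(math.log2(n)))
# A million-leaf epoch needs 20 sibling hashes; even a billion needs only 30.
```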

BLS signature aggregation. Validator approval uses BLS12-381 signature aggregation, allowing the signatures of thousands of validators to be compressed into a single aggregate signature that can be verified in a single on-chain operation. This enables a validator set of 10,000 or more to reach quorum without proportional gas cost increases.

Off-chain data availability. Full receipt data, scoring details, and epoch contents are stored on IPFS with content hashes anchored on-chain. The Graph subgraph indexes this data for efficient application queries. Only the minimum necessary data --- Merkle roots, finalization status, and claim records --- resides on-chain.

Scalability comparison. The following table illustrates the gas cost advantage of the V2 epoch model over a naive per-proof approach:

Metric                   | Per-Proof On-Chain | V2 Epoch Model          | Improvement
-------------------------+--------------------+-------------------------+-------------
Txns per 1M receipts/day | 1,000,000          | ~24 (epoch submissions) | ~41,000x
Gas per claim (1M users) | O(1) per proof     | O(log n) Merkle proof   | Logarithmic
Validator verification   | Per-proof vote     | Single BLS aggregate    | 10,000x+
Data on-chain            | Full receipt data  | Merkle root + metadata  | ~1,000x less

6. Token Economics

6.1 HDPT Token Overview

$HDPT (Human Dividend Protocol Token) is an ERC-20 utility token deployed on Base L2. It serves four distinct functions within the protocol: compute rewards for contributors, staking collateral for machines, staking collateral for validators, and treasury funding through the claim fee.

Why Base L2? Base, built on the OP Stack, provides the settlement layer. The choice of Base reflects several protocol requirements: low transaction costs (typically under $0.01 per transaction, critical for affordable reward claims), fast finality (approximately 2-second block times, enabling responsive epoch settlement), Ethereum-derived security (transaction data is posted to Ethereum L1 and settlement is secured by optimistic fraud proofs), a growing DeFi and application ecosystem (enabling token liquidity and composability), and Coinbase ecosystem alignment (providing distribution potential and institutional credibility). The protocol architecture is designed to support future cross-chain deployment, but Base serves as the canonical deployment for the initial network.

6.2 Supply Distribution

HDPT has a fixed total supply of 1,000,000,000 tokens with no inflation mechanism:

+-------------------------------------------------------------------+
|                  HDPT SUPPLY DISTRIBUTION                          |
|                  Total: 1,000,000,000 HDPT                        |
+-------------------------------------------------------------------+
|                                                                   |
|  +-------------------------------------------------------+       |
|  | Machine Rewards                              50%      |       |
|  | 500,000,000 HDPT                                      |       |
|  | Minted through verified Proof of Compute              |       |
|  +-------------------------------------------------------+       |
|                                                                   |
|  +-----------------------------------------------+               |
|  | Founder / Team                        30%      |               |
|  | 300,000,000 HDPT                               |               |
|  | 4-year vesting, 1-year cliff                   |               |
|  +-----------------------------------------------+               |
|                                                                   |
|  +-----------------------------------+                            |
|  | Ecosystem                  15%    |                            |
|  | 150,000,000 HDPT                  |                            |
|  | Grants, partnerships, liquidity   |                            |
|  +-----------------------------------+                            |
|                                                                   |
|  +-------------------+                                            |
|  | Validators   5%   |                                            |
|  | 50,000,000 HDPT   |                                            |
|  | Validation rewards|                                            |
|  +-------------------+                                            |
|                                                                   |
+-------------------------------------------------------------------+

Machine Rewards (50%). The largest allocation is reserved for compute contributors. These tokens are minted on demand as rewards are claimed, subject to the emission schedule and dynamic rate controller. No tokens from this allocation are pre-minted.

Founder and Team (30%). Subject to a four-year vesting schedule with a one-year cliff. No tokens are accessible during the first year. After the cliff, tokens vest linearly on a monthly basis through year four.

Ecosystem (15%). Allocated for developer grants, partnership incentives, liquidity provisioning, and community programs. A portion is available at launch for initial liquidity; the remainder is released over three years.

Validators (5%). Distributed as validation rewards to network validators who stake tokens and perform verification duties.

6.3 Emission Schedule

Machine rewards follow a halving model inspired by Bitcoin's emission curve, creating predictable scarcity:

Phase | Cumulative Tokens Minted   | Reward Rate                       | Estimated Duration
------+----------------------------+-----------------------------------+---------------------
0     | 0 -- 100,000,000           | Base rate                         | Early network growth
1     | 100,000,000 -- 200,000,000 | 50% of base                       | Network expansion
2     | 200,000,000 -- 300,000,000 | 25% of base                       | Maturation
3     | 300,000,000 -- 400,000,000 | 12.5% of base                     | Sustained operation
4     | 400,000,000 -- 500,000,000 | 6.25% of base                     | Long-term tail
...   | ...                        | Continues halving                 | ...
Floor | Any phase                  | Minimum 1 HDPT per eligible claim | Perpetual minimum

Each halving occurs when cumulative minted supply crosses a 100,000,000 HDPT threshold. The minimum reward floor of 1 HDPT ensures that participation always yields non-zero compensation regardless of how many halvings have occurred.
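The schedule reduces to a simple phase lookup, sketched here with a hypothetical base rate. For brevity this sketch applies the 1 HDPT floor directly to the rate, whereas the protocol specifies it per eligible claim:

```python
# Sketch of the halving schedule: phase is determined by which
# 100,000,000 HDPT tranche cumulative minting has reached, and the
# reward halves each phase, never dropping below the 1 HDPT floor.

TRANCHE = 100_000_000
MIN_REWARD = 1.0

def reward_rate(cumulative_minted: int, base_rate: float) -> float:
    phase = cumulative_minted // TRANCHE
    return max(base_rate / (2 ** phase), MIN_REWARD)

base = 100.0  # hypothetical base reward, not a protocol value
print(reward_rate(50_000_000, base))   # phase 0 -> 100.0
print(reward_rate(250_000_000, base))  # phase 2 -> 25.0
print(reward_rate(950_000_000, base))  # phase 9 -> clamped to 1.0 floor
```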

The halving model produces a predictable, decelerating emission curve:

Tokens Minted
     ^
500M |.................................................. ---- supply cap
     |                              .....................
400M |                       ........
     |                  .....
300M |             .....
     |          ...
200M |       ...
     |     ..
100M |   ..
     |  .
  0  |.________________________________________> Time
     Phase 0    1    2    3    4    5    6    ...

This curve incentivizes early participation (higher per-unit rewards during the initial phases) while ensuring long-term availability of rewards for future network participants through the decelerating emission rate.

6.4 Dynamic Reward Rate Controller

A dynamic reward rate controller monitors minting velocity and adjusts the base reward rate to maintain a target emission schedule over a configurable time horizon. The controller operates with conservative asymmetry:

Downward adjustment. If minting velocity exceeds the target (the network is issuing tokens faster than planned), the controller can reduce the reward rate by a significant percentage per adjustment cycle. This prevents runaway emission during periods of rapid network growth.

Upward adjustment. If minting velocity falls below the target, the controller increases the reward rate by a smaller percentage per cycle. The asymmetry ensures that the protocol is biased toward conservation --- it is quicker to restrict emission than to expand it.

Bounds. The reward rate is clamped between a minimum floor and a maximum ceiling, both governed on-chain. The controller cannot reduce rewards to zero or inflate them beyond the ceiling regardless of velocity conditions.

Frequency. The controller evaluates minting velocity at a configurable interval and broadcasts rate changes to network participants through the aggregation and dashboard infrastructure.

Why dynamic control matters. Without a rate controller, the protocol faces a dilemma: set the base rate too high, and early network growth could exhaust the machine reward allocation prematurely; set it too low, and early participants receive insufficient incentive to bootstrap the network. The dynamic controller resolves this by allowing aggressive initial reward rates that automatically moderate as the network grows, ensuring that the 500,000,000 HDPT machine reward allocation is distributed over the intended multi-year horizon regardless of network growth trajectory.
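The asymmetric adjustment logic can be sketched as follows. The cut and raise percentages and the rate bounds are invented for illustration, since the real values are governed on-chain:

```python
# Toy sketch of the asymmetric rate controller. All constants below
# are illustrative placeholders, not governed protocol values.

RATE_FLOOR, RATE_CEILING = 10.0, 500.0
MAX_CUT = 0.20    # restricts quickly: up to 20% cut per cycle
MAX_RAISE = 0.05  # expands slowly: at most 5% raise per cycle

def adjust_rate(current_rate: float,
                actual_velocity: float,
                target_velocity: float) -> float:
    if actual_velocity > target_velocity:
        new_rate = current_rate * (1 - MAX_CUT)    # over target: cut hard
    elif actual_velocity < target_velocity:
        new_rate = current_rate * (1 + MAX_RAISE)  # under target: raise gently
    else:
        new_rate = current_rate
    # Clamp between the governed floor and ceiling.
    return min(max(new_rate, RATE_FLOOR), RATE_CEILING)

print(adjust_rate(100.0, actual_velocity=1.5e6, target_velocity=1e6))  # cut toward 80
print(adjust_rate(100.0, actual_velocity=0.5e6, target_velocity=1e6))  # raised toward 105
```

The asymmetry (MAX_CUT much larger than MAX_RAISE) encodes the conservation bias described above: the controller restricts emission far faster than it expands it.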

6.5 Per-Tier Reward Multipliers

Each contribution tier earns a different reward multiplier applied to its scored compute units:

Tier 1: Compute Providers earn the highest multiplier range. Local inference generates the strongest proofs and imposes the highest economic cost on contributors. The full seven-layer verification is available, and the composite proof score directly scales the multiplier up to the tier maximum.

Tier 2: Orchestrators earn a medium multiplier range. API calls produce moderate proofs through timing, network, and hardware attestation layers. The multiplier cap is lower than Tier 1, reflecting the reduced proof strength and lower direct compute cost.

Tier 3: Consumers earn the lowest multiplier range. Browser-based interaction produces the weakest proofs. The low multiplier acknowledges the proof limitation while still rewarding participation and providing an on-ramp to higher tiers.

Exact multiplier values are governed on-chain through the ProofRules contract, allowing the protocol to adjust tier incentives as the network evolves and verification capabilities mature.

6.6 Staking Mechanism

Machine staking. Machines must stake a governed minimum amount of HDPT to be eligible for rewards. The stake serves as economic collateral --- if a machine is found to be submitting fraudulent receipts through validator audits, the stake is slashed. A seven-day unstaking delay prevents a slash-and-run strategy where a machine operator detects an impending audit and withdraws their stake before the slash executes.

Validator staking. Validators stake HDPT separately to participate in epoch finalization and earn validation rewards. Validator stakes are subject to slashing for malicious validation (approving fraudulent epochs) or persistent unavailability. The minimum validator stake is higher than the machine stake, reflecting the greater responsibility and potential impact of validator misbehavior.

Stake as Sybil resistance. The staking requirement imposes a linear capital cost on the number of machines an adversary can operate. Running 1,000 fraudulent machines requires 1,000 times the minimum stake, and all of it is at risk of forfeiture.
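
A compact sketch of these staking mechanics follows; the minimum stake value is an illustrative assumption, since the real minimum is a governed on-chain parameter:

```python
# Sketch of machine staking with slashing and a seven-day unstaking delay.
# MIN_STAKE is an illustrative assumption, not the governed minimum.

UNSTAKE_DELAY = 7 * 24 * 3600   # seven days, in seconds
MIN_STAKE = 1_000               # assumed minimum machine stake in HDPT

class MachineStake:
    def __init__(self, amount: float):
        if amount < MIN_STAKE:
            raise ValueError("stake below governed minimum")
        self.amount = amount
        self.unstake_requested_at = None

    def request_unstake(self, now: float) -> None:
        self.unstake_requested_at = now

    def withdraw(self, now: float) -> float:
        # The delay closes the slash-and-run window: the stake stays
        # slashable for seven days after an unstake request.
        if self.unstake_requested_at is None or now - self.unstake_requested_at < UNSTAKE_DELAY:
            raise RuntimeError("unstaking delay not elapsed")
        released, self.amount = self.amount, 0.0
        return released

    def slash(self, fraction: float) -> float:
        # A failed audit slashes even while an unstake request is pending.
        penalty = self.amount * fraction
        self.amount -= penalty
        return penalty

# Sybil cost is linear: 1,000 fraudulent machines put 1,000 x MIN_STAKE at risk.
capital_at_risk = 1_000 * MIN_STAKE
```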

6.7 Fee Structure

Treasury fee. A 2% fee is deducted from each reward claim and directed to the protocol treasury. The treasury funds ongoing development, security audits, ecosystem grants, and operational costs.

Validator reward share. A configured percentage of each epoch's minted rewards is allocated to the validators who finalized the epoch, proportional to their participation.

Proof submission. Submitting compute receipts to the aggregator is free. On-chain claim transactions require standard Base L2 gas fees, which are typically negligible.

Fee sustainability. The fee structure is designed to generate sufficient treasury revenue to fund protocol operations without discouraging participation. The 2% claim fee is low enough that it does not materially reduce contributor rewards, yet at network scale (millions of daily claims), it generates substantial recurring revenue for ongoing development, security audits, and ecosystem growth.
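
The claim-time fee split can be expressed in a few lines. The basis-point representation is an assumption of this sketch, not the RewardClaimer accounting:

```python
# Sketch of the 2% treasury fee deduction on a reward claim. Basis-point
# accounting is an assumption of this sketch, not the contract layout.

TREASURY_FEE_BPS = 200  # 2% expressed in basis points

def settle_claim(gross_reward: int) -> tuple:
    """Split a claim (in smallest token units) into payout and treasury fee."""
    fee = gross_reward * TREASURY_FEE_BPS // 10_000
    return gross_reward - fee, fee

payout, fee = settle_claim(1_000_000)
# 980_000 units to the contributor, 20_000 units to the treasury
```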

6.8 Long-Term Sustainability

The economic model is designed for multi-decade sustainability:

Emission curve. The halving schedule ensures that machine reward minting decelerates over time, asymptotically approaching but never reaching the 500,000,000 HDPT allocation. Early participants earn the highest per-unit rewards, incentivizing early network growth.
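
The asymptotic bound on a halving schedule can be checked numerically; the initial per-period rate below is chosen purely for illustration:

```python
# Numerical check of the asymptotic bound: per-period minting halves each
# period, so cumulative emission is a geometric series capped by the
# allocation. The initial rate is chosen purely for illustration.

ALLOCATION = 500_000_000

def cumulative_emission(initial_per_period: float, periods: int) -> float:
    return sum(initial_per_period / 2**i for i in range(periods))

# With an initial rate of half the allocation, the series
# initial * (1 + 1/2 + 1/4 + ...) approaches 2 * initial = ALLOCATION
# from below, never reaching it:
for n in (1, 4, 10, 30):
    print(n, cumulative_emission(ALLOCATION / 2, n))
```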

Validator economics. As the network grows, transaction fees and the validator reward share provide increasing revenue to validators independent of the emission schedule. Validator economics are designed to remain attractive even after multiple halvings reduce per-epoch minting.

Treasury management. The 2% claim fee generates ongoing treasury revenue proportional to network activity. Treasury funds are managed by the multi-signature governance council with full transparency.

Deflationary pressure. Slashing events permanently remove tokens from the staked supply and transfer them to the treasury. While not a primary deflationary mechanism, slashing contributes to long-term supply constraint.

Token velocity considerations. The staking requirements for both machines and validators create natural token lock-up that reduces circulating supply. The seven-day unstaking delay for machines and the validator staking commitment further dampen token velocity. As the network grows, the aggregate staked supply increases, creating organic demand that counterbalances emission.

Supply and demand equilibrium. The protocol is designed to reach a natural equilibrium between token supply (emission through compute rewards) and token demand (staking requirements, governance participation, validator collateral). As the network scales, more machines require staking and aggregate token demand grows, while the halving schedule simultaneously reduces the rate of new emission. These opposing forces converge toward a sustainable equilibrium in which supply growth decelerates as demand grows with participation.

No hidden emission. All HDPT minting occurs through the smart contract system with full on-chain transparency. There are no off-chain token generation mechanisms, no administrative minting capabilities outside the defined roles, and no mechanism to increase the fixed 1,000,000,000 total supply. The HDPToken contract enforces the total supply cap at the smart contract level, making it immutable regardless of governance decisions.


7. Security Model

7.1 Threat Model

The HDP security model addresses seven categories of adversarial behavior:

Sybil attacks. An adversary creates many fake machine identities to extract disproportionate rewards. Mitigated by hardware attestation (identities bound to physical devices), staking requirements (linear capital cost per machine), and per-owner machine limits.

Compute fabrication. An adversary submits fraudulent compute receipts claiming work that was not performed. Mitigated by multi-layer verification (seven independent evidence dimensions), statistical validation (network-wide anomaly detection), validator audits (challenge-response verification), and economic penalties (slashing exceeds maximum reward).

Validator collusion. A group of validators conspires to approve fraudulent epochs. Mitigated by distributed validator sets, BLS quorum requirements, stake-weighted participation, and slashing for validators associated with disputed epochs.

Front-running. An adversary observes pending transactions and attempts to extract value. Mitigated by the epoch model (rewards are determined off-chain before on-chain submission) and Merkle proof claims (claim amounts are fixed in the Merkle tree).

Replay attacks. An adversary resubmits previously valid compute receipts to claim duplicate rewards. Mitigated by monotonic nonces per machine, timestamp validation windows, and receipt deduplication in the aggregation layer.

Supply manipulation. An adversary attempts to manipulate the token supply through the minting mechanism. Mitigated by the fixed total supply cap enforced in the HDPToken contract, the halving schedule that reduces per-unit emission over time, the dynamic rate controller that moderates minting velocity, and allocation-specific caps that prevent any category from exceeding its designated share.

Governance attacks. An adversary acquires sufficient tokens to manipulate governance decisions. Mitigated by timelock delays on all parameter changes, emergency tightening mechanisms that operate asymmetrically (restrictions can be applied immediately, relaxations cannot), and the progressive decentralization model that increases the governance quorum over time.

7.2 Economic Security

Staking and slashing mechanics. Machine stake requirements are governed on-chain with a minimum floor. The slashing amount for a failed audit is calibrated to exceed the maximum reward the machine could have earned during the disputed period. This ensures a negative expected value for cheating at any detection probability above zero.

The economic security invariant can be expressed abstractly: for any adversarial strategy S operating over time period T, the expected slashing loss from S must exceed the expected reward from S. This invariant holds because: (a) the slash amount per audit failure exceeds the maximum reward per period, (b) the probability of detection increases with the scale of the attack (more fraudulent machines create larger statistical anomalies), and (c) the staking requirement means capital is at risk throughout the attack period.

Cost-of-attack analysis. An adversary seeking to extract net positive value through fabrication must simultaneously: procure physical hardware that passes attestation (real cost), stake HDPT on each machine (capital at risk), fabricate receipts that pass seven-layer scoring (technically difficult), avoid statistical detection across the network population (unlikely at scale), and survive validator challenge-response audits (probabilistic detection). The cumulative cost of satisfying these requirements exceeds the reward that can be extracted, even under optimistic assumptions about evasion probability.
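
A simplified one-audit expected-value model illustrates why this calculus fails for the attacker. The numbers are illustrative, and the real calibration is stronger because the detection probability itself grows with the scale of the attack:

```python
# Simplified one-audit expected-value model for fabrication. Numbers are
# illustrative; the protocol's calibration is stronger because detection
# probability grows with attack scale.

def cheating_ev(reward: float, slash: float, p_detect: float) -> float:
    """Expected value of one audit window of fabrication."""
    return (1 - p_detect) * reward - p_detect * slash

# With the slash calibrated well above the window's maximum reward, even a
# modest audit probability makes fabrication a losing bet:
print(cheating_ev(reward=100, slash=5_000, p_detect=0.05))  # negative
print(cheating_ev(reward=100, slash=5_000, p_detect=0.20))  # sharply negative
```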

7.3 Smart Contract Security

The HDP smart contract suite is built on OpenZeppelin's audited contract libraries for core functionality including ERC-20 token operations, access control, reentrancy protection, and mathematical safety.

An internal security audit identified 56 findings across all severity levels. All 7 critical findings and all 12 high-severity findings have been resolved. All 19 medium-severity findings have been addressed. Of 12 low-severity findings, 11 have been fixed and 1 has been acknowledged with documented rationale. 6 informational findings were noted for future consideration.

An external audit by a recognized smart contract security firm is planned prior to mainnet deployment. The audit scope will cover all nine deployed contracts and their interaction patterns, including cross-contract authorization flows, token minting paths, staking and slashing mechanics, epoch finalization logic, and Merkle proof verification.

The protocol follows a defense-in-depth approach to smart contract security: using battle-tested OpenZeppelin base contracts for standard functionality, minimizing custom logic surface area, applying role-based access control with principle of least privilege, and maintaining comprehensive test coverage (143 smart contract tests across V1 and V2 suites).

7.4 Defense in Depth

HDP implements four layers of defense that operate independently, such that a breach of any single layer does not defeat the system:

Layer 1: Cryptographic. ECDSA secp256k1 signatures authenticate all compute receipts and on-chain transactions. Monotonic nonces prevent replay. SHA-256 and Keccak-256 provide collision-resistant hashing for hardware fingerprints, model identity, and Merkle tree construction.

Layer 2: Economic. Staking requirements impose capital risk on all participants. Slashing penalties exceed potential rewards for dishonest behavior. Probationary periods limit exposure to new, unproven participants. The halving schedule prevents infinite minting regardless of network manipulation.

Layer 3: Operational. Rate limiting constrains receipt submission frequency per machine. Statistical anomaly detection identifies behavioral outliers across the network. Validator challenge-response audits provide probabilistic verification of individual machines. Aggregator scoring rejects receipts that fail minimum quality thresholds.

Layer 4: Governance. Multi-signature administration (3-of-5) prevents unilateral protocol changes. Timelock delays (minimum 2 days) provide community review of parameter modifications. Emergency pause capability allows rapid response to discovered vulnerabilities. Grace periods ensure contributors are not retroactively penalized by rule changes.
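
The Merkle construction referenced in Layer 1 supports logarithmic-size inclusion proofs for reward claims. A minimal sketch, using SHA-256 and sorted-pair hashing for brevity (the on-chain trees use Keccak-256, and the exact pairing convention is an aggregator implementation detail):

```python
import hashlib

# Minimal Merkle inclusion-proof check. SHA-256 and sorted-pair hashing are
# simplifications for this sketch; the on-chain trees use Keccak-256.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_proof(leaf: bytes, proof: list, root: bytes) -> bool:
    """Fold the sibling path up to the root. Sorting each pair avoids
    tracking left/right position bits in the proof."""
    node = h(leaf)
    for sibling in proof:
        node = h(min(node, sibling) + max(node, sibling))
    return node == root

# Build a four-leaf tree and prove inclusion of b"claim-2":
leaves = [h(x) for x in (b"claim-1", b"claim-2", b"claim-3", b"claim-4")]
left = h(min(leaves[0], leaves[1]) + max(leaves[0], leaves[1]))
right = h(min(leaves[2], leaves[3]) + max(leaves[2], leaves[3]))
root = h(min(left, right) + max(left, right))

assert verify_merkle_proof(b"claim-2", [leaves[0], right], root)
assert not verify_merkle_proof(b"claim-5", [leaves[0], right], root)
```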

7.5 Privacy

HDP is designed to minimize the personal data that enters any system, on-chain or off-chain:

On-chain data. Only aggregate metrics are recorded on-chain: Merkle roots, claim amounts, epoch statistics, and machine registration hashes. No prompt content, response content, usage details, or personal identifying information is stored on-chain.

Hardware fingerprints. Machine identities are represented as cryptographic hashes of hardware characteristics. The hash reveals no specific hardware details --- an observer cannot determine the machine's CPU model, GPU type, or physical location from the on-chain fingerprint.

Receipt privacy. Compute receipts contain telemetry data (GPU utilization, power draw, timing) but never include prompt content or model outputs. The aggregator processes receipt data for scoring but does not retain prompt or response text.

User control. Contributors control their local data and can delete proof history at any time. Participation is pseudonymous --- wallet addresses are the only persistent identifier.

Data minimization principle. The protocol follows a strict data minimization approach: only the minimum data necessary for verification is collected, transmitted, and stored at each layer. The client captures telemetry during computation windows and discards raw measurements after computing summary statistics for the receipt. The aggregator processes receipts for scoring and stores only the scored result and summary metadata. On-chain records contain only Merkle roots and claim status. At no point does any system component store or transmit the content of AI prompts, responses, or user-generated text.
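
The client-side reduction step can be sketched as follows; the specific summary fields are illustrative, not the actual receipt schema:

```python
from statistics import mean, pstdev

# Sketch of the client-side reduction step: a telemetry window is collapsed
# to summary statistics and the raw samples are discarded. Field names are
# illustrative, not the actual receipt schema.

def summarize_and_discard(gpu_util_samples: list) -> dict:
    """Reduce a telemetry window to summary statistics for the receipt."""
    summary = {
        "samples": len(gpu_util_samples),
        "mean_util": round(mean(gpu_util_samples), 2),
        "stdev_util": round(pstdev(gpu_util_samples), 2),
        "peak_util": max(gpu_util_samples),
    }
    gpu_util_samples.clear()  # raw measurements never leave the machine
    return summary

window = [71.0, 88.5, 93.2, 90.1]
stats = summarize_and_discard(window)
assert window == []  # raw samples gone; only the summary remains
```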

7.6 Bug Bounty

A bug bounty program is planned for deployment through Immunefi prior to mainnet launch:

Severity Description Reward Range
Critical Direct fund loss, contract takeover, unauthorized minting Up to $100,000
High Protocol disruption, significant economic impact Up to $25,000
Medium Limited impact vulnerabilities, edge case exploits Up to $5,000
Low Informational findings, best practice deviations Up to $500

8. Governance

Governance is the mechanism through which the HDP protocol evolves over time. As verification technology matures, network conditions change, and new attack vectors emerge, the protocol must adapt. HDP implements a governance framework that balances the need for rapid response with the imperative of community control and transparency.

8.1 On-Chain Parameter Governance

The ProofRules contract serves as the central governance surface for the Proof of Compute protocol. It stores and manages the on-chain parameters that define verification policy: scoring parameters and minimum score thresholds, per-tier reward multipliers, registered model identity hashes, and minimum accepted client versions.

All parameter updates follow the governance process: proposal, multi-sig approval, timelock delay, and epoch-aligned activation. A grace period spanning multiple epochs ensures that contributors have time to update their client software before new rules take effect. No honest proofs are retroactively invalidated by rule changes.

The version transition protocol follows a defined sequence:

  1. New parameter version is proposed and approved through multi-sig
  2. Timelock delay begins (minimum 2 days)
  3. After timelock expires, new version activates at a specified future epoch
  4. During a grace period spanning multiple epochs, both the old and new parameter versions are accepted
  5. After the grace period, only the new version is accepted

This mechanism ensures smooth transitions without penalizing contributors who are slow to update, while maintaining the protocol's ability to evolve its verification standards over time.
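
The acceptance logic implied by this sequence can be sketched as a small state check; the field names are illustrative, not the ProofRules storage layout:

```python
from dataclasses import dataclass

# Sketch of epoch-aligned rule-version acceptance with a grace window.
# Field names are illustrative, not the ProofRules storage layout.

@dataclass
class RuleVersions:
    old_version: int
    new_version: int
    activation_epoch: int   # new version takes effect at this epoch
    grace_epochs: int       # window in which both versions are accepted

    def is_accepted(self, receipt_version: int, epoch: int) -> bool:
        if epoch < self.activation_epoch:
            return receipt_version == self.old_version
        if epoch < self.activation_epoch + self.grace_epochs:
            return receipt_version in (self.old_version, self.new_version)
        return receipt_version == self.new_version

rules = RuleVersions(old_version=3, new_version=4, activation_epoch=100, grace_epochs=5)
assert rules.is_accepted(3, epoch=102)      # old version honored in the grace window
assert not rules.is_accepted(3, epoch=106)  # grace over: only the new version
```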

8.2 Multi-Sig Administration

Protocol administration is managed through a 3-of-5 multi-signature wallet. Treasury operations, contract upgrades, role grants, and parameter changes all require three of five designated signers to approve. The multi-sig key holders are disclosed to the community, and changes to the signer set follow the same governance process as parameter updates.

8.3 Timelock

All non-emergency parameter changes are subject to a minimum 2-day timelock delay between approval and execution. The timelock ensures that the community has visibility into pending changes and can respond --- through social coordination, validator signaling, or governance proposals --- before changes take effect. The timelock duration is itself a governed parameter.

8.4 Emergency Mechanisms

The protocol includes an emergency tightening mechanism that allows the multi-sig to immediately increase minimum score thresholds, reduce multiplier caps, or pause specific functions without waiting for the timelock. The mechanism is asymmetric by design: restrictions can be applied immediately, while relaxations (lowering thresholds, raising multipliers, or unpausing functions) must pass through the full proposal, approval, and timelock process.

This pattern, borrowed from DeFi circuit breaker designs, ensures rapid response to discovered attacks or vulnerabilities while preventing governance from being used to quietly increase rewards or reduce security requirements.

8.5 Path to Decentralization

Phase 1: Foundation-Led. During the initial network launch and growth period, the Laurelin Labs multi-sig retains direct governance authority. Focus is on rapid iteration, bug response, and parameter tuning based on real-world network data.

Phase 2: Council Governance. As the network matures, governance transitions to a council model with both team-appointed and community-elected representatives. A governance council of seven members (three team, four community-elected) manages parameter decisions. Parameter changes require majority council approval. The founding team retains a security-only veto for situations where proposed changes would introduce vulnerabilities. Council elections occur on a regular cadence with term limits to ensure rotation and prevent entrenchment.

Phase 3: Full DAO. At sufficient network maturity and decentralization, governance transitions to a token-weighted DAO with on-chain proposal and execution. No privileged actors exist. All parameter changes, treasury operations, and protocol upgrades are subject to community vote with quorum thresholds (minimum participation) and approval thresholds (minimum yes-vote percentage). The DAO framework includes delegation capability (token holders can delegate voting power), proposal bonds (submitters must lock tokens to prevent spam), and time-locked execution (approved proposals execute after a delay for community review).

The decentralization timeline is milestone-driven rather than calendar-driven. Phase transitions occur when specific network maturity criteria are met: sufficient validator diversity, demonstrated governance participation, and stable protocol operation over multiple months.


9. Use Cases

9.1 Individual Contributors

An individual machine owner installs the HDP desktop application or SDK wrapper and begins earning HDPT for their everyday AI compute activity. Setup requires no specialized hardware or technical expertise: install the application, connect a wallet, and the client automatically detects AI workloads, generates compute receipts, and submits them for scoring.

The contributor can monitor their earnings, proof scores, and tier status through the web dashboard. Over time, as they install recommended system-level telemetry drivers or enable hardware attestation features, their proof quality and reward multiplier improve progressively.

This progressive quality model is central to HDP's accessibility principle. New participants should not be blocked by complex setup requirements. A contributor who installs the desktop application and connects a wallet begins earning immediately, even with a modest proof score. Each additional capability they enable --- hardware attestation, GPU telemetry drivers, model file monitoring --- incrementally improves their score and rewards. The protocol meets users at their current capability level and provides clear, economically motivated upgrade paths.

9.2 Local Model Operators

Users running local inference through Ollama, vLLM, llama.cpp, LM Studio, or similar frameworks represent the highest-value contributors in the HDP network. The desktop application auto-detects running AI processes, captures GPU telemetry during inference, hashes loaded model files for identity verification, and records per-token timing profiles.

Local model operators qualify for Tier 1 (Compute Provider) status with access to the highest reward multiplier range. Their fully verified compute data contributes to the network's calibration dataset, strengthening statistical validation for all participants.

9.3 AI Developers

Application developers integrate HDP with a single line of code using the SDK wrapper pattern:

Python:   client = track(anthropic.Client())
TypeScript:   const client = track(new Anthropic());

Every API call made through the wrapped client automatically generates a compute receipt capturing timing data, token counts, and network metadata. The developer's users earn HDPT rewards while using the application, improving retention and engagement without requiring the developer to build reward infrastructure. The SDK handles receipt generation and submission transparently.

The SDK integration model is designed for zero-friction adoption. Developers do not need to understand the Proof of Compute protocol, manage wallet interactions, or handle blockchain transactions. The SDK wrapper is a transparent proxy that intercepts API responses, extracts verifiable metadata, constructs compute receipts, and submits them to the aggregation layer in the background. The existing AI API behavior is completely preserved --- the wrapper adds observation without modification.
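
One way such a transparent proxy could be structured is sketched below. The class, receipt fields, and callback are hypothetical illustrations of the pattern, not the actual SDK internals:

```python
import time

# Hypothetical sketch of the transparent-proxy pattern. The class, receipt
# fields, and callback are illustrative, not the actual SDK internals.

class Tracked:
    """Forward every call to the wrapped client, observing timing metadata."""
    def __init__(self, inner, on_receipt):
        self._inner = inner
        self._on_receipt = on_receipt

    def __getattr__(self, name):
        attr = getattr(self._inner, name)
        if not callable(attr):
            return attr
        def wrapped(*args, **kwargs):
            start = time.monotonic()
            result = attr(*args, **kwargs)      # original behavior preserved
            self._on_receipt({                  # observation added afterward
                "method": name,
                "latency_s": time.monotonic() - start,
            })
            return result
        return wrapped

def track(client, on_receipt):
    return Tracked(client, on_receipt)

# Demo with a stand-in client:
class _DemoClient:
    def complete(self, prompt):
        return prompt.upper()

receipts = []
client = track(_DemoClient(), receipts.append)
result = client.complete("hello")
assert result == "HELLO"                   # API behavior unchanged
assert receipts[0]["method"] == "complete"
```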

Both TypeScript and Python SDKs support all major AI providers: Anthropic (Claude), OpenAI (GPT), Google (Gemini), xAI (Grok), and Mistral. Provider detection is automatic based on the client object structure, requiring no configuration from the developer.

9.4 Enterprise Compute Farms

Organizations operating GPU clusters for AI workloads --- research institutions, AI startups, enterprise inference deployments --- can aggregate their compute output through HDP. Machine registration accommodates up to 100 machines per owner address, and the staking and scoring systems scale linearly. Enterprise operators benefit from Tier 1 verification across their fleet, earning the highest multiplier on substantial compute volume. The protocol's per-owner machine limit and staking requirements are designed to accommodate legitimate enterprise-scale deployments while maintaining Sybil resistance.

Enterprise use cases include: AI research labs monetizing inference compute during non-peak hours, GPU cloud providers adding an HDP revenue stream alongside their primary business, edge computing networks where distributed AI inference stations earn rewards for local model serving, and corporate IT departments offsetting the cost of employee AI tool usage through aggregate compute rewards.

9.5 Browser-Based Participation

Users who primarily interact with AI through web interfaces install the HDP browser extension (available for Chrome and Firefox). The extension detects interactions with supported AI providers, estimates token usage from the browser context, captures timing metadata, and submits Tier 3 compute receipts to the aggregator.

Browser-based participation serves as the lowest-friction entry point to the HDP network. Contributors earn at the Tier 3 multiplier while gaining familiarity with the protocol. The visible multiplier differential between Tier 3 and Tier 1 creates economic motivation to transition to local inference, aligning individual incentives with network goals.

The following table summarizes participation requirements and expected experience across tiers:

Dimension Tier 1: Provider Tier 2: Orchestrator Tier 3: Consumer
Setup time 10--30 minutes 5 minutes 2 minutes
Technical skill Moderate (local model setup) Basic (SDK integration) None (browser extension)
Hardware requirement GPU recommended Any machine Any machine
Capital requirement HDPT stake Optional stake None
Proof layers active All 7 3--4 1--2
Reward multiplier Highest range Medium range Lowest range
Verification strength Very strong Moderate Weak
Upgrade path zkML bonus Transition to Tier 1 Transition to Tier 2 or 1

10. Implementation Status and Roadmap

10.1 Current State (February 2026)

HDP is distinguished from many blockchain projects by the breadth and maturity of its working implementation. The protocol is not a concept paper with planned development --- it is functioning software with deployed contracts, passing tests, and validated end-to-end flows. This section describes the current implementation state and the roadmap for remaining milestones.

The Human Dividend Protocol has completed its foundational implementation across all major system components:

Smart Contracts. Nine contracts are deployed to Base Sepolia testnet, all verified on Sourcify; the complete address table appears in Appendix A.

The complete V2 end-to-end flow has been validated on testnet: proof submission through the aggregator, epoch building, Merkle root submission, validator finalization, and reward claiming with treasury fee deduction.

Testing. Over 885 automated tests pass across 10 application packages, covering smart contract logic, SDK functionality, API endpoints, UI components, and integration scenarios.

Client Software. A macOS desktop application (1.6MB) with GPU telemetry capture, model detection, and compute receipt generation. Browser extensions for Chrome and Firefox with Tier 3 receipt capture. A TypeScript SDK and Python SDK with interceptor-based tracking. A Python CLI client with machine enrollment, receipt submission, and status commands.

Aggregation. A Go-based aggregator implementing the seven-layer scoring engine, calibration tables, statistical validation, epoch building, IPFS storage, and REST APIs for receipt submission and network statistics.

Dashboards. Four operational dashboards: admin (protocol management), ops (real-time monitoring), web (contributor interface), and public stats (network transparency).

Infrastructure. Landing page live at humandividendprotocol.com. Backend API with EIP-191 wallet authentication, JWT sessions, WebSocket real-time feeds, and event processing. The Graph subgraph indexing on-chain events across all deployed contracts with 13 entity types. DNS, SSL, and reverse proxy configured across all service domains. Email infrastructure for community communications.

AI Agent. A conversational AI agent built on Claude provides natural-language interaction with protocol documentation, network statistics, and onboarding assistance. Deployed as a Cloudflare Worker with vector-based knowledge retrieval.

10.2 Phase 1: Testnet and Community (Current)

The current phase focuses on building a robust testnet environment and establishing the initial contributor and validator communities.

10.3 Phase 2: Mainnet Launch

The mainnet launch phase centers on completing the external security audit and deploying the contract suite to Base mainnet (Chain ID: 8453).

10.4 Phase 3: zkML Integration

The zkML integration phase advances proof strength, incorporating stronger cryptographic verification as zkML technology matures without disrupting existing participants.

10.5 Phase 4: Network Maturity

The long-term maturity phase targets network scale, full decentralization, and cross-ecosystem integration.


11. Team

11.1 Laurelin Labs LLC

The Human Dividend Protocol is developed and maintained by Laurelin Labs LLC, a United States limited liability company formed in a jurisdiction selected for its comprehensive digital asset legislation, clear legal treatment of utility tokens, and strong privacy protections for LLC members.

11.2 Core Competencies

Domain Capabilities
Blockchain Development Smart contract architecture, EVM development, L2 scaling, DePIN protocol design
Distributed Systems High-availability infrastructure, consensus mechanisms, aggregation engines
AI Infrastructure LLM integration, inference optimization, GPU compute management, model verification
Cryptography Digital signatures, zero-knowledge proofs, secure key management, hardware attestation
Product Development Cross-platform application development, SDK design, developer experience

11.3 Advisory Network

Laurelin Labs maintains relationships with advisors and domain experts in DePIN protocol design and tokenomics, AI and ML infrastructure engineering, legal and regulatory compliance for digital assets, smart contract security and formal verification, and cryptographic systems including zero-knowledge proofs and hardware attestation.

11.4 Transparency Commitment

Laurelin Labs commits to operating with full transparency.


12. Conclusion

12.1 Summary

The Human Dividend Protocol addresses a fundamental asymmetry in the AI economy. Machines contribute compute resources to AI workloads --- electricity, GPU cycles, memory, bandwidth, and hardware wear --- but their owners receive no compensation. As AI becomes the dominant engine of economic productivity, this asymmetry will grow. The global installed base of AI-capable devices represents a distributed compute network of extraordinary scale, contributing daily to AI workloads without measurement, verification, or reward.

HDP establishes the infrastructure for fair AI compute compensation through verified, transparent, and decentralized mechanisms. The protocol does not require cooperation from AI providers, specialized hardware from participants, or advances in zero-knowledge proof technology to function today. It operates with current technology, on existing hardware, through familiar interfaces.

The Proof of Compute protocol combines seven independent verification layers into a composite trust score that makes fabrication economically irrational without requiring every individual claim to be cryptographically proven. The tiered trust model meets contributors where they are --- from browser extension users to local GPU operators --- while creating economic incentives that naturally drive the network toward stronger verification and greater decentralization. As zkML technology matures, the protocol seamlessly integrates stronger cryptographic verification without disrupting existing participants.

12.2 Key Differentiators

First AI compute DePIN. HDP is the first protocol to specifically address client-side AI compute metering and compensation, creating a new category in the DePIN landscape that sits between general-purpose compute networks and AI-specific training verification.

Tiered trust architecture. The three-tier contribution model with calibration-based statistical validation is a novel approach that achieves network-level security without requiring universal cryptographic proof --- a practical necessity given the current state of zkML technology.

Low barrier to participation. A browser extension install, a one-line SDK wrapper, or a desktop application download is sufficient to begin earning. No specialized hardware, technical expertise, or significant capital outlay is required for initial participation.

Working implementation. Nine deployed and verified smart contracts, 885+ automated tests, functional client software across desktop, browser, and SDK modalities, a scoring aggregation engine, and four operational dashboards demonstrate that HDP is not a concept paper but a functioning protocol approaching mainnet readiness.

12.3 Call to Action

HDP invites participation from all corners of the AI compute ecosystem.

The machines are working. Their owners deserve the dividend.

The Human Dividend Protocol is not a promise or a projection. It is working software, deployed contracts, and a tested economic model. The path from testnet to mainnet is defined. The technology is proven. The thesis is clear.

We invite you to participate.


13. References

[1] Nakamoto, S. "Bitcoin: A Peer-to-Peer Electronic Cash System." 2008. https://bitcoin.org/bitcoin.pdf

[2] Protocol Labs. "Filecoin: A Decentralized Storage Network." 2017. https://filecoin.io/filecoin.pdf

[3] Render Network. "The Render Network Whitepaper." https://rendernetwork.com

[4] io.net. "The Internet of GPUs." https://io.net

[5] Akash Network. "Akash Network: Decentralized Cloud Compute Marketplace." https://akash.network

[6] Gensyn. "Gensyn Litepaper: A Protocol for Machine Learning Compute Verification." https://docs.gensyn.ai/litepaper

[7] EZKL. "EZKL: Easy Zero-Knowledge Inference." https://ezkl.xyz

[8] Intel. "Intel Software Guard Extensions (Intel SGX)." https://www.intel.com/content/www/us/en/developer/tools/software-guard-extensions/overview.html

[9] TLSNotary Project. "TLSNotary: A Protocol for Proving Web Data Provenance." https://tlsnotary.org

[10] Opacity Network. "Opacity: Verifiable AI Inference." https://www.opacity.network

[11] Buterin, V. "Ethereum: A Next-Generation Smart Contract and Decentralized Application Platform." 2014. https://ethereum.org/en/whitepaper/

[12] Messari. "State of DePIN." Annual reports, 2024--2025. https://messari.io

[13] Base. "Base: Ethereum L2 Built by Coinbase." https://docs.base.org

[14] OpenZeppelin. "OpenZeppelin Contracts." https://docs.openzeppelin.com/contracts

[15] Ethereum Improvement Proposals. "EIP-20: Token Standard." https://eips.ethereum.org/EIPS/eip-20


Appendix A: Deployed Smart Contracts

All contracts are deployed on Base Sepolia (Chain ID: 84532) and verified on Sourcify.

Contract Address Version Sourcify
HDPToken 0xd72b7c96ecA8834ED4C8f0F42049eb5f1B381275 V1 Verified
ProofRegistry 0x5F0cAC30Ee26C3dF2a26603B9F0E50c1ecFa2f6C V1 Verified
ValidatorRegistry 0xaB8ef875E6f1697e6A33D489DBC0E254C925CC12 V1 Verified
MintingLogic 0x116b37803DF4C923a27F815b21da27d5c2C826db V1 Verified
EpochRegistry 0xa72442B1A11331eF6D166251cF99C98712c29b6f V2 Verified
RewardClaimer 0x873BABf5FFe57F11AaA0637bb5207FBf4F386EcA V2 Verified
ValidatorRegistryV2 0x99Cc172472Ec512adb2035e7830a447811e64Fd0 V2 Verified
ProofRules 0x3b701304bA4b21b4562164300AEC7f816B3E1365 PoC Verified
MachineRegistry 0x4Ab1cB3d0d2650dF094B875c901eF1aBa4D22899 PoC Verified

Mainnet deployment addresses (Base, Chain ID: 8453) will be published at launch.


Appendix B: Glossary

Term Definition
Compute Receipt A structured, hardware-signed document capturing evidence of AI compute work performed. The atomic unit of proof in the HDP network.
Compute Unit (CU) A normalized measure of AI work that accounts for token counts, model complexity, and computation time, enabling fair comparison across workload types.
DePIN Decentralized Physical Infrastructure Network. A category of blockchain protocols that coordinate distributed physical resource contributions through token incentives.
Epoch A configurable time window during which the aggregator collects, scores, and batches compute receipts into a Merkle tree for on-chain settlement.
HDPT Human Dividend Protocol Token. The ERC-20 utility token on Base L2 used for compute rewards, staking, governance, and fee payment.
Hardware Attestation A cryptographic proof, rooted in hardware security modules (TPM, Secure Enclave), that binds a machine identity to physical hardware and verifies client software integrity.
Merkle Proof A logarithmic-size cryptographic proof demonstrating that a specific data element is included in a Merkle tree, used by contributors to claim rewards from finalized epochs.
Proof of Compute (PoC) HDP's multi-layered verification protocol that combines hardware attestation, GPU telemetry, model identity, timing profiles, energy correlation, kernel audits, and zero-knowledge proofs to validate AI compute claims.
ProofRules The on-chain smart contract that stores and governs scoring parameters, tier multipliers, model registry hashes, and minimum client versions for the Proof of Compute protocol.
Slashing A penalty mechanism that forfeits a portion of a participant's staked tokens in response to detected dishonest behavior, such as submitting fraudulent compute receipts or approving invalid epochs.
Tier One of three contribution categories (Provider, Orchestrator, Consumer) that classify participants by their proof strength, compute cost, and reward multiplier range.
Validator A network participant who stakes HDPT tokens and performs verification duties including epoch finalization, challenge-response audits, and BLS signature participation.
Vesting A time-locked token release schedule that restricts access to allocated tokens according to a predefined timeline, used for founder and team allocations.
zkML Zero-knowledge machine learning. Cryptographic techniques that generate verifiable proofs that a specific machine learning model produced a specific output, without revealing the input or model weights.
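To make the glossary concrete, the sketch below shows how a batch of compute receipts can be committed to a Merkle root and how a single contributor can later prove inclusion with a logarithmic-size proof, as described under "Merkle Proof" and "Epoch" above. This is a minimal illustrative implementation: it uses SHA-256 for self-containment, whereas the on-chain trees use Keccak-256, and the padding rule (duplicating the last node on odd levels) is one common convention, not necessarily the one the HDP aggregator uses.

```python
import hashlib

def h(data: bytes) -> bytes:
    # Illustrative: SHA-256 stands in here; the on-chain Merkle trees use Keccak-256.
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate last node on odd-sized levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect sibling hashes from leaf to root for one leaf (log-size proof)."""
    level = [h(l) for l in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))  # (sibling hash, sibling-is-left?)
        index //= 2
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

receipts = [f"receipt-{i}".encode() for i in range(5)]
root = merkle_root(receipts)
proof = merkle_proof(receipts, 3)
print(verify(receipts[3], proof, root))  # True
```

A proof for one receipt in an epoch of one million receipts contains only about twenty sibling hashes, which is what makes on-chain reward claims against a single stored root economical.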

Appendix C: Technical Specifications

Blockchain

Property Value
Network (Testnet) Base Sepolia
Chain ID (Testnet) 84532
Network (Mainnet) Base
Chain ID (Mainnet) 8453
Stack Optimism (OP Stack)
Settlement Ethereum L1 via optimistic rollup
Token Standard ERC-20 (EIP-20)

Cryptography

Algorithm Usage
ECDSA secp256k1 Transaction and receipt signatures
SHA-256 Hardware fingerprinting, model file hashing, receipt hashing
Keccak-256 Merkle tree construction, Ethereum address derivation
BLS12-381 Validator signature aggregation for epoch finalization
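The claim in the privacy disclaimer that hardware fingerprints "reveal no specific hardware details" follows from using a one-way hash over canonicalized identifiers. The sketch below is hypothetical: the identifier field names and the canonicalization rule are illustrative assumptions, not the HDP client's actual field set.

```python
import hashlib

def hardware_fingerprint(identifiers: dict) -> str:
    """Hypothetical sketch: derive a stable machine fingerprint from hardware
    identifiers. The canonical field set is defined by the HDP client, not here."""
    # Canonicalize as sorted key=value pairs so every client hashes the same
    # byte string for the same hardware, regardless of dict ordering.
    canonical = "|".join(f"{k}={identifiers[k]}" for k in sorted(identifiers))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

fp = hardware_fingerprint({
    "cpu_model": "Apple M3 Pro",     # illustrative values
    "gpu_uuid": "GPU-1234",
    "board_serial": "C02XXXXXXXXX",
})
print(fp)  # 64 hex characters; the preimage cannot be recovered from the digest
```

The digest is deterministic for a given machine (enabling identity binding) while disclosing nothing about the underlying serial numbers to anyone who observes it.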

Client Platforms

Platform Implementation
macOS Desktop application (Tauri + React)
Windows Desktop application (planned)
Linux Desktop application (planned)
Chrome Browser extension (Manifest V3)
Firefox Browser extension (Manifest V3, gecko-adapted)

SDKs

Language Package Build Formats
TypeScript @hdp/sdk CJS + ESM + DTS
Python hdp-sdk Sync + Async clients, Pydantic v2 models

Smart Contract Framework

Property Value
Language Solidity 0.8.24
Framework Hardhat
Base Libraries OpenZeppelin Contracts
Testing Hardhat + Chai + Ethers.js
Verification Sourcify (full match)

Infrastructure Components

Component Technology Purpose
Backend API Node.js, Express, TypeScript Authentication, coordination, data serving
Database PostgreSQL Persistent storage for proofs, machines, users
Cache Redis Rate limiting, session management, queues
Aggregator Go 1.22, Gin framework Receipt scoring, epoch building, chain submission
Indexer The Graph (AssemblyScript) On-chain event indexing for application queries
Storage IPFS / Pinata Decentralized receipt and epoch data storage
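The aggregator's "receipt scoring, epoch building" role can be illustrated with a trust-weighted reward split. This is a hypothetical sketch in Python for readability (the production aggregator is written in Go): the `min_trust` threshold, field names, and weighting rule are illustrative stand-ins for parameters that the ProofRules contract governs on-chain.

```python
from dataclasses import dataclass

@dataclass
class Receipt:
    machine_id: str
    compute_units: float  # normalized CU from token counts, model size, timing
    trust_score: float    # composite Proof of Compute score in [0, 1]

def epoch_reward_shares(receipts, min_trust=0.5):
    """Hypothetical aggregator step: drop low-trust receipts, then weight each
    machine's share of the epoch emission by trust-adjusted compute units."""
    weights = {}
    for r in receipts:
        if r.trust_score >= min_trust:
            weights[r.machine_id] = weights.get(r.machine_id, 0.0) \
                + r.compute_units * r.trust_score
    total = sum(weights.values())
    return {m: w / total for m, w in weights.items()} if total else {}

shares = epoch_reward_shares([
    Receipt("mac-01", 120.0, 0.95),  # Provider-tier: strong proofs
    Receipt("ext-07", 40.0, 0.60),   # Consumer-tier: weaker proofs
    Receipt("bot-99", 500.0, 0.10),  # below threshold: excluded entirely
])
print(shares)
```

Note how the low-trust receipt is excluded outright rather than merely down-weighted: under this scheme, fabricating large compute claims without passing verification earns nothing, which is the economic disincentive the tiered trust model relies on.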

Legal Disclaimer

THIS WHITEPAPER IS FOR INFORMATIONAL PURPOSES ONLY AND DOES NOT CONSTITUTE:

An offer to sell or a solicitation of an offer to buy securities, commodities, or any other financial instruments in any jurisdiction; investment, financial, legal, accounting, or tax advice; or a recommendation to acquire, hold, or dispose of HDPT tokens.

RISK WARNING:

Participation in the Human Dividend Protocol involves significant risks, including but not limited to: complete loss of staked or earned tokens, smart contract vulnerabilities, regulatory actions that may affect protocol operations or token utility, market volatility, technical failures, and network disruptions. Participants should only commit resources they can afford to lose and should consult qualified legal, financial, and technical professionals before making any decisions regarding participation.

NO GUARANTEES:

The information in this whitepaper represents current plans and intentions of Laurelin Labs LLC, which may change without notice. The protocol, tokenomics, features, timelines, and specifications described herein are subject to modification based on technical requirements, regulatory developments, security considerations, or other factors. No representation is made that the protocol will achieve any particular level of adoption, token value, or network participation.

FORWARD-LOOKING STATEMENTS:

This document contains forward-looking statements regarding the protocol's development, adoption, and capabilities. These statements involve risks and uncertainties, and actual results may differ materially from those projected. Forward-looking statements should not be relied upon as predictions of future events. Past performance of similar protocols or projects does not indicate future results.

REGULATORY COMPLIANCE:

HDPT is designed as a utility token providing access to network services, staking capability, and governance participation. HDPT is not designed to represent equity, debt, revenue share, or investment contracts. Participants are responsible for understanding and complying with the laws and regulations applicable in their jurisdiction. Participation from sanctioned jurisdictions is prohibited.

DATA AND PRIVACY:

The Human Dividend Protocol collects aggregate compute metrics (token counts, timing data, hardware telemetry) for the purpose of compute verification and reward calculation. The protocol does not collect, store, or transmit AI prompt content, response content, personal identifying information, or browsing history unrelated to AI interactions. Hardware fingerprints are cryptographic hashes that reveal no specific hardware details. All data collection is transparent and documented. Users control their local data and may delete proof history at any time.

TECHNOLOGY RISKS:

Zero-knowledge proof technology, hardware attestation mechanisms, and the broader blockchain ecosystem continue to evolve rapidly. The protocol's reliance on these technologies means that unforeseen technical limitations, security vulnerabilities in underlying platforms, or fundamental changes in the AI industry's operating model could impact protocol functionality. The modular architecture is designed to accommodate technology evolution, but no guarantee is made that adaptation will be seamless or costless.


Document Information

Property Value
Version 2.0.0
Date February 2026
Status Public
License CC BY-SA 4.0

Laurelin Labs LLC

Website https://humandividendprotocol.com
Email joinus@humandividendprotocol.com

"Your Machine Powers AI. You Deserve the Dividend."

Copyright 2026 Laurelin Labs LLC. This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0).