Introduction
Welcome to the OpenProt documentation!
This documentation provides comprehensive information about the OpenProt project, including user guides, developer documentation, and API references.
What is OpenProt?
OpenProt is a Rust-based project that provides...
Quick Start
To get started with OpenProt:
cargo xtask build
cargo xtask test
For more detailed instructions, see the Getting Started guide.
Getting Started
Prerequisites
- Rust 1.70 or later
- Cargo
Installation
Clone the repository:
git clone <repository-url>
cd openprot
Build the project:
cargo xtask build
Run tests:
cargo xtask test
Next Steps
- Read the Usage guide
- Check out the Architecture documentation
- Learn about Contributing
Usage
Available Commands
The project uses xtask for automation. Here are the available commands:
Build Commands
cargo xtask build # Build the project
cargo xtask check # Run cargo check
cargo xtask clippy # Run clippy lints
Test Commands
cargo xtask test # Run all tests
Formatting Commands
cargo xtask fmt # Format code
cargo xtask fmt --check # Check formatting
Distribution Commands
cargo xtask dist # Create distribution
Documentation Commands
cargo xtask docs # Build documentation
Utility Commands
cargo xtask clean # Clean build artifacts
cargo xtask cargo-lock # Manage Cargo.lock
OpenPRoT Specification
Version: v0.5 - Work in Progress
Introduction
The concept of a Platform Root of Trust (PRoT) is central to establishing a secure computing environment. A PRoT is a trusted component within a system that serves as the foundation for all security operations. It is responsible for ensuring that the system boots securely, verifying the integrity of the firmware and software, and performing critical cryptographic functions. By acting as a trust anchor, the PRoT provides a secure starting point from which the rest of the system's security measures can be built. This is particularly important in an era where cyber threats are becoming increasingly sophisticated, targeting the lower layers of the computing stack, such as firmware, to gain persistent access to systems.
OpenPRoT is a project intended to enhance the security and transparency of PRoTs by defining and building an open source firmware stack that can be run on a variety of hardware implementations. Open source firmware offers several benefits that can enhance the effectiveness and trustworthiness of a PRoT. Firstly, open source firmware allows for greater transparency, as the source code is publicly available for review and audit. This transparency helps identify and mitigate vulnerabilities more quickly, as a global community of developers and security experts can scrutinize the code. It also reduces the risk of hidden backdoors or malicious code, which can be a concern with proprietary firmware.
Moreover, an open source firmware stack can foster innovation and collaboration within the industry. By providing a common platform that is accessible to all, developers can contribute improvements, share best practices, and develop new security features that benefit the entire ecosystem. This collaborative approach can lead to more robust and resilient firmware solutions, as it leverages the collective expertise of a diverse community. Additionally, open source firmware can enhance interoperability and reduce vendor lock-in, giving organizations more flexibility in choosing hardware and software components that best meet their security needs.
Incorporating an open source firmware stack into a PRoT not only strengthens the security posture of a system but also aligns with broader industry trends towards openness and collaboration. As organizations increasingly recognize the importance of securing the foundational layers of their computing environments, the combination of a PRoT with open source firmware represents a powerful strategy for building trust and resilience in the face of evolving cyber threats.
Background
TBD
Goals
TBD
Use cases
TBD
Industry standards and specifications
OpenPRoT is designed to be a standards-based and interoperable Platform Root of Trust (PRoT) solution. This ensures that OpenPRoT can be integrated into a wide range of platforms and that it leverages proven and well-defined security and management protocols.
Distributed Management Task Force (DMTF)
- DSP0274: Security Protocol and Data Model (SPDM) Version 1.3 or later
- DSP0277: Secured Messages using SPDM over MCTP Binding
- DSP0236: Management Component Transport Protocol (MCTP) Base Specification
- DSP0240: Platform Level Data Model (PLDM) Base Specification
- DSP0248: Platform Level Data Model (PLDM) for Platform Monitoring and Control Specification
- DSP0267: Platform Level Data Model (PLDM) for Firmware Update Specification
Trusted Computing Group (TCG)
- DICE Layering Architecture: Device Identity Composition Engine
- DICE Attestation Architecture: Certificate-based attestation
- DICE Protection Environment (DPE): Runtime attestation service
- TCG DICE Concise Evidence Binding for SPDM: Evidence format specification
National Institute of Standards and Technology (NIST)
- NIST SP 800-193: Platform Firmware Resiliency Guidelines
- NIST FIPS 186-5: Digital Signature Standard (DSS)
- NIST SP 800-90A: Recommendation for Random Number Generation
- NIST SP 800-108: Recommendation for Key Derivation Functions
High Level Architecture
The OpenPRoT architecture is designed to be a flexible and extensible platform Root of Trust (PRoT) solution. It is built upon a layered approach that abstracts hardware-specific implementations, providing standardized interfaces for higher-level applications. This architecture promotes reusability, interoperability, and a consistent security posture across different platforms.
Block Diagram
The following block diagram illustrates the high-level architecture of OpenPRoT.
Architectural Layers
The OpenPRoT architecture can be broken down into the following layers:
- Hardware Abstraction Layer (HAL): At the lowest level, the Driver Development Kit (DDK) provides hardware abstractions. This layer is responsible for interfacing with the specific RoT silicon and platform hardware.
- Operating System: Above the DDK sits the operating system, which provides the foundational services for the upper layers.
- Middleware: This layer consists of standardized communication protocols
that enable secure and reliable communication between different components
of the system. Key protocols include:
- MCTP (Management Component Transport Protocol): Provides a transport layer that is compatible with various hardware interfaces.
- SPDM (Security Protocol and Data Model): Used for establishing secure channels and for attestation.
- PLDM (Platform Level Data Model): Provides interfaces for firmware updates and telemetry retrieval.
- Services: This layer provides a minimal set of standardized services
that align with the OpenPRoT specification. These services include:
- Lifecycle Services: Manages the lifecycle state of the device, including secure debug enablement.
- Attestation: Aggregates attestation reports from platform components.
- Firmware Update & Recovery: Orchestrates the secure update and recovery of firmware for platform components.
- Telemetry: Collects and extracts telemetry data.
- Applications: At the highest level are the applications that implement
the core logic of the PRoT. These applications have room for differentiation
while being built upon standardized interfaces. Key applications include:
- Secure Boot: Orchestrates the secure boot process for platform components.
- Policy Manager: Manages the security policies of the platform.
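The layering above can be sketched in Rust (the project's implementation language) as a set of traits. Everything here is a hypothetical illustration of the layering idea, not the actual OpenPRoT API: the trait names, the toy "measurement" function, and the loopback transport are all invented for this sketch.

```rust
// Illustrative sketch of the OpenPRoT layering. All names are hypothetical.

/// HAL/DDK layer: abstracts the RoT silicon (stubbed here).
trait Hal {
    fn measure(&self, data: &[u8]) -> u64;
}

/// Middleware layer: a message transport such as MCTP.
trait Transport {
    fn send(&mut self, msg: &[u8]);
    fn recv(&mut self) -> Vec<u8>;
}

/// A trivial HAL stub that "measures" by summing bytes (not a real hash).
struct StubHal;
impl Hal for StubHal {
    fn measure(&self, data: &[u8]) -> u64 {
        data.iter().map(|&b| b as u64).sum()
    }
}

/// A loopback transport stub: recv returns what was last sent.
#[derive(Default)]
struct Loopback {
    buf: Vec<u8>,
}
impl Transport for Loopback {
    fn send(&mut self, msg: &[u8]) {
        self.buf = msg.to_vec();
    }
    fn recv(&mut self) -> Vec<u8> {
        std::mem::take(&mut self.buf)
    }
}

/// Services layer: written only against the abstract layers below it,
/// so the same service runs on any hardware with a HAL implementation.
struct AttestationService<H: Hal, T: Transport> {
    hal: H,
    transport: T,
}

impl<H: Hal, T: Transport> AttestationService<H, T> {
    /// Measure a blob and report the result over the transport.
    fn report(&mut self, blob: &[u8]) -> Vec<u8> {
        let m = self.hal.measure(blob);
        self.transport.send(&m.to_le_bytes());
        self.transport.recv()
    }
}

fn main() {
    let mut svc = AttestationService { hal: StubHal, transport: Loopback::default() };
    let reply = svc.report(b"firmware");
    println!("reported measurement bytes: {:?}", reply);
}
```

The point of the sketch is the dependency direction: services depend on trait bounds, so only the HAL and transport implementations change per platform.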
Threat Model
Assets
- Integrity and authenticity of OpenPRoT firmware
- Integrity and authorization of cryptographic operations
- Integrity of anti-rollback counters
- Integrity and confidentiality of symmetric keys managed by OpenPRoT
- Integrity and confidentiality of private asymmetric keys
- Integrity of boot measurements
- Integrity and authenticity of firmware update payloads
- Integrity and authenticity of OpenPRoT policies
Attacker Profile
The attacker profile definition is based on the JIL Application of Attack Potential to Smartcards and Similar Devices specification, version 3.2.1.
- Type of access: physical, remote
- Attacker Proficiency Levels: expert, proficient, layman
- Knowledge of the TOE: public (open source); critical for signing keys
- Equipment: none, standard, specialized, bespoke
Attacks within Scope
See the JIL specification for examples of attacks.
- Physical attacks
- Perturbation attacks
- Side-channel attacks
- Exploitation of test features
- Attacks on RNG
- Software attacks
- Attacks on application isolation
Threat Modeling
To provide a transparent view of the security posture for a given OpenPRoT + hardware implementation, integrators are required to perform a threat modeling analysis. This analysis must evaluate the specific implementation against the assets and attacker profile defined in this document.
The results of this analysis must be documented in table format, with the following columns:
- Threat ID: Unique identifier which can be referenced in documentation and security audits
- Threat Description: Definition of the attack profile and potential attack.
- Target Assets: List of impacted assets
- Mitigation(s): List of countermeasures implemented in hardware and/or software to mitigate the potential attack
- Verification: Results of verification plan used to gain confidence in the mitigation strategy.
Integrators should use the JIL specification as a guideline to identify relevant attacks and must detail the specific mitigation strategies implemented in their design. The table must be populated for the target hardware implementation to allow for a comprehensive security review.
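As a sketch, one row of the required table can be modeled in Rust as follows. The struct fields mirror the columns defined above; the example content (the "T-0001" ID scheme, the mitigations, the verification text) is purely illustrative and not drawn from any real analysis.

```rust
// Hypothetical model of one threat-model table row. Field names mirror
// the required columns; the example data is illustrative only.

#[derive(Debug)]
struct ThreatEntry {
    threat_id: &'static str,            // Unique, referenceable identifier
    description: &'static str,          // Attack profile and potential attack
    target_assets: Vec<&'static str>,   // Impacted assets
    mitigations: Vec<&'static str>,     // HW/SW countermeasures
    verification: &'static str,         // Results of the verification plan
}

fn example_entry() -> ThreatEntry {
    ThreatEntry {
        threat_id: "T-0001", // hypothetical ID scheme
        description: "Proficient attacker with physical access glitches the boot flow (perturbation attack)",
        target_assets: vec!["Integrity of boot measurements"],
        mitigations: vec!["Redundant signature checks", "Glitch detectors"],
        verification: "Fault-injection test campaign results",
    }
}

fn main() {
    println!("{:?}", example_entry());
}
```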
Firmware Resiliency
Firmware resiliency is a critical concept in modern cybersecurity, particularly as outlined in the NIST SP 800-193 specification. As computing devices become more integral to both personal and organizational operations, the security of their underlying firmware has become paramount. Firmware is often a target for sophisticated cyberattacks because it operates below the operating system, making it a potential vector for persistent threats that can evade traditional security measures. NIST SP 800-193 addresses these concerns by providing a comprehensive framework for enhancing the security and resiliency of platform firmware, ensuring that systems can withstand, detect, and recover from attacks.
The NIST SP 800-193 guidelines focus on three main pillars: protection, detection, and recovery. Protection involves implementing measures to prevent unauthorized modifications to firmware, such as using cryptographic techniques to authenticate updates. Detection is about ensuring that any unauthorized changes to the firmware are quickly identified, which can be achieved through integrity checks and monitoring mechanisms. Recovery is the ability to restore firmware to a known good state after an attack or corruption, ensuring that the system can continue to operate securely. By addressing these areas, the guidelines aim to create a robust defense against firmware-level threats, which are increasingly being exploited by attackers seeking to gain deep access to systems.
In the context of NIST SP 800-193, firmware resiliency is not just about preventing attacks but also about ensuring continuity and trust in the system. The specification recognizes that while it is impossible to eliminate all risks, having a resilient firmware infrastructure can significantly mitigate the impact of potential breaches. This approach is particularly important for critical infrastructure and enterprise environments, where the integrity and availability of systems are crucial. By adopting the principles of NIST SP 800-193, we can enhance our security posture, protect sensitive data, and maintain operational stability in the face of evolving cyber threats.
PRoT Resiliency
TBD
Connected Device Resiliency
TBD
Middleware
OpenPRoT middleware consists of support libraries necessary to implement Root of Trust functionality, telemetry, and firmware management. Support for DMTF protocols such as MCTP, SPDM, and PLDM are provided.
MCTP
Status: Draft
OpenPRoT devices shall support MCTP as the transport for all DMTF protocols.
Versions
The minimum required MCTP version is 1.3.1 (DSP0236). Support for MCTP 2.0.0 (DSP0256) may be introduced in a future version of this specification.
Required Bindings
Currently only one binding is mandatory in the OpenPRoT specification, though this will change in future versions:
- MCTP over SMBus (DSP0237, 1.2.0)
Recommended Bindings
- MCTP over I3C (DSP0233, 1.0.1)
- MCTP over PCIe-VDM (DSP0238, 1.2.1)
- Only on platforms utilizing PCIe 6 and up.
- MCTP over USB (DSP0283, 1.0.0)
Required Commands
- Set Endpoint ID
- Get Endpoint ID
- Get MCTP Version Support
- Get Message Type Support
- Get Vendor Defined Message Support
- All commands in the range 0xF0 - 0xFF
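The required command set can be captured in a small Rust sketch. The numeric control command codes below follow DSP0236 and should be double-checked against the exact spec revision in use; the enum itself is illustrative, not an OpenPRoT API.

```rust
// Sketch of the MCTP control commands OpenPRoT requires.
// Codes per DSP0236; verify against the spec revision in use.
#[derive(Debug, Clone, Copy, PartialEq)]
#[repr(u8)]
enum MctpControlCommand {
    SetEndpointId = 0x01,
    GetEndpointId = 0x02,
    GetMctpVersionSupport = 0x04,
    GetMessageTypeSupport = 0x05,
    GetVendorDefinedMessageSupport = 0x06,
}

/// The 0xF0-0xFF command range that this specification also requires.
fn in_f0_ff_range(code: u8) -> bool {
    (0xF0..=0xFF).contains(&code)
}

fn main() {
    println!("{:?} = {:#04x}", MctpControlCommand::SetEndpointId,
             MctpControlCommand::SetEndpointId as u8);
    println!("0xF2 in required range: {}", in_f0_ff_range(0xF2));
}
```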
Optional Commands
- All other commands are optional, but may become required in future revisions.
Development TCP Binding
- OpenPRoT will provide a TCP binding for developmental purposes.
SPDM
Status: Draft
OpenPRoT devices shall use SPDM to conduct all attestation operations, both with downstream devices (as a requester) and upstream devices (as a responder). Devices may act as a requester, a responder, or both. All SPDM version references assume alignment with the most recently released versions of the spec (i.e., 1.2.1 and 1.3.2).
- OCP Attestation Spec 1.1 Alignment: OpenPRoT implementations of SPDM must align with the OCP Attestation Spec 1.1. All following sections take that spec into account; refer to it for details on specific requirements.
- Baseline Version: OpenPRoT sets a baseline version of SPDM 1.2.
- Requesters: OpenPRoT devices implementing an SPDM requester shall support SPDM 1.2 at minimum and may implement SPDM 1.3 and later. The minimum and maximum supported versions may be adjusted if support for other versions is not necessary.
- Responders: OpenPRoT devices implementing an SPDM responder must support SPDM 1.2 or higher. Responders may only report (via GET_VERSION) a single supported version of SPDM.
- Required Commands: All requesters and responders shall implement the four (4) spec-mandatory SPDM commands:
  - GET_VERSION
  - GET_CAPABILITIES
  - NEGOTIATE_ALGORITHMS
  - RESPOND_IF_READY
  All requesters and responders shall also implement the following spec-optional commands:
  - GET_DIGESTS
  - GET_CERTIFICATE
  - CHALLENGE
  - GET_MEASUREMENTS
  - GET_CSR
  - SET_CERTIFICATE
  - CHUNK_SEND
  - CHUNK_GET
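A responder's handling of the mandatory commands and the first four required-optional ones can be sketched as a dispatch table. The request codes below are taken from DSP0274 1.2.x and should be verified against the spec revision in use; GET_CSR, SET_CERTIFICATE, CHUNK_SEND, and CHUNK_GET are omitted from the sketch for brevity.

```rust
// Sketch of an SPDM responder dispatch. Request codes follow DSP0274
// 1.2.x and should be verified against the spec revision in use.
fn dispatch(request_code: u8) -> Option<&'static str> {
    match request_code {
        // Spec-mandatory commands
        0x84 => Some("GET_VERSION"),
        0xE1 => Some("GET_CAPABILITIES"),
        0xE3 => Some("NEGOTIATE_ALGORITHMS"),
        0xFF => Some("RESPOND_IF_READY"),
        // Spec-optional commands required by OpenPRoT
        0x81 => Some("GET_DIGESTS"),
        0x82 => Some("GET_CERTIFICATE"),
        0x83 => Some("CHALLENGE"),
        0xE0 => Some("GET_MEASUREMENTS"),
        // Remaining required/optional commands handled elsewhere
        _ => None,
    }
}

fn main() {
    for code in [0x84u8, 0x83, 0x00] {
        println!("{:#04x} -> {:?}", code, dispatch(code));
    }
}
```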
Requesters and responders may implement the following recommended spec-optional commands:
  - Events: GET_SUPPORTED_EVENT_TYPES, SUBSCRIBE_EVENT_TYPES, SEND_EVENT
  - Encapsulated requests: GET_ENCAPSULATED_REQUEST, DELIVER_ENCAPSULATED_RESPONSE
  - GET_KEY_PAIR_INFO
  - SET_KEY_PAIR_INFO
  - KEY_UPDATE
  - KEY_EXCHANGE
  - FINISH
  - PSK_EXCHANGE
  - PSK_FINISH
All other spec-optional commands may be implemented as the integrator sees fit for their use case.
- Required Capabilities:
  - CERT_CAP (required for GET_CERTIFICATE)
  - CHAL_CAP (required for CHALLENGE)
  - MEAS_CAP (required for GET_MEASUREMENTS)
  - MEAS_FRESH_CAP
- Algorithms: The following cryptographic algorithms are accepted for use within OpenPRoT, but may be further constrained by hardware capabilities. At a minimum, OpenPRoT hardware must support TPM_ALG_ECDSA_ECC_NIST_P384 and TPM_ALG_SHA3_384. All others are optional and may be used if supported.
  - Asymmetric: TPM_ALG_RSASSA_2048, TPM_ALG_RSAPSS_2048, TPM_ALG_RSASSA_3072, TPM_ALG_RSAPSS_3072, TPM_ALG_ECDSA_ECC_NIST_P256, TPM_ALG_RSASSA_4096, TPM_ALG_RSAPSS_4096, TPM_ALG_ECDSA_ECC_NIST_P384, EdDSA ed25519, EdDSA ed448
  - Hash: TPM_ALG_SHA_256, TPM_ALG_SHA_384, TPM_ALG_SHA_512, TPM_ALG_SHA3_256, TPM_ALG_SHA3_384, TPM_ALG_SHA3_512
  - AEAD Cipher: AES-128-GCM, AES-256-GCM, CHACHA20_POLY1305
- Attestation Report Format: Devices will support either a RATS EAT (as a CWT) or an SPDM evidence manifest TOC per the TCG DICE Concise Evidence Binding for SPDM specification.
- Measurement Block 0xF0: Devices that do not provide a Measurement Manifest shall locate the RATS EAT at SPDM measurement block 0xF0.
PLDM
OpenPRoT devices will support the Platform Level Data Model (PLDM) as a responder for firmware updates and platform monitoring. This means that OpenPRoT will respond to Type 0, Type 2, and Type 5 messages, as listed in Table 1.
PLDM Base Specifications for Supported Types
Type 0 - Base Specification
- Purpose: Base Specification and Initialization
- Version: 1.2.0
- Platform Level Data Model (PLDM) Base Specification
All responders shall implement the four (4) spec mandatory PLDM commands:
GetTID
GetPLDMVersion
GetPLDMTypes
GetPLDMCommands
All responders shall implement the following optional commands:
SetTID
Type 2 - Platform Monitoring and Control
- Purpose: Platform Monitoring and Control
- Version: 1.3.0
- Platform Level Data Model (PLDM) for Platform Monitoring and Control Specification
OpenPRoT will support PLDM Monitoring and Control by providing a Platform Descriptor Record (PDR) repository to a prospective PLDM Manageability Access Point Discovery Agent's primary PDR repository. These PDRs will be defined in JSON files and included in OpenPRoT at build time; OpenPRoT will not support any dynamic adjustments to the PDR repository. Because the PDRs are limited to security features, only PLDM sensors are supported, not effectors.
Supported PLDM Monitoring PDRs:
- Terminus Locator PDR
- Numeric Sensor PDR
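A build-time, sensors-only PDR repository along these lines might look like the following sketch. The structs are illustrative stand-ins for the real PDR encodings defined in DSP0248, and the field and sensor names are invented for this example.

```rust
// Hypothetical sketch of a static, build-time PDR repository.
// Real PDR formats are defined by DSP0248; these are illustrative only.

#[derive(Debug)]
enum Pdr {
    TerminusLocator { terminus_handle: u16 },
    NumericSensor { sensor_id: u16, name: &'static str },
}

/// In a real build these entries would be generated from the JSON files
/// at build time; the repository is immutable at runtime.
static PDR_REPOSITORY: &[Pdr] = &[
    Pdr::TerminusLocator { terminus_handle: 1 },
    Pdr::NumericSensor { sensor_id: 0x10, name: "secure_boot_failures" },
];

/// Only sensors exist in the repository; effectors are unsupported.
fn sensor_count() -> usize {
    PDR_REPOSITORY
        .iter()
        .filter(|p| matches!(p, Pdr::NumericSensor { .. }))
        .count()
}

fn main() {
    println!("PDRs: {:?}, sensors: {}", PDR_REPOSITORY, sensor_count());
}
```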
Type 5 - Firmware Update
- Purpose: Firmware Update
- Version: 1.3.0
- Platform Level Data Model (PLDM) for Firmware Update Specification
Required Inventory Commands:
QueryDeviceIdentifiers
GetFirmwareParameters
Required Update Commands:
RequestFirmwareUpdate
PassComponentTable
UpdateComponent
TransferComplete
VerifyComplete
ApplyComplete
ActivateFirmware
GetStatus
All responders shall implement the following optional commands:
GetPackageData
GetPackageMetaData
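The type support described in this section can be captured in a small sketch; the type numbers (0, 2, and 5) are as stated above, and the helper function is illustrative rather than part of any OpenPRoT API.

```rust
// Sketch of OpenPRoT's PLDM type support: Types 0, 2, and 5 only.
#[derive(Debug, Clone, Copy, PartialEq)]
#[repr(u8)]
enum PldmType {
    Base = 0,
    PlatformMonitoringAndControl = 2,
    FirmwareUpdate = 5,
}

/// Would this incoming PLDM type be answered by the responder?
fn supported(pldm_type: u8) -> bool {
    matches!(pldm_type, 0 | 2 | 5)
}

fn main() {
    println!("type 5 supported: {}", supported(PldmType::FirmwareUpdate as u8));
    println!("type 3 supported: {}", supported(3));
}
```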
Services
- Firmware Update
- Attestation
- Firmware Recovery (TBD)
- Secure Boot (TBD)
- Policy Management (TBD)
Version: 0.1 (Draft)
Date: [Current Date]
Status: Work in Progress
1. Introduction
1.1 OpenPRoT Attestation Components
The OpenPRoT firmware stack provides the following attestation capabilities:
SPDM Responder: Enables external relying parties to establish trust in OpenPRoT by:
- Responding to attestation requests over SPDM protocol
- Providing cryptographically signed evidence about platform state
- Supporting both initial trust establishment and periodic re-attestation
- Enabling secure session establishment with authenticated endpoints
SPDM Requester: Enables OpenPRoT to establish trust in other platform components by:
- Requesting attestation evidence from downstream devices
- Verifying device identities and configurations
- Establishing secure sessions with attested devices
- Supporting platform composition attestation
Local Verifier: Enables on-platform verification of attestation evidence by:
- Appraising evidence from platform components without external connectivity
- Supporting air-gapped and latency-sensitive deployments
- Enforcing platform-specific security policies
- Making local trust decisions for platform operations
Note: While OpenPRoT includes a local verifier component, verification can also be performed remotely by external verifiers. The choice between local and remote verification depends on deployment requirements, connectivity constraints, and security policies.
2. Scope and Purpose
2.1 Scope
This specification covers the attestation capabilities provided by the OpenPRoT firmware stack:
In Scope:
- SPDM Responder Implementation: How OpenPRoT responds to external attestation requests
- SPDM Requester Implementation: How OpenPRoT requests attestation from platform devices
- Local Verifier Architecture: On-platform evidence appraisal capabilities
- Evidence Generation: How OpenPRoT firmware collects and reports platform measurements
- Evidence Formats: Standardized structures for conveying attestation claims (OCP RATS EAT, Concise Evidence)
- Protocol Bindings: SPDM protocol integration and message flows
- Device Identity Provisioning: Owner identity provisioning workflows
- Reference Value Integration: How OpenPRoT uses CoRIM for verification
- Plugin Architecture: Extensibility for non-OCP evidence formats
2.2 Out of Scope
This specification does not cover:
Hardware-Specific Details:
- PRoT Hardware Implementations: Specific hardware designs, architectures, and capabilities
- Manufacturing Provisioning: Secret provisioning into hardware (vendor-specific)
- Hardware Root of Trust Mechanisms: Boot ROM implementation, key derivation, measurement collection at hardware level
- Attester Composition: Layered measurement and key derivation (hardware-dependent)
- HAL Trait Implementations: Specific implementations of HAL traits for particular hardware platforms (integrator responsibility)
Note on Hardware Variance: OpenPRoT is a software stack that operates on top of PRoT hardware. The security strength and attestation capabilities of an OpenPRoT-based system depend significantly on the underlying hardware implementation. Hardware vendors must document their specific:
- Root of trust initialization and measurement mechanisms
- Key derivation and protection approaches
- Certificate chain structures
- Cryptographic capabilities and algorithms
- Isolation and protection boundaries
Other Out of Scope Items:
- OpenPRoT Firmware Implementation Details: Internal firmware architecture (covered in OpenPRoT project documentation)
- Application-level Attestation Policies: Use-case specific verification policies
- Cryptographic Algorithm Specifications: Defers to NIST and industry standards
- Remote Verifier Implementation: External verifier systems (though evidence format is specified)
- Reference Value Provider Services: CoRIM generation and distribution infrastructure
- Transport Layer Details: Physical/link layer protocols (I2C, I3C, MCTP, etc.)
3. Normative References
The following standards are normatively referenced in this specification:
3.1 IETF Specifications
- RFC 9334: Remote ATtestation procedureS (RATS) Architecture
- RFC 9711: Entity Attestation Token (EAT)
- CoRIM: Concise Reference Integrity Manifest (IETF Draft)
- RFC 8949: Concise Binary Object Representation (CBOR)
- RFC 9052: CBOR Object Signing and Encryption (COSE)
- RFC 5280: Internet X.509 Public Key Infrastructure Certificate and CRL Profile
3.2 TCG Specifications
- DICE Layering Architecture: Device Identity Composition Engine
- DICE Attestation Architecture: Certificate-based attestation
- DICE Protection Environment (DPE): Runtime attestation service
- TCG DICE Concise Evidence Binding for SPDM: Evidence format specification
3.3 DMTF Specifications
- DSP0274: Security Protocol and Data Model (SPDM) Version 1.3 or later
- DSP0277: Secured Messages using SPDM over MCTP Binding
- DSP0236: Management Component Transport Protocol (MCTP) Base Specification
3.4 Other Standards
- NIST FIPS 186-5: Digital Signature Standard (DSS)
- NIST SP 800-90A: Recommendation for Random Number Generation
- NIST SP 800-108: Recommendation for Key Derivation Functions
4. Terminology and Definitions
4.1 Attestation Roles
Following IETF RATS RFC 9334, the OpenPRoT attestation architecture defines the following roles:
Attester: An entity (OpenPRoT firmware and associated platform components) that produces attestation evidence about its state and configuration.
Relying Party: An entity that depends on the validity of attestation evidence to make operational decisions. In OpenPRoT deployments, this is typically:
- External platform owner or management system (for initial trust establishment)
- Platform management controller (for periodic verification)
- Cloud service provider infrastructure (for fleet management)
Verifier: An entity that appraises attestation evidence against reference values and policies to produce attestation results. OpenPRoT supports:
- Local Verifier: Running within OpenPRoT firmware for on-platform verification
- Remote Verifier: External system performing verification (implementation not specified here)
Endorser: An entity that vouches for the authenticity and properties of attestation components. For OpenPRoT, endorsers include hardware vendors, the OpenPRoT project, and platform integrators.
4.2 OpenPRoT-Specific Terms
PRoT Hardware: The underlying hardware platform that provides the root of trust capabilities (secure boot, cryptographic acceleration, isolated execution, OTP storage).
SPDM Responder Role: OpenPRoT acting as an SPDM responder to provide attestation evidence to external requesters.
SPDM Requester Role: OpenPRoT acting as an SPDM requester to obtain attestation evidence from platform devices.
Local Verifier: The verification component within OpenPRoT that appraises evidence from platform devices without requiring external connectivity.
Platform Composition: The complete set of attested components including OpenPRoT and downstream devices.
4.3 Key Attestation Terms
Root of Trust (RoT): The foundational hardware and immutable firmware that serves as the trust anchor for the platform. In OpenPRoT context, this is the PRoT hardware's boot ROM.
Compound Device Identifier (CDI): A cryptographic secret derived from measurements and used as the basis for key derivation in DICE.
Target Environment: A uniquely identifiable component or configuration that is measured and attested. In OpenPRoT:
- OpenPRoT firmware components (bootloader, runtime firmware)
- Hardware configurations (fuse settings, security configurations)
- Platform devices (when acting as SPDM requester)
TCB (Trusted Computing Base): The set of components that must be trusted for the security properties of a system to hold.
Evidence: Authenticated claims about platform state produced by the Attester. OpenPRoT generates evidence in multiple formats:
- DICE certificates with TCBInfo extensions
- TCG Concise Evidence
- RATS Entity Attestation Token (EAT)
Reference Values: Known-good measurements provided by the Reference Value Provider for comparison during verification. Typically distributed as CoRIM (Concise Reference Integrity Manifest).
Endorsement: Authenticated statements about device properties or certifications.
Appraisal Policy: Rules used by the Verifier to evaluate evidence against reference values.
Freshness: Property ensuring that evidence represents current platform state, typically achieved through nonces or timestamps.
4.4 DICE/DPE Terms
UDS (Unique Device Secret): A hardware-unique secret provisioned during manufacturing, stored in OTP/fuses, used as the root secret for DICE key derivation.
IDevID (Initial Device Identity): The manufacturer-provisioned device identity derived from UDS.
LDevID (Local Device Identity): An operator-provisioned device identity that can be used in place of IDevID.
Alias Key: A DICE-derived key that represents a specific layer in the boot chain.
DPE (DICE Protection Environment): A service that extends DICE principles to runtime, allowing dynamic context creation and key derivation.
DPE Context: A chain of measurements representing a specific execution path through the system.
DPE Handle: An identifier for a specific DPE context, used to extend measurements or derive keys.
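As a toy illustration of how contexts and handles relate, the following sketch models a DPE as a table of measurement chains. It is not the TCG DPE command interface: the method names, the integer "hashes", and the handle allocation scheme are all invented for this example.

```rust
use std::collections::HashMap;

// Toy model of DPE contexts and handles: a context is a chain of
// measurements, and a handle names one context. Illustrative only.
struct Dpe {
    next_handle: u32,
    contexts: HashMap<u32, Vec<u64>>, // handle -> measurement chain
}

impl Dpe {
    fn new() -> Self {
        Dpe { next_handle: 0, contexts: HashMap::new() }
    }

    /// Create a new context, optionally inheriting a parent's chain.
    fn derive_context(&mut self, parent: Option<u32>) -> u32 {
        let chain = parent
            .and_then(|h| self.contexts.get(&h).cloned())
            .unwrap_or_default();
        let handle = self.next_handle;
        self.next_handle += 1;
        self.contexts.insert(handle, chain);
        handle
    }

    /// Extend a context with a new measurement (toy "hash": byte sum).
    fn extend(&mut self, handle: u32, data: &[u8]) {
        let m: u64 = data.iter().map(|&b| b as u64).sum();
        self.contexts.get_mut(&handle).expect("bad handle").push(m);
    }

    fn chain_len(&self, handle: u32) -> usize {
        self.contexts[&handle].len()
    }
}

fn main() {
    let mut dpe = Dpe::new();
    let boot = dpe.derive_context(None);
    dpe.extend(boot, b"bootloader");
    let runtime = dpe.derive_context(Some(boot)); // inherits boot's chain
    dpe.extend(runtime, b"runtime-fw");
    println!("boot chain: {}, runtime chain: {}",
             dpe.chain_len(boot), dpe.chain_len(runtime));
}
```

The inherited chain is what makes a derived context represent "a specific execution path through the system": the runtime context carries the boot measurements plus its own.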
4.5 SPDM Terms
SPDM Session: An authenticated and optionally encrypted communication channel between SPDM requester and responder.
Measurement Block: A collection of measurements representing a specific component or configuration.
Slot: An SPDM certificate chain storage location (Slot 0-7).
GET_MEASUREMENTS: SPDM command to retrieve attestation measurements.
GET_CERTIFICATE: SPDM command to retrieve certificate chains.
CHALLENGE: SPDM command to request signed evidence with freshness.
5. Attestation Architecture Overview
5.1 RATS Architecture Mapping
OpenPRoT implements the IETF RATS architecture with specific role assignments:
High-Level Flow:
- Relying Party (Platform Owner, CSP, Management System) needs to establish trust in the platform
- Relying Party requests attestation evidence from OpenPRoT via SPDM
- OpenPRoT (Attester) generates evidence containing measurements and claims
- Verifier (Remote or Local) receives evidence and appraises it
- Verifier retrieves reference values and endorsements from Reference Value Provider
- Verifier applies appraisal policy and generates attestation result
- Attestation result is conveyed to Relying Party
- Relying Party makes trust decision based on attestation result
Components:
- Attester: OpenPRoT Firmware Stack + PRoT Hardware
- Verifier: Remote verifier system OR OpenPRoT Local Verifier (for device attestation)
- Relying Party: External management system OR OpenPRoT (when verifying devices)
- Reference Value Provider: OpenPRoT project, hardware vendors, platform integrators
- Endorser: Hardware vendors, OpenPRoT project, platform integrators
5.2 Evidence Format Strategy
OpenPRoT adopts a standardized approach to evidence generation and verification:
5.2.1 OpenPRoT Evidence Generation (SPDM Responder)
When acting as an SPDM Responder, OpenPRoT produces attestation evidence in the following formats:
Primary Evidence Format: RATS EAT with OCP Profile
OpenPRoT generates Entity Attestation Tokens (EAT) following the OCP RATS EAT Attestation Profile. This format provides:
- Standardized container for attestation claims
- CBOR-encoded for efficiency
- COSE-signed for authenticity
- Nonce-based freshness
- TCG Concise Evidence embedded in measurements claim
EAT Structure:
The OpenPRoT EAT follows the OCP RATS EAT Attestation Profile specification. For complete details on the EAT structure, claims, and encoding, see:
https://opencomputeproject.github.io/Security/ietf-eat-profile/HEAD/
Supporting Evidence Formats:
- DICE Certificates with TCBInfo: Certificate chain establishing device identity and boot measurements
- TCG Concise Evidence: Standalone format containing reference-triples for measurements
- SPDM Measurement Blocks: Native SPDM measurement format for basic compatibility
5.2.2 OpenPRoT Evidence Verification (Local Verifier)
When acting as a Local Verifier, OpenPRoT supports multiple evidence formats:
Native Support: OCP RATS EAT Profile
The OpenPRoT Local Verifier natively supports appraisal of evidence in OCP RATS EAT Attestation Profile format. This enables:
- Standardized verification logic for OCP-compliant devices
- Consistent appraisal policy across vendors
- Interoperability with OCP ecosystem devices
- Direct comparison against CoRIM reference values
Verification Process for OCP EAT:
- Validate EAT signature using device certificate chain
- Verify nonce freshness
- Extract Concise Evidence from measurements claim
- Retrieve CoRIM reference values using corim-locator
- Compare evidence reference-triples against CoRIM reference-triples
- Apply appraisal policy
- Generate attestation result
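The appraisal steps above can be sketched as a single pipeline function. Every name here is a hypothetical stand-in: a real verifier parses COSE/CBOR structures, checks the nonce against its own challenge, and fetches CoRIMs via the corim-locator, all of which are reduced to booleans and integer lists in this sketch.

```rust
// Sketch of the OCP EAT appraisal pipeline. All inputs are stand-ins
// for parsed COSE/CBOR structures and fetched CoRIM reference values.
#[derive(Debug, PartialEq)]
enum AttestationResult {
    Pass,
    Fail(&'static str),
}

fn verify_eat(signature_ok: bool, nonce_fresh: bool,
              evidence: &[u64], reference: &[u64]) -> AttestationResult {
    // Steps 1-2: validate the EAT signature and nonce freshness.
    if !signature_ok {
        return AttestationResult::Fail("bad signature");
    }
    if !nonce_fresh {
        return AttestationResult::Fail("stale nonce");
    }
    // Steps 3-5: compare extracted evidence reference-triples against
    // CoRIM reference-triples (modeled here as plain integers).
    if evidence != reference {
        return AttestationResult::Fail("measurement mismatch");
    }
    // Steps 6-7: apply the appraisal policy (vacuous here) and emit
    // the attestation result.
    AttestationResult::Pass
}

fn main() {
    println!("{:?}", verify_eat(true, true, &[1, 2], &[1, 2]));
    println!("{:?}", verify_eat(true, false, &[1, 2], &[1, 2]));
}
```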
Extended Support: Evidence Format Plugins
To accommodate diverse platform ecosystems, OpenPRoT includes an extensibility mechanism for non-OCP-compliant evidence formats:
Plugin Architecture:
- Evidence Parser Plugins: Parse vendor-specific evidence formats
- Claim Extractor Plugins: Extract measurements and claims from proprietary formats
- Policy Adapter Plugins: Map vendor-specific claims to OpenPRoT appraisal policies
Use Cases for Plugins:
- Legacy devices with proprietary attestation formats
- Vendor-specific evidence structures not yet migrated to OCP profile
- Specialized evidence formats for specific device classes
- Transitional support during ecosystem migration to OCP standards
Plugin Interface Requirements:
Plugins must implement the following interfaces:
- parse_evidence(): Convert vendor format to internal representation
- extract_claims(): Extract target environments and measurements
- validate_signature(): Verify evidence authenticity
- get_reference_values(): Retrieve or map to reference values
- apply_policy(): Execute appraisal logic
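One possible Rust shape for this interface is sketched below. The trait method names mirror the list above, while the types, signatures, and the toy "one byte per measurement" vendor format are hypothetical, not the actual OpenPRoT plugin API.

```rust
// Hypothetical Rust shape of the plugin interface. Method names mirror
// the specification's list; types and signatures are illustrative.
struct Evidence { raw: Vec<u8> }
struct Claim { name: String, value: u64 }

#[derive(Debug, PartialEq)]
enum Verdict { Pass, Fail }

trait EvidenceFormatPlugin {
    fn parse_evidence(&self, raw: &[u8]) -> Option<Evidence>;
    fn extract_claims(&self, ev: &Evidence) -> Vec<Claim>;
    fn validate_signature(&self, ev: &Evidence) -> bool;
    fn get_reference_values(&self) -> Vec<u64>;
    fn apply_policy(&self, claims: &[Claim], refs: &[u64]) -> Verdict;
}

/// A toy plugin for a fictitious vendor format: one byte per measurement.
struct ToyPlugin;
impl EvidenceFormatPlugin for ToyPlugin {
    fn parse_evidence(&self, raw: &[u8]) -> Option<Evidence> {
        (!raw.is_empty()).then(|| Evidence { raw: raw.to_vec() })
    }
    fn extract_claims(&self, ev: &Evidence) -> Vec<Claim> {
        ev.raw.iter().enumerate()
            .map(|(i, &b)| Claim { name: format!("m{i}"), value: b as u64 })
            .collect()
    }
    fn validate_signature(&self, _ev: &Evidence) -> bool { true } // stub
    fn get_reference_values(&self) -> Vec<u64> { vec![1, 2] }
    fn apply_policy(&self, claims: &[Claim], refs: &[u64]) -> Verdict {
        let values: Vec<u64> = claims.iter().map(|c| c.value).collect();
        if values == refs { Verdict::Pass } else { Verdict::Fail }
    }
}

/// Core verifier loop: format-agnostic, driven entirely by the plugin.
fn appraise(plugin: &dyn EvidenceFormatPlugin, raw: &[u8]) -> Verdict {
    let ev = match plugin.parse_evidence(raw) {
        Some(ev) if plugin.validate_signature(&ev) => ev,
        _ => return Verdict::Fail,
    };
    let claims = plugin.extract_claims(&ev);
    let refs = plugin.get_reference_values();
    plugin.apply_policy(&claims, &refs)
}

fn main() {
    println!("{:?}", appraise(&ToyPlugin, &[1, 2]));
    println!("{:?}", appraise(&ToyPlugin, &[9]));
}
```

Because `appraise` only sees the trait object, the same core verification path serves both the native OCP EAT logic and integrator-supplied plugins.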
Plugin Integration:
Platform integrators can add custom plugins to OpenPRoT's local verifier to support their specific device ecosystem while maintaining the core OCP-compliant verification path for standard devices.
5.3 OpenPRoT Dual Role Architecture
OpenPRoT operates in two distinct attestation roles depending on the interaction:
5.3.1 OpenPRoT as Attester (SPDM Responder)
When external relying parties need to establish trust in OpenPRoT:
Flow:
- External Relying Party (SPDM Requester) initiates SPDM session
- SPDM version negotiation and capability exchange
- Algorithm negotiation
- Certificate chain retrieval (GET_CERTIFICATE)
- Measurement request (GET_MEASUREMENTS) with nonce
- OpenPRoT (SPDM Responder) generates EAT with OCP profile
- OpenPRoT returns signed EAT containing Concise Evidence
- Verifier (remote) appraises evidence against reference values
- Attestation result returned to Relying Party
Evidence Provided by OpenPRoT:
- Certificate chain (structure determined by underlying PRoT hardware implementation)
- RATS EAT with OCP Profile containing:
- TCG Concise Evidence with reference-triples
- Freshness nonce
- CoRIM locator URI
- COSE signature using attestation key provided by underlying hardware
Note on Hardware Dependencies:
The certificate chain structure and attestation key derivation mechanisms are determined by the underlying PRoT hardware implementation and are outside the scope of OpenPRoT firmware. OpenPRoT leverages the attestation capabilities provided by the hardware platform. Hardware vendors should document their specific:
- Certificate chain structure and hierarchy
- Key derivation mechanisms
- Supported cryptographic algorithms
- Identity provisioning approach
Use Cases:
- Initial platform deployment and trust establishment
- Periodic re-attestation for fleet management
- Pre-workload-deployment verification
- Compliance auditing
5.3.2 OpenPRoT as Verifier (SPDM Requester + Local Verifier)
When OpenPRoT needs to establish trust in platform devices:
Flow:
- OpenPRoT (SPDM Requester) initiates SPDM session with platform device
- SPDM version negotiation and capability exchange
- Algorithm negotiation
- Certificate chain retrieval from device (GET_CERTIFICATE)
- Measurement request (GET_MEASUREMENTS) with nonce
- Platform Device (SPDM Responder) returns evidence
- OpenPRoT Local Verifier receives evidence
- If OCP EAT format: Native verification path
- If non-OCP format: Plugin-based verification path
- Local Verifier appraises evidence against reference values
- Local trust decision made by OpenPRoT
- Result used for platform composition decisions
Standard Measurement Report:
OpenPRoT follows the SPDM Standard Measurement Report format for evidence collection from devices. This standardized approach ensures consistent evidence structure across different device types and vendors.
For complete details on the Standard Measurement Report format, see:
https://github.com/steven-bellock/libspdm/blob/96d08a730ecbe3f05fa3a2cdbf0b7c2613b24a2f/doc/standard_measurement_report.md
Evidence Received by OpenPRoT:
- Device certificate chain (structure varies by device implementation)
- Device evidence (OCP EAT preferred, plugin-supported formats allowed)
- Device measurements and claims in Standard Measurement Report format
Verification Paths:
- OCP-Compliant Devices: Direct verification using native OCP EAT verifier
- Non-OCP Devices: Plugin-based parsing and verification
- Hybrid Platforms: Mix of OCP and non-OCP devices verified appropriately
Use Cases:
- Verifying network cards, storage controllers, accelerators, and SoCs
- Establishing trust in platform composition
- Air-gapped deployments without external verifier access
- Real-time device trust decisions
5.4 Attestation Flow
The basic attestation flow follows these steps:
Phase 1: Measurement Collection (Boot Time)
1. PRoT Hardware Boot ROM (immutable) starts execution
2. Boot ROM measures OpenPRoT bootloader (First Mutable Code)
3. Hardware-specific key derivation and certificate generation occurs
4. Control transfers to OpenPRoT bootloader
5. Bootloader measures OpenPRoT runtime firmware
6. Hardware-specific measurement chain continues
7. Control transfers to OpenPRoT runtime firmware
8. Runtime firmware initializes attestation services
9. Runtime firmware measures platform components (optional)
Note: The specific measurement and key derivation mechanisms in steps 3 and 6 are hardware-dependent and outside the scope of OpenPRoT firmware.
Phase 2: Evidence Generation (On Request)
- External requester initiates SPDM session with OpenPRoT
- OpenPRoT SPDM Responder receives attestation request (GET_MEASUREMENTS)
- OpenPRoT collects current measurements from platform state
- OpenPRoT formats measurements as TCG Concise Evidence (reference-triples)
- OpenPRoT constructs RATS EAT with OCP Profile:
- Sets issuer to OpenPRoT identifier
- Includes requester-provided nonce
- Embeds Concise Evidence in measurements claim
- Adds CoRIM locator URI
- OpenPRoT signs EAT using hardware-provided attestation key (COSE signature)
- OpenPRoT returns EAT and certificate chain to requester
Phase 3: Evidence Conveyance
- SPDM Responder transmits evidence via SPDM protocol
- Evidence includes:
- Certificate chain (for signature verification)
- Signed EAT (containing measurements)
- Optional: Additional endorsements
- Transport layer delivers evidence to verifier
Phase 4: Reference Value Retrieval
- Verifier extracts CoRIM locator from EAT
- Verifier retrieves reference values CoRIM from repository
- Verifier retrieves endorsements (device identity, certifications)
- Verifier validates CoRIM signatures
- Verifier loads appraisal policy
Phase 5: Appraisal
- Verifier validates EAT signature using certificate chain
- Verifier checks certificate chain to trusted root
- Verifier verifies nonce freshness
- Verifier extracts Concise Evidence from EAT measurements claim
- Verifier parses reference-triples from Concise Evidence
- For each target environment in evidence:
- Compare against CoRIM reference values
- Check measurements match expected values
- Verify SVN meets minimum requirements
- Apply policy rules
- Verifier generates attestation result
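The per-environment comparison in the appraisal loop can be sketched as follows. The record types are hypothetical simplifications; a real appraisal walks CoRIM reference-triples rather than flat structs.

```rust
// Hypothetical evidence/reference records for one target environment.
struct EnvEvidence { env_id: u32, digest: [u8; 4], svn: u32 }
struct EnvReference { env_id: u32, digest: [u8; 4], min_svn: u32 }

/// Appraise each target environment in the evidence against its reference
/// entry: the measurement digest must match and the reported SVN must meet
/// the minimum required by the reference values.
fn appraise(evidence: &[EnvEvidence], refs: &[EnvReference]) -> bool {
    evidence.iter().all(|ev| {
        refs.iter()
            .find(|r| r.env_id == ev.env_id)
            .map_or(false, |r| r.digest == ev.digest && ev.svn >= r.min_svn)
    })
}
```

Note that the SVN check is an inequality (at-least-minimum), while the digest check is exact equality.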
Phase 6: Trust Decision
- Attestation result conveyed to Relying Party
- Relying Party evaluates result against requirements
- Relying Party makes operational decision:
- Accept platform for use
- Reject platform
- Request additional evidence
- Apply restricted usage policy
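The four operational outcomes above can be modeled as a small enum. The decision policy shown is purely illustrative; real relying parties apply their own rules.

```rust
/// Possible relying-party decisions after receiving an attestation result.
#[derive(Debug, PartialEq)]
enum TrustDecision {
    Accept,
    Reject,
    RequestMoreEvidence,
    ApplyRestrictedUsage,
}

/// Illustrative policy only: accept on a conclusive pass, ask for more
/// evidence when the result is inconclusive, otherwise reject.
fn decide(passed: bool, conclusive: bool) -> TrustDecision {
    match (passed, conclusive) {
        (true, true) => TrustDecision::Accept,
        (_, false) => TrustDecision::RequestMoreEvidence,
        (false, true) => TrustDecision::Reject,
    }
}
```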
5.5 Trust Model
OpenPRoT's attestation architecture relies on the following trust assumptions:
5.5.1 Hardware Trust Anchor
Trusted Components:
- PRoT Hardware Boot ROM (immutable code)
- Hardware cryptographic accelerators
- OTP/Fuse storage for device secrets
- Hardware isolation mechanisms
Assumptions:
- Boot ROM is free from vulnerabilities
- Hardware random number generation is cryptographically secure
- Secrets in OTP/fuses cannot be extracted
- Hardware isolation prevents unauthorized access to secrets
Hardware-Specific Trust:
The specific trust properties and security guarantees are determined by the underlying PRoT hardware implementation. Hardware vendors must document:
- Root of trust initialization process
- Secret storage mechanisms
- Key derivation approach
- Isolation boundaries
- Cryptographic capabilities
5.5.2 Firmware Trust Chain
Trust Establishment:
- Boot ROM measures and authenticates OpenPRoT bootloader
- Bootloader measures and authenticates OpenPRoT runtime
- Each layer's measurements are recorded
- Compromise of any layer results in detectable measurement changes
Properties:
- Measurements cannot be forged without detection
- Certificate chain provides cryptographic proof of boot integrity
- Hardware-specific key binding ensures authenticity
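The chained-measurement property can be illustrated with a hash chain: each layer's image is folded into a running value, so a change in any layer propagates to the final digest. This sketch uses std's `DefaultHasher` purely for illustration; it is NOT a cryptographic hash, and a real measurement chain uses something like SHA-384.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Fold one layer's image into the running chain value.
/// Illustration only: DefaultHasher is not cryptographically secure.
fn extend(chain: u64, layer_image: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    chain.hash(&mut h);
    layer_image.hash(&mut h);
    h.finish()
}

/// Measure a boot sequence layer by layer, Boot ROM first.
fn measure_boot(layers: &[&[u8]]) -> u64 {
    layers.iter().fold(0u64, |chain, img| extend(chain, img))
}
```

Because each step consumes the previous chain value, tampering with an early layer changes every subsequent value, which is what makes compromise detectable.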
OpenPRoT Scope:
OpenPRoT firmware operates within the trust chain established by the underlying hardware. The firmware:
- Collects and reports measurements
- Generates evidence in standardized formats
- Implements SPDM responder and requester roles
- Provides local verification capabilities
The underlying measurement and key derivation mechanisms are hardware-dependent.
5.5.3 Cryptographic Trust
Cryptographic Assumptions:
- Digital signatures cannot be forged without private key
- Hash collisions are computationally infeasible
- Key derivation functions provide one-way security
- COSE signature scheme provides authenticity and integrity
Threat: Attacker provides false reference values to verifier
Mitigation:
- CoRIM signed by trusted authority
- Verifier validates CoRIM signature before use
- Secure distribution channels for reference values
- Verifier configured with trusted root certificates
5.6.7 Man-in-the-Middle Attacks
Threat: Attacker intercepts and modifies attestation messages
Mitigation:
- SPDM secure sessions provide encryption and authentication
- Evidence signed by device prevents modification
- Certificate-based mutual authentication
- Integrity protection on all messages
5.6.8 Plugin Exploitation
Threat: Attacker exploits a malicious or vulnerable evidence format plugin to subvert local verification
Mitigation:
- Plugins reviewed and authorized by the platform integrator before deployment
- Plugin verification paths kept separate from the native OCP-compliant verification path
- Plugin interfaces constrain the claims and reference values a plugin can supply
5.7 Device Identity Provisioning
OpenPRoT supports flexible device identity provisioning to accommodate different deployment models and ownership scenarios.
5.7.1 Identity Types
Manufacturer-Provisioned Identity:
- Provisioned by hardware manufacturer during production
- Rooted in hardware-unique secrets
- Provides vendor attestation anchor
- Permanent identity tied to hardware
Owner-Provisioned Identity:
- Provisioned by platform owner during deployment
- Enables owner-controlled attestation anchor
- Supports organizational PKI integration
- Can be updated by authorized owner
5.7.2 Owner Identity Provisioning with OpenPRoT
OpenPRoT implements the OCP Device Identity Provisioning specification to enable platform owners to provision owner-controlled identities to devices under their control. OpenPRoT acts as the intermediary between the owner and the device, facilitating secure identity provisioning.
Provisioning Process:
The owner identity provisioning follows the standardized flow defined in the OCP specification:
1. Owner Initiates Provisioning: Owner uses OpenPRoT to begin the owner identity provisioning process
2. CSR Collection: OpenPRoT collects a Certificate Signing Request (CSR) from the target device
   - Device generates identity key pair internally
   - Device creates CSR containing public key
   - OpenPRoT retrieves CSR from device
3. Trust Establishment: OpenPRoT establishes trust in the device's identity key
   - Verifies device's manufacturer-provisioned identity certificate chain
   - Validates that CSR is signed by device
   - Confirms key is hardware-protected
   - Provides attestation evidence to owner
4. Endorsement Generation: Owner generates identity endorsement
   - Owner reviews device attestation evidence
   - Owner verifies device trustworthiness
   - Owner signs CSR with owner CA
   - Owner creates identity certificate
5. Endorsement Provisioning: OpenPRoT provisions the endorsement to the device
   - OpenPRoT receives signed identity certificate from owner
   - OpenPRoT provisions identity certificate to device
   - Device validates and stores identity certificate
6. Verification: OpenPRoT verifies successful provisioning
   - Requests device to use owner-provisioned identity for attestation
   - Validates identity certificate chain
   - Confirms device can sign with identity key
OpenPRoT's Role:
- CSR Broker: Retrieves CSRs from devices
- Trust Validator: Verifies device identity and key protection before owner endorsement
- Provisioning Agent: Delivers owner-signed certificates to devices
- Verification Service: Confirms successful identity provisioning
Benefits:
- Owner Control: Platform owners control their attestation trust anchor
- PKI Integration: Enables integration with organizational PKI infrastructure
- Privacy: Owner-provisioned identity can provide privacy from manufacturer tracking
- Flexibility: Supports diverse deployment and ownership models
- Automated Workflow: OpenPRoT automates the provisioning process
Specification Reference:
For complete details on the device identity provisioning process, see:
https://opencomputeproject.github.io/Security/device-identity-provisioning/HEAD/
6. Claims and Target Environments
TODO: Define OpenPRoT-specific claims and target environment structures
7. Evidence Formats
TODO: Detail evidence format specifications for OpenPRoT
8. Reference Values and Endorsements
TODO: Describe reference value and endorsement mechanisms
9. SPDM Protocol Integration
TODO: Specify SPDM protocol bindings and requirements
10. Local Verifier
TODO: Define local verifier architecture and capabilities
11. Attestation Use Cases
TODO: Document common attestation scenarios and workflows
12. Security Considerations
TODO: Additional security considerations beyond threat model
13. Implementation Guidelines
TODO: Guidance for implementers of OpenPRoT attestation
Firmware Update
Status: Draft
Overview
This section details the OpenPRoT firmware update mechanism, incorporating the DMTF standards for PLDM and SPDM, while emphasizing the security and resilience principles of the project.
Goals
- To provide a secure and reliable method for updating OpenPRoT firmware.
- To ensure that firmware updates are authenticated and authorized.
- To provide a recovery mechanism in the event of a failed update.
- To align with industry standards for firmware updates (PLDM, SPDM).
Use Cases
- Updating the OpenPRoT firmware itself.
- Updating the firmware of downstream devices managed by OpenPRoT.
- Applying critical security updates and bug fixes.
- Updating firmware to enable new features.
PLDM for Firmware Update
OpenPRoT devices will support PLDM Type 5 version 1.3.0 for Firmware Updates. This will be the primary mechanism for transferring firmware images and metadata to the device. PLDM provides a standardized method for managing firmware updates and is particularly well-suited for out-of-band management scenarios.
PLDM Firmware Update Package
The firmware update package is essential for conveying the information required for the PLDM Firmware Update commands.
Package Header
The package will contain a header that describes the contents of the firmware update package, including:
- Overall packaging version and date.
- Device identifier records to specify the target OpenPRoT devices.
- Downstream device identifier records to describe target downstream devices.
- Component image information, including classification, offset, size, and version.
- A checksum for integrity verification.
- Package Payload: Contains the actual firmware component images to be updated
Package Header Information
Field | Size (bytes) | Definition |
---|---|---|
PackageHeaderIdentifier | 16 | Set to 0x7B291C996DB64208801B0202E6463C78 (v1.3.0 UUID) (big endian) |
PackageHeaderFormatRevision | 1 | Set to 0x04 (v1.3.0 header format revision) |
PackageHeaderSize | 2 | The total byte count of this header structure, including fields within the Package Header Information, Firmware Device Identification Area, Downstream Device Identification Area, Component Image Information Area, and Checksum sections. |
PackageReleaseDateTime | 13 | The date and time when this package was released in timestamp104 formatting. Refer to the PLDM Base Specification for field format definition. |
ComponentBitmapBitLength | 2 | Number of bits used to represent the bitmap in the ApplicableComponents field for a matching device. This value is a multiple of 8 and is large enough to contain a bit for each component in the package. |
PackageVersionStringType | 1 | The type of string used in the PackageVersionString field. Refer to DMTF Firmware Update Specification v.1.3.0 Table 33 for values. |
PackageVersionStringLength | 1 | Length, in bytes, of the PackageVersionString field. |
PackageVersionString | Variable | Package version information, up to 255 bytes. Contains a variable type string describing the version of this firmware update package. |
DeviceIDRecordCount | 1 | The count of firmware device ID records that are defined within this package. |
FirmwareDeviceIDRecords | Variable | Contains a record, a set of descriptors, and optional package data for each firmware device within the count provided from the DeviceIDRecordCount field. |
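A minimal sketch of validating the first fields of the package header from the table above: the 16-byte identifier UUID and the format revision. The byte layout assumed here follows the table directly; consult DSP0267 for the authoritative field definitions.

```rust
/// PackageHeaderIdentifier for v1.3.0 packages (big-endian UUID, per the table).
const PKG_HDR_IDENTIFIER: [u8; 16] = [
    0x7B, 0x29, 0x1C, 0x99, 0x6D, 0xB6, 0x42, 0x08,
    0x80, 0x1B, 0x02, 0x02, 0xE6, 0x46, 0x3C, 0x78,
];
/// PackageHeaderFormatRevision for the v1.3.0 header format.
const PKG_HDR_FORMAT_REVISION: u8 = 0x04;

#[derive(Debug, PartialEq)]
enum HeaderError {
    TooShort,
    BadIdentifier,
    UnsupportedRevision,
}

/// Check the identifier and format revision at the start of a package header.
fn check_package_header(buf: &[u8]) -> Result<(), HeaderError> {
    if buf.len() < 17 {
        return Err(HeaderError::TooShort);
    }
    if buf[..16] != PKG_HDR_IDENTIFIER {
        return Err(HeaderError::BadIdentifier);
    }
    if buf[16] != PKG_HDR_FORMAT_REVISION {
        return Err(HeaderError::UnsupportedRevision);
    }
    Ok(())
}
```

Rejecting an unknown identifier or revision up front lets the FD fail fast before parsing the variable-length areas that follow.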
Firmware Device ID Descriptor
Field | Size (bytes) | Definition |
---|---|---|
RecordLength | 2 | The total length in bytes for this record. The length includes the RecordLength, DescriptorCount, DeviceUpdateOptionFlags, ComponentImageSetVersionStringType, ComponentImageSetVersionStringLength, FirmwareDevicePackageDataLength, ApplicableComponents, ComponentImageSetVersionString, RecordDescriptors, and FirmwareDevicePackageData fields. |
DescriptorCount | 1 | The number of descriptors included within the RecordDescriptors field for this record. |
DeviceUpdateOptionFlags | 4 | 32-bit field where each bit represents an update option. bit 0 is set to 1 (Continue component updates after failure). |
ComponentImageSetVersionStringType | 1 | The type of string used in the ComponentImageSetVersionString field. Refer to DMTF Firmware Update Specification v.1.3.0 Table 33 for values. |
ComponentImageSetVersionStringLength | 1 | Length, in bytes, of the ComponentImageSetVersionString. |
FirmwareDevicePackageDataLength | 2 | Length in bytes of the FirmwareDevicePackageData field. If no data is provided, set to 0x0000. |
ReferenceManifestLength | 4 | Length in bytes of the ReferenceManifestData field. If no data is provided, set to 0x00000000. |
ApplicableComponents | Variable | Bitmap indicating which firmware components apply to devices matching this Device Identifier record. A set bit indicates the Nth component in the payload is applicable to this device. bit 0: OpenPRoT RT Image, bit 1: Downstream SoC Manifest, bit 2: Downstream SoC Firmware, bit 3: Downstream EEPROM |
ComponentImageSetVersionString | Variable | Component Image Set version information, up to 255 bytes. Describes the version of component images applicable to the firmware device indicated in this record. |
RecordDescriptors | Variable | These descriptors are defined by the vendor. Refer to DMTF Firmware Update Specification v.1.3.0 Table 7 for details of these fields and the values that can be selected. |
FirmwareDevicePackageData | Variable | Optional data provided within the firmware update package for the FD during the update process. If FirmwareDevicePackageDataLength is 0x0000, this field contains no data. |
ReferenceManifestData | Variable | Optional data field containing a Reference Manifest for the firmware update. If present, it describes the firmware update provided by this package. If ReferenceManifestLength is 0x00000000, this field contains no data. |
Downstream Device ID Descriptor
Field | Size | Definition |
---|---|---|
DownstreamDeviceIDRecordCount | 1 | 0 |
Component Image Information
Field | Size | Definition |
---|---|---|
ComponentClassification | 2 | 0x000A: Downstream EEPROM, Downstream SoC Firmware, and OpenPRoT RT Image (Firmware), 0x0001: Downstream SoC Manifest (Other) |
ComponentIdentifier | 2 | Unique value selected by the FD vendor to distinguish between component images. 0x0001: OpenPRoT RT Image, 0x0002: Downstream SoC Manifest, 0x0003: Downstream EEPROM, 0x1000-0xFFFF: Reserved for other vendor-defined SoC images |
ComponentComparisonStamp | 4 | Value used as a comparison in determining if a firmware component is down-level or up-level. When ComponentOptions bit 1 is set, this field should use a comparison stamp format (e.g., MajorMinorRevisionPatch). If not set, use 0xFFFFFFFF. |
ComponentOptions | 2 | Refer to ComponentOptions definition in DMTF Firmware Update Specification v.1.3.0 |
RequestedComponentActivationMethod | 2 | Refer to the RequestedComponentActivationMethod definition in DMTF Firmware Update Specification v.1.3.0 |
ComponentLocationOffset | 4 | Offset in bytes from byte 0 of the package header to where the component image begins. |
ComponentSize | 4 | Size in bytes of the Component image. |
ComponentVersionStringType | 1 | Type of string used in the ComponentVersionString field. Refer to DMTF Firmware Update Specification v.1.3.0 Table 33 for values. |
ComponentVersionStringLength | 1 | Length, in bytes, of the ComponentVersionString. |
ComponentVersionString | Variable | Component version information, up to 255 bytes. Contains a variable type string describing the component version. |
ComponentOpaqueDataLength | 4 | Length in bytes of the ComponentOpaqueData field. If no data is provided, set to 0x00000000. |
ComponentOpaqueData | Variable | Optional data transferred to the FD/FDP during the firmware update |
Component Identifiers
Component Image | Name | Description |
---|---|---|
0x0 | OpenPRoT RT Image | OpenPRoT manifest and firmware images (e.g. BL0, RT firmware). |
0x1 | Downstream SoC Manifest | SoC manifest covering firmware images. Used to stage verification of the firmware payload. |
0x2 | Downstream SoC Firmware | SoC firmware payload. |
0x3 | Downstream EEPROM | Bulk update of downstream EEPROM |
>= 0x1000 | Vendor-defined components | Reserved for vendor-defined component images. |
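The identifier table above maps naturally onto an enum. This is an illustrative sketch using the values from the Component Identifiers table; the enum and function names are not part of any OpenPRoT API.

```rust
/// Component identifiers from the table above; >= 0x1000 is vendor-defined.
#[derive(Debug, PartialEq)]
enum ComponentImage {
    OpenProtRtImage,
    DownstreamSocManifest,
    DownstreamSocFirmware,
    DownstreamEeprom,
    VendorDefined(u16),
}

/// Classify a raw component identifier; None for reserved/unknown values.
fn classify(id: u16) -> Option<ComponentImage> {
    match id {
        0x0 => Some(ComponentImage::OpenProtRtImage),
        0x1 => Some(ComponentImage::DownstreamSocManifest),
        0x2 => Some(ComponentImage::DownstreamSocFirmware),
        0x3 => Some(ComponentImage::DownstreamEeprom),
        0x1000..=0xFFFF => Some(ComponentImage::VendorDefined(id)),
        _ => None,
    }
}
```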
PLDM Firmware Update Process
The update process will involve the following steps:
- RequestUpdate: The Update Agent (UA) initiates the firmware update by sending the RequestUpdate command to the OpenPRoT device. We refer to OpenPRoT as the Firmware Device (FD).
- GetPackageData: If there is optional package data for the Firmware Device (FD), the UA will transfer it to the FD prior to transferring component images.
- GetDeviceMetaData: The UA may also optionally retrieve FD metadata that will be saved and restored after all components are updated.
- PassComponentTable: The UA will send the PassComponentTable command with information about the component images to be updated. This includes the identifier, component comparison stamp, classification, and version information for each component image.
- UpdateComponent: The UA will send the UpdateComponent command for each component, which includes: component classification, component version, component size, and update options. The UA will subsequently transfer component images using the RequestFirmwareData command.
- TransferComplete: After successfully transferring component data, the FD will send a TransferComplete command.
- VerifyComplete: Once a component transfer is complete, the FD will perform a verification of the image.
- ApplyComplete: The FD will use the ApplyComplete command to signal that the component image has been successfully applied.
- ActivateFirmware: After all components are transferred, the UA sends the ActivateFirmware command. If self-contained activation is supported, the FD should immediately enable the new component images. Otherwise, the component enters a "pending activation" state, which requires a reset to complete the activation.
- GetStatus: The UA will periodically use the GetStatus command to detect when the activation process has completed.
For downstream device updates, the UA will use RequestDownstreamDeviceUpdate to initiate the update sequence on the FDP. The rest of the process is similar, with the FDP acting as a proxy for the downstream device.
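The command sequence above can be viewed as a simple state machine on the FD side. This is an illustrative reduction, not the full DSP0267 state model (which has additional states, retries, and error paths).

```rust
/// Simplified FD-side phases of the update sequence described above.
#[derive(Debug, PartialEq, Clone, Copy)]
enum UpdatePhase {
    Idle,
    UpdateMode,      // after RequestUpdate
    Transferring,    // after UpdateComponent / RequestFirmwareData
    Verifying,       // after TransferComplete
    Applied,         // after ApplyComplete
    PendingActivate, // after ActivateFirmware, awaiting reset if needed
}

/// Advance the phase for a given command; unmodeled commands keep the state.
fn step(phase: UpdatePhase, command: &str) -> UpdatePhase {
    use UpdatePhase::*;
    match (phase, command) {
        (Idle, "RequestUpdate") => UpdateMode,
        (UpdateMode, "UpdateComponent") => Transferring,
        (Transferring, "TransferComplete") => Verifying,
        (Verifying, "ApplyComplete") => Applied,
        (Applied, "ActivateFirmware") => PendingActivate,
        (s, _) => s,
    }
}
```

Keeping the phase explicit makes it easy to reject out-of-order commands (here they simply leave the state unchanged).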
PLDM Firmware Update Error Handling and Recovery
- The PLDM specification defines a set of completion codes for error conditions.
- OpenPRoT will adhere to the timing specifications defined in the PLDM specification (DSP0240 and DSP0267) for command timeouts and retries.
- The CancelUpdateComponent command is available to cancel the update of a component image, and the CancelUpdate command can be used to exit from update mode. The UA should attempt to complete the update and avoid cancelling if possible.
- OpenPRoT devices will implement a dual-bank approach for firmware components. This allows for a fallback to a known-good firmware image in case of a failed update. If a power loss occurs prior to the ActivateFirmware command, the FD will continue to use the currently active image, and the UA can restart the firmware update process.
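The dual-bank fallback described above can be sketched as follows. This is illustrative only; a real implementation tracks bank state in persistent storage and gates activation on image verification.

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum Bank { A, B }

/// Illustrative dual-bank state: the active bank only changes once a staged
/// image has been transferred, verified, and activated.
struct DualBank {
    active: Bank,
    staged: Option<Bank>, // bank holding a transferred-but-unactivated image
}

impl DualBank {
    /// Write the new image into the inactive bank.
    fn stage(&mut self) {
        self.staged = Some(match self.active { Bank::A => Bank::B, Bank::B => Bank::A });
    }
    /// ActivateFirmware: switch over to the staged bank, if any.
    fn activate(&mut self) {
        if let Some(b) = self.staged.take() {
            self.active = b;
        }
    }
    /// Power loss before activation discards the staged image;
    /// the active (known-good) bank is untouched.
    fn power_loss(&mut self) {
        self.staged = None;
    }
}
```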
Device Abstraction
Status: Draft
The OpenPRoT Driver Development Kit (Device Development Kit) provides a set of generic Rust traits and types for interacting with I/O peripherals and cryptographic algorithm accelerators encountered in the class of devices that perform Root of Trust (RoT) functions.
The DDK isolates the OpenPRoT developer from the underlying embedded processor and operating system.
Scope
This section provides a non-exhaustive list of peripherals that fall within the scope of the DDK.
I/O Peripherals
Device | Description |
---|---|
SMBus/I2C Monitor/Filter | |
Delay | Delay execution for specified durations in microseconds or milliseconds. |
Cryptographic Functions
Cryptographic Algorithm | Description |
---|---|
AES | Symmetric encryption and decryption |
ECC | ECDSA signature and verification |
digest | Cryptographic hash functions |
RSA | RSA signature and verification |
We will refer to the collection of I/O peripherals and cryptographic algorithm accelerators as peripherals from now on.
Design Goals
Platform Agnostic
The goal of the DDK is to provide a consistent and flexible interface for applications to invoke peripheral functionality, regardless of whether the interaction with the underlying peripheral driver is through system calls to a kernel mode device driver, inter-task communication or direct access to memory-mapped peripheral registers.
Execution Model Agnostic
The DDK should be agnostic of the execution model and provide flexibility for its users.
The traits in the DDK are segregated into different crates according to the APIs they expose: synchronous, asynchronous, and non-blocking.
These crates ensure that DDK can cater to various execution models, making it versatile for different application requirements.
- Synchronous APIs: The main open-prot-ddk crate contains blocking traits where operations are performed synchronously before returning.
- Asynchronous APIs: The open-prot-ddk-async crate provides traits for asynchronous operations using Rust's async/await model.
- Non-blocking APIs: The open-prot-ddk-nb crate offers traits for non-blocking operations which allows for polling-based execution.
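To illustrate the split, the same operation can appear as a blocking trait and as a polling, non-blocking variant. The trait names and toy implementation below are illustrative; the non-blocking style mirrors the community `nb` crate's `WouldBlock` convention rather than depending on it.

```rust
/// Blocking flavor: the call completes (or fails) before returning.
trait Rng {
    type Error;
    fn random_u32(&mut self) -> Result<u32, Self::Error>;
}

/// Signal used by the non-blocking flavor, mirroring nb::Error::WouldBlock.
#[derive(Debug, PartialEq)]
enum NbError<E> { WouldBlock, Other(E) }

/// Non-blocking flavor: callers poll until the value is ready.
trait RngNb {
    type Error;
    fn try_random_u32(&mut self) -> Result<u32, NbError<Self::Error>>;
}

// Toy implementation that becomes "ready" on every other poll.
struct TickRng { ready: bool }
impl RngNb for TickRng {
    type Error = ();
    fn try_random_u32(&mut self) -> Result<u32, NbError<()>> {
        self.ready = !self.ready;
        if self.ready { Ok(4) } else { Err(NbError::WouldBlock) }
    }
}
```

The blocking trait suits simple sequential tasks, while the polling trait composes with executors or superloops that cannot afford to block.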
Design Principles
Minimalism
The design of the DDK prioritizes simplicity, making it straightforward for developers to implement. By avoiding unnecessary complexity, it ensures that the core traits and functionalities remain clear and easy to understand.
Zero Cost
This principle ensures that using the DDK introduces no additional overhead. In other words, the abstraction layer should neither slow down the system nor consume more resources than direct hardware access.
Composability
The HAL shall be designed to be modular and flexible, allowing developers to easily combine different components. This composability means that various drivers and peripherals can work together seamlessly, making it easier to build complex systems from simple, reusable parts.
Robust Error Handling
Trait methods must be designed to handle potential failures, as hardware interactions can be unpredictable. This means that methods invoking hardware should return a Result type to account for various failure scenarios, including misconfiguration, power issues, or disabled hardware.
```rust
pub trait SpiRead<W> {
    type Error;

    fn read(&mut self, words: &mut [W]) -> Result<(), Self::Error>;
}
```
While the default approach should be to use fallible methods, HAL implementations can also provide infallible versions if the hardware guarantees no failure. This ensures that generic code can rely on robust error handling, while platform-specific code can avoid unnecessary boilerplate when appropriate.
```rust
use core::convert::Infallible;

pub struct MyInfallibleSpi;

impl SpiRead<u8> for MyInfallibleSpi {
    type Error = Infallible;

    fn read(&mut self, words: &mut [u8]) -> Result<(), Self::Error> {
        // Perform the read operation
        Ok(())
    }
}
```
Separate Control and Data Path Operations
- Clarity: By separating configuration (control path) from data transfer (data path), each part of the code has a clear responsibility. This makes the code easier to understand and maintain.
- Modularity: It allows for more modular design, where control and data handling can be developed and tested independently.
Example
This example is extracted from Tock's TRD3 design document. Uart functionality is decomposed into fine grained traits defined for control path (Configure) and data path operations (Transmit and Receive).
```rust
pub trait Configure {
    fn configure(&self, params: Parameters) -> ReturnCode;
}

pub trait Transmit<'a> {
    fn set_transmit_client(&self, client: &'a dyn TransmitClient);
    fn transmit_buffer(
        &self,
        tx_buffer: &'static mut [u8],
        tx_len: usize,
    ) -> (ReturnCode, Option<&'static mut [u8]>);
    fn transmit_word(&self, word: u32) -> ReturnCode;
    fn transmit_abort(&self) -> ReturnCode;
}

pub trait Receive<'a> {
    fn set_receive_client(&self, client: &'a dyn ReceiveClient);
    fn receive_buffer(
        &self,
        rx_buffer: &'static mut [u8],
        rx_len: usize,
    ) -> (ReturnCode, Option<&'static mut [u8]>);
    fn receive_word(&self) -> ReturnCode;
    fn receive_abort(&self) -> ReturnCode;
}

pub trait Uart<'a>: Configure + Transmit<'a> + Receive<'a> {}
pub trait UartData<'a>: Transmit<'a> + Receive<'a> {}
```
Use Case : Device Sharing
- Peripheral Client Task: This task is only exposed to data path operations, such as reading from or writing to the peripheral. It interacts with the peripheral server to perform these operations without having direct access to the configuration settings.
- Peripheral Server Task: This task is responsible for managing and sharing the peripheral functionality across multiple client tasks. It has the exclusive role of configuring the peripheral for data transfer operations, ensuring that all configuration changes are centralized and controlled. This separation allows for robust access control and simplifies the management of peripheral settings.
Methodology
To accomplish this goal efficiently, the DDK should not reinvent the wheel but instead leverage existing work in the Rust community, such as the Rust Embedded Working Group's embedded-hal or the RustCrypto projects.
As much as possible, the OpenPRoT workgroup should evaluate, curate, and recommend existing abstractions that have already gained wide adoption.
By leveraging well-established and widely accepted abstractions, the DDK can ensure compatibility, reliability, and ease of integration across various platforms and applications. This approach not only saves development time and resources but also promotes standardization and interoperability within the ecosystem.
When abstractions need to be invented, as is the case for the I3C protocol, the OpenPRoT workgroup will design them according to the community guidelines of the project it is curating from and contribute them upstream.
Use Cases
This section illustrates the contexts where the DDK can be used.
Low Level Driver
A low-level driver implements a peripheral (or cryptographic algorithm) driver trait by accessing memory-mapped registers directly, and it is distributed as a no_std crate.
A no_std crate like the one depicted below would be linked directly into a user mode task with exclusive peripheral ownership. This use case is encountered in microkernel-based embedded operating systems such as Oxide's Hubris, where drivers run in unprivileged mode.

Proxy for a Kernel Mode Device Driver
In this section, we explore how a trait from the Device Development Kit (DDK) can enhance portability by decoupling the application writer from the underlying embedded stack.
The user of the peripheral is an application that is interacting with a kernel mode device driver via system calls, but is completely isolated from the underlying implementation.
This is applicable to any O/S with device drivers living in the kernel, like the Tock O/S.

Proxy for a Peripheral Server Task
In this section, we explore once more how traits from the Device Development Kit (DDK) can enhance portability by decoupling the application writer from the underlying operating system architecture. This scenario applies to any microkernel-based O/S.
The `xyz-i2c-ipc-impl` crate depicted below is distributed as a `no_std` driver crate and is linked into an I2C client task. The I2C client task is an application that interacts with a user-mode device driver, the I2C server task, via message passing.
The I2C server task owns the actual peripheral and is linked to the `xyz-i2c-drv-imp` driver crate, which is a low-level driver.

The I2C client task sends requests to the I2C peripheral owned by the server task via message passing, completely oblivious to the underlying implementation.
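The client/server split can be sketched with ordinary threads and channels standing in for tasks and kernel IPC; all names here are illustrative.

```rust
use std::sync::mpsc;
use std::thread;

// Messages the client may send to the peripheral server.
enum Request {
    Add(u32),
    ReadAndStop,
}

// Spawns the "server task", feeds it `values` from the "client task",
// and returns the peripheral state the server reports back.
fn run_round_trip(values: &[u32]) -> u32 {
    let (req_tx, req_rx) = mpsc::channel::<Request>();
    let (resp_tx, resp_rx) = mpsc::channel::<u32>();

    // Server task: exclusive owner of the (simulated) peripheral state.
    let server = thread::spawn(move || {
        let mut peripheral_accum = 0u32;
        while let Ok(req) = req_rx.recv() {
            match req {
                Request::Add(v) => peripheral_accum += v,
                Request::ReadAndStop => {
                    resp_tx.send(peripheral_accum).unwrap();
                    break;
                }
            }
        }
    });

    // Client task: never touches the peripheral directly.
    for &v in values {
        req_tx.send(Request::Add(v)).unwrap();
    }
    req_tx.send(Request::ReadAndStop).unwrap();
    let value = resp_rx.recv().unwrap();
    server.join().unwrap();
    value
}

fn main() {
    assert_eq!(run_round_trip(&[2, 3]), 5);
    println!("round trip ok");
}
```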

Terminology
The following acronyms and abbreviations are used throughout this document.
Abbreviation | Description |
---|---|
AES | Advanced Encryption Standard |
BMC | Baseboard Management Controller |
CA | Certificate Authority |
CPU | Central Processing Unit |
CRL | Certificate Revocation List |
CSR | Certificate Signing Request |
CSP | Critical Security Parameter |
DICE | Device Identifier Composition Engine |
DRBG | Deterministic Random Bit Generator |
ECDSA | Elliptic Curve Digital Signature Algorithm |
FMC | FW First Measured Code |
GPU | Graphics Processing Unit |
HMAC | Hash-based message authentication code |
IDevId | Initial Device Identifier |
iRoT | Internal RoT |
KAT | Known Answer Test |
KDF | Key Derivation Function |
LDevId | Locally Significant Device Identifier |
MCTP | Management Component Transport Protocol |
NIC | Network Interface Card |
NIST | National Institute of Standards and Technology |
OCP | Open Compute Project |
OTP | One-time programmable |
PCR | Platform Configuration Register |
PKI | Public Key Infrastructure |
PLDM | Platform Level Data Model |
PUF | Physically unclonable function |
RoT | Root of Trust |
RTI | RoT for Identity |
RTM | RoT for Measurement |
RTRec | RoT for Recovery |
RTU | RoT for Update |
SHA | Secure Hash Algorithm |
SoC | System on Chip |
SPDM | Security Protocol and Data Model |
SSD | Solid State Drive |
TCB | Trusted Computing Base |
TCI | TCB Component Identifier |
TCG | Trusted Computing Group |
TEE | Trusted Execution Environment |
TRNG | True Random Number Generator |
Architecture
Project Structure
openprot/
├── openprot/ # Main application
│ ├── src/
│ │ ├── lib.rs # Library code
│ │ └── main.rs # Binary entry point
│ └── Cargo.toml
├── xtask/ # Build automation
│ ├── src/
│ │ ├── main.rs # Task runner
│ │ ├── cargo_lock.rs # Cargo.lock management
│ │ └── docs.rs # Documentation generation
│ └── Cargo.toml
├── docs/ # Documentation source
├── .cargo/ # Cargo configuration
└── Cargo.toml # Workspace configuration
Components
Main Application (`openprot/`)
The main application provides...
Build System (`xtask/`)
The xtask system provides automated build tasks including:
- Building and testing
- Code formatting and linting
- Distribution creation
- Documentation generation
- Dependency management
Documentation (`docs/`)
Documentation is built using mdbook and includes:
- User guides
- Developer documentation
- API references
- Architecture documentation
Contributing
Development Setup
- Clone the repository
- Install dependencies:
cargo xtask check
- Run tests:
cargo xtask test
- Format code:
cargo xtask fmt
Code Style
- Use `cargo xtask fmt` to format code
- Run `cargo xtask clippy` to check for lints
- Ensure all tests pass with `cargo xtask test`
Documentation
- Update documentation in the `docs/` directory
- Build docs with `cargo xtask docs`
- Documentation is built with mdbook
Pull Requests
- Fork the repository
- Create a feature branch
- Make your changes
- Run the full test suite
- Submit a pull request
Issues
Please report issues on the GitHub issue tracker.
Design
Generic Digest Server Design Document
This document describes the design and architecture of a generic digest server for Hubris OS that supports both SPDM and PLDM protocol implementations.
Requirements
Primary Requirement
Enable SPDM and PLDM Protocol Support: The digest server must provide cryptographic hash services to support both SPDM (Security Protocol and Data Model) and PLDM (Platform Level Data Model) protocol implementations in Hubris OS.
Derived Requirements
R1: Algorithm Support
- R1.1: Support SHA-256 for basic SPDM operations and PLDM firmware integrity validation
- R1.2: Support SHA-384 for enhanced security profiles in both SPDM and PLDM
- R1.3: Support SHA-512 for maximum security assurance
- R1.4: Reject unsupported algorithms (e.g., SHA-3) with clear error codes
R2: Session Management
- R2.1: Support incremental hash computation for large certificate chains and firmware images
- R2.2: Support multiple concurrent digest sessions (hardware-dependent capacity)
- R2.3: Provide session isolation between different SPDM and PLDM protocol flows
- R2.4: Automatic session cleanup to prevent resource exhaustion
- R2.5: Session timeout mechanism for abandoned operations
R3: SPDM and PLDM Use Cases
- R3.1: Certificate chain verification (hash large X.509 certificate data)
- R3.2: Measurement verification (hash firmware measurement data)
- R3.3: Challenge-response authentication (compute transcript hashes)
- R3.4: Session key derivation (hash key exchange material)
- R3.5: Message authentication (hash SPDM message sequences)
- R3.6: PLDM firmware image integrity validation (hash received firmware chunks)
- R3.7: PLDM component image verification (validate assembled image against manifest digest)
- R3.8: PLDM signature verification support (hash image data for signature validation)
R4: Performance and Resource Constraints
- R4.1: Memory-efficient operation suitable for embedded systems
- R4.2: Zero-copy data processing using Hubris leased memory
- R4.3: Deterministic resource allocation (no dynamic allocation)
- R4.4: Bounded execution time for real-time guarantees
R5: Hardware Abstraction
- R5.1: Generic interface supporting any hardware digest accelerator
- R5.2: Mock implementation for testing and development
- R5.3: Type-safe hardware abstraction with compile-time verification
- R5.4: Consistent API regardless of underlying hardware
R6: Error Handling and Reliability
- R6.1: Comprehensive error reporting for SPDM protocol diagnostics
- R6.2: Graceful handling of hardware failures
- R6.3: Session state validation and corruption detection
- R6.4: Clear error propagation to SPDM layer
R7: Integration Requirements
- R7.1: Synchronous IPC interface compatible with Hubris task model
- R7.2: Idol-generated API stubs for type-safe inter-process communication
- R7.3: Integration with Hubris memory management and scheduling
- R7.4: No dependency on async runtime or futures
R8: Supervisor Integration Requirements
- R8.1: Configure appropriate task disposition (Restart recommended for production)
- R8.2: SPDM clients handle task generation changes transparently (no complex recovery logic needed)
- R8.3: Digest server fails fast on unrecoverable hardware errors rather than returning complex error states
- R8.4: Support debugging via jefe external interface during development
Design Overview
This digest server provides a generic implementation that can work with any device implementing the required digest traits from `openprot-hal-blocking`. The design supports both single-context and multi-context hardware through hardware-adaptive session management.
Architecture
System Context
```mermaid
graph LR
    subgraph "SPDM Client Task"
        SC[SPDM Client]
        SCV[• Certificate verification<br/>• Transcript hashing<br/>• Challenge-response<br/>• Key derivation]
    end
    subgraph "PLDM Client Task"
        PC[PLDM Firmware Update]
        PCV[• Image integrity validation<br/>• Component verification<br/>• Signature validation<br/>• Running digest computation]
    end
    subgraph "Digest Server"
        DS[ServerImpl<D>]
        DSV[• Session management<br/>• Generic implementation<br/>• Resource management<br/>• Error handling]
    end
    subgraph "Hardware Backend"
        HW[Hardware Device]
        HWV[• MockDigestDevice<br/>• Actual HW accelerator<br/>• Any device with traits]
    end
    SC ---|Synchronous<br/>IPC/Idol| DS
    PC ---|Synchronous<br/>IPC/Idol| DS
    DS ---|HAL Traits| HW
    SC -.-> SCV
    PC -.-> PCV
    DS -.-> DSV
    HW -.-> HWV
```
Component Architecture
ServerImpl<D>
├── Generic Type Parameter D
│   └── Trait Bounds: DigestInit<Sha2_256/384/512>
├── Session Management
│   ├── Static session storage (hardware-dependent capacity)
│   ├── Session lifecycle (init → update → finalize)
│   └── Automatic timeout and cleanup
└── Hardware Abstraction
    ├── Static dispatch (no runtime polymorphism)
    ├── Algorithm-specific methods
    └── Error translation layer
Data Flow
SPDM Client Request
↓
Idol-generated stub
↓
ServerImpl<D> method
↓
Session validation/allocation
↓
Hardware context management (save/restore)
↓
Direct hardware streaming
↓
Result processing
↓
Response to client
Hardware-Adaptive Implementation
Platform-Specific Trait Implementations
```rust
// Single-context hardware (ASPEED HACE) - context management happens in OpContext
impl DigestInit<Sha2_256> for Ast1060HashDevice {
    type OpContext<'a> = Ast1060DigestContext<'a> where Self: 'a;
    type Output = Digest<8>;

    fn init<'a>(&'a mut self, _: Sha2_256) -> Result<Self::OpContext<'a>, Self::Error> {
        // Direct hardware initialization - no session management needed
        Ok(Ast1060DigestContext::new_sha256(self))
    }
}

impl DigestOp for Ast1060DigestContext<'_> {
    type Output = Digest<8>;

    fn update(&mut self, data: &[u8]) -> Result<(), Self::Error> {
        // Direct streaming to hardware - blocking until complete
        self.hardware.stream_data(data)
    }

    fn finalize(self) -> Result<Self::Output, Self::Error> {
        // Complete and return result - hardware auto-resets
        self.hardware.finalize_sha256()
    }
}

// Multi-context hardware (hypothetical) - context switching hidden in traits
impl DigestInit<Sha2_256> for MultiContextDevice {
    type OpContext<'a> = MultiContextDigestContext<'a> where Self: 'a;
    type Output = Digest<8>;

    fn init<'a>(&'a mut self, _: Sha2_256) -> Result<Self::OpContext<'a>, Self::Error> {
        // Complex session allocation happens here, hidden from the server
        let context_id = self.allocate_hardware_context()?;
        Ok(MultiContextDigestContext::new(self, context_id))
    }
}

impl DigestOp for MultiContextDigestContext<'_> {
    type Output = Digest<8>;

    fn update(&mut self, data: &[u8]) -> Result<(), Self::Error> {
        // Context switching happens transparently here
        self.hardware.ensure_context_active(self.context_id)?;
        self.hardware.stream_data(data)
    }
}
```
Hardware-Specific Processing Patterns
Single-Context Hardware (ASPEED HACE Pattern)
```mermaid
sequenceDiagram
    participant C1 as SPDM Client
    participant C2 as PLDM Client
    participant DS as Digest Server
    participant HW as ASPEED HACE

    Note over C1,HW: Clients naturally serialize via blocking IPC
    C1->>DS: init_sha256()
    DS->>HW: Initialize SHA-256 (direct hardware access)
    HW-->>DS: Context initialized
    DS-->>C1: session_id = 1
    par Client 2 blocks waiting
        C2->>DS: init_sha384() (BLOCKS until C1 finishes)
    end
    C1->>DS: update(session_id=1, data_chunk_1)
    DS->>HW: Stream data directly to hardware
    HW->>HW: Process data incrementally
    HW-->>DS: Update complete
    DS-->>C1: Success
    C1->>DS: finalize_sha256(session_id=1)
    DS->>HW: Finalize computation
    HW->>HW: Complete hash calculation
    HW-->>DS: Final digest result
    DS-->>C1: SHA-256 digest
    Note over DS,HW: Hardware available for next client
    DS->>HW: Initialize SHA-384 for Client 2
    HW-->>DS: Context initialized
    DS-->>C2: session_id = 2 (C2 unblocks)
```
Multi-Context Hardware Pattern (Hypothetical)
```mermaid
sequenceDiagram
    participant C1 as SPDM Client
    participant C2 as PLDM Client
    participant DS as Digest Server
    participant HW as Multi-Context Hardware
    participant RAM as Context Storage

    Note over C1,RAM: Complex session management with context switching
    C1->>DS: init_sha256()
    DS->>HW: Initialize SHA-256 context
    DS->>DS: current_session = 0
    DS-->>C1: session_id = 1
    C1->>DS: update(session_id=1, data_chunk_1)
    DS->>HW: Stream data to active context
    HW-->>DS: Update complete
    DS-->>C1: Success
    C2->>DS: init_sha384()
    Note over DS,RAM: Context switching required
    DS->>RAM: Save session 0 context (SHA-256 state)
    DS->>HW: Initialize SHA-384 context
    DS->>DS: current_session = 1
    DS-->>C2: session_id = 2
    C2->>DS: update(session_id=2, data_chunk_2)
    DS->>HW: Stream data to active context
    HW-->>DS: Update complete
    DS-->>C2: Success
    C1->>DS: update(session_id=1, data_chunk_3)
    Note over DS,RAM: Switch back to session 0
    DS->>RAM: Save session 1 context (SHA-384 state)
    DS->>RAM: Restore session 0 context (SHA-256 state)
    DS->>HW: Load SHA-256 context to hardware
    DS->>DS: current_session = 0
    DS->>HW: Stream data to restored context
    HW-->>DS: Update complete
    DS-->>C1: Success
    C1->>DS: finalize_sha256(session_id=1)
    DS->>HW: Finalize computation
    HW-->>DS: Final digest result
    DS-->>C1: SHA-256 digest
    DS->>DS: current_session = None
```
IPC Interface Definition
The digest server exposes its functionality through a Hubris Idol IPC interface that provides both session-based streaming operations and one-shot convenience methods.
Idol Interface Specification
```
// digest.idol - Hubris IPC interface definition
Interface(
    name: "Digest",
    ops: {
        // Session-based streaming operations (enabled by owned API)
        "init_sha256": (
            args: {},
            reply: Result(
                ok: "u32", // Returns session ID for the digest context
                err: CLike("DigestError"),
            ),
        ),
        "init_sha384": (
            args: {},
            reply: Result(
                ok: "u32", // Returns session ID for the digest context
                err: CLike("DigestError"),
            ),
        ),
        "init_sha512": (
            args: {},
            reply: Result(
                ok: "u32", // Returns session ID for the digest context
                err: CLike("DigestError"),
            ),
        ),
        "update": (
            args: {
                "session_id": "u32",
                "len": "u32",
            },
            leases: {
                "data": (type: "[u8]", read: true, max_len: Some(1024)),
            },
            reply: Result(
                ok: "()",
                err: CLike("DigestError"),
            ),
        ),
        "finalize_sha256": (
            args: {
                "session_id": "u32",
            },
            leases: {
                "digest_out": (type: "[u32; 8]", write: true),
            },
            reply: Result(
                ok: "()",
                err: CLike("DigestError"),
            ),
        ),
        "finalize_sha384": (
            args: {
                "session_id": "u32",
            },
            leases: {
                "digest_out": (type: "[u32; 12]", write: true),
            },
            reply: Result(
                ok: "()",
                err: CLike("DigestError"),
            ),
        ),
        "finalize_sha512": (
            args: {
                "session_id": "u32",
            },
            leases: {
                "digest_out": (type: "[u32; 16]", write: true),
            },
            reply: Result(
                ok: "()",
                err: CLike("DigestError"),
            ),
        ),
        "reset": (
            args: {
                "session_id": "u32",
            },
            reply: Result(
                ok: "()",
                err: CLike("DigestError"),
            ),
        ),
        // One-shot convenience operations (using scoped API internally)
        "digest_oneshot_sha256": (
            args: {
                "len": "u32",
            },
            leases: {
                "data": (type: "[u8]", read: true, max_len: Some(1024)),
                "digest_out": (type: "[u32; 8]", write: true),
            },
            reply: Result(
                ok: "()",
                err: CLike("DigestError"),
            ),
        ),
        "digest_oneshot_sha384": (
            args: {
                "len": "u32",
            },
            leases: {
                "data": (type: "[u8]", read: true, max_len: Some(1024)),
                "digest_out": (type: "[u32; 12]", write: true),
            },
            reply: Result(
                ok: "()",
                err: CLike("DigestError"),
            ),
        ),
        "digest_oneshot_sha512": (
            args: {
                "len": "u32",
            },
            leases: {
                "data": (type: "[u8]", read: true, max_len: Some(1024)),
                "digest_out": (type: "[u32; 16]", write: true),
            },
            reply: Result(
                ok: "()",
                err: CLike("DigestError"),
            ),
        ),
    },
)
```
IPC Design Rationale
Session-Based Operations
- init_sha256/384/512(): Creates new session using owned API, returns session ID for storage
- update(session_id, data): Updates specific session using move-based context operations
- finalize_sha256/384/512(session_id): Completes session and recovers controller for reuse
- reset(session_id): Cancels session early and recovers controller
One-Shot Operations
- digest_oneshot_sha256/384/512(): Complete digest computation in single IPC call using scoped API
- Convenience methods: For simple use cases that don't need streaming
Zero-Copy Data Transfer
- Leased memory: All data transfer uses the Hubris leased memory system
- Read leases: Input data (`data`) is passed by reference, no copying
- Write leases: Output digests (`digest_out`) are written directly to client memory
- Bounded transfers: Maximum 1024 bytes per update for deterministic behavior
Type Safety
- Algorithm-specific finalize: `finalize_sha256` only works with SHA-256 sessions
- Sized output arrays: `[u32; 8]` for SHA-256, `[u32; 12]` for SHA-384, `[u32; 16]` for SHA-512
- Session validation: Invalid session IDs return `DigestError::InvalidSession`
IPC Usage Patterns
SPDM Certificate Verification (Streaming)
```rust
// Client code using generated Idol stubs
let digest = Digest::from(DIGEST_SERVER_TASK_ID);

let session_id = digest.init_sha256()?;
for chunk in certificate_data.chunks(1024) {
    digest.update(session_id, chunk.len() as u32, chunk)?;
}
let mut cert_hash = [0u32; 8];
digest.finalize_sha256(session_id, &mut cert_hash)?;
```
Simple Hash Computation (One-Shot)
```rust
// Client code for simple operations
let digest = Digest::from(DIGEST_SERVER_TASK_ID);

let mut hash_output = [0u32; 8];
digest.digest_oneshot_sha256(data.len() as u32, data, &mut hash_output)?;
```
Detailed Design
Session Model
Session Lifecycle
```
┌─────────┐  init_sha256/384/512()  ┌─────────┐
│  FREE   │ ──────────────────────→ │ ACTIVE  │
└─────────┘                         └─────────┘
     ↑                                   │
     │ finalize_sha256/384/512()         │ update(data)
     │ reset()                           │ (stream to hardware)
     │ timeout_cleanup()                 │
     └───────────────────────────────────┘
```
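The lifecycle above can be sketched as a small state machine; `Session` and its methods are illustrative, not the server's actual types.

```rust
// FREE ↔ ACTIVE lifecycle sketch: init allocates, update streams,
// finalize (or reset/timeout) returns the slot to FREE.
#[derive(Debug, PartialEq)]
enum Session {
    Free,
    Active { algorithm: &'static str, bytes_fed: usize },
}

impl Session {
    fn init(&mut self, algorithm: &'static str) -> Result<(), &'static str> {
        match self {
            Session::Free => {
                *self = Session::Active { algorithm, bytes_fed: 0 };
                Ok(())
            }
            _ => Err("session already active"),
        }
    }

    fn update(&mut self, data: &[u8]) -> Result<(), &'static str> {
        match self {
            Session::Active { bytes_fed, .. } => {
                *bytes_fed += data.len(); // would stream to hardware here
                Ok(())
            }
            _ => Err("no active session"),
        }
    }

    fn finalize(&mut self) -> Result<usize, &'static str> {
        // Consuming the state returns the slot to FREE in all cases.
        match std::mem::replace(self, Session::Free) {
            Session::Active { bytes_fed, .. } => Ok(bytes_fed),
            Session::Free => Err("no active session"),
        }
    }
}

fn main() {
    let mut s = Session::Free;
    s.init("sha256").unwrap();
    s.update(b"cert chain").unwrap();
    let fed = s.finalize().unwrap();
    assert_eq!(fed, 10);
    assert_eq!(s, Session::Free); // finalize returns the slot to FREE
    assert!(s.update(b"late").is_err()); // using a finalized session fails
}
```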
Hardware-Specific Session Management
Different hardware platforms have varying capabilities for concurrent session support:
```rust
// Platform-specific capability trait
pub trait DigestHardwareCapabilities {
    const MAX_CONCURRENT_SESSIONS: usize;
    const SUPPORTS_HARDWARE_CONTEXT_SWITCHING: bool;
}

// AST1060 implementation - single session, simple and efficient
impl DigestHardwareCapabilities for Ast1060HashDevice {
    const MAX_CONCURRENT_SESSIONS: usize = 1; // Work with hardware, not against it
    const SUPPORTS_HARDWARE_CONTEXT_SWITCHING: bool = false;
}

// Example hypothetical multi-context implementation
impl DigestHardwareCapabilities for HypotheticalMultiContextDevice {
    const MAX_CONCURRENT_SESSIONS: usize = 16; // Hardware-dependent capacity
    const SUPPORTS_HARDWARE_CONTEXT_SWITCHING: bool = true;
}

// Generic server implementation
pub struct ServerImpl<D: DigestHardwareCapabilities> {
    sessions: FnvIndexMap<u32, DigestSession, { D::MAX_CONCURRENT_SESSIONS }>,
    hardware: D,
    next_session_id: u32,
}

pub struct DigestSession {
    algorithm: SessionAlgorithm,
    timeout: Option<u64>,
    // Hardware-specific context data only if supported
}
```
Generic Hardware Abstraction with Platform-Adaptive Session Management
Trait Requirements
The server is generic over a type `D` where:

```rust
D: DigestInit<Sha2_256>
    + DigestInit<Sha2_384>
    + DigestInit<Sha2_512>
    + ErrorType
```
With the actual `openprot-hal-blocking` trait structure:

```rust
// Hardware device implements DigestInit for each algorithm
impl DigestInit<Sha2_256> for MyDigestDevice {
    type OpContext<'a> = MyDigestContext<'a> where Self: 'a;
    type Output = Digest<8>;

    fn init<'a>(&'a mut self, _: Sha2_256) -> Result<Self::OpContext<'a>, Self::Error> {
        // All hardware complexity (context management, save/restore) handled here
    }
}

// The context handles streaming operations
impl DigestOp for MyDigestContext<'_> {
    type Output = Digest<8>;

    fn update(&mut self, data: &[u8]) -> Result<(), Self::Error> {
        // Hardware-specific streaming implementation
        // Context switching (if needed) happens transparently
    }

    fn finalize(self) -> Result<Self::Output, Self::Error> {
        // Complete digest computation
        // Context cleanup happens automatically
    }
}
```
Hardware-Adaptive Architecture
- Single-Context Hardware: Direct operations, clients naturally serialize via blocking IPC
- Multi-Context Hardware: Native hardware session switching when supported
- Compile-time optimization: Session management code only included when needed
- Platform-specific limits: `MAX_CONCURRENT_SESSIONS` based on hardware capabilities
- Synchronous IPC alignment: Works naturally with Hubris blocking message passing
Concurrency Patterns by Hardware Type
Single-Context Hardware (ASPEED HACE):
Client A calls init_sha256() → Blocks until complete → Returns session_id
Client B calls init_sha384() → Blocks waiting for A to finish → Still blocked
Client A calls update(session_id) → Blocks until complete → Returns success
Client B calls update(session_id) → Still blocked waiting for A to finalize
Client A calls finalize() → Releases hardware → Client B can now proceed
Multi-Context Hardware (Hypothetical):
Client A calls init_sha256() → Creates session context → Returns immediately
Client B calls init_sha384() → Creates different context → Returns immediately
Client A calls update(session_id) → Uses session context → Returns immediately
Client B calls update(session_id) → Uses different context → Returns immediately
Session Management Flow (Hardware-Dependent)
Single-Context Hardware: Direct Operation → Hardware → Result
Multi-Context Hardware: Session Request → Hardware Context → Process → Save Context → Result
Static Dispatch Pattern
- Compile-time algorithm selection: No runtime algorithm switching
- Type safety: Associated type constraints ensure output size compatibility
- Zero-cost abstraction: No virtual function calls or dynamic dispatch
- Hardware flexibility: Any device implementing the traits can be used
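A minimal sketch of the static-dispatch point: the server is generic over its backend, so each instantiation is monomorphized and trait calls resolve at compile time with no vtable. `Backend`, `SoftBackend`, and `Server` are illustrative names, not the server's real types.

```rust
trait Backend {
    fn digest(&self, data: &[u8]) -> u32;
}

// A software stand-in for a hardware accelerator (simple polynomial hash).
struct SoftBackend;

impl Backend for SoftBackend {
    fn digest(&self, data: &[u8]) -> u32 {
        data.iter()
            .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u32))
    }
}

// Generic over D: a distinct, fully inlined copy exists per backend type.
struct Server<D: Backend> {
    device: D,
}

impl<D: Backend> Server<D> {
    fn handle_request(&self, data: &[u8]) -> u32 {
        self.device.digest(data) // direct call, no dynamic dispatch
    }
}

fn main() {
    let server = Server { device: SoftBackend };
    let d = server.handle_request(b"abc");
    // 31*(31*97 + 98) + 99 = 96354
    assert_eq!(d, 96354);
    println!("digest = {d}");
}
```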
Memory Management
Static Allocation Strategy (Hardware-Adaptive)
```rust
// Session storage sized based on hardware capabilities
static mut SESSION_STORAGE: [SessionData; D::MAX_CONCURRENT_SESSIONS] = [...];
```
- Hardware-aligned limits: Session count matches hardware capabilities
- Single-context optimization: No session overhead for simple hardware
- Multi-context support: Full session management when hardware supports it
- Deterministic memory usage: No dynamic allocation
- Real-time guarantees: Bounded memory access patterns
Hardware-Adaptive Data Flow
- Zero-copy IPC: Uses Hubris leased memory system
- Platform optimization: Direct operations for single-context hardware
- Session management: Only when hardware supports multiple contexts
- Bounded updates: Maximum 1024 bytes per update call (hardware limitation)
- Memory safety: All buffer accesses bounds-checked
- Synchronous semantics: Natural blocking behavior with Hubris IPC
Platform-Specific Processing
Single-Context: Client Request → Direct Hardware → Result → Client Response
Multi-Context: Client Request → Session Management → Hardware Context → Result → Client Response
Error Handling Strategy
Hardware-Adaptive Error Model
Hardware Layer Error → DigestError → RequestError<DigestError> → Client Response
Platform-Specific Error Categories
- Hardware failures: `DigestError::HardwareFailure` (all platforms)
- Session management: `DigestError::InvalidSession`, `DigestError::TooManySessions` (multi-context only)
- Input validation: `DigestError::InvalidInputLength` (hardware-specific limits)
- Algorithm support: `DigestError::UnsupportedAlgorithm` (capability-dependent)
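A sketch of how the error-translation layer might look: hardware-specific faults collapse into the client-facing `DigestError` via a `From` impl, while input validation keeps its own category. The `HwError` enum and `stream_to_hw` function are hypothetical.

```rust
#[derive(Debug, PartialEq)]
enum HwError {
    Busy,
    DmaFault,
}

#[derive(Debug, PartialEq)]
enum DigestError {
    HardwareFailure,
    InvalidSession,
    TooManySessions,
    InvalidInputLength,
}

impl From<HwError> for DigestError {
    fn from(_: HwError) -> Self {
        // Every low-level fault surfaces as one category to clients.
        DigestError::HardwareFailure
    }
}

// Stand-in for the hardware streaming path.
fn stream_to_hw(data: &[u8]) -> Result<(), HwError> {
    if data.len() > 1024 {
        return Err(HwError::DmaFault);
    }
    Ok(())
}

fn update(data: &[u8]) -> Result<(), DigestError> {
    if data.len() > 1024 {
        // Validate before touching hardware: input errors keep their own code.
        return Err(DigestError::InvalidInputLength);
    }
    stream_to_hw(data)?; // `?` applies the From conversion on failure
    Ok(())
}

fn main() {
    assert_eq!(update(&[0u8; 16]), Ok(()));
    assert_eq!(update(&[0u8; 2048]), Err(DigestError::InvalidInputLength));
}
```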
Hardware-Adaptive Session Architecture
Instead of imposing a complex context management layer, the digest server adapts to hardware capabilities:
```mermaid
graph TB
    subgraph "Single-Context Hardware (ASPEED HACE)"
        SC1[Client Request]
        SC2[Direct Hardware Operation]
        SC3[Immediate Response]
        SC1 --> SC2 --> SC3
    end
    subgraph "Multi-Context Hardware (Hypothetical)"
        MC1[Session Pool]
        MC2[Context Scheduler]
        MC3[Hardware Contexts]
        MC4[Session Management]
        MC1 --> MC2 --> MC3 --> MC4
    end
```
Hardware Capability Detection
The digest server adapts to different hardware capabilities through compile-time trait bounds:
```rust
pub trait DigestHardwareCapabilities {
    /// Hardware-dependent: 1 for single-context, 16+ for multi-context
    const MAX_CONCURRENT_SESSIONS: usize;
    const SUPPORTS_CONTEXT_SWITCHING: bool;
    const MAX_UPDATE_SIZE: usize;
}
```
Examples of hardware-specific session limits:
- ASPEED AST1060: `MAX_CONCURRENT_SESSIONS = 1` (single hardware context)
- Multi-context accelerators: `MAX_CONCURRENT_SESSIONS = 16` (or higher, based on hardware design)
- Software implementations: Can support many concurrent sessions, limited by memory
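A sketch of consulting the capability constant at allocation time. Note that the real single-context path blocks the caller via IPC rather than returning an error; refusal stands in for that here, and all names are illustrative.

```rust
trait DigestHardwareCapabilities {
    const MAX_CONCURRENT_SESSIONS: usize;
}

struct SingleContextHw; // AST1060-like: one hardware context
impl DigestHardwareCapabilities for SingleContextHw {
    const MAX_CONCURRENT_SESSIONS: usize = 1;
}

// Session bookkeeping bounded by the hardware's capability constant.
struct Sessions<D: DigestHardwareCapabilities> {
    active: Vec<u32>,
    next_id: u32,
    _hw: D,
}

impl<D: DigestHardwareCapabilities> Sessions<D> {
    fn new(hw: D) -> Self {
        Sessions { active: Vec::new(), next_id: 1, _hw: hw }
    }

    fn allocate(&mut self) -> Result<u32, &'static str> {
        if self.active.len() >= D::MAX_CONCURRENT_SESSIONS {
            return Err("TooManySessions"); // or block, on single-context HW
        }
        let id = self.next_id;
        self.next_id += 1;
        self.active.push(id);
        Ok(id)
    }

    fn release(&mut self, id: u32) {
        self.active.retain(|&s| s != id);
    }
}

fn main() {
    let mut s = Sessions::new(SingleContextHw);
    let id = s.allocate().unwrap();
    assert!(s.allocate().is_err()); // capacity 1: second init is refused
    s.release(id);
    assert!(s.allocate().is_ok()); // freed context can be reused
}
```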
Session Management Strategy
- Single-context platforms: Direct hardware operations, no session state
- Multi-context platforms: Full session management with context switching
- Compile-time optimization: Dead code elimination for unused features
- Context initialization: Performed when starting a new session
Clean Server Implementation
With proper trait encapsulation, the server implementation becomes much simpler:
```rust
impl<D> ServerImpl<D>
where
    D: DigestInit<Sha2_256> + DigestInit<Sha2_384> + DigestInit<Sha2_512> + ErrorType,
{
    fn update_session(&mut self, session_id: u32, data: &[u8]) -> Result<(), DigestError> {
        let session = self.get_session_mut(session_id)?;
        // Generic trait call - all hardware complexity hidden
        session.op_context.update(data)
            .map_err(|_| DigestError::HardwareFailure)?;
        Ok(())
    }

    fn finalize_session(&mut self, session_id: u32) -> Result<DigestOutput, DigestError> {
        let session = self.take_session(session_id)?;
        // Trait handles finalization and automatic cleanup
        session.op_context.finalize()
            .map_err(|_| DigestError::HardwareFailure)
    }
}
```
Hardware Complexity Encapsulation
- No save/restore methods: All context management hidden in trait implementations
- No platform-specific code: Server only calls generic trait methods
- Automatic optimization: Single-context hardware avoids unnecessary overhead
- Transparent complexity: Multi-context hardware handles switching internally
Concurrency Model
Session Isolation
- Each session operates independently
- No shared mutable state between sessions
- Session IDs provide access control
- Timeout mechanism prevents resource leaks
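The timeout mechanism can be sketched as a retain-based sweep driven by a timer tick; the tick counts and field names are illustrative, not the server's actual bookkeeping.

```rust
use std::collections::HashMap;

// Sessions idle for this many ticks are reclaimed (illustrative value).
const SESSION_TIMEOUT_TICKS: u64 = 10_000;

struct SessionState {
    last_activity: u64,
}

// Called on each timer tick: drop every session idle past the timeout,
// freeing its slot for reuse.
fn cleanup_expired(sessions: &mut HashMap<u32, SessionState>, now: u64) {
    sessions.retain(|_, s| now - s.last_activity < SESSION_TIMEOUT_TICKS);
}

fn main() {
    let mut sessions = HashMap::new();
    sessions.insert(1, SessionState { last_activity: 0 });
    sessions.insert(2, SessionState { last_activity: 9_500 });

    cleanup_expired(&mut sessions, 10_000);

    assert!(!sessions.contains_key(&1)); // idle 10,000 ticks: expired
    assert!(sessions.contains_key(&2)); // idle 500 ticks: kept
}
```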
SPDM and PLDM Integration Points
- SPDM Certificate Verification: Hash certificate chains incrementally
- SPDM Transcript Computation: Hash sequences of SPDM messages
- SPDM Challenge Processing: Compute authentication hashes
- SPDM Key Derivation: Hash key exchange material
- PLDM Firmware Integrity: Hash received firmware image chunks during transfer
- PLDM Component Validation: Verify assembled components against manifest digests
- PLDM Multi-Component Updates: Concurrent digest computation for multiple firmware components
Failure Scenarios
Session Management Failures
Session Exhaustion Scenarios
Single-Context Hardware (ASPEED HACE) - No Exhaustion Possible
```mermaid
sequenceDiagram
    participant S1 as SPDM Client 1
    participant S2 as SPDM Client 2
    participant DS as Digest Server
    participant HW as ASPEED HACE

    Note over DS,HW: Hardware only supports one active session
    S1->>DS: init_sha256()
    DS->>HW: Direct hardware initialization
    DS-->>S1: session_id = 1
    S2->>DS: init_sha384() (BLOCKS on IPC until S1 finishes)
    Note over S2: Client automatically waits - no error needed
    S1->>DS: finalize_sha256(session_id=1)
    DS->>HW: Complete and release hardware
    DS-->>S1: digest result
    Note over DS,HW: Hardware now available
    DS->>HW: Initialize SHA-384 for S2
    DS-->>S2: session_id = 2 (S2 unblocks)
```
Multi-Context Hardware (Hypothetical) - True Session Exhaustion
```mermaid
sequenceDiagram
    participant S1 as Client 1
    participant S2 as Client 9
    participant DS as Digest Server
    participant HW as Multi-Context Hardware

    Note over DS: Hardware capacity reached, all contexts active
    S2->>DS: init_sha256()
    DS->>DS: find_free_hardware_context()
    DS-->>S2: Error: TooManySessions
    Note over S2: Client must wait for a context to free up
    S2->>DS: init_sha256() (retry after delay)
    DS->>HW: Allocate available hardware context
    DS-->>S2: session_id = 9
```
Session Timeout Recovery
```mermaid
sequenceDiagram
    participant SC as SPDM Client
    participant DS as Digest Server
    participant T as Timer

    SC->>DS: init_sha256()
    DS-->>SC: session_id = 3
    Note over T: 10,000 ticks pass
    T->>DS: timer_tick
    DS->>DS: cleanup_expired_sessions()
    DS->>DS: session[3].timeout expired
    DS->>DS: session[3] = FREE
    SC->>DS: update(session_id=3, data)
    DS->>DS: validate_session(3)
    DS-->>SC: Error: InvalidSession
    Note over SC: Client must reinitialize
    SC->>DS: init_sha256()
    DS-->>SC: session_id = 3 (reused)
```
Hardware Failure Scenarios
Hardware Device Failure
```mermaid
flowchart TD
    A[SPDM/PLDM Client Request] --> B[Digest Server]
    B --> C{Hardware Available?}
    C -->|Yes| D[Call hardware.init]
    C -->|No| E[panic! - Hardware unavailable]
    D --> F{Hardware Response}
    F -->|Success| G[Process normally]
    F -->|Error| H[panic! - Hardware failure]
    G --> I[Return result to client]
    E --> J[Task fault → Jefe supervision]
    H --> J
    style E fill:#ffcccc
    style H fill:#ffcccc
    style J fill:#fff2cc
```
Resource Exhaustion Scenarios
Memory Pressure Handling
```mermaid
flowchart LR
    A[Large Data Update] --> B{Buffer Space Available?}
    B -->|Yes| C[Accept data into session buffer]
    B -->|No| D[Return InvalidInputLength]
    C --> E{Session Buffer Full?}
    E -->|No| F[Continue accepting updates]
    E -->|Yes| G[Client must finalize before more updates]
    D --> H[Client must use smaller chunks]
    G --> I[finalize_sha256/384/512]
    H --> J[Retry with smaller data]
    style D fill:#ffcccc
    style G fill:#fff2cc
    style H fill:#ccffcc
```
Session Lifecycle Error States
```mermaid
stateDiagram-v2
    [*] --> FREE
    FREE --> ACTIVE_SHA256: init_sha256() + hardware context init
    FREE --> ACTIVE_SHA384: init_sha384() + hardware context init
    FREE --> ACTIVE_SHA512: init_sha512() + hardware context init
    ACTIVE_SHA256 --> ACTIVE_SHA256: update(data) → stream to hardware
    ACTIVE_SHA384 --> ACTIVE_SHA384: update(data) → stream to hardware
    ACTIVE_SHA512 --> ACTIVE_SHA512: update(data) → stream to hardware
    ACTIVE_SHA256 --> FREE: finalize_sha256() → hardware result
    ACTIVE_SHA384 --> FREE: finalize_sha384() → hardware result
    ACTIVE_SHA512 --> FREE: finalize_sha512() → hardware result
    ACTIVE_SHA256 --> FREE: reset() + context cleanup
    ACTIVE_SHA384 --> FREE: reset() + context cleanup
    ACTIVE_SHA512 --> FREE: reset() + context cleanup
    ACTIVE_SHA256 --> FREE: timeout + context cleanup
    ACTIVE_SHA384 --> FREE: timeout + context cleanup
    ACTIVE_SHA512 --> FREE: timeout + context cleanup
    state ERROR_STATES {
        [*] --> InvalidSession: Wrong session ID
        [*] --> WrongAlgorithm: finalize_sha384() on SHA256 session
        [*] --> ContextSwitchError: Hardware context save/restore failure
        [*] --> HardwareError: Hardware streaming failure
    }
    ACTIVE_SHA256 --> ERROR_STATES: Error conditions
    ACTIVE_SHA384 --> ERROR_STATES: Error conditions
    ACTIVE_SHA512 --> ERROR_STATES: Error conditions
```
SPDM Protocol Impact Analysis
Certificate Verification Failure Recovery
Single-Context Hardware (ASPEED HACE) - No Session Exhaustion
```mermaid
sequenceDiagram
    participant SPDM as SPDM Protocol
    participant DS as Digest Server
    participant HW as ASPEED HACE

    SPDM->>DS: verify_certificate_chain()
    DS->>HW: Direct hardware operation (blocks until complete)
    HW-->>DS: Certificate hash result
    DS-->>SPDM: Success
    Note over SPDM: No session management complexity needed
```
Multi-Context Hardware (Hypothetical) - True Session Management
```mermaid
sequenceDiagram
    participant SPDM as SPDM Protocol
    participant DS as Digest Server
    participant HW as Multi-Context Hardware

    SPDM->>DS: verify_certificate_chain()
    alt Hardware context available
        DS->>HW: Allocate context and process
        HW-->>DS: Certificate hash result
        DS-->>SPDM: Success
    else All contexts busy
        DS-->>SPDM: Error: TooManySessions
        Note over SPDM: Client retry logic or wait
        SPDM->>DS: verify_certificate_chain() (retry)
        DS-->>SPDM: Success (context now available)
    end
```
Transcript Hash Failure Impact
```mermaid
flowchart TD
    A[SPDM Message Exchange] --> B[Compute Transcript Hash]
    B --> C{Digest Server Available?}
    C -->|Yes| D[Normal transcript computation]
    C -->|No| E[Digest server failure]
    E --> F{Failure Type}
    F -->|Session Exhausted| G[Retry with backoff]
    F -->|Hardware Failure| H[Abort authentication]
    F -->|Timeout| I[Reinitialize session]
    G --> J{Retry Successful?}
    J -->|Yes| D
    J -->|No| K[Authentication failure]
    H --> K
    I --> L{Reinit Successful?}
    L -->|Yes| D
    L -->|No| K
    D --> M[Continue SPDM protocol]
    K --> N[Report to security policy]
    style E fill:#ffcccc
    style K fill:#ff9999
    style N fill:#ffcccc
```
Failure Recovery Strategies
Error Propagation Chain
```mermaid
flowchart LR
    HW[Hardware Layer] -->|Any Error| PANIC[Task Panic]
    DS[Digest Server] -->|Recoverable DigestError| RE[RequestError wrapper]
    RE -->|IPC| CLIENTS[SPDM/PLDM Clients]
    CLIENTS -->|Simple Retry| POL[Security Policy]
    PANIC -->|Task Fault| JEFE[Jefe Supervisor]
    JEFE -->|Task Restart| DS_NEW[Fresh Digest Server]
    DS_NEW -->|Next IPC| CLIENTS

    subgraph "Recoverable Error Types"
        E1[InvalidSession]
        E2[TooManySessions]
        E3[InvalidInputLength]
    end

    subgraph "Simple Client Recovery"
        R1[Session Cleanup]
        R2[Retry with Backoff]
        R3[Use One-shot API]
        R4[Authentication Failure]
    end

    DS --> E1
    DS --> E2
    DS --> E3
    CLIENTS --> R1
    CLIENTS --> R2
    CLIENTS --> R3
    CLIENTS --> R4

    style PANIC fill:#ffcccc
    style DS_NEW fill:#ccffcc
```
System-Level Failure Handling
```mermaid
graph TB
    subgraph "Digest Server Internal Failures"
        F1[Session Exhaustion]
        F2[Recoverable Hardware Failure]
        F3[Input Validation Errors]
    end

    subgraph "Task-Level Failures"
        T1[Unrecoverable Hardware Failure]
        T2[Memory Corruption]
        T3[Syscall Faults]
        T4[Explicit Panics]
    end

    subgraph "SPDM Client Responses"
        S1[Retry with Backoff]
        S2[Fallback to One-shot]
        S3[Graceful Degradation]
        S4[Abort Authentication]
    end

    subgraph "Jefe Supervisor Actions"
        J1[Task Restart - Restart Disposition]
        J2[Hold for Debug - Hold Disposition]
        J3[Log Fault Information]
        J4[External Debug Interface]
    end

    subgraph "System-Level Responses"
        R1[Continue with Fresh Task Instance]
        R2[Debug Analysis Mode]
        R3[System Reboot - Jefe Fault]
    end

    F1 --> S1
    F2 --> S1
    F3 --> S4
    T1 --> J1
    T2 --> J1
    T3 --> J1
    T4 --> J1
    J1 --> R1
    J2 --> R2
    S1 --> R1
    S2 --> R1
    S3 --> R1
    R2 --> R3
```
Supervisor Integration and System-Level Failure Handling
Jefe Supervisor Role
The digest server operates under the supervision of Hubris OS's supervisor task ("jefe"), which provides system-level failure management beyond the server's internal error handling.
Supervisor Architecture
```mermaid
graph TB
    subgraph "Supervisor Domain (Priority 0)"
        JEFE[Jefe Supervisor Task]
        JEFE_FEATURES[• Fault notification handling<br/>• Task restart decisions<br/>• Debugging interface<br/>• System restart capability]
    end

    subgraph "Application Domain"
        DS[Digest Server]
        SPDM[SPDM Client]
        OTHER[Other Tasks]
    end

    KERNEL[Hubris Kernel] -->|Fault Notifications| JEFE
    JEFE -->|reinit_task| KERNEL
    JEFE -->|system_restart| KERNEL
    DS -.->|Task Fault| KERNEL
    SPDM -.->|Task Fault| KERNEL
    OTHER -.->|Task Fault| KERNEL
    JEFE -.-> JEFE_FEATURES
```
Task Disposition Management
Each task, including the digest server, has a configured disposition that determines jefe's response to failures:
- Restart Disposition: Automatic recovery via `kipc::reinit_task()`
- Hold Disposition: Task remains faulted for debugging inspection
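The disposition-to-action mapping can be sketched as a small decision function. This is an illustrative model only; `Disposition` and `SupervisorAction` are assumed names for this sketch, not the actual Hubris jefe types.

```rust
// Hypothetical model of the supervisor's response to a task fault.
// (Assumption: these enums are illustrative, not the real jefe implementation.)
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Disposition {
    Restart, // reinitialize the task automatically
    Hold,    // leave the task faulted for debugging
}

#[derive(Debug, PartialEq, Eq)]
pub enum SupervisorAction {
    ReinitTask,   // corresponds to kipc::reinit_task()
    HoldForDebug, // task stays faulted; debugger can inspect it
}

/// Decide what the supervisor does when a supervised task faults.
pub fn on_task_fault(disposition: Disposition) -> SupervisorAction {
    match disposition {
        Disposition::Restart => SupervisorAction::ReinitTask,
        Disposition::Hold => SupervisorAction::HoldForDebug,
    }
}
```

In production the digest server would typically be configured with the Restart disposition, so a fault always resolves to a fresh task instance.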
Failure Escalation Hierarchy
```mermaid
sequenceDiagram
    participant HW as Hardware
    participant DS as Digest Server
    participant SPDM as SPDM Client
    participant K as Kernel
    participant JEFE as Jefe Supervisor

    Note over DS: Fail immediately on any hardware failure
    HW->>DS: Hardware fault
    DS->>DS: panic!("Hardware failure detected")
    DS->>K: Task fault occurs
    K->>JEFE: Fault notification (bit 0)
    JEFE->>K: find_faulted_task()
    K-->>JEFE: task_index (digest server)

    alt Restart disposition (production)
        JEFE->>K: reinit_task(digest_server, true)
        K->>DS: Task reinitialized with fresh hardware state
        Note over SPDM: Next IPC gets fresh task, no special handling needed
    else Hold disposition (debug)
        JEFE->>JEFE: Mark holding_fault = true
        Note over DS: Task remains faulted for debugging
        Note over SPDM: IPC returns generation mismatch error
    end
```
System Failure Categories and Responses
Recoverable Failures (Handled by Digest Server)
- Session Management: `TooManySessions`, `InvalidSession` → Return error to client
- Input Validation: `InvalidInputLength` → Return error to client
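Because these errors come back to the client rather than crashing the server, the client can wrap its calls in a bounded retry loop. A minimal, hypothetical sketch; the `with_retry` helper is illustrative, not part of the actual SPDM client code:

```rust
// Illustrative bounded-retry helper (assumption: not real client code).
// A real Hubris client would also sleep for a backoff interval between attempts.
pub fn with_retry<T, E>(
    max_attempts: u32,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut last_err = None;
    for _ in 0..max_attempts {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => last_err = Some(e), // recoverable error: remember and retry
        }
    }
    // All attempts exhausted; surface the last error to the caller.
    Err(last_err.expect("max_attempts must be non-zero"))
}
```

A client would treat `TooManySessions` as retryable and treat hardware-level failures as fatal to the current authentication attempt.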
Task-Level Failures (Handled by Jefe)
- Any Hardware Failure: Hardware errors of any kind → Task panic → Jefe restart
- Hardware Resource Exhaustion: Hardware cannot allocate resources → Task panic → Jefe restart
- Memory Corruption: Stack overflow, heap corruption → Task fault → Jefe restart
- Syscall Faults: Invalid kernel IPC usage → Task fault → Jefe restart
- Explicit Panics: `panic!()` in digest server code → Task fault → Jefe restart
System-Level Failures (Handled by Kernel)
- Supervisor Fault: Jefe task failure → System reboot
- Kernel Panic: Critical kernel failure → System reset
- Watchdog Timeout: System hang detection → Hardware reset
Key Design Principle: The digest server fails immediately on any hardware error without attempting recovery. This greatly simplifies the implementation and ensures consistent system behavior through jefe's supervision.
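The fail-fast rule can be captured in a small helper that converts any hardware error into a panic, which the kernel then reports to jefe as a task fault. This is an illustrative sketch; `HwError` and `hw_result_or_panic` are assumed names, not the actual server code:

```rust
// Illustrative fail-fast wrapper (assumption: HwError is a stand-in type).
#[derive(Debug)]
pub enum HwError {
    Timeout,
    Fault,
}

/// Unwrap a hardware result, panicking on any error.
/// In Hubris, the panic becomes a task fault that jefe handles
/// according to the task's disposition (restart or hold).
pub fn hw_result_or_panic<T>(res: Result<T, HwError>) -> T {
    match res {
        Ok(v) => v,
        Err(e) => panic!("Hardware failure detected: {:?}", e),
    }
}
```

The point of the pattern is that no partially failed hardware state ever leaks back to clients: either the operation succeeded, or the server is replaced wholesale with a fresh instance.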
External Debugging Interface
Jefe provides an external interface for debugging digest server failures:
```rust
// External control commands available via debugger (Humility)
enum JefeRequest {
    Hold,    // Stop automatic restart of digest server
    Start,   // Manually restart digest server
    Release, // Resume automatic restart behavior
    Fault,   // Force digest server to fault for testing
}
```
This enables development workflows like:
- Hold faulting server: Examine failure state without automatic restart
- Analyze dump data: Extract task memory and register state
- Test recovery: Manually trigger restart after fixes
- Fault injection: Test SPDM client resilience
Integration Requirements Update
R8: Supervisor Integration Requirements
- R8.1: Configure appropriate task disposition (Restart recommended for production)
- R8.2: SPDM clients handle task generation changes transparently (no complex recovery logic needed)
- R8.3: Digest server fails fast on unrecoverable hardware errors rather than returning complex error states
- R8.4: Support debugging via jefe external interface during development
SPDM Integration Examples
Certificate Chain Verification (Requirement R3.1)
```rust
// SPDM task verifying a certificate chain
fn verify_certificate_chain(&mut self, cert_chain: &[u8]) -> Result<bool, SpdmError> {
    let digest = Digest::from(DIGEST_SERVER_TASK_ID);

    // Create session for certificate hash (R2.1: incremental computation)
    let session_id = digest.init_sha256()?; // R1.1: SHA-256 support

    // Process certificate data incrementally (R4.2: zero-copy processing)
    for chunk in cert_chain.chunks(512) {
        digest.update(session_id, chunk.len() as u32, chunk)?;
    }

    // Get final certificate hash
    let mut cert_hash = [0u32; 8];
    digest.finalize_sha256(session_id, &mut cert_hash)?;

    // Verify against policy
    self.verify_hash_against_policy(&cert_hash)
}
```
SPDM Transcript Hash Computation (Requirement R3.3)
```rust
// Computing hash of SPDM message sequence for authentication
fn compute_transcript_hash(&mut self, messages: &[SpdmMessage]) -> Result<[u32; 8], SpdmError> {
    let digest = Digest::from(DIGEST_SERVER_TASK_ID);
    let session_id = digest.init_sha256()?; // R2.3: session isolation

    // Hash all messages in the SPDM transcript (R3.5: message authentication)
    for msg in messages {
        let msg_bytes = msg.serialize()?;
        digest.update(session_id, msg_bytes.len() as u32, &msg_bytes)?;
    }

    let mut transcript_hash = [0u32; 8];
    digest.finalize_sha256(session_id, &mut transcript_hash)?; // R7.1: synchronous IPC
    Ok(transcript_hash)
}
```
Sequential SPDM Operations (Requirement R2.1)
```rust
// SPDM task performing sequential operations using incremental hashing
impl SpdmResponder {
    fn handle_certificate_and_transcript(
        &mut self,
        cert_data: &[u8],
        messages: &[SpdmMessage],
    ) -> Result<(), SpdmError> {
        let digest = Digest::from(DIGEST_SERVER_TASK_ID);

        // Operation 1: Certificate verification (R2.1: incremental computation)
        let cert_session = digest.init_sha256()?; // R1.1: SHA-256 support

        // Process certificate incrementally
        for chunk in cert_data.chunks(512) {
            digest.update(cert_session, chunk.len() as u32, chunk)?;
        }
        let mut cert_hash = [0u32; 8];
        digest.finalize_sha256(cert_session, &mut cert_hash)?;

        // Operation 2: Transcript hash computation (sequential, after cert verification)
        let transcript_session = digest.init_sha256()?; // R2.3: new isolated session

        // Hash all SPDM messages in sequence
        for msg in messages {
            let msg_bytes = msg.serialize()?;
            digest.update(transcript_session, msg_bytes.len() as u32, &msg_bytes)?;
        }
        let mut transcript_hash = [0u32; 8];
        digest.finalize_sha256(transcript_session, &mut transcript_hash)?;

        // Use both hashes for SPDM protocol
        self.process_verification_results(&cert_hash, &transcript_hash)
    }
}
```
PLDM Integration Examples
PLDM Firmware Image Integrity Validation (Requirement R3.6)
```rust
// PLDM task validating received firmware chunks
fn validate_firmware_image(
    &mut self,
    image_chunks: &[&[u8]],
    expected_digest: &[u32; 8],
) -> Result<bool, PldmError> {
    let digest = Digest::from(DIGEST_SERVER_TASK_ID);

    // Create session for running digest computation (R2.1: incremental computation)
    let session_id = digest.init_sha256()?; // R1.1: SHA-256 commonly used in PLDM

    // Process firmware image incrementally as chunks are received (R4.2: zero-copy processing)
    for chunk in image_chunks {
        digest.update(session_id, chunk.len() as u32, chunk)?;
    }

    // Get final image digest
    let mut computed_digest = [0u32; 8];
    digest.finalize_sha256(session_id, &mut computed_digest)?;

    // Compare with manifest digest
    Ok(computed_digest == *expected_digest)
}
```
PLDM Component Verification During Transfer (Requirement R3.7)
```rust
// PLDM task computing running digest during TransferFirmware
fn transfer_firmware_with_validation(&mut self, component_id: u16) -> Result<(), PldmError> {
    let digest = Digest::from(DIGEST_SERVER_TASK_ID);

    // Initialize digest session for this component transfer (R2.3: session isolation)
    let session_id = digest.init_sha384()?; // R1.2: SHA-384 for enhanced security

    // Store session for this component transfer
    self.component_sessions.insert(component_id, session_id);

    // Firmware chunks will be processed via update() calls as they arrive
    // This enables real-time validation during transfer rather than after
    Ok(())
}

fn process_firmware_chunk(&mut self, component_id: u16, chunk: &[u8]) -> Result<(), PldmError> {
    let digest = Digest::from(DIGEST_SERVER_TASK_ID);

    // Retrieve session for this component
    let session_id = self
        .component_sessions
        .get(&component_id)
        .ok_or(PldmError::InvalidComponent)?;

    // Add chunk to running digest (R3.6: firmware image integrity)
    digest.update(*session_id, chunk.len() as u32, chunk)?;
    Ok(())
}
```
PLDM Multi-Component Concurrent Updates (Requirement R2.2)
```rust
// PLDM task handling multiple concurrent firmware updates
impl PldmFirmwareUpdate {
    fn handle_concurrent_updates(&mut self) -> Result<(), PldmError> {
        let digest = Digest::from(DIGEST_SERVER_TASK_ID);

        // Component 1: Main firmware using SHA-256
        let main_fw_session = digest.init_sha256()?;

        // Component 2: Boot loader using SHA-384
        let bootloader_session = digest.init_sha384()?; // R1.2: SHA-384 support

        // Component 3: FPGA bitstream using SHA-512
        let fpga_session = digest.init_sha512()?; // R1.3: SHA-512 support

        // All components can be updated concurrently (hardware-dependent capacity - R2.2)
        // Each maintains independent digest state (R2.3: isolation)

        // Store sessions for component tracking
        self.component_sessions.insert(MAIN_FW_COMPONENT, main_fw_session);
        self.component_sessions.insert(BOOTLOADER_COMPONENT, bootloader_session);
        self.component_sessions.insert(FPGA_COMPONENT, fpga_session);
        Ok(())
    }
}
```
Requirements Validation
✅ Requirements Satisfied
Requirement | Status | Implementation |
---|---|---|
R1.1 SHA-256 support | ✅ | init_sha256() , finalize_sha256() with hardware context |
R1.2 SHA-384 support | ✅ | init_sha384() , finalize_sha384() with hardware context |
R1.3 SHA-512 support | ✅ | init_sha512() , finalize_sha512() with hardware context |
R1.4 Reject unsupported algorithms | ✅ | SHA-3 functions return UnsupportedAlgorithm |
R2.1 Incremental hash computation | ✅ | True streaming via update_hardware_context() |
R2.2 Multiple concurrent sessions | ✅ | Hardware-dependent capacity with context switching |
R2.3 Session isolation | ✅ | Independent hardware contexts in non-cacheable RAM |
R2.4 Automatic cleanup | ✅ | cleanup_expired_sessions() with context cleanup |
R2.5 Session timeout | ✅ | SESSION_TIMEOUT_TICKS with hardware context release |
R3.1-R3.5 SPDM use cases | ✅ | All supported via streaming session-based API |
R3.6-R3.8 PLDM use cases | ✅ | Firmware validation, component verification, streaming support |
R4.1 Memory efficient | ✅ | Static allocation, hardware context simulation |
R4.2 Zero-copy processing | ✅ | Direct streaming to hardware, no session buffering |
R4.3 Deterministic allocation | ✅ | No dynamic memory allocation |
R4.4 Bounded execution | ✅ | Fixed context switch costs, predictable timing |
R5.1 Generic hardware interface | ✅ | ServerImpl<D> with context management traits |
R5.2 Mock implementation | ✅ | MockDigestDevice with context simulation |
R5.3 Type-safe abstraction | ✅ | Associated type constraints + context safety |
R5.4 Consistent API | ✅ | Same streaming interface regardless of hardware |
R6.1 Comprehensive errors | ✅ | Full DigestError enumeration + context errors |
R6.2 Hardware failure handling | ✅ | HardwareFailure error propagation + context cleanup |
R6.3 Session state validation | ✅ | validate_session() + context state checks |
R6.4 Clear error propagation | ✅ | RequestError<DigestError> wrapper |
R7.1 Synchronous IPC | ✅ | No async/futures dependencies |
R7.2 Idol-generated stubs | ✅ | Type-safe IPC interface |
R7.3 Hubris integration | ✅ | Uses userlib, leased memory |
R7.4 No async runtime | ✅ | Pure synchronous implementation |
R8.1 Task disposition configuration | ✅ | Configured in app.toml |
R8.2 Transparent task generation handling | ✅ | SPDM clients get fresh task transparently |
R8.3 Fail-fast hardware error handling | ✅ | Task panic on unrecoverable hardware errors |
R8.4 Debugging support | ✅ | Jefe external interface available |
Generic Design Summary
The `ServerImpl<D>` struct is generic over any device `D` that implements the required digest and reset traits (see the Usage Example below).
Key Features
- True Hardware Streaming: Data flows directly to hardware contexts with proper save/restore
- Context Management: Multiple sessions share hardware via non-cacheable RAM context switching
- Type Safety: Associated type constraints ensure digest output sizes match expectations
- Zero Runtime Cost: Uses static dispatch for optimal performance
- Memory Efficient: Static session storage with hardware context simulation
- Concurrent Sessions: Hardware-dependent concurrent digest operations with automatic context switching
Usage Example
To use with a custom hardware device that supports context management:
```rust
// Your hardware device must implement the required traits
struct MyDigestDevice {
    // Hardware-specific context management fields
    current_context: Option<DigestContext>,
    context_save_addr: *mut u8, // Non-cacheable RAM base
}

impl DigestInit<Sha2_256> for MyDigestDevice {
    type Output = Digest<8>;

    fn init(&mut self, _: Sha2_256) -> Result<DigestContext, HardwareError> {
        // Initialize hardware registers for SHA-256
        // Set up context for streaming operations
        Ok(DigestContext::new_sha256())
    }
}

impl DigestInit<Sha2_384> for MyDigestDevice {
    type Output = Digest<12>;
    // Similar implementation for SHA-384
}

impl DigestInit<Sha2_512> for MyDigestDevice {
    type Output = Digest<16>;
    // Similar implementation for SHA-512
}

impl DigestCtrlReset for MyDigestDevice {
    fn reset(&mut self) -> Result<(), HardwareError> {
        // Reset hardware to clean state
        // Clear any active contexts
        Ok(())
    }
}

// Context management methods (hardware-specific)
impl MyDigestDevice {
    fn save_context_to_ram(&mut self, session_id: usize) -> Result<(), HardwareError> {
        // Save current hardware context to non-cacheable RAM
        // Hardware-specific register read and memory write operations
    }

    fn restore_context_from_ram(&mut self, session_id: usize) -> Result<(), HardwareError> {
        // Restore session context from non-cacheable RAM to hardware
        // Hardware-specific memory read and register write operations
    }
}

// Then use it with the streaming server
let server = ServerImpl::new(MyDigestDevice::new());
```
Implementation Status and Development Notes
Critical Findings and Resolutions
Trait Lifetime Incompatibility with Session-Based Operations - RESOLVED
During implementation, a fundamental incompatibility was discovered between the `openprot-hal-blocking` digest traits and the session-based streaming operations described in this design document. This issue has been resolved through the implementation of a dual API structure with owned context variants.
The Original Problem
The `openprot-hal-blocking` digest traits were originally designed for scoped operations, but the digest server API expected persistent sessions. These requirements were fundamentally incompatible due to lifetime constraints.
Root Cause: Lifetime Constraints in Scoped API
The original scoped trait definition created lifetime constraints:
```rust
pub trait DigestInit<T: DigestAlgorithm>: ErrorType {
    type OpContext<'a>: DigestOp<Output = Self::Output>
    where
        Self: 'a;

    fn init(&mut self, init_params: T) -> Result<Self::OpContext<'_>, Self::Error>;
}
```
The `OpContext<'a>` had a lifetime tied to `&'a mut self`, meaning:
- Context could not outlive the function call that created it
- Context could not be stored in a separate struct
- Context could not persist across IPC boundaries
- Sessions could not maintain persistent state between operations
The Solution: Dual API with Move-Based Resource Management
The incompatibility has been completely resolved through implementation of a dual API structure:
1. Scoped API (Original) - For simple, one-shot operations:
```rust
pub mod scoped {
    pub trait DigestInit<T: DigestAlgorithm>: ErrorType {
        type OpContext<'a>: DigestOp<Output = Self::Output>
        where
            Self: 'a;

        fn init<'a>(&'a mut self, init_params: T) -> Result<Self::OpContext<'a>, Self::Error>;
    }
}
```
2. Owned API (New) - For session-based, streaming operations:
```rust
pub mod owned {
    pub trait DigestInit<T: DigestAlgorithm>: ErrorType {
        type OwnedContext: DigestOp<Output = Self::Output>;

        fn init_owned(&mut self, init_params: T) -> Result<Self::OwnedContext, Self::Error>;
    }

    pub trait DigestOp: ErrorType {
        type Output;
        type Controller;

        fn update(&mut self, data: &[u8]) -> Result<(), Self::Error>;
        fn finalize(self) -> Result<Self::Output, Self::Error>;
        fn cancel(self) -> Self::Controller;
    }
}
```
How the Owned API Enables Sessions
The owned API uses move-based resource management to solve the lifetime problem:
```rust
// ✅ NOW POSSIBLE: Digest server storing an owned context between IPC calls
use openprot_hal_blocking::digest::owned::{DigestInit, DigestOp};

struct DigestServer<H, C> {
    controller: H,             // Hardware controller
    active_session: Option<C>, // Single active session
}

impl<H, C> DigestServer<H, C>
where
    H: DigestInit<Sha2_256, OwnedContext = C>,
    C: DigestOp<Output = Digest<8>>,
{
    fn init_session(&mut self) -> Result<(), Error> {
        if self.active_session.is_some() {
            return Err(Error::Busy);
        }
        let context = self.controller.init_owned(Sha2_256)?; // ✅ Owned context
        self.active_session = Some(context); // ✅ Store in server
        Ok(())
    }

    fn update_session(&mut self, data: &[u8]) -> Result<(), Error> {
        let context = self.active_session.as_mut().ok_or(Error::NoSession)?;
        context.update(data)?; // ✅ Incremental update on the stored context
        Ok(())
    }

    fn finalize_session(&mut self) -> Result<Digest<8>, Error> {
        let context = self.active_session.take().ok_or(Error::NoSession)?;
        let digest = context.finalize()?; // ✅ Consumes the context
        Ok(digest)
    }
}
```
Key Benefits of the Move-Based Solution
- True Streaming Support: Contexts can be stored and updated incrementally
- Session Isolation: Each session owns its context independently
- Resource Recovery: `cancel()` method allows controller recovery
- Rust Ownership Safety: Move semantics prevent use-after-finalize
- Backward Compatibility: Scoped API remains unchanged for simple use cases
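The use-after-finalize guarantee comes directly from Rust's move semantics: because `finalize(self)` consumes the context, the compiler rejects any later use of it. A standalone mock illustrating the pattern (`MockContext` is a toy stand-in for illustration, not the real HAL type):

```rust
// Toy owned context (assumption: illustrative only, not a real digest).
pub struct MockContext {
    state: u64,
}

impl MockContext {
    pub fn new() -> Self {
        MockContext { state: 0 }
    }

    // Incremental update: borrows the stored context mutably.
    pub fn update(&mut self, data: &[u8]) {
        for &b in data {
            self.state = self.state.wrapping_mul(31).wrapping_add(b as u64);
        }
    }

    // Consumes self: the context cannot be touched after finalize.
    pub fn finalize(self) -> u64 {
        self.state
    }
}
```

Feeding the same bytes in one chunk or in several produces the same result, which is exactly the property the session-based streaming API relies on; and `ctx.update(..)` after `ctx.finalize()` simply does not compile.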
Implementation Examples
Session-Based Streaming (Now Possible):
```rust
// SPDM certificate chain verification with streaming
let session_id = digest_server.init_sha256()?;
for cert_chunk in certificate_chain.chunks(1024) {
    digest_server.update(session_id, cert_chunk)?;
}
let cert_digest = digest_server.finalize_sha256(session_id)?;
```
One-Shot Operations (Still Supported):
```rust
// Simple hash computation using scoped API
let digest = digest_device.compute_sha256(complete_data)?;
```
Current Implementation Status
The dual API solution is fully implemented and working:
- ✅ Scoped API: Original lifetime-constrained API for simple operations
- ✅ Owned API: New move-based API enabling persistent sessions
- ✅ Mock Implementation: Both APIs implemented in baremetal mock platform
- ✅ Comprehensive Testing: Session storage patterns validated
- ✅ Documentation: Complete analysis comparing both approaches
Architectural Resolution
The dual API approach resolves all original limitations:
- ✅ Session-based streaming is now possible with the owned API
- ✅ Both one-shot and streaming operations supported via appropriate API choice
- ✅ Design document architecture is now implementable using owned contexts
- ✅ Streaming large data sets fully supported with persistent session state
This demonstrates how API design evolution can solve fundamental architectural constraints while maintaining backward compatibility. The move-based resource management pattern provides the persistent contexts needed for server applications while preserving the simplicity of scoped operations for basic use cases.
Converting Rust HAL Traits to Idol Interfaces: A Practical Guide
Overview
This guide explains how to transform Rust Hardware Abstraction Layer (HAL) traits into Idol interface definitions for use in Hubris-based systems. Based on practical experience converting the digest traits, this guide covers the key patterns, challenges, and solutions.
Table of Contents
- Understanding the Transformation
- Core Design Patterns
- Step-by-Step Conversion Process
- Common Challenges and Solutions
- Type System Considerations
- Error Handling Patterns
- Performance Considerations
- Testing and Validation
Understanding the Transformation
From Trait-Based to IPC-Based
Rust HAL traits provide compile-time polymorphism with:
- Associated types
- Lifetime parameters
- Generic parameters
- Zero-cost abstractions
- Direct memory access
Idol interfaces provide runtime communication with:
- Concrete types
- Message passing
- Serialization boundaries
- Process isolation
- Memory leases for data transfer
Key Conceptual Shifts
Rust Trait Concept | Idol Equivalent | Transformation Strategy |
---|---|---|
&mut self methods | Session-based operations | Use session IDs |
Associated types | Concrete types | Define enums/structs |
Lifetimes | Ownership transfer | Memory leases |
Generic parameters | Multiple operations | One operation per type |
Zero-cost abstractions | IPC overhead | Optimize message structure |
Core Design Patterns
1. Session-Based State Management
Problem: Rust traits use `&mut self` for stateful operations.
Solution: Use session IDs to track state across IPC boundaries.
```rust
// Original Trait
pub trait DigestOp: ErrorType {
    type Output;

    fn update(&mut self, input: &[u8]) -> Result<(), Self::Error>;
    fn finalize(self) -> Result<Self::Output, Self::Error>;
}
```
// Idol Interface
Interface(
name: "Digest",
ops: {
"init_sha256": (
reply: Result(ok: "u32", err: CLike("DigestError")), // Returns session ID
),
"update": (
args: { "session_id": "u32", "len": "u32" },
leases: { "data": (type: "[u8]", read: true, max_len: Some(1024)) },
reply: Result(ok: "()", err: CLike("DigestError")),
),
"finalize_sha256": (
args: { "session_id": "u32" },
leases: { "digest_out": (type: "[u32; 8]", write: true) },
reply: Result(ok: "()", err: CLike("DigestError")),
),
},
)
2. Generic Type Expansion
Problem: Rust traits use generics to support multiple types.

Solution: Create separate operations for each concrete type.
```rust
// Original Generic Trait
pub trait DigestInit<T: DigestAlgorithm>: ErrorType {
    fn init(&mut self, params: T) -> Result<Self::OpContext<'_>, Self::Error>;
}
```
// Idol Interface - Expanded Operations
"init_sha256": (/* ... */),
"init_sha384": (/* ... */),
"init_sha512": (/* ... */),
"init_sha3_256": (/* ... */),
// etc.
3. Memory Lease Patterns
Problem: Rust uses references and slices for zero-copy operations.

Solution: Use Idol memory leases for efficient data transfer.
Rust Pattern | Idol Lease Pattern | Use Case |
---|---|---|
&[u8] | read: true | Input data |
&mut [u8] | write: true | Output buffers |
&T | read: true | Configuration structs |
&mut T | write: true | Result structs |
4. Error Type Consolidation
Problem: Traits use associated error types and generic error handling.

Solution: Define comprehensive concrete error enums.
```rust
// Original - Generic Error
pub trait ErrorType {
    type Error: Error;
}

pub trait Error: core::fmt::Debug {
    fn kind(&self) -> ErrorKind;
}
```
```rust
// Idol - Concrete Error Enum
#[derive(Copy, Clone, Debug, FromPrimitive, Eq, PartialEq, IdolError, counters::Count)]
#[repr(u32)]
pub enum DigestError {
    InvalidInputLength = 1,
    UnsupportedAlgorithm = 2,
    // ... comprehensive error cases
    #[idol(server_death)]
    ServerRestarted = 100,
}
```
Step-by-Step Conversion Process
Step 1: Analyze the Original Trait
1. Identify State Management Patterns
   - Methods that take `&mut self` → need session management
   - Methods that consume `self` → need session cleanup
   - Associated types → need concrete type definitions

2. Map Data Flow
   - Input parameters → Idol args + read leases
   - Output parameters → Idol return values + write leases
   - Mutable references → write leases

3. Catalog Error Cases
   - Collect all possible error conditions
   - Map generic `ErrorKind` to specific error variants
Step 2: Design the Idol Interface
1. Create the IDL File

   ```shell
   mkdir -p hubris/idl/
   touch hubris/idl/my_trait.idol
   ```

2. Define Operations Structure

   ```
   Interface(
       name: "MyTrait",
       ops: {
           // Initialization operations
           "init_*": (/* ... */),
           // State manipulation operations
           "operation_*": (/* ... */),
           // Cleanup operations
           "reset": (/* ... */),
           // Convenience operations
           "oneshot_*": (/* ... */),
       },
   )
   ```

3. Design Session Management
   - Use `u32` session IDs
   - Return session ID from init operations
   - Pass session ID to subsequent operations
Step 3: Create the API Package
1. Directory Structure

   ```
   hubris/drv/my-trait-api/
   ├── Cargo.toml
   ├── build.rs
   └── src/
       └── lib.rs
   ```

2. Configure Cargo.toml

   ```toml
   [package]
   name = "drv-my-trait-api"
   version = "0.1.0"
   edition = "2021"

   [dependencies]
   idol-runtime.workspace = true
   num-traits.workspace = true
   zerocopy.workspace = true
   zerocopy-derive.workspace = true
   counters = { path = "../../lib/counters" }
   derive-idol-err = { path = "../../lib/derive-idol-err" }
   userlib = { path = "../../sys/userlib" }

   [build-dependencies]
   idol.workspace = true

   [lib]
   test = false
   doctest = false
   bench = false

   [lints]
   workspace = true
   ```

3. Create build.rs

   ```rust
   fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
       idol::client::build_client_stub("../../idl/my_trait.idol", "client_stub.rs")?;
       Ok(())
   }
   ```
Step 4: Implement Type Definitions
1. Create Zerocopy-Compatible Types

   ```rust
   #[derive(
       Copy, Clone, Debug, PartialEq, Eq,
       zerocopy::IntoBytes, zerocopy::FromBytes,
       zerocopy::Immutable, zerocopy::KnownLayout,
   )]
   #[repr(C, packed)] // Use packed for complex structs
   pub struct MyConfig {
       pub field1: u32,
       pub field2: u8,
       // Avoid bool - use u8 instead
       pub enabled: u8,
   }
   ```

2. Define Error Types

   ```rust
   #[derive(
       Copy, Clone, Debug, FromPrimitive, Eq, PartialEq,
       IdolError, counters::Count,
   )]
   #[repr(u32)]
   pub enum MyTraitError {
       // Map from original ErrorKind
       InvalidInput = 1,
       HardwareFailure = 2,
       // ...
       #[idol(server_death)]
       ServerRestarted = 100,
   }
   ```

3. Create Enum Types for IPC

   ```rust
   #[derive(
       Copy, Clone, Debug, PartialEq, Eq,
       zerocopy::IntoBytes, zerocopy::Immutable,
       zerocopy::KnownLayout, FromPrimitive,
   )]
   #[repr(u32)] // Use u32 for enums
   pub enum MyAlgorithm {
       Algorithm1 = 0,
       Algorithm2 = 1,
   }
   ```
Step 5: Handle Memory Management
1. Input Data Patterns

   ```
   "process_data": (
       args: { "len": "u32" },
       leases: {
           "input_data": (type: "[u8]", read: true, max_len: Some(4096)),
       },
   ),
   ```

2. Output Data Patterns

   ```
   "get_result": (
       args: { "session_id": "u32" },
       leases: {
           "output_buffer": (type: "[u8]", write: true, max_len: Some(1024)),
       },
   ),
   ```

3. Configuration Patterns

   ```
   "configure": (
       args: { "session_id": "u32" },
       leases: {
           "config": (type: "MyConfig", read: true),
       },
   ),
   ```
Common Challenges and Solutions
Challenge 1: Associated Types
Problem: Rust traits use associated types for flexibility.
```rust
pub trait DigestAlgorithm {
    const OUTPUT_BITS: usize;
    type Digest;
}
```
Solution: Define concrete types and use constants.
```rust
pub const SHA256_WORDS: usize = 8;
pub type Sha256Digest = DigestOutput<SHA256_WORDS>;

#[repr(C)]
pub struct DigestOutput<const N: usize> {
    pub value: [u32; N],
}
```
Challenge 2: Lifetime Parameters
Problem: Rust contexts have lifetime dependencies.
```rust
pub trait DigestInit<T>: ErrorType {
    type OpContext<'a>: DigestOp
    where
        Self: 'a;

    fn init<'a>(&'a mut self, params: T) -> Result<Self::OpContext<'a>, Self::Error>;
}
```
Solution: Replace with session-based state management.
```rust
// Server maintains context mapping
struct DigestServer {
    contexts: HashMap<u32, DigestContext>,
    next_session_id: u32,
}
```
Challenge 3: Generic Methods
Problem: Single generic method supports multiple types.
```rust
fn process<T: Algorithm>(&mut self, data: &[u8], algo: T) -> Result<T::Output, Error>;
```
Solution: Create type-specific operations.
"process_sha256": (/* ... */),
"process_sha384": (/* ... */),
"process_aes": (/* ... */),
Challenge 4: Complex Return Types
Problem: Rust can return complex generic types.
```rust
fn finalize(self) -> Result<Self::Output, Self::Error>;
```
Solution: Use output leases for complex types.
"finalize": (
args: { "session_id": "u32" },
leases: { "result": (type: "MyResult", write: true) },
reply: Result(ok: "()", err: CLike("MyError")),
),
Type System Considerations
Zerocopy Compatibility
All types used in Idol interfaces must be zerocopy-compatible:
```rust
// ✅ Good - Zerocopy compatible
#[derive(zerocopy::IntoBytes, zerocopy::FromBytes, zerocopy::Immutable)]
#[repr(C)]
pub struct GoodConfig {
    pub value: u32,
    pub enabled: u8,       // Not bool!
    pub _padding: [u8; 3], // Explicit padding
}

// ❌ Bad - Not zerocopy compatible
pub struct BadConfig {
    pub value: u32,
    pub enabled: bool, // bool doesn't implement FromBytes
    pub data: Vec<u8>, // Dynamic allocation
}
```
Enum Representations
```rust
// ✅ Good - Use u32 for enums
#[derive(FromPrimitive)]
#[repr(u32)]
pub enum MyEnum {
    Variant1 = 0,
    Variant2 = 1,
}

// ❌ Bad - u8 enums with FromBytes need 256 variants
#[repr(u8)]
pub enum SmallEnum {
    A = 0,
    B = 1, // Only 2 variants - FromBytes won't work
}
```
Padding and Alignment
```rust
// ✅ Good - Use packed for complex layouts
#[repr(C, packed)]
pub struct PackedStruct {
    pub field1: u8,
    pub field2: u32, // No padding issues
}

// ✅ Good - Manual padding control
#[repr(C)]
pub struct PaddedStruct {
    pub field1: u8,
    pub _pad: [u8; 3], // Explicit padding
    pub field2: u32,
}
```
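The two layout strategies can be sanity-checked with `size_of`/`align_of`: the packed struct occupies 5 bytes with alignment 1, while the manually padded struct occupies 8 bytes with its `u32` field naturally aligned.

```rust
// Layout check for the two padding strategies discussed above.
#[repr(C, packed)]
pub struct PackedStruct {
    pub field1: u8,
    pub field2: u32, // no implicit padding: total size is 1 + 4 = 5 bytes
}

#[repr(C)]
pub struct PaddedStruct {
    pub field1: u8,
    pub _pad: [u8; 3], // explicit padding keeps field2 4-byte aligned
    pub field2: u32,   // total size is 8 bytes
}
```

Asserting these sizes in a unit test catches accidental layout drift when fields are added, which matters for any type crossing an IPC boundary.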
Error Handling Patterns
Comprehensive Error Mapping
Map all possible error conditions from the original trait:
```rust
// Original trait error kinds
pub enum ErrorKind {
    InvalidInputLength,
    UnsupportedAlgorithm,
    HardwareFailure,
    // ...
}

// Idol error enum - comprehensive mapping
#[derive(Copy, Clone, Debug, FromPrimitive, IdolError, counters::Count)]
#[repr(u32)]
pub enum MyTraitError {
    // Map each ErrorKind to a specific variant
    InvalidInputLength = 1,
    UnsupportedAlgorithm = 2,
    HardwareFailure = 3,

    // Add IPC-specific errors
    InvalidSession = 10,
    TooManySessions = 11,

    // Required for Hubris
    #[idol(server_death)]
    ServerRestarted = 100,
}
```
Error Context Preservation
// Add context-specific error variants
pub enum MyTraitError {
    // Operation-specific errors
    InitializationFailed = 20,
    UpdateFailed = 21,
    FinalizationFailed = 22,

    // Resource-specific errors
    OutOfMemory = 30,
    BufferTooSmall = 31,
    InvalidConfiguration = 32,
}
Performance Considerations
Minimize Message Overhead
- Batch Operations: Combine related parameters into single calls

  // ✅ Good - Single call with all parameters
  "configure_and_start": (
      args: {
          "algorithm": "MyAlgorithm",
          "buffer_size": "u32",
          "timeout_ms": "u32",
      },
  ),

  // ❌ Bad - Multiple round trips
  "set_algorithm": (args: {"algo": "MyAlgorithm"}),
  "set_buffer_size": (args: {"size": "u32"}),
  "set_timeout": (args: {"timeout": "u32"}),
  "start": (),
- Efficient Data Transfer: Use appropriate lease sizes

  leases: {
      // Size limits based on expected usage
      "small_data": (type: "[u8]", read: true, max_len: Some(256)),
      "large_data": (type: "[u8]", read: true, max_len: Some(4096)),
  }
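When a payload exceeds a lease's `max_len`, the client has to split it into bounded chunks across multiple calls. The following is a host-side sketch (the `send_chunk` closure is a hypothetical stand-in for the generated Idol client call, and `MAX_LEASE_LEN` mirrors the `max_len: Some(256)` above):

```rust
// Client-side chunking sketch: never hand the IPC layer more bytes than
// the lease's max_len allows. `send_chunk` stands in for the generated
// client stub's per-chunk operation.
const MAX_LEASE_LEN: usize = 256;

fn send_all<E, F>(data: &[u8], mut send_chunk: F) -> Result<usize, E>
where
    F: FnMut(&[u8]) -> Result<(), E>,
{
    let mut calls = 0;
    for chunk in data.chunks(MAX_LEASE_LEN) {
        send_chunk(chunk)?; // one IPC round trip per chunk
        calls += 1;
    }
    Ok(calls)
}

fn main() {
    let data = [0u8; 1000];
    let calls = send_all(&data, |c: &[u8]| -> Result<(), ()> {
        assert!(c.len() <= MAX_LEASE_LEN);
        Ok(())
    })
    .unwrap();
    assert_eq!(calls, 4); // 256 + 256 + 256 + 232 bytes
}
```

Note how the chunk count is the direct cost of an undersized lease: a larger `max_len` means fewer round trips, at the price of a bigger server-side buffer.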
Memory Lease Optimization
- Right-size Buffers: Don't over-allocate
- Reuse Sessions: Avoid constant init/cleanup
- Batch Updates: Process multiple chunks in one call when possible
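The session-reuse advice above implies a server-side session table that recycles a fixed pool of slots instead of allocating. This is a minimal sketch under no_std-friendly assumptions (names like `SessionTable` and the slot layout are illustrative, not from the spec); full and stale slots are where the `TooManySessions` and `InvalidSession` variants come from:

```rust
// Fixed-capacity session table: slots are reused, never allocated,
// replacing the lifetime-bound Context of the original trait.
const MAX_SESSIONS: usize = 4;

#[derive(Default)]
struct Session {
    in_use: bool,
    bytes_processed: usize,
}

struct SessionTable {
    slots: [Session; MAX_SESSIONS],
}

impl SessionTable {
    fn new() -> Self {
        SessionTable {
            slots: core::array::from_fn(|_| Session::default()),
        }
    }

    /// Claim a free slot; the index becomes the session_id on the wire.
    fn open(&mut self) -> Option<u32> {
        for (i, s) in self.slots.iter_mut().enumerate() {
            if !s.in_use {
                *s = Session { in_use: true, bytes_processed: 0 };
                return Some(i as u32);
            }
        }
        None // caller maps this to TooManySessions
    }

    /// Release a slot; false for unknown/stale ids (-> InvalidSession).
    fn close(&mut self, id: u32) -> bool {
        match self.slots.get_mut(id as usize) {
            Some(s) if s.in_use => {
                s.in_use = false;
                true
            }
            _ => false,
        }
    }
}

fn main() {
    let mut t = SessionTable::new();
    let a = t.open().unwrap();
    for _ in 1..MAX_SESSIONS {
        t.open().unwrap();
    }
    assert!(t.open().is_none()); // table full -> TooManySessions
    assert!(t.close(a));
    assert!(t.open().is_some()); // slot reused, no allocation
}
```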
Testing and Validation
Build Verification
- ARM Target Build:

  cargo build -p drv-my-trait-api --target thumbv7em-none-eabihf
- Generated Code Inspection:

  ls target/thumbv7em-none-eabihf/debug/build/drv-my-trait-api*/out/
  head -50 target/thumbv7em-none-eabihf/debug/build/drv-my-trait-api*/out/client_stub.rs
API Surface Validation
- Check Generated Operations: Verify all expected operations are present
- Type Safety: Ensure all types compile correctly
- Error Handling: Verify error propagation works
Integration Testing
- Mock Server: Create a simple server implementation
- Client Testing: Test all operation patterns
- Error Scenarios: Test error handling paths
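The three points above can be exercised off-target with a plain-Rust mock that mirrors the Idol operations' semantics. This sketch is purely illustrative (the `MockCrypto` type and its methods are hypothetical, not generated code); it lets you test the session-id protocol and an error path without hardware:

```rust
// Host-side mock mirroring the session-based Idol operations, so the
// operation patterns and error scenarios can be tested off-target.
#[derive(Debug, PartialEq)]
enum CryptoError {
    InvalidSession,
}

struct MockCrypto {
    active: Option<u32>, // single-session mock for simplicity
    next_id: u32,
}

impl MockCrypto {
    fn new() -> Self {
        MockCrypto { active: None, next_id: 0 }
    }

    // Mirrors "init_aes": returns a session id on success.
    fn init_aes(&mut self) -> Result<u32, CryptoError> {
        let id = self.next_id;
        self.next_id += 1;
        self.active = Some(id);
        Ok(id)
    }

    // Mirrors "process": rejects unknown or stale session ids.
    fn process(&mut self, session_id: u32, _data: &[u8]) -> Result<(), CryptoError> {
        if self.active == Some(session_id) {
            Ok(())
        } else {
            Err(CryptoError::InvalidSession)
        }
    }
}

fn main() {
    let mut m = MockCrypto::new();
    let sid = m.init_aes().unwrap();
    assert!(m.process(sid, b"abc").is_ok());
    // Error scenario: a bogus session id must be rejected
    assert_eq!(m.process(sid + 1, b"abc"), Err(CryptoError::InvalidSession));
}
```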
Example: Complete Conversion
Here's a complete example showing the transformation of a simple trait:
Original Rust Trait
pub trait Crypto: ErrorType {
    type Algorithm: CryptoAlgorithm;
    type Context<'a>: CryptoOp
    where
        Self: 'a;

    fn init<'a>(&'a mut self, algo: Self::Algorithm) -> Result<Self::Context<'a>, Self::Error>;
}

pub trait CryptoOp: ErrorType {
    type Output;

    fn process(&mut self, data: &[u8]) -> Result<(), Self::Error>;
    fn finalize(self) -> Result<Self::Output, Self::Error>;
}
Converted Idol Interface
Interface(
name: "Crypto",
ops: {
"init_aes": (
reply: Result(ok: "u32", err: CLike("CryptoError")),
),
"init_chacha": (
reply: Result(ok: "u32", err: CLike("CryptoError")),
),
"process": (
args: { "session_id": "u32", "len": "u32" },
leases: { "data": (type: "[u8]", read: true, max_len: Some(1024)) },
reply: Result(ok: "()", err: CLike("CryptoError")),
),
"finalize_aes": (
args: { "session_id": "u32" },
leases: { "output": (type: "[u8; 16]", write: true) },
reply: Result(ok: "()", err: CLike("CryptoError")),
),
"finalize_chacha": (
args: { "session_id": "u32" },
leases: { "output": (type: "[u8; 32]", write: true) },
reply: Result(ok: "()", err: CLike("CryptoError")),
),
},
)
API Package Implementation
// drv/crypto-api/src/lib.rs
#![no_std]

use derive_idol_err::IdolError;
use userlib::{sys_send, FromPrimitive};

#[derive(Copy, Clone, Debug, PartialEq, Eq, zerocopy::IntoBytes, zerocopy::Immutable, FromPrimitive)]
#[repr(u32)]
pub enum CryptoAlgorithm {
    Aes = 0,
    ChaCha = 1,
}

#[derive(Copy, Clone, Debug, FromPrimitive, Eq, PartialEq, IdolError, counters::Count)]
#[repr(u32)]
pub enum CryptoError {
    InvalidInput = 1,
    InvalidSession = 2,
    HardwareFailure = 3,

    #[idol(server_death)]
    ServerRestarted = 100,
}

include!(concat!(env!("OUT_DIR"), "/client_stub.rs"));
Conclusion
Converting Rust HAL traits to Idol interfaces requires careful consideration of:
- State Management: Sessions instead of lifetimes
- Type Systems: Concrete types instead of generics
- Memory Management: Leases instead of references
- Error Handling: Comprehensive concrete error enums
- Performance: Efficient message design
The key is to preserve the semantic meaning and safety guarantees of the original trait while adapting to the constraints and patterns of the Hubris IPC system.
By following these patterns and guidelines, you can successfully transform complex Rust HAL traits into efficient, type-safe Idol interfaces that maintain the robustness and performance characteristics expected in embedded systems.