The ZC-1 consensus protocol runs a nine-phase process per round, designed to provide Byzantine fault tolerance while sustaining high throughput:
Phase 1: Epoch Setup (~200ms)
Purpose: Initialize new consensus epoch with cryptographic foundation
Technical Details:
- Generates cryptographic seed for the entire consensus round
- Establishes validator set and stake distributions
- Creates secure random beacon for subsequent phases
- Initializes network parameters and safety thresholds
Key Metrics:
- Epoch seed generation: SHA-256 based
- Network stake validation: 2-7 million ZBC typically
- Validator initialization: 21 nodes with varied stake weights
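The text says only that seed generation is "SHA-256 based", not what goes into the hash. A minimal sketch, assuming the new epoch seed hashes the previous seed together with the epoch number (that input layout is an assumption, not from the spec):

```python
import hashlib

def derive_epoch_seed(prev_seed: bytes, epoch: int) -> bytes:
    """Derive the next epoch's 32-byte seed by hashing the previous
    seed with the epoch number. Input layout is assumed, not specified."""
    return hashlib.sha256(prev_seed + epoch.to_bytes(8, "big")).digest()

genesis = b"\x00" * 32
seed1 = derive_epoch_seed(genesis, 1)
assert len(seed1) == 32                         # SHA-256 yields 32 bytes
assert seed1 != derive_epoch_seed(genesis, 2)   # distinct epochs, distinct seeds
```

Any construction of this shape gives every phase that follows a deterministic, publicly recomputable randomness source.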
Phase 2: VRF Committee Selection (~800ms)
Purpose: Fair, verifiable committee selection using VRF technology
Technical Details:
- Verifiable Random Function ensures unpredictable but verifiable selection
- AI-assisted fairness algorithms prevent validator concentration
- Cryptographic proofs of selection validity
- Stake-weighted selection probability
Selection Process:
- Each validator generates VRF proof
- Proofs verified by network participants
- Lowest hash values selected for committee
- Selection verified through cryptographic proofs
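The lowest-hash rule can be sketched as follows, using SHA-256 as a stand-in for the VRF output (a real VRF additionally produces a proof verifiable against the validator's public key, and the stake weighting described above is omitted here for brevity):

```python
import hashlib

def vrf_stand_in(sk: bytes, seed: bytes) -> int:
    # Stand-in for a real VRF evaluation; a real VRF also emits a
    # proof so other participants can verify the output.
    return int.from_bytes(hashlib.sha256(sk + seed).digest(), "big")

def select_committee(validators, seed: bytes, size: int):
    """Pick the `size` validators with the lowest VRF outputs for
    this epoch seed. Deterministic given the seed, yet unpredictable
    before the seed is fixed."""
    scored = sorted(validators, key=lambda v: vrf_stand_in(v["sk"], seed))
    return [v["id"] for v in scored[:size]]

# Illustrative validator set; keys here are trivially small.
vals = [{"id": i, "sk": bytes([i])} for i in range(21)]
committee = select_committee(vals, b"epoch-seed", 7)
assert len(committee) == 7
assert committee == select_committee(vals, b"epoch-seed", 7)  # reproducible
```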
Phase 3: DAG Blocklet Ordering (~1200ms)
Purpose: Create structured transaction ordering using DAG principles
Technical Details:
- Creates blocklets referencing N-f prior blocklets, where N is the validator count and f the number of tolerated faults (typically 14 references)
- Maintains DAG structure for parallel transaction processing
- Ensures causal ordering while maximizing throughput
- Implements topological sorting for final ordering
Blocklet {
ID: Unique identifier
Transactions: Batch of validated transactions
DAG_References: Array of prior blocklet references
Proposer: Validator who proposed this blocklet
Timestamp: Creation time
Merkle_Root: Transaction batch verification
}
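The final topological-sorting step can be sketched with Kahn's algorithm over the `DAG_References` field (field names follow the struct above; tie-breaking by blocklet ID is an assumption to make the ordering deterministic):

```python
def topological_order(blocklets):
    """Order blocklets so every DAG reference precedes its referrer
    (Kahn's algorithm); ties broken by blocklet ID for determinism."""
    refs = {b["id"]: set(b["dag_references"]) for b in blocklets}
    dependents = {bid: [] for bid in refs}
    for bid, parents in refs.items():
        for p in parents:
            dependents[p].append(bid)
    ready = sorted(bid for bid, parents in refs.items() if not parents)
    order = []
    while ready:
        bid = ready.pop(0)
        order.append(bid)
        for child in dependents[bid]:
            refs[child].discard(bid)
            if not refs[child]:       # all parents already ordered
                ready.append(child)
        ready.sort()
    return order

blocklets = [
    {"id": "c", "dag_references": ["a", "b"]},
    {"id": "a", "dag_references": []},
    {"id": "b", "dag_references": ["a"]},
]
assert topological_order(blocklets) == ["a", "b", "c"]
```

This preserves causal ordering (a blocklet never precedes anything it references) while leaving causally independent blocklets free to be processed in parallel.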
Phase 4: Data Availability Sampling (~1000ms)
Purpose: Ensure transaction data availability across the network
Technical Details:
- Reed-Solomon encoding with configurable redundancy
- Distributed storage across validator network
- Probabilistic sampling for availability verification
- Cryptographic proofs of data possession
Availability Guarantees:
- Target availability: 99.7%
- Redundancy factor: 3x
- Recovery capability: Up to 67% data loss
- Verification samples: 100+ per blocklet
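The sampling math follows from the figures above. With 3x Reed-Solomon redundancy, reconstruction survives up to 67% chunk loss, so an attacker must withhold more than 2/3 of the chunks; each uniform random sample then hits the withheld portion with probability above 2/3, and 100+ samples make non-detection astronomically unlikely:

```python
import math

def miss_probability(withheld_fraction: float, samples: int) -> float:
    """Probability that all uniform random chunk queries land on
    available chunks, i.e. the withholding goes undetected."""
    return (1.0 - withheld_fraction) ** samples

# Attacker withholds the minimum effective fraction (just over 2/3);
# at 100 samples per blocklet, the miss probability is ~(1/3)^100.
assert miss_probability(2 / 3, 100) < 1e-45

# Even the 99.7% availability target alone needs only a handful of
# samples: smallest k with (1/3)^k <= 0.003.
k = math.ceil(math.log(0.003) / math.log(1 / 3))
assert k == 6
```

The large sample count therefore buys an enormous safety margin over the stated 99.7% target rather than being strictly necessary to reach it.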
Phase 5: BFT Prevote (~900ms)
Purpose: First phase of Byzantine Fault Tolerant voting
Technical Details:
- Validators cast preliminary votes on checkpoint candidates
- Cryptographic signatures ensure vote authenticity
- Aggregation of votes by stake weight
- Safety threshold: 67% of stake weight required
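Stake-weighted aggregation against the 67% threshold can be sketched as follows (validator names and stake values are illustrative; signature verification is assumed to have happened before aggregation):

```python
def prevote_passes(votes: set, stakes: dict, threshold: float = 0.67) -> bool:
    """Sum the stake behind the cast prevotes and compare against
    the safety threshold as a fraction of total network stake."""
    total = sum(stakes.values())
    voting = sum(stakes[v] for v in votes)
    return voting / total > threshold

# Illustrative stake distribution totalling 2,400 units.
stakes = {"A": 1000, "B": 800, "C": 400, "D": 200}
assert prevote_passes({"A", "B"}, stakes)       # 1,800/2,400 = 75%: passes
assert not prevote_passes({"A", "C"}, stakes)   # 1,400/2,400 ~ 58%: fails
```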
Phase 6: BFT Precommit (~1100ms)
Purpose: Final voting phase for checkpoint commitment
Technical Details:
- Final commitment votes after prevote threshold reached
- Enhanced cryptographic verification
- Preparation for finalization
- Double-spending and conflict detection
Critical Thresholds:
- Required stake weight: more than 67%
- Typical participation: ~1,695 of 2,400 stake units (70.6%), comfortably above the threshold
- Finality guarantee: irreversible once this phase completes
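The quoted figures check out with integer arithmetic (reading "67%+" as strictly greater than 67%):

```python
total_stake = 2400
committed = 1695                 # typical precommit stake from the text

ratio = committed / total_stake
assert abs(ratio - 0.70625) < 1e-12   # the quoted 70.6%
assert ratio > 0.67                    # finality condition met

# Smallest integer stake strictly clearing 67%, computed exactly:
min_commit = total_stake * 67 // 100 + 1
assert min_commit == 1609
assert committed - min_commit == 86    # headroom in stake units
```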
Phase 7: Checkpoint Finalization (~600ms)
Purpose: Finalize checkpoint and generate receipts
Technical Details:
- Checkpoint committed to permanent record
- Finality receipts generated for all transactions
- State machine advancement
- Obsolete DAG sections marked for pruning
Phase 8: Fusaka Quantum Validation (~800ms)
Purpose: Quantum-resistant state validation and security enhancement
Technical Details:
- Dilithium signature verification (post-quantum)
- ZK-STARK proof generation (47,000+ proofs per round)
- Quantum entropy management (target: 0.952+ entropy level)
- Post-quantum cryptographic state validation
Fusaka Components:
- Quantum Entropy Pool: Continuously maintained randomness source
- Dilithium Signatures: NIST-standardized post-quantum signatures
- ZK-STARK Proofs: Zero-knowledge proofs for state transitions
- Fusion Coefficient: Quantum resistance measurement (target: 1.634)
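The document does not define how the 0.952+ "entropy level" is measured. One plausible reading, offered purely as an assumption, is Shannon entropy of the pool's byte histogram normalized to [0, 1]:

```python
import math
from collections import Counter

def normalized_entropy(pool: bytes) -> float:
    """Shannon entropy of the pool's byte histogram, normalized so
    1.0 means a uniform byte distribution. This interpretation of
    the 'entropy level' metric is an assumption, not the spec."""
    counts = Counter(pool)
    n = len(pool)
    h = -sum(c / n * math.log2(c / n) for c in counts.values())
    return h / 8.0   # 8 bits per byte is the maximum

assert normalized_entropy(bytes(range(256)) * 16) == 1.0  # uniform pool
assert normalized_entropy(b"\x00" * 4096) == 0.0          # degenerate pool
```

Under this reading, a pool holding at 0.952+ stays close to the uniform ideal while tolerating small statistical fluctuations.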
Phase 9: DAG Pruning & Cleanup (~400ms)
Purpose: Network maintenance and optimization
Technical Details:
- Removal of obsolete blocklets and references
- Garbage collection of temporary consensus data
- Network state optimization
- Storage efficiency maintenance
Cleanup Metrics:
- Obsolete blocks removed: 200+ per round typically
- Storage reclaimed: Variable based on network activity
- DAG size optimization: Maintains <3GB typical size
- Network efficiency: Maintains 98.7% gossip efficiency
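Obsolete-blocklet removal can be sketched as a reachability sweep from the finalized checkpoint: keep everything the checkpoint's head blocklets transitively reference, and garbage-collect the rest (the data layout here is illustrative, reusing the `DAG_References` idea from Phase 3):

```python
def prune_dag(blocklets: dict, checkpoint_heads: set) -> dict:
    """Keep only blocklets reachable via DAG references from the
    finalized checkpoint's heads; everything else is the obsolete
    history that Phase 9 garbage-collects."""
    keep, stack = set(), list(checkpoint_heads)
    while stack:
        bid = stack.pop()
        if bid in keep or bid not in blocklets:
            continue
        keep.add(bid)
        stack.extend(blocklets[bid]["dag_references"])
    return {bid: b for bid, b in blocklets.items() if bid in keep}

dag = {
    "a": {"dag_references": []},
    "b": {"dag_references": ["a"]},
    "orphan": {"dag_references": []},   # not referenced by the checkpoint
}
pruned = prune_dag(dag, {"b"})
assert set(pruned) == {"a", "b"}        # "orphan" is garbage-collected
```

Bounding the retained set this way is what keeps the DAG under the <3GB typical size quoted above.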