DeepVerifier: Learning to Update Test Sequences for Coverage-Guided Verification



Abstract

DeepVerifier introduces a machine learning framework designed to improve hardware verification efficiency by intelligently updating existing test sequences for coverage closure. The system uses deep reinforcement learning (DRL) to guide the modification of test vectors, maximizing coverage metrics while minimizing redundant simulation cycles. This approach reaches high functional coverage substantially faster than traditional random or constrained-random verification methodologies.

Report


Key Highlights

  • ML-Driven Verification: DeepVerifier represents a critical advancement in Verification Technology, applying Deep Learning specifically to the challenge of test sequence refinement.
  • Focus on Update and Modification: Unlike tools that generate test vectors from scratch, DeepVerifier specializes in analyzing existing simulation traces and intelligently mutating or extending sequences to hit previously uncovered code paths.
  • Coverage Acceleration: The primary goal is achieving rapid coverage closure, particularly for hard-to-reach corner cases or complex state transitions that are usually prohibitively expensive to verify using traditional constrained-random simulation (CRS).
  • Seamless Integration: Designed to integrate with existing coverage-guided verification (CGV) frameworks, making it applicable to standard UVM (Universal Verification Methodology) environments.
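The mutation-based refinement described above can be illustrated with a minimal sketch. The instruction encoding, register names, and mutation operators below are hypothetical placeholders, not DeepVerifier's actual representation; the point is only that updates operate on an existing sequence rather than generating one from scratch.

```python
import random

# Hypothetical instruction encoding: (mnemonic, dest, src1, src2).
SEED_SEQUENCE = [
    ("add", "x1", "x2", "x3"),
    ("lw",  "x4", "x1", "0"),
    ("beq", "x4", "x0", "8"),
]

def insert_instruction(seq, rng):
    """Insert a random ALU op at a random position (toy op pool)."""
    pos = rng.randrange(len(seq) + 1)
    op = (rng.choice(["add", "sub", "xor"]), "x5", "x6", "x7")
    return seq[:pos] + [op] + seq[pos:]

def mutate_operand(seq, rng):
    """Replace one source register of a random instruction."""
    pos = rng.randrange(len(seq))
    mnemonic, dest, _, src2 = seq[pos]
    return seq[:pos] + [(mnemonic, dest, rng.choice(["x8", "x9"]), src2)] + seq[pos + 1:]

rng = random.Random(0)
mutated = mutate_operand(insert_instruction(SEED_SEQUENCE, rng), rng)
print(len(mutated))  # one instruction longer than the seed
```

In the full system, the choice between operators such as these would be made by the learned policy rather than uniformly at random.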

Technical Details

  • Architecture: The core architecture is based on a Deep Reinforcement Learning (DRL) agent, which treats the test sequence update process as a sequential decision-making problem.
  • State Space: The DRL agent's input state includes the current coverage map (e.g., branch coverage, instruction coverage), the sequence history, and relevant architectural registers or memory states.
  • Action Space: The actions available to the agent involve parameterized mutations on the test sequence, such as instruction insertion, operand modification, register choice alteration, or conditional sequence branching.
  • Reward Function: The reward structure is carefully engineered to incentivize the agent to discover novel coverage points, assigning higher positive rewards for hitting low-frequency or high-priority verification goals (e.g., specific hazard conditions or illegal instruction behaviors).
  • Training Data: The system trains iteratively using feedback loops derived from standard hardware simulators, where each simulation run generates data on the effectiveness of the sequence update.
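The state, action, and reward components above can be sketched as a toy coverage-guided loop. Everything here is a simplifying assumption: the "simulator" is a stub that maps a sequence to a set of coverage bins, the state is collapsed away so a simple action-value (bandit-style) update stands in for the full DRL agent, and the action names merely echo the mutation categories listed in the report.

```python
import random
from collections import defaultdict

ACTIONS = ["insert", "modify_operand", "swap_register", "branch"]

def simulate(seq):
    """Stub simulator: derive covered bins from sequence contents
    (16 hypothetical coverage bins)."""
    return {item % 16 for item in seq}

def apply_action(seq, action, rng):
    """Toy mutation: insert a new element or perturb an existing one."""
    seq = list(seq)
    if action == "insert":
        seq.insert(rng.randrange(len(seq) + 1), rng.randrange(64))
    else:  # all other toy actions perturb one element
        seq[rng.randrange(len(seq))] = rng.randrange(64)
    return seq

rng = random.Random(1)
q = defaultdict(float)          # per-action value estimates
seq, covered = [3, 7, 3], set()
covered |= simulate(seq)

for episode in range(200):
    # epsilon-greedy choice over mutation actions
    if rng.random() < 0.2:
        action = rng.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[a])
    candidate = apply_action(seq, action, rng)
    new_bins = simulate(candidate) - covered
    reward = len(new_bins)                  # reward novel coverage only
    q[action] += 0.1 * (reward - q[action]) # incremental value update
    if reward > 0:                          # keep sequences that improve coverage
        seq, covered = candidate, covered | new_bins

print(f"coverage: {len(covered)}/16 bins")
```

The key structural feature matches the report's reward design: zero reward for re-hitting known coverage, positive reward proportional to newly discovered bins, so the agent is pushed toward updates that open unexplored paths. A weighted reward (higher for priority bins such as hazard conditions) would slot in where `reward` is computed.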

Implications

  • RISC-V Verification Quality: For the RISC-V ecosystem, where core designers frequently implement custom instruction set architecture (ISA) extensions, efficient verification is paramount. DeepVerifier directly tackles the verification complexity of these highly configurable, modular architectures, supporting higher-quality silicon.
  • Reduced Time-to-Market: By automating and accelerating the most resource-intensive part of the design cycle—coverage closure—DeepVerifier allows RISC-V companies and startups to drastically reduce verification time, speeding up product release.
  • Democratization of Advanced Verification: Traditional verification expertise is costly. By introducing an intelligent, self-optimizing framework, DeepVerifier lowers the barrier for smaller teams or academic projects working on sophisticated RISC-V designs to achieve commercial-grade verification standards.
  • Future of EDA: This work further establishes machine learning as an indispensable tool in the Electronic Design Automation (EDA) flow, shifting verification from purely static or randomized methods toward intelligent, goal-oriented optimization.