Support Vector Machines Classification on Bendable RISC-V

Abstract

This paper addresses the challenge of implementing power-hungry Machine Learning (ML) algorithms on ultra-low-cost, lightweight Flexible Electronics (FE). The authors propose an open-source framework and a custom, precision-scalable Support Vector Machine (SVM) accelerator designed specifically for the Bendable RISC-V core. Experimental results show an average 21x improvement in both inference execution time and energy efficiency, enabling low-power edge intelligence.

Report

Structured Report: Support Vector Machines Classification on Bendable RISC-V

Key Highlights

  • Target Platform: The research focuses on integrating complex ML capabilities onto the Bendable RISC-V core, utilizing Flexible Electronics (FE) technology.
  • Core Innovation: Introduction of an open-source framework designed for developing specialized ML co-processors for the Bendable RISC-V architecture.
  • Accelerator Design: A custom ML accelerator architecture is presented specifically for Support Vector Machine (SVM) classification.
  • Efficiency Gains: The implementation delivers an average 21x improvement in both inference execution time and energy efficiency.
  • Scalability: The design incorporates generic precision scalability, supporting various weight formats (4-bit, 8-bit, and 16-bit).

Technical Details

  • Base Architecture: Bendable RISC-V core, chosen for its suitability in ultra-low-cost, lightweight, and environmentally-friendly Flexible Electronics.
  • ML Algorithm: Support Vector Machines (SVM) classification.
  • Supported SVM Methods: The accelerator is engineered to support both multi-class classification strategies: one-vs-one (OvO) and one-vs-rest (OvR) algorithms.
  • Precision Control: The architecture utilizes a precision-scalable design, allowing developers to choose between 4-bit, 8-bit, or 16-bit representations for the weights, optimizing for either speed/power or accuracy.
  • Performance Metric: The primary experimental result is a 21x average improvement in both execution time (latency) and energy consumption, compared to running the SVM workload on the core without the custom accelerator.
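
The OvO and OvR strategies above differ in how binary SVM decisions are combined into one multi-class prediction. As a minimal sketch of the two decision rules for linear SVMs (the weights, biases, and function names here are illustrative assumptions, not the paper's accelerator interface):

```python
# Illustrative multi-class decision rules for linear SVMs.
# All model parameters below are made-up examples.

def ovr_predict(x, weights, biases):
    """One-vs-rest: one binary SVM per class; the highest score wins."""
    scores = [sum(wi * xi for wi, xi in zip(w, x)) + b
              for w, b in zip(weights, biases)]
    return scores.index(max(scores))

def ovo_predict(x, pair_models):
    """One-vs-one: one binary SVM per class pair; majority vote wins."""
    votes = {}
    for (ci, cj), (w, b) in pair_models.items():
        score = sum(wi * xi for wi, xi in zip(w, x)) + b
        winner = ci if score >= 0 else cj
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)
```

For k classes, OvR evaluates k classifiers while OvO evaluates k(k-1)/2, which is why hardware support for both strategies matters for balancing model size against per-inference work.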
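
The precision-scalable weight formats can be illustrated with a simple symmetric quantizer. The bit widths (4, 8, 16) match the paper, but the quantization scheme itself is an assumption for illustration; the paper does not specify its exact procedure:

```python
# Sketch: symmetric signed quantization of SVM weights at the supported
# widths (4, 8, 16 bits). Scheme is assumed, not taken from the paper.

def quantize_weights(weights, bits):
    """Map float weights to signed `bits`-bit integers plus one float scale."""
    assert bits in (4, 8, 16)
    qmax = (1 << (bits - 1)) - 1              # 7, 127, or 32767
    max_abs = max(abs(w) for w in weights)
    scale = (max_abs / qmax) if max_abs else 1.0
    return [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer representation."""
    return [qi * scale for qi in q]
```

Narrower weights shrink multipliers and memory traffic (favoring speed and power), while wider weights reduce quantization error (favoring accuracy), which is the trade-off the precision-scalable design exposes.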

Implications

  • Democratization of Edge AI: By overcoming the traditional constraints of rigidity, cost, and high power consumption associated with conventional silicon, this work enables the widespread deployment of intelligent, autonomous systems in lightweight and flexible contexts (e.g., smart sensors embedded in everyday objects).
  • Advancing Flexible Electronics (FE): This research directly addresses the power and size challenges that previously constrained ML realization on FE devices, proving that complex classification tasks like SVM are viable on these platforms.
  • RISC-V Ecosystem Growth: Providing an open-source framework for developing ML co-processors significantly lowers the barrier to entry for developers wishing to create custom hardware acceleration for the Bendable RISC-V core, strengthening the open-source hardware ecosystem.
  • Low-Power Intelligence: The achieved 21x energy efficiency improvement is crucial for battery-powered or energy-harvesting flexible devices, making real-time ML inference practical at the strict power limits of the extreme edge.