Assessing Tenstorrent's RISC-V MatMul Acceleration Capabilities
arXiv:2505.06085v3 (cs)
[Submitted on 9 May 2025 (v1), last revised 20 Jun 2025 (this version, v3)]
Authors:Hiari Pizzini Cavagna, Daniele Cesarini, Andrea Bartolini
Abstract: The increasing demand for generative AI services such as Large Language Models (LLMs) has driven the need for specialized hardware architectures that optimize computational efficiency and energy consumption. This paper evaluates the performance of the Tenstorrent Grayskull e75 RISC-V accelerator on basic linear algebra kernels at reduced numerical precision, a fundamental operation in LLM computations. We present a detailed characterization of how Grayskull's execution model, grid size, matrix dimensions, data formats, and numerical precision affect computational efficiency. Furthermore, we compare Grayskull's performance against state-of-the-art architectures with tensor acceleration, including Intel Sapphire Rapids processors and two NVIDIA GPUs (V100 and A100). While NVIDIA GPUs dominate in raw performance, Grayskull demonstrates a competitive trade-off between power consumption and computational throughput, reaching a peak of 1.55 TFLOPs/Watt with BF16.
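The characterization above ties efficiency to how a MatMul is decomposed across the accelerator's grid of compute cores. As a rough illustration of that decomposition (a plain-Python sketch under general tiling assumptions, not Tenstorrent's actual kernel), a tiled MatMul splits the output into tile-sized blocks, each of which could be assigned to one core of the grid:

```python
def tiled_matmul(A, B, tile=32):
    """Naive tiled MatMul over Python lists: C = A @ B.

    The (i0, j0) loop enumerates output tiles -- the units that a grid
    of cores would work on in parallel; the p0 loop accumulates partial
    products over the shared K dimension.
    """
    m, k, n = len(A), len(A[0]), len(B[0])
    C = [[0.0] * n for _ in range(m)]
    for i0 in range(0, m, tile):              # tile rows of C
        for j0 in range(0, n, tile):          # tile columns of C
            for p0 in range(0, k, tile):      # accumulate over K tiles
                for i in range(i0, min(i0 + tile, m)):
                    for p in range(p0, min(p0 + tile, k)):
                        a, row, Ci = A[i][p], B[p], C[i]
                        for j in range(j0, min(j0 + tile, n)):
                            Ci[j] += a * row[j]
    return C
```

The tile size here stands in for the interplay of grid size and matrix dimensions that the paper characterizes: when the matrix does not divide evenly into tiles, some blocks are ragged and utilization drops.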
Subjects: Performance (cs.PF); Artificial Intelligence (cs.AI); Hardware Architecture (cs.AR)
Cite as:
arXiv:2505.06085 [cs.PF]
(or arXiv:2505.06085v3 [cs.PF] for this version)
https://doi.org/10.48550/arXiv.2505.06085
Submission history
From: Hiari Pizzini Cavagna [view email]
[v1] Fri, 9 May 2025 14:29:37 UTC (600 KB)
[v2] Thu, 15 May 2025 13:07:31 UTC (597 KB)
[v3] Fri, 20 Jun 2025 13:34:13 UTC (446 KB)
AI Analysis
Key Highlights
- Hardware Focus: The study specifically assesses the performance of the Tenstorrent Grayskull e75 RISC-V accelerator.
- Efficiency Metric: Grayskull achieved a high energy efficiency peak of 1.55 TFLOPs/Watt when utilizing BF16 reduced numerical precision.
- AI Workload: The evaluation focused on basic linear algebra kernels (MatMul), foundational operations for modern Large Language Models (LLMs).
- Competitive Standing: While NVIDIA GPUs (V100 and A100) demonstrated superior raw performance, Grayskull provided a compelling trade-off between computational throughput and power consumption.
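The headline 1.55 TFLOPs/Watt figure combines three measurements: the kernel's FLOP count, its runtime, and the power draw. A minimal sketch of that arithmetic (the runtime and power values in the usage comment are hypothetical placeholders, not measurements from the paper):

```python
def matmul_flops(m: int, k: int, n: int) -> int:
    """FLOPs for a dense (m x k) @ (k x n) MatMul: one multiply
    plus one add per multiply-accumulate, i.e. 2*m*k*n."""
    return 2 * m * k * n

def tflops_per_watt(flops: int, seconds: float, watts: float) -> float:
    """Energy efficiency in TFLOPs/Watt: throughput divided by power."""
    return flops / seconds / watts / 1e12

# Hypothetical example: a 4096^3 MatMul finishing in 2 s at 50 W.
eff = tflops_per_watt(matmul_flops(4096, 4096, 4096), seconds=2.0, watts=50.0)
```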
Technical Details
- Accelerator: Tenstorrent Grayskull e75 (a RISC-V-based architecture).
- Test Kernels: Basic linear algebra kernels, focusing on MatMul (Matrix Multiplication).
- Precision: The evaluation used reduced numerical precision; peak energy efficiency was achieved with BF16 (bfloat16).
- Comparison Benchmarks: Performance was measured against current industry standard tensor accelerators, including Intel Sapphire Rapids processors and NVIDIA V100 and A100 GPUs.
- Characterization Parameters: The analysis included a detailed examination of Grayskull’s execution model, grid size, matrix dimensions, data formats, and the impact of numerical precision on efficiency.
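BF16 keeps float32's 8 exponent bits but only 7 explicit mantissa bits, which is why it halves storage and bandwidth at the cost of precision. A minimal sketch of the format using only the standard library (this version truncates the low mantissa bits; real hardware typically uses round-to-nearest-even):

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Keep the 16 high bits of an IEEE-754 float32: sign, the full
    8-bit exponent, and the top 7 mantissa bits -- i.e. bfloat16."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bf16_bits_to_f32(bits16: int) -> float:
    """Expand a bfloat16 bit pattern back to float32 (the 16 dropped
    mantissa bits come back as zeros)."""
    (x,) = struct.unpack("<f", struct.pack("<I", bits16 << 16))
    return x

def round_to_bf16(x: float) -> float:
    """Simulate storing a value in BF16 (by truncation)."""
    return bf16_bits_to_f32(f32_to_bf16_bits(x))
```

Because the exponent range matches float32, BF16 rarely overflows where float32 would not, which is a key reason it is favored over FP16 for training and inference workloads like those benchmarked here.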
Implications
- Validation of RISC-V in AI: The findings validate that RISC-V based architectures, like Grayskull, can offer competitive power-performance ratios suitable for energy-conscious AI and LLM deployment scenarios.
- Alternative Compute Viability: Although not achieving the absolute highest raw throughput of top-tier GPUs, the high TFLOPs/Watt figure positions Tenstorrent as a strong alternative for data centers or edge applications where power budgets are constrained.
- Ecosystem Growth: Providing public performance data on a major commercial RISC-V AI accelerator contributes critical benchmarks, encouraging further development and investment in the open-source instruction set architecture for demanding computational fields like AI/ML.