Embedded FPGA Optimised for AI and Machine Learning

Achronix offers its latest Speedcore Gen4 eFPGA IP optimised for AI/ML and networking hardware acceleration applications

Achronix has announced the availability of the Speedcore Gen4, a new embedded FPGA (eFPGA) IP designed as an AI (artificial intelligence) accelerator for integration into users’ SoCs. The company says the new IP architecture delivers substantial improvements in performance, power and area: all issues that matter to silicon designers.

The company notes that Speedcore Gen4 increases performance by 60%, reduces power consumption by 50% and shrinks die area by 65%, while retaining the original Speedcore eFPGA IP’s ability to bring programmable hardware acceleration to a broad range of compute, networking and storage systems for interface protocol bridging/switching, algorithmic acceleration and packet-processing applications.

Higher performance for AI/ML applications

By adding the new Machine Learning Processor (MLP) to the library of available blocks, the Speedcore Gen4 architecture is claimed to offer 300% higher system performance for artificial intelligence and machine learning (AI/ML) applications.

“MLP blocks are highly flexible compute engines tightly coupled with embedded memories to give the highest performance per watt and the lowest-cost solution for AI/ML applications”, the company said.

“Achronix Speedcore eFPGA with Gen4 architecture provides an optimal balance of hardware acceleration previously found only in ASIC implementations,” said Robert Blake, president and CEO of Achronix Semiconductor. “Our new architecture adds the flexibility and reprogrammability of our proven FPGA technology to support exploding demand for new AI/ML and high data bandwidth applications.”

“The dramatic increase in fixed and wireless network bandwidth, coupled with the redistribution of processing and the emergence of billions of IoT devices, will stress traditional network and compute infrastructure. Classic Cloud and Enterprise Data Centre computing resources and communications infrastructure can no longer keep pace with exponential growth in data rates, rapidly changing security protocols, or the many new networking and connectivity requirements. Traditional multicore CPUs and SoCs cannot meet these requirements unaided. They need hardware accelerators, often reprogrammable, to pre-process and offload computations and so increase the systems’ overall compute performance”, Achronix said in the announcement.

“The AI/ML wave continues to gain momentum with more IP geared toward AI applications. Achronix’s announcement about its new IP architecture is encouraging and offers substantial improvements in performance, power and area: all issues that are important to silicon designers,” said Rich Wawrzyniak, senior analyst, ASIC Services. “The introduction of eFPGA IP aimed at AI/ML applications that can be cost-effective in both Cloud servers for training and in end-point devices for inference applications reinforces the view that AI functionality will become a ‘check-list’ item in most silicon solutions going forward.”

Speedcore Gen4 design tools

Achronix’s ACE design tools include pre-configured Speedcore Gen4 eFPGA example instances that users can evaluate for quality of results in terms of performance, resource usage and compile times. The ACE design tools with support for Speedcore Gen4 are available today.

Availability

Speedcore Gen4 IP is available for licensing on the most advanced FinFET processes today.
