Scalable and Power-Efficient Neural Processing Units

The Cadence Neo NPUs are energy-efficient hardware AI engines that can be paired with any host processor to offload artificial intelligence and machine learning (AI/ML) processing. The Neo NPUs target a wide variety of applications, including sensor, audio, voice/speech, vision, radar, and more. Their comprehensive performance range makes the Neo NPUs well-suited for everything from ultra-power-sensitive applications such as IoT and hearables/wearables to high-performance systems in AR/VR, automotive, and more.

The product architecture natively supports the processing required for many network topologies and operators, allowing for a complete or near-complete offload from the host processor. Depending on the application’s needs, the host processor can be an application processor, a general-purpose MCU, or a DSP for pre-/post-processing and associated signal processing, with the inferencing managed by the NPU.


Differentiate Your Design While Delivering Market-Leading Capabilities

Flexible System Integration

The Neo NPUs can be integrated with any host processor to offload the AI portions of the application

Scalable Design and Configurability

The Neo NPUs support up to 80 TOPS with a single-core and are architected to enable multi-core solutions of 100s of TOPS

Efficient in Mapping State-of-the-Art AI/ML Workloads

Best-in-class inferences-per-second performance with low latency and high throughput, optimized to deliver high performance within a low-energy profile for classic and generative AI

Industry-Leading Performance and Power Efficiency

High inferences per second per area (IPS/mm²) and per power (IPS/W)

End-to-End Software Toolchain for All Markets and a Large Number of Frameworks

NeuroWeave SDK provides a common tool for compiling networks across IP, with flexibility for performance, accuracy, and run-time environments

Performance Options to Fit Your Application

  • Single-core performance up to 80 TOPS
    • Configurable range of 256 MACs per cycle to 32k MACs per cycle
    • Upward-scalable with multi-core topologies for 100s of TOPS
  • Efficient offload and execution of neural network processing from any application host processor
  • Built-in support for many networks, including CNN, RNN, Transformer, and more
  • Built-in support for multiple data types, including int4, int8, int16, and fp16
  • Application targets varied across many domains (sensor, audio, vision, radar) and markets (IoT, hearables/wearables, AR/VR, automotive)
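The relationship between the configurable MAC count and the headline TOPS figure is simple arithmetic: each MAC performs two operations (a multiply and an add) per cycle. The sketch below illustrates this under an assumed 1.25 GHz clock, which is purely illustrative; actual clock rates depend on the process node and implementation.

```python
def peak_tops(macs_per_cycle: int, clock_ghz: float) -> float:
    """Peak throughput in TOPS, counting each MAC as 2 ops (multiply + add)."""
    ops_per_second = macs_per_cycle * 2 * clock_ghz * 1e9
    return ops_per_second / 1e12  # tera-ops per second

# Smallest and largest single-core configurations from the list above,
# at an assumed (illustrative) 1.25 GHz clock:
print(peak_tops(256, 1.25))        # prints 0.64
print(peak_tops(32 * 1024, 1.25))  # prints 81.92
```

Note that the largest configuration (32K MACs per cycle) lands at roughly 80 TOPS at this assumed clock, consistent with the single-core figure quoted above; the smallest (256 MACs per cycle) sits well under 1 TOPS, which is the kind of budget sensor and audio workloads typically need.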

Need Help?

Cadence is committed to keeping design teams highly productive with a range of support offerings and processes designed to keep users focused on reducing time to market and achieving silicon success.

Free Software Evaluation

Try the SDK software development toolkit free for 15 days and see for yourself how easy the Eclipse-based IDE is to use.

Apply Now

Training

Tensilica hands-on training is proven to dramatically accelerate your understanding of Tensilica tools and your productivity with the products.

Browse Catalog

Online Support

Access a knowledge base of the latest articles and technical documentation anytime, 24 hours a day. (Login required)

Access Now

Xtensa Processor Generator (XPG)

The Xtensa Processor Generator (XPG) is the core of Xtensa technology. This patented, cloud-based system delivers a correct-by-construction processor and automatically generates everything associated with it, including all the software, models, and more. (Login required)

Launch XPG

Technical Forums

Join the community on the technical forums to discuss and refine your design ideas.

Find Answers