CadenceLIVE Israel – OnDemand

Keynote

Fueling the Data-Centric Revolution

Lip-Bu Tan, CEO, Cadence

Computational Software for Intelligent System Design

Cadence is a leading provider of computational software, hardware, and IP for designing next-generation intelligent electronic systems. Learn how the company has expanded from its early days of electronic design automation (EDA) software specifically for semiconductor design to full solutions that address bigger and faster chips and systems targeting cutting-edge technology for autonomous vehicles, 5G, industrial IoT, intelligent medical devices, cloud infrastructure, and so much more. Nearly every major technology trend, from cloud computing and AI to aerospace, data science, and robotics, relies on Cadence for world-class design hardware and software. Our customers can design and verify their enormously complex chip designs accurately and with a shortened time to market.

Dr. Anirudh Devgan, President, Cadence

Startup Nation – A Government Facilitating Innovation

Mr. Aharon Aharon, CEO of the Israel Innovation Authority

Cloud

Embracing Cloud for Global, High-Performance Design Teams

With the rapid growth in design complexity and the demands of leading process nodes, the compute and infrastructure needs of next-generation designs pose daunting new challenges. That’s why every high-performance team is looking at the cloud with great interest. The scalability and agility offered by the cloud address many of the gaps in design infrastructure. However, transitioning to the cloud requires thoughtful decisions about cloud architecture, data management, infrastructure setup, and security, to name a few.

In this session, we will discuss the pros and cons of various cloud architectures, their suitability for design flows, and the IT needs for a successful cloud transition. We will also describe the Cadence Cloud solutions used by over 100 customers to successfully embrace the cloud for their production designs.

Ketan Joshi, Cadence

Scaling Semiconductor Design Workflows on AWS

David Pellerin, AWS

Custom/Analog Design

Advanced Analog Design Utilities

Tower Semiconductor presents its advanced analog design support using the Cadence design flow. As analog designs get more complex and require higher accuracy and first-time success, Tower offers advanced analog tool support such as Voltus-Fi, the Virtuoso Automated Layout Enhancer (VALE), and a dedicated SiPho design flow. In this presentation we will show how the technology was created as well as its usability, demonstrating on a reference design created together with Cadence: SiPho pcells using the Cadence CurvyCore engine, VALE, a yield-enhancement tool working directly on the Virtuoso layout, and Voltus-Fi for analog IR drop analysis.

Ofer Tamir, Tower Semiconductor

Tech Overview - Custom/Analog 1

Yuval Shay, Cadence

Technology Overview - Art Schaldenbrand

Art Schaldenbrand, Cadence

Synchronous Detection for Ultra-Wide Band Communication

The demand for reliable high-speed communication services has increased with the fast development of modern communication techniques. These techniques support, complement, and replace existing wired solutions, handling ever-increasing amounts of data. To enable high-data-rate communication, the product of spectral efficiency (in bits/sec/Hz) and channel bandwidth (in Hz) must be high enough. At microwave frequencies the available bandwidth is relatively low, so the spectral efficiency must be very high. This leads to complex and sophisticated circuitry and higher-order modulations, and hence to high power consumption and cost. A possible way to relax the complexity of the circuitry and the digital processing effort is to decrease the spectral efficiency and increase the bandwidth accordingly. Wireless communication systems that operate at frequencies of 71-86, 240, and 300 GHz can address this issue. However, despite the dramatically decreased spectral efficiency, the high data rates resulting from the wide bandwidth become very challenging to process. Analog-to-digital converter and digital signal processor implementation is seen as a major limitation due to their high power consumption and cost. This can be addressed by performing carrier phase and frequency synchronisation of the received signal in the analog domain with relatively little additional hardware effort and power. In this research we present a modulation-independent carrier recovery technique. The technique uses a feed-forward controlled leakage transmission carrier, which is transmitted together with the modulated data. The integrated receiver, designed for BPSK modulation, was implemented in a SiGe 0.13um heterojunction bipolar transistor process by IHP. The receiver was tested with an E-band BPSK signal of 640 Mbps.
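As a rough illustration of the tradeoff described above, the achievable data rate is simply the product of spectral efficiency and channel bandwidth (a sketch assuming an ideal channel; the 640 Mbps figure is the receiver's reported test signal):

    R = η × B    (R: data rate in bit/s, η: spectral efficiency in bit/s/Hz, B: bandwidth in Hz)

With BPSK at roughly η ≈ 1 bit/s/Hz, a 640 Mbps link needs on the order of 640 MHz of bandwidth, which is readily available in the 71-86 GHz band but not at typical microwave frequencies.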

Aleksey Dyskin, Inxpect

Latest Updates in Virtuoso EM-IR Flow

Joy Han and Harsha Hakkal, Cadence

Digital Implementation and Signoff

Technology Overview - Digital Implementation

Rod Metcalfe, Cadence

Tech Overview - Front-End Design

Rob Knoth, Cadence

Tech Overview - Signoff

Brandon Bautz, Cadence

22FDX Adaptive Body Bias multi-core, multi-voltage SoC test chip design using Cadence digital implementation flow

GLOBALFOUNDRIES 22FDX fully depleted silicon-on-insulator (FD-SOI) technology offers an optimal combination of performance, low power, and cost for mid- and high-performance CPU-based designs powering differentiating solutions in the mobile and pervasive computing space. GLOBALFOUNDRIES has executed several design optimization studies to demonstrate the benefits of 22FDX technology, exploiting the additional optimization space provided by body biasing techniques, and has developed a family of 22FDX test chips that provide the infrastructure for implementing various SoC CPU cores. The presentation discusses a SoC test chip implementation comprising an Arm Cortex-A53 quad-core CPU and Cadence Tensilica HiFi5 and Fusion F1 cores, which constitute several voltage and body bias domains. The test chip architecture will be introduced, including the Adaptive Body Bias sub-systems. Based on the Cadence digital implementation flow, specific decisions are discussed in the areas of DfT, power intent, and timing sign-off.

Ralf Flemming, GLOBALFOUNDRIES

How We Push the Largest 5nm High-Performance Arm Core to 4GHz

The race for compute performance at minimum power has for decades been one of the major challenges in electronic design. It now reaches an even higher level of priority given the increasing need for high-performance CPUs in most market segments, such as HPC, server, client, and automotive. The latest technology nodes and advanced Arm cores allow us to reach aggressive PPA targets. Nevertheless, the challenges in breaking the 4.0GHz barrier are numerous and inter-related, which translates into complex physical implementation development. To address the competitive high-performance CPU market, cores are getting larger in physical size and design complexity and must be implemented in the latest technology node. To push the core to the maximum achievable frequency, we had to optimize and fine-tune physical implementation techniques to the specifics of the design, the EDA tools, and the technology node. As an example, local logic-dominated paths benefit from the technology node's performance uplift, but in contrast, critical paths on the latest large cores are extremely difficult to close at high frequency due to their heterogeneous nature, which combines highly multiplexed logic with large fanin/fanout and critical memory/transport. As part of a cross-functional initiative with Cadence and the Arm CE-CPU group, the Arm PDG Advanced Product Development group in Sophia was given the critical task of investigating and finding a path to break the 4.0GHz limit. In this presentation, after an overview of the context and the challenges that were ahead of us when we started this performance push, we will describe how we managed to reach our ultimate PPA target, thanks to a tight collaboration with Cadence and a thorough investigation of how to best combine the use of very advanced physical IP, technology, EDA tools, and flow.

Olivier Rizzo, ARM
Stephane Cauneau, ARM

Timing Signoff in the cloud with Tempus and Quantus

This talk will address TSMC’s brand-new cloud strategy of Scale-Out and Scale-In for timing sign-off, helping customers speed up run-time in a big way while achieving cost savings at the same time. By injecting in-depth cloud and IT knowledge into OIP EDA enablement, TSMC and its Cloud Alliance members Cadence and Microsoft jointly created cloud-optimized design solutions in Tempus and Quantus, with concrete benefits validated using Microsoft’s latest virtual machines (VMs), announced in mid-2020 and targeting EDA runs. A TSMC Cloud Alliance white paper on the subject is immediately available for customer download from the TSMC-Online portal (online.tsmc.com).

Willy Chen, TSMC

IP Solutions

Not All 112G/56G SerDes Are Born Equal - Select the Right PAM4 SerDes for Your Application

Hyperscale computing continues to be the main driver for very high-speed SerDes, and 112G/56G is a key enabler for cloud data center and optical networking applications. 56G connectivity is particularly important for 5G infrastructure deployment, both in baseband and remote radio head systems. After being first to market in 2019 with silicon-proven 112G-LR SerDes on TSMC 7nm technology, we have now expanded our high-speed portfolio to include PPA-optimized 56G-LR in TSMC N7/N6 processes to address the connectivity needs of the 5G infrastructure and AI/ML markets. We have all the building blocks to accelerate the adoption and deployment of cost-effective 100G and 400G networks, as well as a trajectory to 25.6T switches. However, not all PAM4 high-speed SerDes are born equal. In this presentation, we will share the various flavors of LR, MR/VSR, and XSR high-speed SerDes and where they fit best in the end application space. You will also learn the tradeoffs between data rate, power, performance, area, insertion loss, and flexibility of use to help you choose the right PAM4 SerDes for your next design.

Wendy Wu, Cadence

Latest Trends in Memories and Cadence Offerings

Marc Greenberg, Cadence

Updating Your Automotive SoC from 16FFC to N7

Thomas Wong, Cadence

IP and Block Implementation

Pushing Verification Throughput With Cadence

Umer Yousufzai, Cadence

Sign-off clock gating verification using JG SEC

Clock gating (CG) techniques are broadly used in the industry, mainly to save power or for security reasons. Yet it is hard to verify the correctness of the CG logic: how can one be sure that the clock won’t be gated when it shouldn’t be? The common method is to run functional regressions with the CG cells. Nevertheless, this doesn’t fully guarantee that the addition of the CG has no harmful impact; as long as verification is based on dynamic simulation, we are bound by coverage limitations and vector reachability. This abstract describes how to leverage the Sequential Equivalence Checking (SEC) app to compare two designs and prove their sequential behavioral equivalence end to end. In this technique, one design is defined as the golden model and the second as the implementation. In our case, the two are identical except that the clock is free-running in the golden model, while logically, for any given input vector, the outputs should behave the same. The goal is to prove that under any condition both designs behave identically and the low-power optimization has no functional impact on the design. Using the JasperGold SEC app, we were able to formally prove that the CG logic inserted into the implementation design was equivalent to the golden design with free-running clocks. In this presentation we will show the results of applying this method to various modules. The presentation will explore the challenges identified, alongside implementation examples and the implications for the verification infrastructure. We will demonstrate the various issues and counterexamples found, as well as analyze the runtime and non-convergence issues. Finally, we present a breakdown of this method and a comparison with the simulation-based flow.
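To make the equivalence being proven concrete, here is a toy model (plain Python, not the JasperGold flow; the signal names and the enable-based recirculating-mux scheme are illustrative) of a register driven by a free-running clock versus the same register driven by a gated clock:

```python
import random

def golden(d_seq, en_seq):
    """Free-running clock: the flop is evaluated every cycle and
    recirculates its value when the enable is low (q <= en ? d : q)."""
    q, out = 0, []
    for d, en in zip(d_seq, en_seq):
        q = d if en else q
        out.append(q)
    return out

def implementation(d_seq, en_seq):
    """Clock-gated version: the flop only sees a clock edge while the
    enable is asserted, so it simply holds its value otherwise."""
    q, out = 0, []
    for d, en in zip(d_seq, en_seq):
        if en:          # gated clock pulses only while en is high
            q = d
        out.append(q)
    return out

if __name__ == "__main__":
    # Random stimulus stands in for the formal proof here: SEC proves
    # the same property exhaustively, for all reachable input sequences.
    for _ in range(1000):
        d = [random.randint(0, 1) for _ in range(64)]
        en = [random.randint(0, 1) for _ in range(64)]
        assert golden(d, en) == implementation(d, en)
    print("golden and clock-gated models match on all sampled sequences")
```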

Netanel Miller, Texas Instruments

Verification of Multi-Language Components - A Case Study: Specman E Environment with SystemVerilog UVM UVC

Verification of a complex SoC today demands the use of verification IP from diverse sources. The ability to utilize available verification components and embed them into an existing verification environment, which often consists of different languages, is of great importance. The Accellera UVM-ML Open Architecture provides the ability to assemble and co-simulate components that are written in different languages. Nevertheless, some synchronization aspects, such as sequence alignment and data transport between those components, are left for one's own determination. In this paper, we demonstrate a common case for multi-language necessity: an SoC that is generally verified with a Specman E environment and utilizes an SV UVM verification component from an external vendor. In the implementation of this system, we deployed a mechanism for data and bilingual sequence synchronization. In this project, we also deal with a dilemma: in what circumstances is it better to translate (or rewrite) code into another language, rather than combine it in a different-language environment?

Eran Lahav, Veriest Solutions

Verifying real firmware flows in Pre-Silicon simulations

One of the challenges we face today is verifying real firmware flows in pre-silicon simulations and finding HW bugs related to FW flows before silicon. In this presentation, we will share how we connected the design verification functions written in Specman with FW functions written in C, using automation that encapsulates the C-to-e / e-to-C connectivity with macros, enabling a simple user interface.

Omer Glazman, NVIDIA

Verification Reuse Taken to the Next Level with Portable Hardware-Software Interface

Since the release of Accellera's Portable Stimulus Standard (PSS) in 2018, industry adoption has been growing steadily, with public reports of major productivity and efficiency gains. However, one of the key promises of a portable stimulus language is only now materializing into a solution, with the expected release of PSS version 2.0: the ability to encode lower-level device programming logic in terms that are truly portable across verification environments, from IP to subsystem to full chip, and across verification platforms, from virtual platform through simulation and emulation all the way to post-silicon. The inclusion of general procedural-language constructs, together with built-in support for memory management, register descriptions, and various read/write operations, allows PSS models to program and control hardware devices directly. Given such descriptions, tools generate test code that drives testbench transactors or embedded processor cores, and possibly a mixture of these, with the push of a button. With that, full reuse is achieved from the top-level test intent all the way down to the low-level implementation. Over the past year, early-adopter PSS users have been deploying these capabilities in real project settings. Their experience provides both a positive indication of the validity of the solution and some practical takeaways.

Matan Vax, Cadence

PCB and System Analysis

Holistic approach for power management verification of a broad power profile SoC

The IoT market brings extreme competition in power efficiency and utilization. Our work presents a way to validate Power Management Unit (PMU) operation at the lowest power across different device modes (standby / low power / active) and transitions, across PVT, for all voltage rails. We offer a holistic flow for power management verification: collecting the system definition of power states for all IPs, analyzing it with the Power Analyst tool to generate per-rail power information for all SoC states, and finally verifying it by generating complicated system-level scenarios and composing an IP activity mixture that stresses the device to its limits under real system numbers.

Guy Shubeli, Texas Instruments

What's New in OrCAD 17.4 Release

Alex Slutski, EDAIs

Introducing In-Design Analysis with OrCAD and Allegro Technologies

Dirk Mueller, FlowCAD

Introducing InspectAR: Accelerating Hardware Development Through Augmented Reality

Mihir Shah, Cadence

RF to Millimeter-Wave Front-End Component Design for 5G NR

5G New Radio (NR) networks represent the next milestone in enhanced mobile communications, targeting more traffic, increased capacity, and reduced latency and energy consumption, made possible through multiple enabling technologies. To achieve 5G NR performance targets, these communication systems must improve spatial efficiency using multiple-input multiple-output (MIMO) and beam-forming antenna arrays while increasing bandwidth using millimeter-wave (mmWave) spectrum and carrier aggregation (CA) techniques for greater spectral efficiency in the sub-6GHz band. Each of these technologies presents numerous design challenges for engineers developing and integrating components in RF/mmWave front ends. This presentation examines some of the technical requirements and design challenges in developing high-frequency components supporting 5G NR communications, from beam-steering antenna arrays to mmWave MMIC power amplifiers, using gallium nitride (GaN) semiconductor technology. An overview of typical 5G NR system requirements will be followed by a discussion of how these requirements impact component performance specifications. RF link budget analysis and time-domain analysis will be used to examine bit error rate (BER) and error vector magnitude (EVM) measurements using communication system analysis. The physical implementation of a receiver based on a 45nm complementary metal-oxide-semiconductor (CMOS) RFIC will also be presented.
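For context on the link budget analysis mentioned above, a minimal free-space sketch (assuming line-of-sight propagation only, with no fading or implementation margins):

    P_rx [dBm] = P_tx [dBm] + G_tx [dBi] + G_rx [dBi] - FSPL [dB],   where FSPL = 20*log10(4*pi*d*f / c)

Because free-space path loss grows by 20 dB per decade of carrier frequency, moving from 3.5GHz to 28GHz costs roughly 18 dB of link budget at the same distance, which is why the beam-forming array gains G_tx and G_rx are essential to close mmWave links.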

David Vye, Cadence

SoC and System Verification

Pushing Verification Throughput With Cadence

Umer Yousufzai, Cadence

Test Reuse and code shrink-down using PSS Test Tables

The Portable Test and Stimulus Standard (PSS) enables the representation of stimulus and test scenarios usable across many levels of integration and under different configurations, facilitating the generation of diverse scenarios running on a variety of execution platforms. PSS tools can generate test content that effectively covers global flows and concurrency validation spaces on diverse platforms such as simulation, emulation, virtual platform, and silicon, and this already provides significant reuse. Still, validation teams face increasing challenges in maintaining and reusing existing content across SoC products. Capturing variations between tests in Excel/CSV tables, which get translated to PSS, can make test suite descriptions even more concise and manageable compared to coding them directly in PSS. Using this approach, users can develop generic scenarios with multiple automated variations on top of the generic regression test suite. Test content management is an enormous challenge for developers across organizations, especially when it comes to legacy content. Large amounts of legacy content are owned by many content creators across the board. With more owners adding test content to the centralized content repository and less bandwidth for maintaining that legacy, content reusability and maintainability problems become more common. Furthermore, integration validation complexity is increasing over time; new platform options might double the enablement and maintenance efforts; and test content and environments need to be managed separately for the various platforms. The proposed presentation will describe the value proposition of using the test table approach on top of PSS to help integration validation teams overcome the challenges of content reusability and maintainability. It will discuss how legacy content consolidation and migration work, showing how this approach can reduce the total number of legacy tests by a factor of 10.
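As a minimal sketch of the table-driven idea described above (the CSV columns, the generated action and attribute names such as dma_xfer, channels, and len, and the output file layout are all hypothetical; a real flow would target the team's own PSS model and tool):

```python
import csv
from pathlib import Path

# Hypothetical variation table: each row describes one test variant.
TABLE = """name,dma_channels,packet_len,platform
smoke_small,1,64,sim
stress_wide,8,4096,emu
"""

# Illustrative PSS template; {{ and }} are literal braces after .format().
TEMPLATE = """\
// Auto-generated from the variation table -- do not edit by hand.
extend component pss_top {{
  action {name}_test {{
    activity {{
      do dma_xfer with {{ channels == {dma_channels}; len == {packet_len}; }};
    }}
  }}
}}
"""

def generate(rows, out_dir: Path) -> None:
    """Emit one PSS test-variant file per table row."""
    out_dir.mkdir(exist_ok=True)
    for row in rows:
        # The 'platform' column could instead select a tool configuration
        # rather than change the generated model.
        (out_dir / f"{row['name']}.pss").write_text(TEMPLATE.format(**row))

if __name__ == "__main__":
    rows = list(csv.DictReader(TABLE.splitlines()))
    generate(rows, Path("generated_tests"))
```

The point of the sketch is only the division of labor: the generic scenario lives once in the PSS model, while the table owns the variations, so adding a regression variant is a one-line CSV edit rather than a new hand-written test.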

Gassan Tabajah, Intel

Protium X1 Use Case

Yohay Shefi, NVIDIA

Using the Palladium DPA Tool for Power Estimation

We had concerns regarding the design's power consumption and wanted a power estimate prior to the netlist stage. We decided to use the activity factor as a criterion that, combined with the power data of the cells we use, can predict the design's power consumption. The mean activity factor is calculated as: Activity_Factor = Toggle_count / (num_of_flops * clock_cycles). We decided to use the DPA tool on a Palladium simulation to extract the needed information. I uploaded the design to the Palladium and, with the help of the algorithms team, created a typical high-activity scenario. I used a specific trigger to record the interesting time interval and then used the DPA tool to create two SAIF files (one for registers only and one for all design nodes) and a toggle count file. From the SAIF file I extracted the toggle count data, summed all the toggles, and calculated the activity factor. Using the DPA tool, I converted the toggle count file into a waveform plot. Opening the waveform in SimVision, I could dive into the design's hierarchy and see visually which module has a high toggle rate and at what time. The data from the toggle count waveform helped me and the algorithms team fine-tune the recorded interval to get a higher activity rate. Once we had the mean activity factor for the scenario, we were able to extrapolate the power estimation to meet our goal.
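A minimal sketch of the activity-factor arithmetic described above (the toggle-report format is an assumption here: a simple "signal toggle_count" text file exported from the register-only toggle data; the real flow works from the DPA/SAIF outputs directly):

```python
from pathlib import Path

def mean_activity_factor(toggle_report: Path, num_flops: int, clock_cycles: int) -> float:
    """Activity_Factor = total_toggle_count / (num_of_flops * clock_cycles).

    The report is assumed to contain one "signal_name toggle_count" pair
    per line; lines that do not match this shape are skipped.
    """
    total_toggles = 0
    for line in toggle_report.read_text().splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[1].isdigit():
            total_toggles += int(parts[1])
    return total_toggles / (num_flops * clock_cycles)

if __name__ == "__main__":
    # Illustrative numbers only: 200k flops observed over a 1M-cycle window.
    af = mean_activity_factor(Path("register_toggles.txt"),
                              num_flops=200_000,
                              clock_cycles=1_000_000)
    print(f"mean activity factor = {af:.4f}")
```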

Ran Gross, AutoTalks

Harnessing emulation platforms for faster pre-silicon validation

The never-ending race of advanced communication devices increases design complexity and requires the collaboration of multidisciplinary design teams. Designs that exceed tens of millions of ASIC gates are difficult to validate, and validation becomes more and more time consuming. Furthermore, having an embedded CPU, or several CPUs, in the design further complicates verification and validation. Our team has been designing WLAN access points for more than 15 years, starting in the WiFi4 (802.11n) days; these days we are working on a next-generation design that will be compatible with the new emerging WiFi7 standard, exceeding 10Gbps over the air, capable of multi-station concurrent transmission/reception, and packing many more features. Besides the high bandwidth, WLAN products are required to operate at various RF bandwidths, modulations, and bit encoding schemes, and to transmit and receive packets of various lengths (usually tens, hundreds, or even thousands of packets in a single session). All of the above becomes a major challenge for verification and pre-silicon validation. This is where our three-stage approach kicks in: starting with unit-level and system-level simulations; going through Palladium emulators, which let us dramatically increase test vector coverage and combine our production FW with the HW design, starting with deep-level debugging of either HW or FW components and running multiple scenarios and regressions; and finally moving to FPGA prototyping for long regressions. Our Palladiums let us take calculated risks and dramatically reduce verification and validation time. In my presentation I will go in depth over the methodologies we developed over the years and focus on the benefits we gain from using a few flavors of emulation platforms.

Amir Rosenblum, MaxLinear

Special Session

Women In Tech: How to build your brand

Alessandra Costa, VP of Technical Field Operations, Cadence