CadenceCONNECT: Mission Critical Repository

5G/RF

Chip-Package Co-Design on High-Performance Transceivers Using Virtuoso RF Solutions

In this paper we present how ADI has successfully employed Cadence's Virtuoso RF flow for chip-package co-design and demonstrate how it offers significant advantages over existing approaches. Currently, chip-package co-design is supported with a two-tool methodology in which Virtuoso is employed for chip design and schematic entry, and Allegro is employed for package design. As ADI's single-IC-in-package and SiP products increase in complexity, and as time-to-market continues to be a key business objective, the bridge between Virtuoso and Allegro is becoming a critical bottleneck. In the new methodology, package design can be done directly inside Virtuoso through a multi-technology framework. Some of its many features include: IC layouts and packages can be directly overlaid on each other within the Virtuoso environment, package layouts are easily transportable between Virtuoso and Allegro, and chip-package layouts can be probed, all from within Virtuoso. The new methodology brings a host of advantages to ADI's design process. Planning of bump and ball layouts can now be done far more efficiently. Critical package nets can be designed and optimized within Virtuoso with IC layouts present, and then easily moved to Allegro. Debug and probing of package designs is now as convenient as in existing chip-design flows. Co-design becomes a more practical reality. Using ADI's RF transceiver products, we will illustrate some of the many advantages offered by this new methodology for large SoC package developments.

Watch Video

Co-Modeling of IC+Package Using EMX Software

It is well known that accurate modeling of interconnect lines and other passive components, such as spiral inductors, is essential for correctly predicting the performance of IC designs. Tools such as QRC (and its predecessors) have been successfully used for over two decades to rapidly extract accurate distributed RC models of large IC layouts. As circuit speeds increased, broadband models became a necessity for critical signal nets, which in turn drove the need for fast and accurate IC-level electromagnetic solvers. Tools such as EMX, specifically designed for this purpose, have been successfully used for modeling both interconnect and passive structures up to 80 GHz and beyond. For example, at Analog Devices EMX is an essential tool in the design of nearly all our transceiver and RF products, to name just two product areas. As IC performance and speed both continued to increase, package performance and speed followed, and hence tools and flows were developed to support the electromagnetic modeling of complex package designs. This in turn has led to a new modeling challenge that is now upon us: the co-modeling of IC + package, where electromagnetic coupling between IC and package, and signal chain behavior from circuit through to package ports, can significantly impact system performance. Failure to co-model these physically distinct domains can be very costly, and in the worst case can force a re-spin of not just a package design but of one or more ICs. Electromagnetic simulation tools traditionally used for package modeling can be coaxed to include IC-level interconnect and passive structures in a simulation, but this approach can run into challenges when dealing with the many complexities of deep-submicron back-end-of-line technologies. Alternatively, tools targeted at IC-level modeling, which have been designed for submicron technologies, can usually be coaxed more easily to include key package elements. In this talk we will focus on the latter approach. We will present examples from our RF and transceiver product lines where we have successfully used EMX to simulate IC + package. We will also review the key assumptions that are in play when extending EMX to IC + package and the accuracy limitations that arise. In addition, we will address the topic of ground planes in the context of an IC + package simulation, and the importance of understanding the fundamental differences between ideal and physical grounds. We will conclude with a summary and an outline of plans for future work.
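
For context on the ideal-versus-physical ground distinction raised above (a standard result, included for reference rather than taken from the talk): an ideal ground is a zero-impedance equipotential, whereas a physical ground return path has finite resistance and inductance, so return currents develop a voltage across it.

```latex
% Voltage developed across a physical ground return carrying current i(t)
v_{\mathrm{gnd}}(t) = R_{\mathrm{gnd}}\, i(t) + L_{\mathrm{gnd}}\,\frac{di(t)}{dt}
\qquad\text{(ideal ground: } R_{\mathrm{gnd}} = L_{\mathrm{gnd}} = 0\text{)}
```

In an IC + package simulation, this is why the choice of reference and the points where package and IC grounds are tied together can influence the simulated results.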

Watch Video

Designing 5G/mmW Products in the Cadence Cloud on AWS

Interest in 5G and millimeter wave (mmW) devices is rapidly expanding in today's world. From 5G small cells and phased arrays, to automotive radar, to IoT devices, the challenges of accurately designing and modeling these projects require a stable platform and dedicated hardware resources that run efficiently, especially for a company with limited resources to devote to maintaining that environment. As a startup based in Moorestown, NJ, Otava utilizes the Cadence cloud, hosted on Amazon Web Services (AWS), to enable our product development. Designing 5G and mmW products requires several tools working together seamlessly, and third-party tools can be loaded into the cloud environment to support the team's design. The cloud approach to hardware and software maintenance allows designers to focus on what they do best: design. Otava's mmW flow through AWS delivered a complex ultra-wideband mmW beamforming SoC within a year with only six designers spread across multiple states, and the fabricated design demonstrated first-pass millimeter wave performance.

Watch Video

From Radar to Radios: System and Microwave Design

From radar to radio systems design, the Cadence AWR software platform enables RF/microwave engineers to move from concepts to prototypes. This presentation will touch upon basic radio system design through the construction and demonstration of a quadrature amplitude modulated (QAM) radio. Fundamentals of signal processing, pulse shaping for reduced bandwidth, noise, and the maximum capacity of a system will be covered as well. Lastly, microwave components such as couplers, power dividers, and phase shifters built from transmission lines, as well as active devices such as mixers and low-noise amplifiers (LNAs), will be discussed.
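
For reference, two standard results behind the pulse-shaping and capacity fundamentals mentioned above (textbook relations, not specific to the demonstration): the occupied bandwidth of a raised-cosine-shaped QAM signal and the Shannon capacity limit.

```latex
B_{\mathrm{occ}} \approx R_s\,(1 + \alpha)
\qquad\text{(symbol rate } R_s\text{, roll-off factor } \alpha\text{)}

C = B \log_2\!\left(1 + \tfrac{S}{N}\right)
\qquad\text{(channel bandwidth } B\text{, signal-to-noise ratio } S/N\text{)}
```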

Watch Video

Loop Gain Envelope Amplifier Stability Analysis Using Microwave Office

This presentation will discuss the theory of loop gain envelope stability analysis and show how to use the technique in Microwave Office, focusing on a practical PA design example. The topic is presented at the request of the Microwave Office team.
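
As background, the envelope technique builds on the classical loop-gain stability criterion (shown here for context; the envelope extension itself is the subject of the talk): with loop gain L(jw), instability is indicated when L encircles the -1 point, and the familiar margins are read off where the magnitude and phase conditions below hold.

```latex
\text{Gain margin:}\quad GM = -20\log_{10}\lvert L(j\omega_{180})\rvert,
\qquad \angle L(j\omega_{180}) = -180^{\circ}

\text{Phase margin:}\quad PM = 180^{\circ} + \angle L(j\omega_c),
\qquad \lvert L(j\omega_c)\rvert = 1
```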

Watch Video
Advanced Packaging/Heterogeneous Design

Design and Simulation Flow for Highly Integrated, Heterogeneous Packaging Solutions

With demand for higher levels of system integration, novel heterogeneous packaging technologies are emerging as a viable path toward increased Input/Output (IO) density and improved system performance. These non-standard processes provide greater flexibility in terms of system design and capability; however, they require new design flows for system-level planning, modeling, and verification. Specifically, power modeling and timing analysis are critical in guaranteeing reliable system performance, but they pose challenges when executing a heterogeneous design. To enable these packaging solutions, the Air Force Research Lab (AFRL) is leveraging existing Cadence tool flows to compose an atypical design process, giving improved visibility and control across integrated circuit (IC), package, and board efforts. The AFRL's motivation for this integrated flow is to create an effective, cross-domain simulation and verification environment executed by a small and agile design team. As a first step toward developing an agile, cross-domain design and verification flow, the AFRL implemented a 150 um pitch system-on-chip (SoC) with a double data rate 4 (DDR4) memory interface. Cadence's system connectivity tool, OrbitIO, was incorporated during package planning, aiding via and routing optimization while supporting simultaneous board-level alterations. In order to assess integrated performance, a simulation environment was developed for the DDR4 interface. Channel characterization required full channel modeling of the SoC, package, board, and unregistered dual in-line memory modules (UDIMM). Through Cadence's Sigrity SI solution, PowerSI and SystemSI were used to generate and simulate S-parameter models for each domain and interface, verifying operation. To leverage the flexibility of the heterogeneous packaging process, a VSDP-like flow was developed with Cadence application engineers, requiring the use of IC design rule checking (DRC) decks and process technology files. This paradigm allowed SoC simulation and verification techniques to be applied to the package, more closely aligning the two domains. In addition, material and simulation information were fed back to PowerSI to refine the package models. With the DDR4 interface fully characterized, SystemSI's parallel bus simulator linked the interfaces, creating a system-level simulation driving UDIMM EBD and IBIS models.
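
As background on how the per-domain models combine into the full channel (standard two-port network theory, included for illustration rather than taken from the Sigrity flow): each extracted S-parameter block can be converted to chain (ABCD) form, cascaded by matrix multiplication, and converted back to S-parameters for end-to-end channel simulation.

```latex
\begin{bmatrix} V_1 \\ I_1 \end{bmatrix}
=
\begin{bmatrix} A & B \\ C & D \end{bmatrix}
\begin{bmatrix} V_2 \\ I_2 \end{bmatrix},
\qquad
ABCD_{\mathrm{channel}} =
ABCD_{\mathrm{SoC}} \cdot ABCD_{\mathrm{pkg}} \cdot ABCD_{\mathrm{board}} \cdot ABCD_{\mathrm{UDIMM}}
```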

Watch Video

Experiences with Serial Link Analysis: Baseline Simulations with Xilinx and Intel FPGAs

Unfortunately, there is no "standard" startup kit for FPGA-based serial link simulations. Creating a known-good baseline for FPGA serial link simulation can be challenging, as each vendor has its own methods and preferences. In this presentation, Cadence SystemSI was used to re-create published, known-good results from two FPGA vendors. The models used in these simulations are IBIS-AMI models; this means that syntactically they meet the EIA IBIS-AMI standard, but the variables and switches are unique to each vendor. The presentation will focus on the challenges faced as each known-good simulation is set up.

Watch Video

Experiences with a Large Multi-Board Multi-Rail IR Drop Analysis with Sigrity PowerDC Technology

Traditional IR drop calculations in a system with multiple printed circuit boards (PCBs) often involve the use of spreadsheets and estimates. This introduces error, and voltage margins are therefore reduced to accommodate the estimated values. This can be acceptable during the exploration and pre-route phases of a design; however, the use of Sigrity PowerDC allows real post-route values to be used in checking the overall system. Attendees will benefit from the author's project experience in developing a complete system analysis, which includes some best practices as well as some pitfalls to avoid. In addition to IR drop, the project benefited from system power calculations, via currents, specific point-to-point voltage measurements, and automated report generation.
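
As a simple illustration of why estimated drops eat into margin (the numbers below are hypothetical, chosen only to show the arithmetic): the drop across a power-delivery path is just Ohm's law, and even a small path resistance consumes a large share of a tight rail tolerance.

```latex
\Delta V = I \cdot R_{\mathrm{path}}
\quad\Rightarrow\quad
I = 10\,\mathrm{A},\; R_{\mathrm{path}} = 2\,\mathrm{m\Omega}
\;\Rightarrow\; \Delta V = 20\,\mathrm{mV}
```

That 20 mV is roughly three-quarters of the 27 mV allowed by a 3% tolerance on a 0.9 V rail; post-route extraction replaces the estimated path resistance with the actual plane and via resistance, recovering margin that spreadsheet estimates give away.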

Watch Video

Wirebond Feasibility and Production Design Using Allegro Package Designer for a Biometric Smart Card

We had been using other tools and creating our wire-bonding proposals in Excel; however, using APD/SiP we were able to co-design with real data and have dynamic wire bonding. We needed a very tight design, and the APD tool provided us with very good results. We are also exploring (and can hopefully present) how we are using Sigrity to model the analog sensor that we are building and close the loop on the product cycle very quickly.

Watch Video
Aerospace and Defense

A Custom RISC-V SoC in GF 12LP Technology Designed with a Personalized Stylus Common UI Flow

In order to keep pace with the technological advancements in the field of microelectronic design, the DoD has taken great strides to develop its internal capabilities in the custom ASIC design space. To push system performance and product delivery timescales towards industry standards, government research and design teams must develop agile, quick-to-market design approaches without sacrificing the security of the system. The design team at Centauri, in collaboration with the AFRL and other partners, is actively developing a RISC-V-based SoC in GF 12nm technology that incorporates several third-party IP blocks. This system is primarily composed of a custom-generated "Rocket Chip" CPU, an embedded FPGA fabric, a DDR4 memory controller, and several variants of an auto-generated AES encryption core. The SoC is intended to be a common platform for the AFRL and future collaborators to perform detailed testing and assessment of novel hardware security methods. The design flow, signoff, and verification tools are primarily supplied and supported by Cadence Design Systems. Since the onset of this project, the team at AFRL has been working with Cadence engineers to optimize the Stylus Common UI flow to improve turnaround time and quality of results, and to implement cutting-edge design flow practices. This work provides new possibilities for DoD hardware security research, increases the capabilities of government-based design teams, and represents an important milestone in this nation's effort to meet the demands for producing high-performance and secure microelectronics.

Watch Video

Digital Twin Case Study: Applying Emulation-Based Verification of SoC Using Tactical Software

Northrop Grumman Corporation (NGC) and Cadence Design Systems (CDS) have successfully developed a full System-on-Chip (SoC) digital twin in an emulation environment to execute bare-metal and OS-level software tests on the RTL design prior to tapeout. The use of digital twins to speed deployment of defense systems has been a subject of discussion since the concept was articulated by Dr. Michael Grieves in 2002. Verification of large DoD SoCs together with mission software has become an increasingly difficult task with traditional simulation methodologies. With sponsors looking for more use-case-level validation, DoD contractors are forced to look at more efficient verification methods prior to tapeout of an ASIC in order to burn down risk and increase confidence of first-time-correct silicon. In this case study paper, we will describe the challenges and successes encountered during the development of the emulation system and the testing of processor coherency and Level 3 cache data handling for one of NGC's multicore SoCs.

Watch Video

Robust Optimization for High Performance Microwave Filters

RF filters are a key component in most radar and communication systems, responsible for mitigating potential signal interference in an increasingly crowded spectrum. As a result, system planners often demand higher-order, low-loss filters in their link budgets, putting design pressure on the filter designer. While simulation software plays a critical role in achieving the required performance, general-purpose optimizers are typically not very robust for optimizing higher-order microwave filters and multiplexers, as they often fail to control the passband exactly and leave one or more resonators mistuned. DGS Associates offers high-end filter design and optimization expertise and wanted to integrate that expertise into a complete RF/microwave design environment. The company chose to integrate its Equal Ripple Optimization software within the AWR Microwave Office circuit design environment to address these design challenges. The resulting EQR_OPT_MWO solution is a dedicated optimizer for microwave filters and multiplexers that leverages the AWR Design Environment COM Automation API and can optimize any filter that can be defined in Microwave Office. EQR_OPT_MWO and Microwave Office together can be used to design many types of microwave filters and multiplexers for both commercial and military applications. For example, lumped-element filters, planar microstrip filters, or waveguide filters can be optimized using the Microwave Office element catalog. In addition, S-parameter files imported from any planar or 3D EM simulator can be port-tuned. With EQR_OPT_MWO and Microwave Office, there is essentially no fundamental limitation on the filter technology or topology, and hard-to-obtain performance is within reach.
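
For reference, the equal-ripple (Chebyshev) passband behavior that such a dedicated optimizer drives the response toward can be written in its classical lowpass-prototype form (a textbook relation, included for context rather than taken from the presentation):

```latex
\lvert S_{21}(f)\rvert^{2} = \frac{1}{1 + \varepsilon^{2}\, T_{n}^{2}(f/f_{c})},
\qquad
\text{ripple (dB)} = 10\log_{10}\!\left(1 + \varepsilon^{2}\right)
```

where T_n is the order-n Chebyshev polynomial. In essence, the optimizer's task is to hold all n passband extrema at this ripple level while meeting the stopband specification, which is exactly where general-purpose optimizers tend to leave resonators mistuned.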

Watch Video

Verification Enhances Confidence in Defense Program Success

For today's ASIC and FPGA designs, the verification state space is enormous. Programs commonly use direct-test RTL/gate simulation and code coverage analysis, but that approach is increasingly inadequate as designs grow. Conversely, adding comprehensive functional verification connected to metrics gathered from multiple abstractions across digital, analog, and software design elements may be outside the constraints of a running program. Between these two extremes are verification technologies that can increase efficiency with relatively small investments. Applying a fast RTL/gate simulator and tuning its performance on the program's verification environment can be done with a few hours of setup. Automating simulation execution and regression triage with verification planning tools requires hours of setup and saves days of accumulated effort triaging each regression run in the program. Interconnect test development can be automated with hours of investment, saving a few days of time on the running program and more if the interconnect is reused on a subsequent program. BAE Systems applied these technologies on the development of the RAD510 Space Processor. This paper will provide detailed operational and technical details that you can immediately use to increase verification confidence in your programs.

Watch Video
Design

BISR Implementation with od-PMBIST Flow

Watch Video
Safety/Security

Functional Safety Flow for ISO 26262 ASIL-C of D Analysis

Automotive IC suppliers increasingly need to justify their ISO 26262 classification with detailed diagnostic coverage of the safety mechanisms. At Allegro MicroSystems, we had a part assessed as ASIL-B of D and we wanted to verify its viability as ASIL-C of D. For that task, we needed tools and a methodology that worked with the existing functional coverage solution and added fault injection/analysis with a connection to our FMEDA tool. This presentation will cover the results of applying the Cadence vManager Safety solution, including the process to set up the flow, issues and solutions encountered in the evaluation, and our planned next steps.
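
For context on the metrics behind such a classification (standard ISO 26262-5 definitions, included for reference rather than taken from the presentation): the diagnostic coverage of each safety mechanism feeds the single-point fault metric (SPFM), which must clear a higher bar to move from ASIL B to ASIL C.

```latex
\mathrm{DC} = \frac{\lambda_{\mathrm{detected}}}{\lambda_{\mathrm{total}}},
\qquad
\mathrm{SPFM} = 1 - \frac{\sum \left(\lambda_{\mathrm{SPF}} + \lambda_{\mathrm{RF}}\right)}{\sum \lambda}
```

The standard's recommended targets are SPFM of at least 90% for ASIL B, 97% for ASIL C, and 99% for ASIL D, which is why measured fault-injection results, rather than engineering estimates alone, are needed to defend the higher classification.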

Watch Video

Hybrid HW/SW Security Enforcement with Dover CoreGuard

Watch Video
Verification/Validation

Benefit of Finding Unreachable Code with JasperGold UNR App

Watch Video

Design Validation Within Protium Platform: A Case Study

This presentation provides a case study of the validation of a complex SoC utilizing the Cadence Protium Rapid Prototyping Platform. With careful planning and up-front decision making, porting a custom ASIC design to the Protium platform can be a successful and relatively smooth process. We will present our first-hand experience and lessons learned from this undertaking. The SoC consisted of four Cadence Tensilica BBE64 processor cores and one Xtensa LX7. The ARM NIC-400 crossbar was utilized to provide connections with 5 MB of SRAM and peripherals. The Protium platform was chosen to enable the development and debug of complex multi-processor software algorithms and to provide validation of the SoC architecture. Since Protium delivered multiple options for design debug and performance monitoring, we were not only able to provide a high-performance solution for software development but also to incorporate architectural enhancements months ahead of ASIC availability.

Watch Video

EM IR-Drop and Self-Heating Effect Analysis Using Voltus-Fi on a 5nm Process

Watch Video

Formal-Assisted Verification Closure: Taking MDV Techniques to the Formal Realm

Formal techniques have been part of verification flows for a long time, but they are often considered auxiliary verification methods. Recent advancements in formal and MDV tools have made it possible to measure and merge the completeness metrics from formal efforts with simulation metrics, giving a big-picture view of the verification activity. In SoC development, design and integration complexity is ever increasing, while project planning and time-to-market constraints do not allow extensive simulation-based verification efforts for functional correctness. This makes formal methods a necessary alternative verification sign-off technique for avoiding any compromise in functional coverage. Combining these verification needs with the advancements in methodology, this paper discusses three projects from ADI that extensively utilized formal techniques in an MDV framework to achieve timely and measured verification sign-off. ADI collaborated with Cadence two years ago to adopt this methodology when cross-domain merging was in its early development stages. Over time, the flow improved as more "simulation-like" coverage items became supported from the formal perspective, making it possible to merge them with simulation metrics. The three projects discussed in the presentation adopted formal for "formal-friendly" blocks and verification tasks. The use of Metric-Driven Verification made these efforts very simulation-like for completeness measurement, while the formal environment offered a quicker bring-up and more exhaustiveness. No simulation environments were built for these blocks/tasks; they were signed off using formal while meeting the critical project deadlines. The types of blocks/tasks verified using formal were APB/AHB bus matrices, arbiters, register end-point connectivity, and pin muxing. The formal efforts started with ABVIPs for protocol checking and were later extended to perform more directed checking with constraints in place. Formal coverage was collected and exclusions were added wherever necessary. Once the formal setups were stable, they were added to the internally developed regression management utility to carry out merging with simulation coverage results. A formal vPlan was used to track progress on formal goals using annotated reports. Multiple design bugs were found using this methodology, and formal coverage was used as a sign-off metric. The automation in place to facilitate collection, merging, and reporting makes the approach easy to adopt on any project using the MDV methodology, and thus makes a compelling case for increasing the adoption of formal tools.
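
To make the "formal-friendly" label concrete, blocks such as arbiters lend themselves to exhaustive properties of the kind sketched below in linear temporal logic (illustrative examples only, not taken from the ADI environments):

```latex
\text{Mutual exclusion:}\quad \mathbf{G}\,\lnot\big(gnt_0 \land gnt_1\big)

\text{No starvation:}\quad \mathbf{G}\big(req_i \rightarrow \mathbf{F}\, gnt_i\big)
```

Proof results for properties like these can then be tracked alongside simulation coverage in the vPlan, as described above.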

Watch Video

Functional Verification of Quantum Structures and Quantum Processor Unit

We present a verification methodology for a fully integrated semiconductor quantum processor unit (QPU) SoC realized in a 22nm FD-SOI technology. At the heart of the QPU is a quantum core which is activated by independent high-resolution DACs and high-speed pulse generators to perform quantum gate operations described as Pauli rotations along the X, Y, and Z axes on a Bloch sphere. The quantum core comprises an array of quantum dots separated by barriers controlled by imposers, and together these devices control the evolution of the electron's wave function, resulting in a specific spatial distribution between those quantum dots. Using a simplified approach, the solution to the time-dependent wave function can be used to describe the motion of an electron in the quantum structure in Verilog-A or VHDL. This modeling strategy can be integrated with the full chip verification flow, where a variety of quantum experiments can be simulated on the proposed model and the outcomes of those experiments can be passed to the netlist representing the mixed-signal detector chain. The proposed setup can be integrated into a system-level testbench that incorporates all processing elements on and off chip for statistical analysis. The proposed methodology paves the way for SoC functional verification with sophisticated quantum structures comprising hundreds to a thousand qubits along with the classical circuit elements.
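
For context, the quantities being modeled behaviorally are standard ones (textbook definitions, not specific to the presented model): the electron's state evolves under the time-dependent Schroedinger equation, and the quantum gate operations mentioned above are rotations generated by the Pauli operators, for example about the X axis:

```latex
i\hbar\,\frac{\partial}{\partial t}\lvert\psi(t)\rangle = H(t)\,\lvert\psi(t)\rangle,
\qquad
R_x(\theta) = e^{-i\theta\sigma_x/2} =
\begin{pmatrix}
\cos(\theta/2) & -i\sin(\theta/2) \\
-i\sin(\theta/2) & \cos(\theta/2)
\end{pmatrix}
```

A simplified solution of the first equation is what the Verilog-A or VHDL behavioral model captures, so that the responses to DAC and pulse-generator stimuli can be propagated to the mixed-signal detector chain.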

Watch Video

HSI Expands PSS to New Levels of Automation and Reuse

The advent of the Accellera Portable Test Stimulus Standard (PSS) provides new capabilities for test developers to fulfill their verification plan requirements using a single flow to specify abstract scenarios for testing, which can then target various platforms, e.g. IP-level environments and SoC-level environments. System components, functions (actions), transaction flow-objects, and resources are described formally in PSS at a high level of abstraction, and scenario activities are then specified in terms of these definitions, independent of the target platform. Scenarios can even be partially specified with constraints on them, and a PSS tool can automatically generate full solutions to exhaustively "fill" the state space, driving productivity and quality. For test realization, implementation details for different target platforms (commonly called "exec" code) can be independently captured and then bound to the activities making up the test scenario in order to generate testcases that target the specific platforms required. With this capability, PSS offers huge benefits by providing a common way to specify scenario activities independent of the target platform. Now, with the additional specification for HW/SW Interface (HSI) standardization planned in PSS 1.1, automation and portability are enhanced even further by allowing test realization implementation details to also be captured in portable terms in PSS. Prior to HSI, the test realization "exec" code for different targeted platforms typically needed to be coded separately for each platform in terms of that platform's execution semantics. For example, test realization "exec" code for "coreless" SV/UVM IP or subsystem simulation environments would typically be in terms of bus agent sequences, and for embedded SoC-level environments it would be in C code (firmware APIs). Without HSI, this required duplicate effort and, more concerning, could potentially lead to disjoint test realization across different platforms. HSI allows test developers to capture this "exec" realization code just once in portable terms, requiring less effort to code and less effort to validate that the "exec" implementations across different platforms represent the same intent. To accomplish this, HSI specifies a standard set of "transactional primitives" (e.g. a common set of read and write functions for accessing registers and memory) in portable terms that are independent of the target platform. Structured higher-level memory and register interactions are provided which are built on top of these transactional primitives, allowing test developers to efficiently capture the "exec" behavior using them with PSS's portable procedural semantics. Once "exec" code is captured in these portable terms, targeting a particular platform just requires that the finite set of standard transactional primitives is implemented for that platform (a minimal sketch of one such realization appears after this abstract). Fortunately, because HSI standardizes the set of transactional primitives, the targeting of primitives for different platforms can in many cases be provided automatically. For example, for embedded C platforms, the primitives can be automatically translated to dereferenced-pointer C code. In other cases, VIP vendors can provide out-of-the-box support for HSI transactional primitives packaged with their VIPs for each of the platforms the VIPs run on, allowing suites of PSS scenarios to be automatically targeted to each supported platform.
In addition, the standardization of HSI in PSS allows for even further automation of tedious and error-prone aspects of test code realization, such as the automated generation of PSS register structure definitions and register groups from standard register description formats like IP-XACT. As designs become larger, increasingly leveraging more IP and the interactions between them, duplicate, disjoint testing efforts across different testing scopes create bottlenecks and risk unintentionally divergent test intent. PSS with HSI provides a number of benefits to help address these challenges, enabling faster development of scenario specifications and test realization with less duplicate effort. At the same time, PSS with HSI provides higher quality by allowing test developers to drive testing scenarios from a single specification and flow, which ensures that the same verification intent is ported to all necessary platforms. This presentation will provide concrete real-world examples of the application of PSS with HSI to automate verification with much less duplicate effort by utilizing a common set of PSS scenario specifications bound to a common portable HSI "exec" specification. From this, it will be demonstrated how portability is enabled across multiple testing scopes and targeted platforms, while maintaining unified test intent.
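
As a rough illustration of the kind of target-specific realization that HSI abstracts away, here is a minimal sketch of register read/write transactional primitives for an embedded C target, realized as the dereferenced-pointer accesses mentioned above. The function names and signatures are hypothetical and for illustration only; they are not the primitives defined by the PSS/HSI standard.

```c
#include <stdint.h>

/* Hypothetical HSI-style transactional primitives for an embedded C target,
 * realized as dereferenced-pointer accesses to memory-mapped registers.
 * Names and signatures are illustrative only. */
static inline void hsi_reg_write32(uintptr_t addr, uint32_t data)
{
    *(volatile uint32_t *)addr = data;   /* write register at addr */
}

static inline uint32_t hsi_reg_read32(uintptr_t addr)
{
    return *(volatile uint32_t *)addr;   /* read register at addr */
}
```

A portable "exec" body written against primitives like these could be retargeted to a coreless SV/UVM environment simply by supplying bus-agent-sequence implementations of the same primitives, leaving the test intent unchanged.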

Watch Video