CadenceLIVE Korea – OnDemand

Custom / Analog

Solving Analog Mysteries Inside A Digital Cockpit

In top-level mixed-signal design verification, locating the source of abnormal current consumption is difficult and tedious. It has been impossible to visualize and trace the text-based analog content in a complicated mixed-signal design:

• Analog waveform viewer (VIVA): no debug capability to link an individual waveform to its netlist or source code.

• Digital/mixed-signal debug tool (SimVision): unable to trace text-based analog content.

SimVisionMS, a new digital/mixed-signal simulation cockpit integrated into SimVision, is presented as a novel solution. With this solution, terminal currents can be viewed interactively and traced down to the leaf level of every node; Verilog-AMS/Spectre/SPICE text content is shown as schematics in the Schematic Tracer, which makes reviewing connectivity and current distribution easy; and Spectre and SPICE source files can be cross-probed in the Source Browser.

This new solution reduces top-level mixed-signal debug time dramatically, especially for testbenches involving currents or SPICE/Spectre netlists: about a 4x reduction in debug time is observed with respect to the traditional SimVision. SimVisionMS provides a unified debug suite for analog/digital/mixed-signal design verification.

Jerry Chang Texas Instruments

Virtuoso Simulation-Driven Interactive Routing (SDR)

Electromigration (EM) has a major impact on IC reliability and lifespan, and it poses additional challenges as we move to lower process nodes. 

In this webinar and demonstration, learn how to address EM concerns with simulation-driven routing (SDR). 

Built upon the Cadence® Electrically Aware Design (EAD) technology, SDR takes a correct-by-construction approach, providing immediate EM and parasitic feedback as the layout geometries that connect components are developed.

Woong-Sik Jung Cadence

Spectre FX Simulator

Accuracy, Performance, Capacity: Finding the Right Balance for Memory/SoC Verification

The complexity of circuitry created by the combination of mixed domains, advanced nodes, and impossible schedules has pushed the all-important verification stage of design to its breaking point. Engineers are forced to make so many compromises in terms of what parts of the design can be tested, how extensive those tests can be, and when the results can be returned to be useful, that there is a real risk of critical parts “slipping through.” In this session, we will unlock the latest methodical secrets for reducing risk during your custom verification with a powerful combination of Virtuoso and Spectre platform tools and flows.

Sean (Sangchul) Lee Cadence

Useful Utilities and Helpful Hacks

Layout design complexity has increased with advanced nodes. Multiple SKILL utilities were developed to reduce steps and save time by focusing on repetitive tasks. This session includes a via enclosure solution, color-related checks, layout backup outside of design management, a layer palette procedure, and the principles for selecting palette colors. The initial challenge, along with hints and hacks on implementation, is also included.

David Clark Intel

Samsung AMS Design Reference Flow for Advanced Nodes

AMS design challenges have increased significantly with the complex design specification requirements of advanced processes. The Samsung AMS Design Reference Flow is intended to cope with this design complexity and thereby improve design productivity at any process technology, including 3nm. It demonstrates to users how well Samsung foundry PDKs are kept in sync with the latest Cadence design tool suites, and it also provides hands-on experience via lab tutorials. As a further solution, Samsung Foundry provides Design Solution Kits to support design, simulation, analysis, and verification methodologies. Samsung Foundry's customers can now take advantage of the most advanced features for circuit design, performance, reliability verification, automated layout, and block and chip integration for custom and digitally controlled analog designs based on the Cadence Virtuoso, ADE, and Spectre platforms. In this session, we will discuss the Samsung Cadence AMS Reference Design Flow from schematic to GDS.

Seongkyun Shin Samsung

Check Analog Fault Simulation with Actual ATE Program

Analog fault simulation (AFS) is used to check the coverage of design defects (shorts and opens within a design). Design defects should cause the ATE final test program to fail rather than pass; otherwise, the fault coverage is insufficient and leads to potential quality issues. The AFS tool is provided within the Cadence Virtuoso ADE Product Suite, and users can apply it in different ways to achieve different goals. Several questions, however, had not been addressed. First, can we run an actual C++ ATE test program with AFS? Second, how do we manage the large number of fault injections, which would otherwise overwhelm the computing resources? Lastly, how do we collaborate across multiple disciplines to complete the AFS simulation runs and analysis? This paper addresses all of these issues. By using DPI-C in SystemVerilog, we are able to support the C++ test program. We then run sub-block-level simulations with the C++ stimulus to optimize computing resources. Next, the checker results are parsed from xrun.log using the function CCSgrepCurrLogFile(), dumped into the Virtuoso ADE Product Suite output, and used as the pass/fail criteria for the AFS simulation. Lastly, the results are reviewed by design engineers, with a focus on undetected faults. This paper shows an innovative way to use the Cadence AFS tool to emulate ATE tests through a team effort across DV, ATE, and design functions.
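
(Illustrative sketch: in the flow described above, log parsing is done with the SKILL/OCEAN function CCSgrepCurrLogFile() inside the Virtuoso ADE Product Suite; the Python below, and the AFS_CHECK message format it assumes, are hypothetical stand-ins showing the kind of pass/fail extraction involved.)

    import re

    # Hypothetical checker message format; the real pattern depends on how
    # the SystemVerilog/DPI-C checkers report results in xrun.log.
    CHECK_RE = re.compile(r"AFS_CHECK\s+(?P<test>\w+)\s+(?P<verdict>PASS|FAIL)")

    def parse_afs_results(log_path="xrun.log"):
        """Collect per-test pass/fail verdicts from a simulation log, for
        use as the pass/fail criterion of one AFS fault run."""
        verdicts = {}
        with open(log_path) as log:
            for line in log:
                m = CHECK_RE.search(line)
                if m:
                    verdicts[m.group("test")] = m.group("verdict")
        # A fault counts as detected if any checker fails under the fault.
        detected = any(v == "FAIL" for v in verdicts.values())
        return verdicts, detected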

Jun Yan Texas Instruments

Spectre X: Speed with Accuracy to Meet Growing Circuit Simulation Demand

Achieving high throughput and coverage in design verification (DV) is recognized as one of the biggest challenges in a design cycle at Texas Instruments. Improved circuit simulator performance, with minimal accuracy loss, is a central enabler for meeting these challenges. In 2019, Cadence introduced its next-generation analog circuit simulator, Spectre X. TI Analog EDA has been collaborating with Cadence's development team to characterize and qualify the simulator in preparation for adopting it into the TI analog/mixed-signal design community. This paper describes the qualification process, the span of test circuits, and the benchmarking results. We then analyze the results and show which types of circuits benefit most from adopting Spectre X.

Ziyan Wang Texas Instruments

Design Processor IP & Automotive

Cloud-Scale Productivity Without the Complexity -- Have Your Cake and Eat it Too!

Today, every design team is looking at the cloud with great interest to close their compute capacity gap and accelerate project turnaround time (TAT). However, transitioning EDA and CAD flows to the cloud can be complex, requiring thoughtful decisions about cloud architecture, data management, IT retraining, infrastructure setup, and security, to name just a few.

This session will discuss the Cadence platform for overcoming cloud complexity. We'll also uncover the industry's newest breed of cloud products, which allow designers to keep their familiar on-prem design environment and yet enjoy all the great benefits of a secure, scalable, and agile cloud.

 

All the goodness of cloud without the effort and delays involved in adopting and optimizing the right cloud environment. Have your cake and eat it too!!

Sriram Kalluri Cadence

Implementation of a Highly Parallel Complex FFT Architecture Using TIE in ConnX B20

This work presents the design and TIE implementation of a flexible yet fast 2^n-point complex FFT architecture using FIFOs. Four parallel read-write queues are used instead of shared data memory to improve overall performance and throughput. The proposed FFT architecture follows the basic concepts of the Radix-2 and Stockham FFT algorithms but is unique in its data flow patterns. This approach requires only 768 cycles to perform a complex 4K FFT, compared to the 2,070 cycles required by the ConnX B20 library itself. This is achieved by reading and writing a total of 2,048 bits per cycle (2 to 4 queues in parallel).

The twiddle factors for the FFT can either be computed at runtime or be pre-computed and stored in a separate reusable queue. The latter technique is followed in this work in order to reduce the computational complexity. Four hardware parallelization methods are proposed to achieve better performance based on the FFT size. In every cycle, each method reads data points from two different queues, performs the butterfly operation, and writes the results, not necessarily to the same two queues. The 1st method takes only 2 data points per cycle and performs the butterfly operation, whereas the 2nd, 3rd, and 4th methods take 16, 32, and 64 data points per cycle, respectively. Each data point is a fixed-point complex element with 16-bit real and 16-bit imaginary parts. The 4th method requires two sets of 32 data points from two respective queues, which corresponds to a data width of 1,024 bits per queue.

In each cycle, the 1st method reuses the 4 32-bit multipliers and 8 16-bit adders required to perform a butterfly operation, while the 4th method requires 128 reusable multipliers and 256 adders. The data flow pattern followed in this work ensures the reusability of the twiddle factors, so the twiddle factor computation, or the twiddle queue read, is not required in every cycle. With any fixed hardware parallelization method, the FFT size can be configured. For instance, with the 1st method, 2^4- to 2^12-point FFTs or above can be realized; similarly, with the 4th method, 2^9- to 2^12-point FFTs or above can be realized. With the 1st method, a 16-point complex FFT takes 32 cycles to perform the whole operation. The 4th method computes 512-, 1024-, 2048-, and 4096-point FFTs within 72, 160, 352, and 768 cycles, respectively. The application-specific SIMD and FLIX instructions are created using Tensilica Xtensa, and the FFT architecture is realized using the ConnX B20 DSP.
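
(Illustrative sketch: the butterfly operation that the parallelization methods above accelerate is the standard radix-2 operation; the plain-Python model below, with twiddle factors precomputed once and reused across stages in the spirit of the reusable twiddle queue, is for illustration only and is not the TIE implementation.)

    import cmath

    def fft_radix2(x):
        """Iterative radix-2 DIT FFT: a software model of the butterfly
        operation that the TIE architecture parallelizes in hardware."""
        n = len(x)
        assert n and n & (n - 1) == 0, "length must be a power of two"
        x = list(x)
        # Bit-reversal permutation of the input.
        j = 0
        for i in range(1, n):
            bit = n >> 1
            while j & bit:
                j ^= bit
                bit >>= 1
            j |= bit
            if i < j:
                x[i], x[j] = x[j], x[i]
        # Twiddle factors: precomputed once, reused by every stage.
        tw = [cmath.exp(-2j * cmath.pi * k / n) for k in range(n // 2)]
        size = 2
        while size <= n:
            half, step = size // 2, n // size
            for start in range(0, n, size):
                for k in range(half):
                    a = x[start + k]
                    b = x[start + k + half] * tw[k * step]
                    # The butterfly: one complex multiply, one add, one subtract.
                    x[start + k] = a + b
                    x[start + k + half] = a - b
            size *= 2
        return x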

Pierre-Xavier Thomas & Michael Young, Cadence

FMEDA in EDA: Enabling an Integrated Flow for Safety-critical Applications

Safety-critical applications rely more and more on complex electronic circuits and systems, increasing the need for a safety-aware design automation flow. At the center of this flow is the automation of Functional Safety Analysis, e.g., FMEDA (Failure Modes, Effects and Diagnostic Analysis), and its connection to the design and verification environments. In this paper, we introduce the USF Kit, a scriptable environment hosted in the Genus synthesis solution, which enables FMEDA authoring, the generation of base failure rates, and customizable reports for the ISO26262 (Automotive) and IEC61508 (Industrial) standards, providing interactive queries for metrics and related safety objects. A device or system can be described from the safety point of view starting from the architecture (with estimated design information) or at a given implementation phase, mapping real design information, hierarchical instances, analog blocks, memories, and even special groups of instances called elementary subparts. This paper describes how the USF Kit has been used to set up the functional safety analysis in order to get reports and prepare the environment for the metric verification phase with fault injection (also allowing annotation of expert judgement).

Prasath Kumaraveeran Fraunhofer IIS/EAS

A Flexible and Integrated Solution for Fault Injection Driven by Functional Safety Requirements

The semiconductor industry serving safety-critical applications follows functional safety standards such as ISO26262 (Automotive) and IEC61508 (Industrial). These standards can require verification of the safety metrics, which is often performed using fault injection techniques. Unlike manufacturing test, the safety verification methodology is still quite diverse across the semiconductor companies providing devices for safety-critical applications. Therefore, flexible solutions are essential to support the capability to derive inputs, execute, review, and feed the fault campaign results back into the functional safety analysis domain.

 

In this paper, we present how the Cadence safety solution replaces our in-house custom fault injection flow: to achieve grading of an STL (Software Test Library), the solution introduces several optimization techniques and executes with different simulation engines, maintaining scalability and performance. The Cadence Fault Campaign Manager (FCM) is connected to the Cadence FMEDA capability, which provides the functional safety verification plan. Cadence FCM is then leveraged for the demanding task of using design and application knowledge to identify safe faults and guide fault campaign execution. Structural and test-based analysis steps integrated in the Fault Campaign Manager flow are enabled to mark design/application-aware safe faults as well as to limit the scope of faults to simulate per test. Results of the safety metrics verification are then automatically annotated back into the FMEDA, including the possibility to introduce expert judgement. These and other automated capabilities, some yet to be explored, are discussed in this paper.

Sanjay Singh Texas Instruments (Cadence)

Securing Offload Engines

Industry is shifting toward domain-specific processors to achieve optimal performance. This includes using application-specific DSPs and custom processors as offload engines to perform specific tasks. Often, security in such systems is handled by the host CPU, and the firmware or third-party algorithm that runs on the offload engine is encrypted. However, such a system is no longer considered secure enough. To make these systems more secure, it is imperative to bring security to the offload engines as well. This presentation discusses how Cadence, with the help of our partner Beyond Semiconductor, supports customers in protecting firmware and proprietary data and in building more robust, secure applications.

Sanjay Singh Texas Instruments (Cadence)

Architecture Exploration for Power and Energy Optimization Using Tensilica IP in a Pre-Silicon Design Flow

In this presentation, we will describe common challenges and solutions in creating an efficient and accelerated flow that meets the technical requirements for accurately measuring power, energy, and system performance while making essential design trade-offs to meet aggressive time-to-market schedules.

Design teams are facing the challenge of having to add more and more processing and programmable capability to their SoCs in order to adapt to and implement the latest and greatest algorithms during the lifecycle of their products, while at the same time complying with the power and thermal dissipation envelopes of those products.

For instance, a next-generation AI/ML ASIC design team needs to validate power and performance trade-offs for various targeted, representative use cases before committing to tape-out. The ability to create an end-to-end flow that models the complexity of the ASIC and its system-level environment in an efficient and repeatable manner is vital for project success and for preparing for the future needs of the software applications.

Some of the design trade-offs are made in the architectural phase, where various hardware and software partitioning options are studied and characterized for power and energy dissipation. We will discuss how the Tensilica development flow, based on configurability and extensibility and connected with system-level verification in a pre-silicon design flow, can provide an efficient environment for making energy-aware trade-offs with accurate implementation characterization data.

 

As next-generation design sizes grow, design teams need an efficient, high-performance tool flow that addresses various workloads that can be measured and automated. To save total system power, it is vital to test the low-power optimization techniques under the control of system and application software, in order to verify that they are implemented correctly and work as expected in the system-level context. This presentation describes an environment and an approach that enable system-level verification by adopting a high-performance verification platform, together with a methodology for creating representative software application scenarios running on Tensilica IP and running them in a pre-silicon context, so that design teams can analyze, test, and optimize their pre-silicon RTL design for power and energy dissipation before committing to tape-out. We will discuss how such an environment can apply to various use models, serving the perpetual need of hardware and software architects to make design decisions with increasingly accurate views of the actual hardware and software implementation.

Ketan Joshi Cadence

Digital Full Flow

Digital Design and Implementation 2021 Update

The latest innovations from the Digital Design and Implementation group relating to power savings, advanced-node coverage, machine learning, and multi-chipset flows will be presented.

Joo-Sik Kim Cadence

Using Timing Robustness as a Metric to Optimize Design PPA

As silicon technology advances, designers are constantly pushing the Power-Performance-Area (PPA) envelope. Managing process variation and its impact on PPA is one of the foremost challenges in high-performance design. This session presents the challenges and advances in statistical STA methods, outlines the Tempus Timing Robustness approach, and demonstrates how Tempus Timing Robustness-based optimization can significantly improve (by several percent) the PPA of your design.

Young-Hyun Lee Cadence

Building a Clock Spine Structure for NoC Blocks Using Innovus Implementation

As technology process nodes shrink, the additional variation caused by scaling can increase clock skew and clock latency significantly. This clock performance degradation is a major problem that must be resolved because it adversely affects overall chip performance, especially for HPC applications. Reducing clock skew and latency should be addressed at the clock tree synthesis stage. One of the main solutions for mitigating clock skew and latency induced by process variation is a clock mesh network. However, even though a clock mesh offers high variation tolerance and clock performance, its clock resource and power consumption are often unacceptably high. Therefore, to balance clock resources against clock skew variation, a clock spine network can be used as an alternative. The proposed clock spine architecture can reduce local clock latency dramatically without significantly increasing physical resources and power consumption. It is valuable in any design where clock latency is important, especially Network-on-Chip (NoC) blocks. In this presentation, we will show how to implement a low-latency clock spine structure using Innovus.

Ingeol Lee Samsung

Shortening Time-To-Market with Best-in-Class PPA using Tempus Signoff Solution

In this session, we will review advanced-node design challenges and customer requirements. It is well known that design and modeling complexity are increasing at advanced nodes, and while the competitive marketplace demands higher-performing devices (PPA), the time-to-market window for these devices continues to shrink. We will then discuss how Tempus and its capabilities address these requirements by providing faster design closure and the best QoR, and we will review various capabilities in the ECO and STA signoff areas that are part of the latest Tempus software release.

Hitendra Divecha Cadence

Multiplier Internal Truncation: Usage and Strategies for Better Timing, Area, and Power

When multiplying two numbers, the number of output bits grows to the sum of the widths of the multiplier's two inputs. So that bit widths do not keep increasing throughout the datapath, it is typical to truncate or round off the lower bits of the result before sending it to the next computation. This truncation/rounding is possible because the LSBs of the result carry very little information.

The issue, though, is that you need to generate the full logic for the multiplier before doing the truncation.

Cadence Genus provides two system functions, $inttrunc(X,pos) and $intround(X,pos), that create a smaller version of the multiplier, one that does not even implement the logic multiplying the LSBs of the two operands with each other. When these system functions are used, the resulting logic does not correspond exactly to the multiply, so there will be some error, but this may be acceptable if the LSBs will be truncated away anyway.

In this presentation, I will provide details on how to use these system functions, along with snippets of code showing where in a datapath they should be used relative to pipeline stages and later arithmetic operations. I will include diagrams of what happens during internal truncation or rounding, examples of the resulting area, power, and timing improvements, and strategies for handling the fact that the generated gates do not LEC against the RTL.
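
(Illustrative sketch: the Python model below shows the idea behind internal truncation, dropping partial-product bits below a cut position pos before they are accumulated; the exact semantics of $inttrunc and $intround are defined by Genus and may differ.)

    def trunc_mult(a, b, pos, width=16):
        """Approximate a*b keeping only the partial-product bits at or
        above bit `pos`; a conceptual model, not the Genus implementation."""
        acc = 0
        for i in range(width):
            if (b >> i) & 1:
                pp = a << i                    # partial product of weight 2**i
                acc += pp & ~((1 << pos) - 1)  # drop the bits below `pos`
        return acc

    # Each of the (at most `width`) partial products loses less than 2**pos,
    # so the error is bounded: 0 <= a*b - trunc_mult(a, b, pos) < width * 2**pos.
    a, b, pos = 51234, 47981, 12
    err = a * b - trunc_mult(a, b, pos)
    assert 0 <= err < 16 * 2**pos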

Venkata Gosula Cisco

Methodology for Timing and Power Characterization of Embedded Memory Blocks

Intel's NAND flash ICs have increasingly complex memory controllers for running the NAND algorithms, such as write, read, and erase, with precision. This complex logic involves multiple custom-designed memory blocks such as SRAMs, ROMs, register files, and CAMs. Due to high density and switching activity, energy consumption has become an important metric, along with speed, for our latest ICs. As embedded memories occupy significant area, the timing and power estimation of the IC must include characterization of these embedded memories. The memory characterization solution from Cadence combines timing, power, and other characterization capabilities in a single tool, which greatly amortizes the development effort required for comprehensive characterization. In our experience, the best methodology is to first set up a robust flow for timing characterization and then extend it to other parameters. After setting up a flow for the timing characterization of 20 memory blocks in a little over a quarter, we were able to extend this setup to power characterization for all of these blocks in under 6 weeks. This includes verification of the timing and power numbers generated by the tool against the numbers obtained in SPICE simulations. With the setup ready, the tool can run characterization across multiple corners in a day. In this presentation, we examine the use of the Cadence memory characterization tool for the timing and power characterization of various types of memory blocks. We will also review interesting cases where the debug information generated by the tool was used to pinpoint and fix timing and power issues in the design.

Trupti Bemalkhedkar Intel

The Future of Timing Constraint Validation - A New and Smart Solution in Renesas to Create Precise SDC Using Conformal Litmus

The validation of timing constraints (SDC) has become a time-consuming, manual, and inefficient process due to increasing design complexity in LSI designs and inaccurate SDC verification tools. The major problem is that these tools report numerous insignificant occurrences as errors. To solve this problem, we have been partnering with Cadence on a new solution, Conformal Litmus, which addresses Renesas' vision of timing verification. We have applied Conformal Litmus to several projects and have been able to identify the root cause of errors and reduce verification resources by 50%. In this presentation, we will introduce how we improved our design efficiency with the features developed in Conformal Litmus.

Hiroshi Ishiyama Renesas

Machine Learning Implementation in DFM Signoff and Auto-Fixing Flow

We present a new, fully integrated machine learning (ML) solution for finding and fixing design weak points. This methodology has been jointly developed by GLOBALFOUNDRIES and Cadence, and it integrates seamlessly into the existing DFM tools used in design signoff and into Innovus router-based auto-fixing flows. The introduction of machine learning gives foundry customers greater accuracy in detecting process weak points compared to traditional methods. Moreover, previously unknown weak points can be predicted and fixed already in the design phase. This gives designers the opportunity to create more stable designs, with the potential for better yield and faster time to market.

Janam Bakshi GLOBALFOUNDRIES

System Design and Analysis

Reimagining 3D FEM Extraction with Clarity 3D Solver

Join us to learn and apply the latest innovations in the full-wave Cadence Clarity™ 3D Solver to analyze your next-gen system design. Deep dive with us into the fully distributed architecture of the Clarity 3D Solver that enables you to extract large and complex packages and PCBs using hundreds of cores in the cloud or your on-premises farm — all while taking as little as 8GB memory per core.

Sung-Joo Kim Cadence

A New Way to Tackle System-Level Power Integrity Analysis

With the massive increase in bandwidth coursing over the internet today, design challenges for applications like telecom, data centers, and cloud infrastructure have all increased as well. One of these key challenges is power integrity. Power integrity analysis has morphed over the last 20 years from back-of-the-envelope calculations to detailed extraction, simulation, and signoff of physical PCB and package layouts. But the need now pushes beyond a single physical fabric to include representations of the ICs, interposers/packages, PCBs, and VRMs all together in a single view, to be analyzed together as a complete power distribution network (PDN).

In this session, a full PDN from a Cisco system design is extracted, modeled, and simulated in the new “SystemPI” environment, from the VRM all the way to the IC die. Frequency- and time-domain simulation is used to analyze the PDN and examine its current characteristics. This opens the door to new methodologies not only for system-level PDN verification, but also for pre-design feasibility and tradeoff studies that can help you optimize your next design or improve your current one.

Stephen Scearce & Quinn Gaumer, Cisco

Using Sigrity Technology to Address USB 3.1 Signal Integrity Compliance

Serial link simulations must meet stringent industry compliance standards to ensure that products can successfully plug-and-play with other products. 3D cameras are becoming more prevalent in robotics system designs, and as a result, the need for full-speed, 5-gigabit USB 3.1 with good signal integrity has grown. This paper covers how a product was successfully designed to meet all the USB 3.1 compliance requirements using Sigrity products. We will discuss how pre-route analysis, power-aware interconnect model extraction, circuit simulation, channel simulation, and power integrity analysis all come into play in establishing USB 3.1 compliance. The presentation will also show some planned forward-looking work in which the Cadence support team has guided us on how interconnect modeling can be upgraded to full 3D extraction using the Clarity 3D Solver.

Jeff Comstock EMA-Ologic

Allegro Package Designer Plus for WLP

This presentation provides basic InFO design guidelines for IC and IC package designers applying InFO WLP technology in their design flow.

The use of Cadence’s Allegro Package Designer Plus and the new InFO-related additions to the Silicon Layout option will be applied throughout the design process. The Silicon Layout Option extends Allegro Package Designer Plus capabilities to handle layout and mask-level verification of silicon substrates.

Hyunjoo Lee Cadence

Simulation and Modeling Challenges in Developing mmWave Power Amplifier MMICs

The surge in communications systems relying on millimeter-wave (mmWave) frequencies has increased the need for high-performance monolithic microwave integrated circuits (MMICs). One key component of these systems is the MMIC power amplifier (PA). The demand for higher-power, more efficient PAs at lower cost points has added demand for smaller and more compact chips achievable in one design pass. To accomplish this, MMIC designers need more accurate models and simulation techniques. One of the main challenges designers face is accurately modeling sections of the circuit where partitioning must be done between active devices and passive networks. If not done correctly, electromagnetic wave interaction and coupling effects may not be modeled correctly, leading to inaccurate circuit prediction. This presentation will discuss the challenges related to partitioning PA MMIC designs for accurate modeling and offer approaches to overcome these challenges using Cadence's AWR Design Environment.

David Farkas Nxbeam

Silicon-Validated RFIC/Package Co-Design Using Virtuoso RF Solution in Tower Semiconductor’s CS18 RF SOI Technology

Established and emerging wireless and wireline applications such as 5G, WiFi, and 400-800G optical networking are driving the demand for highly optimized RFIC solutions. Typical RF/mmWave design flows rely on the use of multiple EDA tools, often sourced from multiple EDA vendors. This is inherently inefficient and often error-prone, leading to delays in getting a product to market. In addition, there exist multiple combinations of design tools and flows, which prevents a foundry from providing a golden reference flow that can be used by a large portion of the design community. In this paper, we present a silicon-validated unified RFIC design flow using Virtuoso RF. The design flow is based on a high-power SP4T switch design in Tower Semiconductor's CS18QT9x1 foundry technology. RF SOI switch designs offer a useful test case for the Virtuoso RF design flow, as they require co-design and co-optimization of both the silicon and the package, which is a key strength of this flow. The design flow will be used to present a consistent modeling and simulation methodology. A seamless hand-off between the PDK-provided model, metal interconnect extraction within the p-cell, metal interconnect modeling outside the p-cell using EMX and Clarity, and the flip-chip package will be presented, all while maintaining a single unified database that is used for tape-out. Silicon validation of key small- and large-signal metrics will be presented, highlighting the importance of the tight interaction between the foundry Virtuoso PDK and package modeling using EMX and Clarity.

Chris Masse & Samir Chaudhry, Tower Semiconductor

48V 250A FET PCB Thermal Profile - A Thermal Simulation Using Celsius Thermal Simulator's 2D and CFD Airflow Analysis

This case analysis evaluates the thermal profile of 10 FETs rated at 48V, 250A under continuous soak time, with all the thermally enhanced mechanical attachments installed. The goal was for the MOSFETs to operate at 8W peak power output while maintaining a temperature of no greater than 60°C, applying all the thermal handling enhancements; the installed RSENSE targets the minimum allowable temperature after the MOSFET thermal analysis. The setup is a standard thermal mechanism for handling average-power systems that involves mounting a heatsink on the bottom side, attaching the RSENSE, and employing filled vias. Thermal simulations were conducted using Celsius 2D for several power ramp cases and Celsius CFD (computational fluid dynamics) to analyze the airflow application. Celsius 2D investigated the electrical and thermal performance over a sweep of conditions. Boundary conditions were met at approximately 50°C, the highest temperature measured across the installed MOSFETs, and thermal performance improved by a further 10°C after employing the airflow mechanism. The airflow design was modeled and simulated using Celsius CFD and yielded the best thermal performance.

Marcus Miguel V. Vicedo & Richard Legaspino & Kristian Joel S. Jabatan, Analog Devices

Design and Performance Consideration of Silicon Interposer, Wafer-Level Fan-out and Flip Chip BGA Packages

With the advent of various modern packaging technologies, product design teams are challenged with identifying and selecting the most suitable package. Certainly, the device specification is one of the key parameters that plays a critical role in selecting the right integration strategy and substrate. Pathfinding and design exploration are conducted extensively upfront to arrive at an optimal cost-versus-performance solution. Recently, supply chain shortages and availability have come to play a critical role in packaging strategy, and package selection variables are now categorized as “Cost vs. Performance vs. Availability.” In this presentation, we will explore design and performance considerations, as well as supply chain availability, to aid in packaging strategy.

Farhang Yazdani BroadPak Corporation

A Reference Flow for Chip-Package Co-Design for 5G/mmWave Using Assembly Design Kit (ADK)

The design effort for upcoming integrated circuit and package technologies is rising because of increasing complexity in production. To cope with this situation, it is essential to reuse pre-qualified elements to handle the complexity. For package technologies this is becoming more and more apparent: looking at the requirements of 5G applications at radio frequencies up to 60 GHz, it is no longer feasible to start from scratch, so it becomes ever more important to use pre-qualified elements for a technology. This paper deals with the implementation of RF structures for manufacturing and characterization, and with how to cover the interaction in the system across the IC and different package levels with dedicated tooling. For upcoming package technologies, it is increasingly important to include these devices in so-called Assembly Design Kits (ADKs) to enable designs by potential customers.

In this paper, a package that includes two different levels of integration is presented. Package level one is an RDL technology for flip-chip assembly; level two is based on eWLB and is integrated on level one in a package-on-package approach. Both are wafer-level package technologies. The paper deals with this technology, but the general approach is valid for other package technologies as well.

The flow starts with design in Cadence Virtuoso, with multiple modifications in terms of the addressed frequency and optimization of RF properties such as gain, loss, or target impedance. The designs are transferred into radio-frequency analysis tools such as EMX or Clarity to investigate their behavior on a model basis. This enables optimizing the elements toward certain target parameters across technologies.

After the design process is finished, these elements are produced on a 300 mm wafer, and the wafers are then ready for characterization. Due to the distribution across a wafer and the running of lots with multiple wafers, characterization also includes statistics within and across wafers, so the reproducibility of the structures can be assured. All the results are bundled into a construction kit as a Cadence-based ADK comprising symbols, schematics, and layout for future designs, as well as models for running analyses based on these elements. A set of elements is available for the listed package and IC technology for use and transfer to customers.

Fabian Hopsch Fraunhofer IIS/EAS

Verification

Benefits of a Common Methodology for Emulation and Prototyping

In this presentation, learn why you should consider bridging emulation and prototyping into a continuous verification environment to speed up your verification throughput for early software validation and real-world testing. This presentation covers:

• Fast design bring-up between platforms (e.g., common implementation flow, common look-and-feel UI)

• Advanced debug (e.g. FullVision engine, probes, memory force and release, etc.)

• Re-usable system-level interfaces (real-world testing)

Michael Young & Juergen Jaeger, Cadence

Accelerating DFT Simulation Closure with Xcelium Advancements

DFT simulation execution is crucial to the SoC design process: rapid turnaround time allows faster pattern verification for each netlist release and more iteration cycles during development, while efficient disk space and compute farm utilization enable a greater number of patterns to be simulated in each iteration cycle. Maintaining DFT simulation execution has proven challenging due to increasing design sizes and the higher test coverage and multiple fault models (e.g., stuck-at, transition fault, bridge, IDDQ, RAM sequential) needed to meet automotive quality requirements. To address these challenges, additional Xcelium features, namely multi-snapshot incremental elaboration (MSIE) and simulation save and restore, were incorporated into the DFT simulation flow. Results on two recent automotive designs include up to a 38% reduction in turnaround time from netlist release, up to a 96% reduction in the disk space required to store simulation builds, and 4,289 days of CPU time saved.

Rajith Radhakrishnan & Benjamin Niewenhuis, Texas Instruments

Automating the Connection of Verification Plans to Requirements - OpsHub Integration Manager & vManager

The OpsHub Integration Manager platform helps enterprises synchronize data between the tools used in the product (software and/or hardware) development ecosystem. With its latest release, OpsHub Integration Manager adds support for vManager, which helps Cadence customers synchronize requirements and associated information between vManager and other requirements management tools (e.g., Jira, Jama, IBM DOORS). Such bi-directional synchronization gives enterprises full traceability of requirements and test cases throughout their lifecycle in vManager and the other tools.

 

The vManager product pioneered verification planning and management with the industry's first commercial solution to automate the end-to-end management of complex verification projects, from goal setting to closure. Now in its fourth generation, with the introduction of the High Availability feature, Cadence® vManager™ Verification Management provides capabilities, in partnership with OpsHub, that allow requirements traceability by syncing vPlans in the database with third-party requirements management tools (e.g., Jira, Jama) to describe and trace the life of a requirement.

 

In this session we will review the motivation for requirements traceability and the need to connect requirements management systems to vPlans in vManager. We will also explore the need for a system like OpsHub Integration Manager to manage this connection, and its benefits to the overall development environment.

Nili Segal & Sandeep Jain, Cadence

Achieving Code Coverage Signoff on a Configurable and Changing Design Using the JasperGold Coverage Unreachability App

We developed a code coverage signoff flow using the Xcelium simulator's constant propagation, Incisive Metric Centre's coverage merge, and JasperGold's Coverage Unreachability app. Our goal was signoff criteria based on coverage reports that do not require writing custom waivers.

 

The merged code coverage database from the regressions was handed over to JasperGold to analyze the coverage holes for unreachability using formal techniques. Unreachable lines of code were removed from the list of holes to create a list of real holes. The constant propagation feature, which avoids instrumenting tied-off code, is helpful, but there are a lot of places that simulation constant propagation misses.

 

The formal analysis approach to eliminating unreachable code is key to achieving code coverage closure and on-time delivery of a highly configurable design. An automated coverage signoff flow without text-based filters, waivers, and exclusions enables replicating this success across EDA vendors.
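
(Illustrative sketch: the hole-pruning step above reduces to a set difference; the Python below assumes, hypothetically, that the coverage holes and the formally proven unreachable lines have each been exported as plain file:line lists, whereas the real flow works on the coverage databases directly.)

    def real_holes(holes_file, unreachable_file):
        """Remove formally proven unreachable lines from the merged
        coverage holes, leaving only the genuinely uncovered code."""
        with open(holes_file) as f:
            holes = {line.strip() for line in f if line.strip()}
        with open(unreachable_file) as f:
            unreachable = {line.strip() for line in f if line.strip()}
        return sorted(holes - unreachable)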

Mayur Desai Broadcom

Indago Python API Mathematical Signal Analysis

Simulating analog models with event-based simulation brings a significant performance increase over traditional mixed-signal simulation approaches. However, advanced mathematical analysis tools are usually not built to read the output of such simulations. The Indago Python API introduces a way to achieve advanced mathematical analysis by combining Indago probed data with open-source mathematical Python packages.
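
(Illustrative sketch: the Indago-specific calls for opening a database and fetching probed samples are omitted here because their exact signatures are tool-defined; the Python below shows only the downstream analysis step, resampling event-based, irregularly spaced samples onto a uniform grid so that standard NumPy/SciPy spectral tools apply.)

    import numpy as np
    from scipy import signal

    def analyze(times, values):
        """Spectral analysis of an event-based waveform: `times` and
        `values` are the (irregularly spaced) probed samples, e.g. as
        pulled from an Indago database via its Python API."""
        t_uniform = np.linspace(times[0], times[-1], 4096)
        v_uniform = np.interp(t_uniform, times, values)  # resample uniformly
        fs = 1.0 / (t_uniform[1] - t_uniform[0])
        freqs, psd = signal.welch(v_uniform, fs=fs, nperseg=1024)
        return freqs, psd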

Ramprasad Chandrasekaran, Renesas