Introduction

Welcome to Embedded University online! This space on the web is here to help you keep track of the latest happenings in the embedded world and to help embedded system developers produce better products faster. I hope it will help you find better ways to build embedded systems while maximizing the fun!

Saturday, November 18, 2006

Will we ever need memory in excess of 640K? - Part 2

In the first part of this article, I mentioned that an embedded designer's main concerns are high-speed operation, low power consumption, robustness, and process compatibility. Let's delve into these in this section.

Designing for high speed operation

Designing a fast read/write operation for nanometer memories requires consideration of several key issues, such as banking, power delivery, clocking speed, sense-amp sensing and timing, the wire loads on the bit lines, and the reference voltage and word-line driver strength, to name just a few.

In traditional sub-micron designs, pessimistic designers performed a critical-path analysis using a wire-load model with lumped parasitics that reflected the worst-case read/write scenario. While this provided an adequate lower bound on design performance, a considerable amount of potential power savings and performance was left on the table.

Given the coupling and interference issues, approaching nanometer embedded memory designs with lumped parasitics is unacceptable. Today's design flows utilize comprehensive load models with silicon-accurate parasitic values and simulate the design as-is, rather than relying on the abstract models of older flows. The sketch below makes the difference concrete.
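To show how pessimistic a single-lump model is, here is a minimal Python sketch that compares a lumped worst-case RC estimate for a bit line against a distributed, Elmore-style estimate. The resistance and capacitance values are made-up, illustrative assumptions, not real process data:

```python
# Minimal sketch: lumped worst-case RC estimate vs. distributed (Elmore)
# estimate for a bit line.  R and C values below are assumed for
# illustration only.

def lumped_delay(r_total, c_total):
    """Single-lump worst-case estimate: all of R drives all of C."""
    return r_total * c_total

def elmore_delay(r_total, c_total, segments=64):
    """Distributed estimate: split the line into equal RC segments and
    sum the Elmore delay node by node."""
    r_seg = r_total / segments
    c_seg = c_total / segments
    delay = 0.0
    r_upstream = 0.0
    for _ in range(segments):
        r_upstream += r_seg          # resistance between driver and this node
        delay += r_upstream * c_seg  # each node's C sees only the upstream R
    return delay

R_BITLINE = 2.0e3    # ohms   (assumed total bit-line resistance)
C_BITLINE = 250e-15  # farads (assumed total bit-line capacitance)

print(f"lumped estimate:      {lumped_delay(R_BITLINE, C_BITLINE) * 1e12:.1f} ps")
print(f"distributed estimate: {elmore_delay(R_BITLINE, C_BITLINE) * 1e12:.1f} ps")
```

With 64 segments the distributed estimate settles near 0.5 * R * C, roughly half the single-lump figure. That factor of two is exactly the kind of margin the older, pessimistic flows gave away.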

Designing for low power consumption

Power measurement is a system-level activity requiring a comprehensive operational view of the module and the blocks that interact with it. Understanding power consumption in embedded memories requires insight not only into the memory's V-I profile over combinations of read/write cycles, but also into the power profiles of the surrounding blocks the memory interacts with.

Leakage power increases with lower geometries and is exacerbated with today's low-voltage transistor thresholds. It is therefore essential to consider all these elements in the memory along with the clocks and switching signals and be able to accurately simulate the entire design to estimate the power consumption.
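As a back-of-the-envelope illustration (not a substitute for simulating the full design), here is a sketch that combines the classic alpha * C * V^2 * f dynamic term with a subthreshold leakage term that grows exponentially as the threshold voltage drops. Every number is an assumed, illustrative value:

```python
# First-order power budget sketch for an embedded SRAM: dynamic switching
# power plus leakage that rises exponentially as Vth is lowered.
# All parameter values are illustrative assumptions.

import math

def dynamic_power(alpha, c_switched, vdd, freq):
    """Classic alpha * C * V^2 * f switching-power estimate."""
    return alpha * c_switched * vdd ** 2 * freq

def leakage_power(n_transistors, vdd, vth,
                  i_leak_ref=100e-12,   # assumed leakage per device at vth_ref
                  vth_ref=0.45, n_factor=1.4, v_thermal=0.026):
    """Subthreshold leakage scaled from a reference point; it grows
    exponentially as Vth drops below vth_ref."""
    i_per_device = i_leak_ref * math.exp((vth_ref - vth) / (n_factor * v_thermal))
    return n_transistors * i_per_device * vdd

VDD = 1.0                             # volts (assumed)
FREQ = 500e6                          # Hz, assumed access rate
C_SWITCHED = 50e-12                   # farads switched per access (assumed)
N_DEVICES = 6 * 2 * 1024 * 1024 * 8   # 6T cells for an assumed 2 MB array

p_dyn = dynamic_power(alpha=1.0, c_switched=C_SWITCHED, vdd=VDD, freq=FREQ)
for vth in (0.45, 0.40, 0.35):
    p_leak = leakage_power(N_DEVICES, VDD, vth)
    print(f"Vth={vth:.2f} V  dynamic={p_dyn*1e3:.1f} mW  leakage={p_leak*1e3:.1f} mW")
# Lowering Vth by 100 mV multiplies leakage by roughly e^(0.1/0.036), about
# 15x -- which is why leakage comes to dominate at low thresholds.
```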

Designing for robustness

The robustness requirement of the ideal embedded memory that includes redundancy, instance-specific simulation, and so on, further strains the capabilities of nanometer design flows. High robustness requires comprehensive verification and characterization of the entire memory.
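As a deliberately simplified picture of the redundancy piece, the sketch below models row repair: failing row addresses found during test are remapped to spare rows, in the spirit of the fuse-programmed address comparators used on chip. The class name and array sizes are my own illustrative choices:

```python
# Minimal sketch of SRAM row redundancy: failing row addresses are
# remapped to spare rows.  Sizes and names are illustrative assumptions.

class RedundantRowDecoder:
    def __init__(self, num_rows, num_spare_rows):
        self.spares_free = list(range(num_rows, num_rows + num_spare_rows))
        self.remap = {}   # failed row address -> spare row address

    def repair(self, failed_row):
        """Assign the next free spare row to a failing address.  Returns
        False when spares are exhausted and the die cannot be repaired."""
        if not self.spares_free:
            return False
        self.remap[failed_row] = self.spares_free.pop(0)
        return True

    def decode(self, row_address):
        """Every access is checked against the programmed remap table first."""
        return self.remap.get(row_address, row_address)

decoder = RedundantRowDecoder(num_rows=1024, num_spare_rows=4)
decoder.repair(37)          # row 37 failed wafer test
print(decoder.decode(37))   # -> 1024, the first spare row
print(decoder.decode(38))   # -> 38, untouched rows decode normally
```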

Today's embedded memories in an SOC design vary from 2 MB to over 20 MB. For a robust design, one must not only simulate the critical path but also the entire memory with associated parasitics. This, alas, produces very large transistor-level netlists that may be multi-GB in size.
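A rough back-of-the-envelope calculation shows why the netlists balloon. Assuming a classic 6T bit cell and an assumed average of a few tens of bytes per device entry in a flat, parasitic-annotated netlist:

```python
# Back-of-envelope sketch of transistor counts and flat netlist sizes for
# 6T SRAM arrays.  The bytes-per-device figure is a rough assumption for a
# device card plus its share of extracted R/C lines.

TRANSISTORS_PER_CELL = 6     # classic 6T SRAM bit cell
BYTES_PER_DEVICE = 40        # assumed average netlist bytes per device

for mem_mb in (2, 8, 20):
    bits = mem_mb * 1024 * 1024 * 8
    devices = bits * TRANSISTORS_PER_CELL
    size_gb = devices * BYTES_PER_DEVICE / 2**30
    print(f"{mem_mb:>2} MB array: {devices/1e6:7.0f} M transistors, "
          f"~{size_gb:.0f} GB flat netlist")
```

Even at the small end of the range the array alone runs to gigabytes of netlist text, before decoders, sense amps, and I/O are added.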

Current transistor-level simulation tools provide a less-than-adequate solution to this critical problem as they are either too slow or quickly run out of capacity to handle these very large netlists. To design a highly reliable embedded memory, precious computation resources are expended in lengthy verification and characterization runs, which have a detrimental effect on the design cycle time and the time to market.

Designing for physical effects at nanoscale

SOC designs at the nanometer level have a distinct set of physical effects that need to be accounted for. The current practice of estimating the transistor behavior on silicon is done through esoteric models and is just that: an estimate. This estimation served well when digital operations were granular enough to ignore a lot of electrical effects (noise, leakage current, coupling, etc.) and physical effects (well proximity effect (WPE), sidewall capacitances, and so on).

As geometries and voltages scale down, these effects can no longer be ignored, and accurately predicting silicon behavior has become much more complex. In an attempt to account for these effects, nanoscale transistor models have incorporated parameters designed to reflect physical effects such as WPE, STI (shallow trench isolation) LOD (length of diffusion) effects, sidewall and non-linear capacitances, resistances, and so on. While this intricate modeling helps provide a better estimate of the circuit's behavior in silicon, it has the unfortunate side effect of increasing simulation time in traditional simulators by as much as 20 percent.
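To see why even one of these effects adds cost, here is a small sketch using the standard SPICE junction-capacitance expression: unlike a constant lumped capacitor, the value depends on the instantaneous voltage and must be re-evaluated for every device at every timestep. The parameter values are illustrative assumptions:

```python
# Sketch of a non-linear (voltage-dependent) junction capacitance versus
# the constant value an abstract model would use.  CJ0, PB and MJ values
# are assumed for illustration.

def junction_cap(v_reverse, cj0=1e-15, pb=0.8, mj=0.5):
    """C(V) = CJ0 / (1 + Vr/PB)^MJ for a reverse-biased junction."""
    return cj0 / (1.0 + v_reverse / pb) ** mj

C_CONSTANT = 1e-15  # the fixed capacitance an abstract model would assume

for v in (0.0, 0.3, 0.6, 0.9, 1.2):
    c = junction_cap(v)
    error = (C_CONSTANT - c) / c * 100
    print(f"Vr={v:.1f} V  C={c*1e15:.2f} fF  constant-cap error={error:+.0f}%")
# Across the supply range the capacitance swings by tens of percent, so a
# simulator that tracks it pays a per-timestep evaluation cost -- part of
# the runtime increase mentioned above.
```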

Studies have indicated that over half of the time in the design cycle is spent on verification. As future embedded memory designs add functionality through integration, it only lengthens the verification time requirement. While new processes help embedded memory to be compatible with logic processes, the physical effects get more pronounced at the nanometer level and cannot be abstracted away with simplistic voltage-based device models.

As designers are quickly realizing, the most effective path to assure reliable operation of the embedded memory is to simulate it at the transistor level. In order for designers to achieve a successful embedded memory design, they must accurately simulate very large memories in a comprehensive manner in a relatively short amount of time. Judging from the extensive designer feedback we have obtained from numerous customers, there are no adequate solutions in the marketplace.

Several static timing analysis (STA) tools provide a fast solution by abstracting key memory blocks, but they must trade accuracy for that speed, yielding analyses that may be off by more than 25%. For process geometries of 130 nm and above, traditional fast SPICE simulators provided a relatively good combination of accuracy and speed, but their performance quickly deteriorated as memory sizes increased and nanometer physical effects became more pronounced.

So what's the key? That is discussed in Part 3.

Happy Designing






