Introduction

Welcome to the Embedded University online! This space on the web is meant to help you keep track of the latest happenings in the embedded world and to help embedded system developers produce better products faster. I hope it will help you find better ways to build embedded systems while maximizing the fun!

Saturday, December 23, 2006

Traffic Information with all the senses

When it's not building neck-numbingly bad seats, or eye-wateringly ugly executive saloons, BMW is actually quite a clever company. It's got next-generation telematics right. The best way to improve safety is to share up-to-the-second data about road conditions with the cars that are actually on the road. And the best way to do this without violating the laws of physics is to adopt a peer-to-peer system, in which cars communicate directly with their neighbours rather than going via a central control point. Of course it helps to have a central co-ordinating point as well – partly to collate data and send it to vehicles that are out of sight of other cars, but also to check whether the data from any particular car is trustworthy. This is how BMW's XFCD (Extended Floating Car Data) system seems to work.

To date, telematic services have only worked through the intermediary of a traffic centre, which compares and analyses up-to-the-minute traffic information from various sources and then, for example, passes on traffic jam warnings to all participating vehicles. So vehicles have been playing a role in providing specialist traffic information services for years. With FCD, the Floating Car Data system, vehicles fitted with a GPS receiver and a mobile phone module send information about their speed and position to the traffic centre. Cars known as 'Floating Car Data Vehicles' have been on the roads since 1999, "some 40,000 in Germany", according to traffic researcher Susanne Breitenberger from the BMW Group. However, according to Breitenberger, the drawback of this system is that "it costs a relatively large amount to transfer a relatively small volume of data". So she and her team of traffic researchers are working on an advanced version of data logging called Extended Floating Car Data, XFCD for short.

"XFCD cuts out the middle man - the traffic control centre - and the vehicles can swap the information they have calculated directly with one another," says Breitenberger, describing the benefits. XFCD uses much of the vehicle-generated data that is already available in the vehicle in order to obtain a comprehensive picture of the traffic situation: Are the lights on, are the wipers working, what are the rain and ABS sensors reporting? In the future, the special software - very little additional equipment is required - will be able to calculate not only travel times, but also congested exits and junctions, jam conditions and the weather and road conditions. Conclusions can be drawn about local risks such as black ice and aquaplaning, and about the current traffic situation, directly within the vehicle. Finally, the car sends information to the traffic centre and transmits it directly to other affected vehicles in the area. Another benefit is that "XFCD is more cost-effective because its 'back-channel referencing' sends only that information which is really needed to the traffic centre," says Breitenberger. This means that a vehicle with XFCD which is stuck in a traffic jam and has already been given that information over the radio can register that the report is correct, so it is not reported to the traffic centre again. The result: "The system generates traffic information and sends this on a 'need to know' basis. By using the new XFCD system, we are saving money on communication relative to the FCD system, while at the same time improving the quality of the traffic information put out," says Breitenberger. At the same time, the system not only enables the information calculated on board to be compared with the traffic reports on the radio, but also checks that the traffic jam reports are still accurate. So the system sends up-to-date information directly to the traffic centre if, for example, a traffic jam has cleared.
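
To make the "need to know" idea concrete, here is a minimal C sketch of the kind of decision an XFCD vehicle might make before contacting the traffic centre. The data structure and function names are hypothetical assumptions; only the logic (confirmations of already-broadcast jams are not re-reported, while new jams and cleared jams are) comes from the description above.

```c
/* Minimal sketch of XFCD-style "need to know" reporting, assuming a
 * hypothetical on-board API. Names and fields are illustrative only. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int  road_segment_id;   /* segment the vehicle is currently on        */
    bool jam_detected;      /* on-board sensors indicate stop-and-go flow */
    bool jam_already_known; /* same jam already broadcast over the radio  */
} traffic_observation_t;

/* Report only information the traffic centre does not already have. */
static bool should_report_to_centre(const traffic_observation_t *obs)
{
    if (obs->jam_detected && obs->jam_already_known)
        return false;                 /* merely confirms the broadcast: stay silent */
    if (obs->jam_detected || obs->jam_already_known)
        return true;                  /* new jam, or a broadcast jam that cleared   */
    return false;                     /* nothing noteworthy to send                 */
}

int main(void)
{
    traffic_observation_t obs = { 42, true, true };
    printf("segment %d: report = %s\n", obs.road_segment_id,
           should_report_to_centre(&obs) ? "yes" : "no");
    return 0;
}
```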

What makes XFCD special is that the customer does not have to fit additional hardware into the car. "The system works using the existing vehicle architecture, only software is added," stresses Breitenberger. After all, plenty of information besides the speed is already recorded in the car. The introduction of modern on-board computer systems allows a wide range of available data to be combined. This can be used to derive information about the traffic, the road conditions and the current weather. It includes data from, for example, the navigation system, the headlights, the air-conditioning system and the rain sensors for the windscreen wipers. The system processes this data and determines the current condition of both the traffic and the roads. This means traffic and risk situations, such as black ice, rain or fog, can be detected immediately. "For example, a Dynamic Stability Control (DSC) intervention in conjunction with a low external temperature, high wiper frequency and correspondingly low speeds translates into a local risk of slipping because of ice or oil on the road," explains Breitenberger. The traffic researcher is optimistic that XFCD can be implemented quickly. Based on her calculations, which were drawn up for the city of Munich, if there is just 7.3 percent take-up of XFCD technology in vehicles, the traffic situation could be reliably determined on 80 percent of the main streets of the city.
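
As a rough illustration of how such an on-board rule could look, here is a C sketch combining the signals Breitenberger mentions. The thresholds, signal names and struct layout are my assumptions for illustration, not BMW's actual calibration.

```c
/* Illustrative sketch of the kind of rule XFCD could apply on board,
 * combining signals already present in the vehicle. All thresholds
 * and names are assumptions. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool  dsc_intervention;   /* Dynamic Stability Control just intervened */
    float outside_temp_c;     /* external temperature sensor, deg C        */
    int   wiper_speed_level;  /* 0 = off .. 3 = fastest                    */
    float vehicle_speed_kmh;  /* current road speed                        */
} sensor_snapshot_t;

/* DSC activity at low temperature, high wiper frequency and low speed
 * is read as a local risk of ice, oil or aquaplaning. */
static bool local_slip_risk(const sensor_snapshot_t *s)
{
    return s->dsc_intervention &&
           s->outside_temp_c    <  3.0f &&
           s->wiper_speed_level >= 2    &&
           s->vehicle_speed_kmh < 60.0f;
}

int main(void)
{
    sensor_snapshot_t s = { true, 1.5f, 2, 45.0f };
    printf("slip risk: %s\n",
           local_slip_risk(&s) ? "warn nearby vehicles" : "none");
    return 0;
}
```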

The BMW Group has already proved that the new technology works in practice by demonstrating the system on a special test track as part of the "Innovative Mobility Showcase" in San Francisco. As its base scenario, BMW selected a situation in which one of three vehicles drives onto a slippery road surface. That vehicle processes all the information from its sensors and warns the following vehicles in real time. At the same time, the data is forwarded to a traffic centre. Information gathered by XFCD vehicles on the traffic situation on public highways can be viewed on the internet and made available directly as traffic information to all road users.

XFCD is being implemented at the BMW Group in line with the BMW ConnectedDrive system. The basic philosophy is the networking of the driver, the vehicle and the environment using telematics, online and driver assistance systems, in order to make driving safer and roads less congested.


Saturday, November 18, 2006

Freescale's Ultra-Wideband Brings Wireless Capability to In-Car Entertainment

Freescale Semiconductor's Ultra-Wideband (UWB) wireless technology is bringing new capabilities to auto infotainment systems. Representing the first auto application to leverage UWB, a wireless video system using Freescale's UWB solution demonstrates the potential for wireless in-car entertainment systems.


The wireless video system is being shown in a seven-passenger sport utility vehicle (SUV). Leveraging Freescale's UWB, it simultaneously streams two separate videos to two liquid crystal display (LCD) screens mounted on the back of the driver and front-passenger headrests, eliminating the need for cables and wires to connect them to the video server located in the vehicle. An additional two screens, located on the back of the second-row headrests, wirelessly receive video using 802.11n technology, showcasing the coexistence of UWB with other wireless solutions. This product concept highlights the potential for wireless video and audio entertainment systems in cars, trucks and SUVs.

"Car multimedia -- or infotainment -- is an ideal application for Ultra-Wideband," said Martin Rofheart, director of the UWB Operation at Freescale. "Delphi, as a global leader in transportation technology, has leveraged its innovation and expertise to lead the development of a UWB-powered video application for the auto industry. This demonstration today is an important milestone and gives attendees a look at the future of auto infotainment."

Wirelessly linking other infotainment systems within the auto, such as gaming or global positioning system (GPS) units, is easily realizable using UWB, thanks to the technology's high data rates and low cost.

With the ability to eliminate wires and cables to individual screens in the auto, which in turn reduces the size and cost of the unit, UWB can help manufacturers deliver a high-quality, low-cost in-car entertainment system. Currently, auto manufacturers are finalizing specifications for 2009 and 2010 models, and the availability of Freescale's UWB solutions now provides these manufacturers with a viable solution for their wireless infotainment systems.

Freescale's UWB solutions enable high rate transfer of video, audio and data streams wirelessly. With wire-like quality, UWB brings a new wireless option to auto, consumer electronics and PC/peripheral manufacturers. Using UWB, for example, an MPEG2 movie or HDTV stream can be broadcast in real-time wirelessly. This allows consumers new freedom in the use of multimedia-centric products, as they no longer need to be connected with wires.



Will we ever need memory in excess of 640K? - Part 3

Coming to the last and final section of this article, I am sure you are well aware that verification poses real problems for embedded designers. This section deals with methods to tackle them. I will try to keep it simple so it is easy to comprehend.

Current-based simulation

Active MOS devices are intrinsically current-based devices and a current-based model reflects this behavior significantly better than a voltage-based model. While a voltage-based model provides important information about transistor behavior, it has several drawbacks in terms of accuracy for current measurements, stability in simulations and size of the resulting matrix.

Current-based models are not only as accurate as Spice or Spice-like models, but they also simplify the topological structure of the equivalent circuits. This greatly improves the solution of non-linear equations and matrices. Additionally, current-based models are very efficient in device representation and require less memory than voltage-based models. This helps a great deal in addressing the large physical memory requirements when simulating large embedded memories.
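
For readers unfamiliar with the idea, the following is a deliberately simplified C sketch of a device evaluated as a voltage-controlled current source, which is the essence of a current-based formulation: the simulator asks the model for the drain current at the present terminal voltages. It uses a textbook square-law model purely for illustration; production current-based models are far more elaborate.

```c
/* Simplified illustration of a current-based device evaluation:
 * the MOSFET is queried as a voltage-controlled current source.
 * The square-law model below is a teaching stand-in only. */
#include <stdio.h>

typedef struct {
    double k;     /* transconductance parameter, A/V^2 */
    double vth;   /* threshold voltage, V               */
} nmos_t;

/* Long-channel square-law drain current for an NMOS device. */
static double nmos_ids(const nmos_t *m, double vgs, double vds)
{
    double vov = vgs - m->vth;
    if (vov <= 0.0)
        return 0.0;                                   /* cutoff     */
    if (vds < vov)
        return m->k * (vov * vds - 0.5 * vds * vds);  /* triode     */
    return 0.5 * m->k * vov * vov;                    /* saturation */
}

int main(void)
{
    nmos_t m = { 2.0e-4, 0.45 };
    printf("Ids at Vgs=1.0 V, Vds=1.0 V: %.3e A\n", nmos_ids(&m, 1.0, 1.0));
    return 0;
}
```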

Multi-engine architecture

An embedded memory circuit profile is unique in that it consists of digital, analog and mixed-signal blocks that closely interact with each other. Other characteristic features are the large replication of circuit structures and the small amount of active circuitry in any given clock cycle. The memory itself can be decomposed into basic transistors, bit-cell blocks, decode blocks, interconnect structures and multiple other design entities, each of which has a unique simulation profile.

When considering a circuit for simulation, traditional fast Spice simulators use one monolithic engine to tackle all the varied elements in the circuit. This is inefficient, and performance degrades as processes become more complex (as they do at nanoscale) and designs get larger. A multi-engine architecture, on the other hand, helps in two ways: it uses a dedicated engine to optimally handle each type of memory circuit component, and it provides an efficient infrastructure for managing and parallelizing the simulations.
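
A conceptual sketch of that dispatch step might look like the C fragment below. The block categories and engine names are invented for illustration; the point is simply that each partition type is routed to a solver suited to it.

```c
/* Conceptual sketch of multi-engine dispatch: each partition of the
 * memory is handed to the solver best suited to it. Partition types
 * and engine names are hypothetical. */
#include <stdio.h>

typedef enum {
    BLOCK_BITCELL_ARRAY,   /* huge, highly repetitive, mostly inactive */
    BLOCK_DECODER,         /* digital, event-driven                    */
    BLOCK_SENSE_AMP,       /* analog, needs high accuracy              */
    BLOCK_INTERCONNECT     /* large linear RC network                  */
} block_kind_t;

static const char *pick_engine(block_kind_t kind)
{
    switch (kind) {
    case BLOCK_BITCELL_ARRAY: return "table-driven fast engine";
    case BLOCK_DECODER:       return "event-driven digital engine";
    case BLOCK_SENSE_AMP:     return "full-accuracy analog engine";
    case BLOCK_INTERCONNECT:  return "linear RC reduction engine";
    }
    return "default engine";
}

int main(void)
{
    block_kind_t blocks[] = { BLOCK_BITCELL_ARRAY, BLOCK_DECODER,
                              BLOCK_SENSE_AMP, BLOCK_INTERCONNECT };
    for (int i = 0; i < 4; i++)
        printf("block %d -> %s\n", i, pick_engine(blocks[i]));
    return 0;
}
```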

The use of multiple, dedicated engines also delivers greater performance in the simulation of embedded memory designs while increasing the accuracy of the results.

Intelligent topological assessments

Recognizing an independent repeated structure or partition in a layout-extracted memory circuit, especially when there are millions of coupling capacitors and resistors, is tricky at best. Given that circuit activity varies with the inputs and the resulting control signal changes, partitioning a circuit becomes all the more challenging. Algorithms that can intelligently recognize these partitions and memory topologies, and appropriately guide the simulation to use them, are crucial to the speed and capacity of fast Spice simulation.
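
The following toy C example hints at one way repeated partitions can be spotted: structurally identical sub-circuits yield identical signatures, so detailed analysis is needed only once per class. Real partitioning algorithms are considerably more sophisticated and consider connectivity and activity, not just element counts.

```c
/* Toy illustration of recognizing repeated partitions by structural
 * signature. Identical sub-circuits hash to the same value, so only
 * one representative needs detailed analysis. */
#include <stdio.h>

typedef struct {
    int n_transistors;
    int n_resistors;
    int n_capacitors;
    int n_ports;
} subckt_stats_t;

/* Fold the structural counts into a simple signature. */
static unsigned long signature(const subckt_stats_t *s)
{
    unsigned long h = 1469598103ul;
    h = h * 31 + (unsigned long)s->n_transistors;
    h = h * 31 + (unsigned long)s->n_resistors;
    h = h * 31 + (unsigned long)s->n_capacitors;
    h = h * 31 + (unsigned long)s->n_ports;
    return h;
}

int main(void)
{
    subckt_stats_t column_a = { 6, 0, 2, 4 };   /* a 6T bit-cell column slice  */
    subckt_stats_t column_b = { 6, 0, 2, 4 };   /* structurally identical copy */
    printf("same partition class: %s\n",
           signature(&column_a) == signature(&column_b) ? "yes" : "no");
    return 0;
}
```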

Advanced interconnect evaluations

For embedded memory designs, parasitic loads are the predominant factor in signal delays, and parasitic elements outnumber active devices by a ratio of over 4:1.

At the nanometer level, the composition of this interconnect is unique and complex. Dedicated algorithms have been developed that recognize these interconnect structures, model them accurately and simulate them efficiently. This is accomplished without affecting either the accuracy or the physical effects associated with that interconnect. This greatly enhances the speed of simulation as well as the capacity of the fast Spice simulator.

Conclusion

Embedded memory designers face an uphill task, with several areas outside their control. Escalating mask design complexity and cost severely limit design iterations. Newer methodologies will need to be adopted to streamline the design cycle. However, there are exciting developments on the embedded memory design simulation and verification front.

Fast Spice simulation technology designed to address the emerging challenges of nanometer designs greatly helps reduce the designer's burden. With the speed to accurately simulate complex embedded memory structures, such technology helps to drastically shorten the design cycle.

With its large capacity to rapidly verify chip-level netlists with parasitics, the next-generation fast Spice simulator greatly improves the robustness of the design and reduces the time to market. If the embedded nanometer memory design challenges are to be tackled head-on, it is imperative for designers and design managers to seriously consider incorporating next-generation fast Spice simulation and verification technologies into their design flows.






Will we ever need memory in excess of 640K? - Part 2

In the first part of this article, I mentioned that an embedded designer's main concerns are high speed operation, low power consumption, robustness and process compatibility. Let's delve into these in this section.

Designing for high speed operation

Designing a fast read/write operation for nanometer memories requires consideration of several key issues, such as banking, power delivery, clocking speed, sense-amp sensing and timing, the wire loads on the bit lines, and the reference voltage and word-line driver strength, to name just a few.

In traditional sub-micron designs, pessimistic designers performed a critical-path analysis with a wire-load model with lumped parasitics that reflected the worst-case read/write scenario. While this provided an adequate lower-bound estimation of the design performance, a considerable amount of potential power savings and performance was wasted.

Given the coupling and interference issues, approaching nanometer embedded memory designs with lumped parasitics is unacceptable. Today's design flows utilize comprehensive load models with silicon-accurate parasitic values and simulate the design as is, rather than relying on the abstract models of older flows.
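
A small C example shows why the lumped worst-case wire-load model is pessimistic compared with a distributed view of the same wire: the Elmore delay of an N-segment RC ladder comes out at roughly half the lumped RC product. The resistance and capacitance values are illustrative only, not extracted from any real bit line.

```c
/* Contrast a lumped worst-case wire-load estimate with the Elmore
 * delay of a distributed RC ladder having the same total R and C. */
#include <stdio.h>

#define SEGMENTS 10

int main(void)
{
    double r_total = 500.0;      /* total bit-line resistance, ohms */
    double c_total = 200e-15;    /* total bit-line capacitance, F   */

    /* Lumped model: all of R drives all of C. */
    double t_lumped = r_total * c_total;

    /* Distributed model: Elmore delay of an N-segment RC ladder. */
    double r_seg = r_total / SEGMENTS;
    double c_seg = c_total / SEGMENTS;
    double t_elmore = 0.0;
    for (int i = 1; i <= SEGMENTS; i++)
        t_elmore += (i * r_seg) * c_seg;   /* resistance upstream of node i */

    printf("lumped RC delay      : %.3e s\n", t_lumped);
    printf("distributed (Elmore) : %.3e s\n", t_elmore);
    return 0;
}
```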

Designing for low power consumption

Power measurement is a system-level activity requiring a comprehensive operational view of the module and the blocks that interact with it. Understanding power consumption in embedded memories requires a better understanding not only of the memory's V-I profile over combinations of read-write cycles, but also the power profile of the surrounding blocks that the memory interacts with.

Leakage power increases at smaller geometries and is exacerbated by today's low transistor threshold voltages. It is therefore essential to consider all these elements in the memory, along with the clocks and switching signals, and to be able to accurately simulate the entire design to estimate the power consumption.
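
In essence, the simulation-based estimate boils down to integrating supply voltage times current over a representative read/write cycle and adding a static leakage term, as in this small C sketch. The current waveform here is a hard-coded stand-in for simulator output, and all values are assumptions for illustration.

```c
/* Minimal sketch of average power estimation from a simulated supply
 * current waveform over one cycle, plus static leakage. */
#include <stdio.h>

int main(void)
{
    double vdd     = 1.2;       /* supply voltage, V                   */
    double i_leak  = 50e-6;     /* static leakage current estimate, A  */
    double dt      = 0.1e-9;    /* sample step of the current waveform */
    double i_dyn[] = { 0.0, 2e-3, 8e-3, 5e-3, 1e-3, 0.0 };  /* A       */
    int    n       = (int)(sizeof i_dyn / sizeof i_dyn[0]);

    /* Dynamic energy: integrate Vdd * I(t) over the cycle. */
    double e_dyn = 0.0;
    for (int i = 0; i < n; i++)
        e_dyn += vdd * i_dyn[i] * dt;

    double t_cycle = n * dt;
    double p_avg   = e_dyn / t_cycle + vdd * i_leak;

    printf("average power over the cycle: %.3e W\n", p_avg);
    return 0;
}
```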

Designing for robustness

The robustness requirement of the ideal embedded memory that includes redundancy, instance-specific simulation, and so on, further strains the capabilities of nanometer design flows. High robustness requires comprehensive verification and characterization of the entire memory.

Today's embedded memories in an SOC design vary from 2 MB to over 20 MB. For a robust design, one must not only simulate the critical path but also the entire memory with associated parasitics. This, alas, produces very large transistor-level netlists that may be multi-GB in size.

Current transistor-level simulation tools provide a less-than-adequate solution to this critical problem as they are either too slow or quickly run out of capacity to handle these very large netlists. To design a highly reliable embedded memory, precious computation resources are expended in lengthy verification and characterization runs, which have a detrimental effect on the design cycle time and the time to market.

Designing for physical effects at nanoscale

SOC designs at the nanometer level have a distinct set of physical effects that need to be accounted for. The current practice of estimating the transistor behavior on silicon is done through esoteric models and is just that: an estimate. This estimation served well when digital operations were granular enough to ignore a lot of electrical effects (noise, leakage current, coupling, etc.) and physical effects (well proximity effect (WPE), sidewall capacitances, and so on).

As geometries and voltages scale down, these effects can no longer be ignored and the ability to accurately predict silicon behavior has become much more complex. In an attempt to account for these effects, nanoscale transistor models have incorporated parameters designed to reflect physical effects such as WPE, STI (shallow trench isolation) and LOD (length of diffusion) effects, sidewall and non-linear capacitances, resistances, and so on. While this intricate modeling helps to provide a better estimation of the behavior of the circuit in silicon, it has the unfortunate side-effect of increasing the simulation time of traditional simulators by as much as 20 percent.

Studies have indicated that over half of the time in the design cycle is spent on verification. As future embedded memory designs add functionality through integration, it only lengthens the verification time requirement. While new processes help embedded memory to be compatible with logic processes, the physical effects get more pronounced at the nanometer level and cannot be abstracted away with simplistic voltage-based device models.

As designers are quickly realizing, the most effective path to assure reliable operation of the embedded memory is to simulate it at the transistor level. In order for designers to achieve a successful embedded memory design, they must accurately simulate very large memories in a comprehensive manner in a relatively short amount of time. Judging from the extensive designer feedback we have obtained from numerous customers, there are no adequate solutions in the marketplace.

Several static timing analysis (STA) tools exist that provide a fast solution by abstracting key memory blocks, but these tools must trade off accuracy to achieve this speed, which results in analyses that may be off by more than 25%. For designs larger than 130nm, traditional fast Spice simulators provided a relatively good solution with a combination of accuracy and speed, but their performance quickly deteriorated as the size of the memories increased and nanometer physical effects became more pronounced.

So what's the key? The answers are discussed in Part Three.

Happy Designing








Will we ever need memory in excess of 640K? - Part 1

A couple of decades ago, we would have asked ourselves who would ever need more than 640K of RAM. But today we ask: can memory requirements be bounded at all? Every new generation of consumer electronic gadgets has applications that bedazzle the senses, greedily devouring more memory in the process. One can watch the latest video on a cell phone, take a picture with a pen and get the latest weather info on a wristwatch. Try doing that with 640K of RAM! This article is a three-part series dealing with the challenges of embedded memory devices.

As we push towards greater integration, current system-on-chip (SOC) designs dramatically increase memory content and show no signs of relenting. According to the Semiconductor Industry Association (SIA), memory already occupies over 60% of the silicon area in SOC designs and is projected to represent over 90% of the die area by the end of the decade. New SOC designs are beginning to take on the appearance of a memory chip with logic surrounding it.

The predominance of memory in SOC designs is made more acute by the variety of memory types that are being used today. The multi-functional nature of current designs is reflected by the International Technology Roadmap for Semiconductors (ITRS). Having an SOC design embedded with a DRAM along with a CAM, an EPROM, and a multi-port SRAM is not uncommon.

There can be several instances of the same memory that might exist on the chip with different architectures for high-performance, low-power, other form-factors, and so on. These variations require that, for design and analysis purposes, multiple instances of the same memory be treated as distinct entities.

The rule is: the smaller the device sizes, the greater the advantages, including greater functionality per chip, lower overall cost and higher portability. But smaller geometries have also resulted in an ever-increasing set of design and manufacturing challenges. We shall restrict our discussion to issues that affect the simulation and verification of these embedded memory designs, and to some possible solutions.

From the embedded designer's view, the key points to consider are high speed operation, low power consumption, robustness (i.e., correct data at the correct location) and process compatibility.

I'll be dealing with each of these in part two.



Friday, November 17, 2006

QNX, VoiceBox Team On Next-Gen Telematics

Two software giants have joined forces to improve the user experience of voice recognition systems for in-car telematics and infotainment applications. QNX Software Systems and VoiceBox Technologies are working together to improve the reliability and user experience in next-generation telematics, consumer electronics, and digital home applications. One result: the VoiceBox Conversational Voice Search Platform, deployed on the QNX Neutrino realtime operating system (RTOS), lets users speak in natural and conversational ways to more easily search for digital content and to control in-car and consumer devices.

Leading auto suppliers are already implementing technology from QNX and VoiceBox to enable hands-free and eyes-free management of a variety of in-car functions, including navigation, climate control, and radio control through spoken requests. The system can understand the context and intent of queries, which allows users to speak in a conversational way. For example, the driver can ask "What's the temperature in Detroit?" and then say "Raise the temperature to 70 degrees," and the car will respond to the different requests accordingly.
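
To illustrate the role of context and intent in such requests, here is a toy C sketch that routes the two example utterances to different actions. It is only an illustration of the idea; the parsing here is crude keyword matching, and VoiceBox's actual algorithms are of course far more capable.

```c
/* Toy sketch of intent routing for conversational in-car requests.
 * The matching and action names are illustrative assumptions. */
#include <stdio.h>
#include <string.h>

typedef struct {
    char last_location[32];   /* most recently mentioned place */
} dialog_context_t;

static void handle_utterance(dialog_context_t *ctx, const char *utterance)
{
    if (strstr(utterance, "temperature in Detroit")) {
        strcpy(ctx->last_location, "Detroit");           /* remember context  */
        printf("QUERY  weather(%s)\n", ctx->last_location);
    } else if (strstr(utterance, "Raise the temperature")) {
        /* No location given: interpret as in-car climate control. */
        printf("ACTION climate_set(cabin, 70F)\n");
    } else {
        printf("UNKNOWN request\n");
    }
}

int main(void)
{
    dialog_context_t ctx = { "" };
    handle_utterance(&ctx, "What's the temperature in Detroit?");
    handle_utterance(&ctx, "Raise the temperature to 70 degrees");
    return 0;
}
```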

"The fault-tolerant environment offered by the QNX Neutrino RTOS enables us to better support the high standards necessary for in-car conversational voice applications," said Tom Freeman, Senior Vice President of Marketing at VoiceBox Technologies. "The synergies between our technologies will also lend themselves to the development of sophisticated voice search applications for emerging markets such as the digital home."

"VoiceBox is the market leader in conversational voice search and is currently being used by many of our automotive development partners," said Andrew Poliak, Automotive Segment Manager at QNX Software Systems. "This partnership is key to improving the user experience and performance of electronic devices as the markets for telematics and consumer applications merge."

The VoiceBox Conversational Platform is based on advanced algorithms that determine context and intent from conversational speech. Users aren't required to memorize exact preset commands and can simply ask for what they want, even in noisy environments such as the automobile.


AMI Launches Automotive Grade Embedded Flash Technology

AMI Semiconductor has launched its automotive grade embedded Flash memory for its 0.35μ ‘Smart Power’ high-voltage mixed-signal system-on-chip (SoC) process technology. The combination of the proven CMOS mixed-signal technology and the new non-volatile memory (NVM) allows designers to create cost-effective smart sensor interfaces, intelligent actuators and other sophisticated single-chip devices for operation in automotive and other harsh environment applications.

AMI Semiconductor’s latest High Injection MOS (HiMOS) embedded memory IP and technology give designers the flexibility to configure the on-board Flash memory capacity from 2Kbytes to 64Kbytes in target applications expected to operate in harsh conditions. Memory can be delivered in single-bank or dual-bank configurations, with memory retention for code storage of up to 15 years.
Single-bank configurations can provide up to 64kBytes of code storage. The dual-bank option allows code storage up to 62kBytes and data storage of 2kBytes, emulating an EEPROM capable of a minimum of 10,000 erase cycles. In addition to providing Flash functionality in the final product, HiMOS NVM can also be used for rapid development and prototyping, prior to shrinking to a ROM-based solution for final manufacture. Sector and multiple sector erase time is 0.5s, while page programming takes just 20μs (with 32-pages per sector). Random access read time is 100ns for either 8- or 16-bit words. The memory is fully qualified to the AEC-Q100 critical stress test for automotive electronic components.
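
For reference, the published figures can be captured as constants, as in the C sketch below. Only the numbers come from the announcement; the struct, macro names and layout are assumptions for illustration.

```c
/* Constants capturing the published HiMOS figures; names and the
 * bank-configuration struct are illustrative assumptions. */
#include <stdio.h>

#define HIMOS_PAGE_PROG_TIME_US     20u    /* page programming time        */
#define HIMOS_PAGES_PER_SECTOR      32u
#define HIMOS_SECTOR_ERASE_TIME_MS  500u   /* sector / multi-sector erase  */
#define HIMOS_READ_ACCESS_NS        100u   /* random access, 8/16-bit word */

typedef struct {
    unsigned code_bytes;   /* code storage               */
    unsigned data_bytes;   /* EEPROM-emulation data area */
} himos_bank_cfg_t;

int main(void)
{
    himos_bank_cfg_t single_bank = { 64u * 1024u, 0u };
    himos_bank_cfg_t dual_bank   = { 62u * 1024u, 2u * 1024u };

    printf("single-bank: %u bytes code\n", single_bank.code_bytes);
    printf("dual-bank  : %u bytes code + %u bytes data (>= 10,000 erase cycles)\n",
           dual_bank.code_bytes, dual_bank.data_bytes);
    printf("page program %u us, sector erase %u ms, random read %u ns\n",
           HIMOS_PAGE_PROG_TIME_US, HIMOS_SECTOR_ERASE_TIME_MS,
           HIMOS_READ_ACCESS_NS);
    return 0;
}
```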

Capable of operation up to 80V, AMI Semiconductor’s 0.35μ CMOS-based mixed-signal technology allows system designers to reduce component count, save space, and lower costs through ICs that integrate high-density digital circuits, high-voltage circuitry and high-precision analog blocks. The automotive grade embedded HiMOS Flash memory offers a cost-effective route to reliable on-chip code and data storage and will operate at temperatures from -40°C to 125°C. The memory can also continue to provide read functionality at temperatures as high as 150°C.

AMI Semiconductor’s mixed-signal Smart Power ICs can incorporate a wide variety of digital, analog and high-voltage functions, including processors, communication interfaces, bus protocol controllers and interfaces for CAN and LIN connectivity, high-voltage functions such as motor control drivers and DC/DC converters, and analog blocks including filters, ADCs and DACs. Because the AMIS HiMOS Flash is implemented using only three mask layers in addition to the base technology mask set, intelligent Smart Power Flash-based ICs provide a highly cost-effective alternative to discrete solutions and other SoC approaches.

The embedded Flash memory process and custom design service provide benefits to users such as memory sized to specific needs, and supply of SoCs with integrated HiMOS Flash memory for the lifetime of a project; in automotive and industrial applications this can be 10 years or more. This protects against the risk of needing to periodically re-qualify a design due to the phasing out of current technology in favor of newer approaches, a problem that often occurs when using stand-alone non-volatile memory.

To help speed the development of applications based on the latest mixed-signal and NVM technology, AMIS offers an emulation board and mixed-signal test ICs so designers can develop and debug their software in parallel with hardware development.
