IT managers are under increasing pressure to boost network capacity and performance to cope with the data deluge. Networking systems face a similar strain, with performance degrading as new capabilities are added in software. The answer to both problems is next-generation System-on-Chip (SoC) communications processors that combine multiple cores with multiple hardware acceleration engines.

The data deluge, with its massive growth in both mobile and enterprise network traffic, is driving substantial changes in the architectures of base stations, routers, gateways, and other networking systems. To maintain high performance as traffic volume and velocity continue to grow, next-generation communications processors combine multicore processors with specialized hardware acceleration engines in SoC ICs.

The following discussion examines the role of the SoC in today’s network infrastructures, as well as how the SoC will evolve in coming years. Before doing so, it is instructive to consider some of the trends driving this need.

refer: http://embedded-computing.com/articles/next-generation-architectures-tomorrows-communications-networks/

In response to growing pressure to boost the performance and trim the size of embedded applications, standards organizations meet regularly to optimize their portfolios in light of the latest available technology. These updated standards take advantage of new silicon architectures that combine multiple processors, graphics elements, and complex I/O to deliver the next generation of pre-engineered, off-the-shelf modules supporting many of the high-performance requirements of embedded product development.

These standardized computer platforms let designers trade slightly higher recurring costs for substantial savings in Non-Recurring Engineering (NRE) and schedule. Standards-based designs also shortcut the software development effort by providing access to compatible operating systems, vendor-supplied drivers, and sample firmware.

In the Strategies section of this issue, we asked experts from several standards organizations to bring us up to date on the latest changes affecting embedded designs. Starting things off, Jim Blazer, CTO at RTD Embedded Technologies and active member of the PC/104 Consortium, presents the history of the PC/104 stackable architecture and the updates in progress, such as the latest generation of PCI Express, that support it. Citing the need for smaller and more rugged building blocks, Alexander Lockinger, President of the Small Form Factor Special Interest Group (SFF-SIG) and CTO at Ascend Electronics, covers the trends and new products to expect in 2013. In addition, Jerry Gipper, Director of Marketing at VITA and Editorial Director of VITA Technologies magazine, reports on the recent Embedded Tech Trends 2013 meeting aboard the Queen Mary and standards work in progress, plus some new technologies such as optical interconnects.

Migrating legacy applications to multicore: Not as scary as it sounds

Multicore processors bring significant performance and power usage benefits to embedded systems, but they also add the complexity of multiprocessing to the legacy migration workload. Nonetheless, development teams can successfully manage their transition to multicore by following some straightforward techniques.

Port to a portable standard

Often, migrating to multicore involves more than moving to a new processor. In many cases, developers must first port the legacy code to a new programming language, compiler, or OS. Using an open standard such as POSIX is highly recommended, given its support across many general-purpose and real-time operating systems. Doing so helps ensure that large portions of the application, including its interface with the OS, are portable. Just as important, the POSIX standard has a proven history in multiprocessing systems, and a multicore processor is simply a multiprocessing System-on-Chip (SoC).
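
As a minimal sketch of what such a port looks like, the hypothetical vendor call legacy_spawn_task() below stands in for any proprietary task API; its portable POSIX replacement, pthread_create(), behaves the same on Linux, QNX, and other POSIX-conformant operating systems:

    /* Sketch of a POSIX port: the hypothetical vendor-specific call
     * legacy_spawn_task() is replaced with pthread_create(), which is
     * available on any POSIX-conformant OS. */
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    static void *worker(void *arg)
    {
        printf("worker %ld running\n", (long)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        int rc;

        /* Before: legacy_spawn_task(worker, 1);   -- vendor-specific */
        /* After:  portable across POSIX-conformant OSs */
        rc = pthread_create(&tid, NULL, worker, (void *)1L);
        if (rc != 0) {
            fprintf(stderr, "pthread_create: %s\n", strerror(rc));
            return 1;
        }
        pthread_join(tid, NULL);
        return 0;
    }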

Divide and conquer

OSs that support Symmetric Multiprocessing (SMP) are the best option for homogeneous multicore processors. SMP leaves the complex details of allocating CPU resources to the OS, rather than to the application. From the application’s point of view, the interface to the OS remains the same, regardless of the number of cores, from 1 to N. Consequently, the application can scale easily as more cores are added.
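
A short sketch of that scaling property, assuming a POSIX system where sysconf(_SC_NPROCESSORS_ONLN) reports the online core count: the same code runs unchanged whether the SMP OS has 1 core or N on which to schedule the workers.

    /* Sketch: ask how many cores are online, spawn one worker per
     * core, and let the SMP OS schedule them. The same binary scales
     * from 1 to N cores without modification. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define MAX_WORKERS 64

    static void *worker(void *arg)
    {
        /* ... this worker's share of the workload ... */
        return NULL;
    }

    int main(void)
    {
        long ncores = sysconf(_SC_NPROCESSORS_ONLN);
        pthread_t tids[MAX_WORKERS];

        if (ncores < 1)
            ncores = 1;
        if (ncores > MAX_WORKERS)
            ncores = MAX_WORKERS;

        for (long i = 0; i < ncores; i++)
            pthread_create(&tids[i], NULL, worker, (void *)i);
        for (long i = 0; i < ncores; i++)
            pthread_join(tids[i], NULL);

        printf("ran %ld workers, one per core\n", ncores);
        return 0;
    }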

A multicore system running in SMP mode provides true parallelism, but some legacy applications were never designed for parallel execution. Often, large portions of the code either do not use threads at all or use them only to isolate blocking system calls such as file or network I/O, rather than to run different parts of the application in parallel.

Another typical pitfall occurs when code uses a priority scheme to control access to shared memory. For instance, in a uniprocessor embedded system, the software developer can often assume that a high-priority thread and a low-priority thread will not access the memory simultaneously, since the high-priority thread will always preempt the low-priority thread. Thus, many programs fail to use a mutual exclusion lock (mutex) to properly synchronize access to the memory. In an SMP multicore system, however, both of these threads can run in parallel and, as a result, access memory simultaneously with unpredictable results. Other insidious problems might exist due to synchronization errors that work perfectly on a single-processor system but surface only in multiprocessor execution.
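
The fix is the synchronization the original code skipped. A minimal sketch, assuming POSIX threads (the update_counter() helper is illustrative):

    /* On a uniprocessor, strict priority preemption made the unlocked
     * update "work". Under SMP, both threads can execute this function
     * at the same time, so the shared data needs a mutex. */
    #include <pthread.h>

    static long shared_counter;   /* memory shared by both threads */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Called from both the high- and low-priority threads. */
    void update_counter(long delta)
    {
        pthread_mutex_lock(&lock);   /* required once threads run in parallel */
        shared_counter += delta;     /* no longer a data race */
        pthread_mutex_unlock(&lock);
    }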

To solve such problems, developers can divide and conquer: isolate the problem code on a single core of the multicore chip until the code can be fixed. To do this, developers can use Bound Multiprocessing (BMP), an extension to SMP that allows selected processes to run on only a specified core or CPU. In effect, BMP provides a single-core, nonparallel execution environment for legacy code while allowing other code to leverage the full parallelism of SMP. The development team can subsequently remove the CPU binding once they have modified the legacy code to behave properly in its new parallel environment.
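
BMP itself is an OS-specific capability; as an illustrative analogue, Linux exposes per-process CPU affinity, which can pin problem code to a single core in much the same way. A sketch, assuming Linux (sched_setaffinity() is Linux-specific):

    /* Illustrative core binding on Linux: pin the calling process to
     * core 0 so legacy code gets a single-core, nonparallel
     * environment while other processes use the remaining cores. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int pin_to_core0(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(0, &set);          /* allow core 0 only */

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return -1;
        }
        return 0;
    }

Once the legacy code has been fixed, removing the binding is simply a matter of restoring the full core mask, mirroring how a BMP restriction would be lifted.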

Leverage the tools

Development teams must also use the right tools. In particular, they need visualization tools that help them pinpoint areas where code is misbehaving in a parallel environment. Mostly, this effort involves the detection and correction of the synchronization bugs mentioned earlier.

Once an application is operating properly, it may still fail to take advantage of all of the multicore chip’s CPU capacity. Visualization tools can help here, too, by allowing developers to reduce contention for shared resources (hot spots), eliminate excessive thread migration or communication between cores, and find opportunities for parallelizing code. As the number of cores increases in multicore platforms, visualization tools will be the key to successfully leveraging the performance benefits that multicore offers.

To provide such analysis, multicore visualization tools must reach beyond the scope of conventional debug tools. They must, for example, track threads as they migrate from one core to another and diagnose messages flowing between cores. They must also offer flexible control over which events are recorded and when, so that developers can focus on areas of concern.

Making the transition

“Multicore” does not need to be a bad word nor add another roadblock to legacy migration. Adopting portable programming standards such as POSIX, using OSs designed for multicore platforms, isolating legacy code to run on a single core, and using visualization tools all make the transition less daunting.

refer: http://mil-embedded.com/articles/migrating-applications-multicore-not-scary-it-sounds/

Simplifying the development of M2M devices

With advances in wireless technologies, defining a strategy for building wireless M2M-enabled devices is not the dauntingly complex task it was once thought to be. Instead of devoting precious R&D resources to the integration of fragmented, ad hoc technologies, today’s developers can take advantage of increasingly sophisticated Embedded Application Frameworks (Linux, Android, and others), some of which are highly optimized for M2M application development.

Machine-to-Machine (M2M) communication, or the ability to connect and manage remote devices over the air, offers enormous potential. With the ability to centrally control remote industrial equipment, track vehicle fleets, manage electric vehicle charging stations, expand the capabilities of consumer devices, and much more, M2M has profound implications for virtually every industry.

Given the novelty of M2M technology, however, developing connected devices has traditionally been an expensive and time-consuming process, largely due to the fact that system designers had to build the entire M2M architecture from scratch. Today, designers have a powerful new option in their M2M toolkit: Embedded Application Frameworks (EAFs). By deploying connected services on mature, prepackaged Real-Time Operating Systems (RTOSs) and libraries embedded directly into the communications module, M2M designers can substantially reduce the time and costs involved in developing new M2M hardware and focus their efforts on creating innovative connected applications.

refer: http://embedded-computing.com/articles/embedded-frameworks-simplifying-development-m2m-devices/