75% Fuel Economy Improvement Achieved with Exa's Simulation

Exa Corporation, a global innovator of fluid simulation solutions for product engineering, stated that Cummins Inc. and Peterbilt Motors Company, the first companies to announce their SuperTruck for the Department of Energy (DOE) SuperTruck Program, credited Exa’s technology and engineering expertise as instrumental in the success of their recently announced vehicle. Exa worked with engineers from both Cummins and Peterbilt to perform aerodynamic and thermal simulations to achieve significant efficiency improvements throughout the tractor, trailer and engine. These simulations, done long before a physical prototype was ever created, helped this SuperTruck exceed the required 50% efficiency improvement and deliver a 75% more efficient truck, ahead of schedule.

The remarkable improvements were made possible through the collaboration of two world-class companies. They evaluated the entire truck, from the underhood cooling requirements and engine housing to every part of the tractor and trailer. “It was not one aerodynamic or thermal simulation that made the difference,” stated David Koeberlein, Cummins’ Program Lead for the SuperTruck program. “Using Exa’s simulations, we were able to rapidly find and address areas of thermal and aerodynamic efficiency throughout the truck; it was a critical resource for our team.”

The project started with Cummins’ engineers digitally packaging their new, energy-efficient engine, designed with a waste heat recovery system, into the Peterbilt tractor CAD geometry. They then added heat exchangers and simulated the thermal performance of the complete system. “Exa’s technology was able to quickly demonstrate, through simulation alone, optimal cooling package design,” remarked Jon Dickson, Cummins Engineering Manager of Advanced Engine Integration. “To package the new waste heat recovery condenser, we had to redesign the vehicle heat exchanger system and use a non-traditional layout. We were able to use Exa’s PowerCOOL and PowerTHERM to identify areas to improve thermal performance while maximizing aerodynamic efficiency, years before any vehicle was built.”

At the same time, Landon Sproull, Peterbilt Chief Engineer, and Rick Mihelic, Peterbilt Manager of Vehicle Performance and Engineering Analysis, were evaluating their tractor and trailer combinations for aerodynamic and thermal performance. “Over the course of three years, we ran hundreds of simulations to test and analyze every part of this truck using Exa PowerFLOW,” stated Mihelic. “We designed a completely new SuperTruck aerodynamic package which included visible devices such as trailer skirts and wheel well covers, as well as unseen, but critical, underbody shields that optimize airflow and thermal efficiency.” Sproull added, “Using visualization of the simulation results, our team analyzed each area looking for opportunities for improvement. Our designers and engineers could easily review and discuss results and optimization options, something simply not possible in a wind tunnel. It was this comprehensive vehicle analysis that helped us achieve extreme efficiency savings that exceeded even the aggressive goals set by the DOE.”

“Each day our customers seek efficiency improvements using our aerodynamic, thermal and acoustic solutions,” remarked Stephen Remondi, Exa’s President and CEO. “We have been working with Peterbilt for many years and were pleased to see them use Exa’s solutions so effectively as part of this important initiative that will benefit us all in the end.”

About Exa Corporation

Exa Corporation develops, sells and supports simulation software and services to enhance product performance, reduce product development costs and improve the efficiency of design and engineering processes. Exa’s simulation solutions enable our customers to gain crucial insights about design performance early in the design cycle, thus reducing the likelihood of expensive redesigns and late-stage engineering changes. As a result, Exa’s customers realize significant cost savings and fundamental improvements in their engineering development process. Exa’s products include PowerFLOW, PowerDELTA with PowerCLAY, PowerVIZ, and PowerSPECTRUM, along with professional engineering consulting services. A partial customer list includes: AGCO, BMW, Ford, Hyundai, Jaguar Land Rover, Kenworth, MAN, Nissan, Peterbilt, Renault, Scania, Toyota, Volkswagen, and Volvo Trucks.

refer to:
http://embedded-computing.com/news/exas-improvement-cummins-peterbilt-supertruck/

Acrosser’s Embedded Products in the Media

In February, Acrosser Technology was interviewed by Elektronik Praxis and Digitimes, two news sources with great reputations in the embedded technology industry in Germany and Taiwan, respectively. Here we share with you a summary of the two interviews.

There are many industrial computer manufacturers in Taiwan, and in this competitive environment, it pays to be smart. For over two decades, Acrosser Technology’s claim to fame has been its staffing structure: one third of the staff belongs to the Research and Development Department. For IPC manufacturers, a larger number of people engaged in research reflects a greater effort in design, communication, verification and validation behind each industrial product. For instance, all car PCs from Acrosser undergo a series of anti-shock/vibration tests before final production. Both of Acrosser’s in-vehicle computers, the AR-V6100FL and AR-V6005FL, were awarded the Taiwan Excellence Award, and Acrosser still supplies these car computers to system integrators globally. The fanless car computers feature Intel processors (Core i7, Core i5, or Celeron), rich I/O interfaces and an integrated graphics processor, allowing each customer to find the best in-vehicle solution for their industry.

As for the embedded computer market, Acrosser has chosen its Fanless Embedded System, the AES-HM76Z1FL, to reach its target audience. With a fanless design, Core i series processor, and an ultra-slim body as its three main features, the so-called “F.I.T. Technology” that makes up the AES-HM76Z1FL has garnered numerous business inquiries since its release last year. The standard I/O ports (HDMI, VGA, USB, audio and GPIO) and small form factor make the AES-HM76Z1FL an appealing solution for the following applications: security control, banking systems, ATMs, kiosks, digital signage, e-commerce via cloud applications, network terminals, and more. With its optional Mini PCIe socket for a 3.5G or WiFi module, wireless communication capability allows the AES-HM76Z1FL to be a feasible addition to any transportation management control system.

To further promote the advantages of our book-sized mini PC, Acrosser launched a free Product Testing Event in January 2014, and has since received a great deal of positive feedback from the security, financial, and entertainment industries. If you are looking for embedded products with great computing performance, do not miss your final chance to submit an application now!

Aside from its traditional industrial PCs, in-vehicle computers and Embedded Systems, Acrosser has a wide array of other product lines, including all-in-one gaming systems, single-board computers, panel PCs, industrial touch displays, rackmount servers and network appliance devices, waiting for you to make your embedded idea a reality.

Original articles:
http://www.elektronikpraxis.vogel.de/sps-ipc/articles/433172/

http://www.digitimes.com.tw/tw/iot/shwnws.asp?cnlid=15&cat=30&cat1=10&id=0000368249_26U3LBNW3F20COLW64SF5

Contact us:
http://www.acrosser.com/inquiry.html

Connected but private: Transporter aims to be your off-cloud Dropbox

Can the gap between personal and cloud storage be easily bridged? Connected Data’s Transporter aims to create remote storage that’s not actually stored in the cloud.

The cloud may be the future of all things storage, but the present is more complicated: it can be expensive, potentially insecure, and you’re left trusting a third party with all your data.

That’s what inspired The Transporter, a Kickstarter project started by former employees of Drobo. Transporter aims for something more secure and distributed, while still being shareable. The concept largely works like Dropbox, with a Transporter folder that lives on your desktop and syncs with files stored on the physical Transporter drive (which resides someplace you designate). You can easily give others access to specific folders, although they will need to register for a free Transporter account.

The physical Transporter is the big difference; all your data lives on your own drive, rather than on a third party’s cloud servers (which could be located in data centers anywhere in the world). In addition to giving you the peace of mind of having the drive under your personal control, having the Transporter on your local home or business network will make for faster transfer speeds while you’re on-site. (When accessing the Transporter remotely, of course, you’ll be subject to the host location’s upstream and downstream data speeds.)

The Transporter itself includes housing for a 2.5-inch SATA hard drive, with an Ethernet and a USB port on the back. It can work with Wi-Fi, but you need to buy an adapter that connects via USB. It sounds a lot like other hard-drive housings, but the Transporter is meant to be used in tandem with other Transporters. Plug one in somewhere, and it can share its drive with other Transporters, syncing and copying all data between them, depending on how you configure your folders. Even better, if any drive were to fail, the information is redundantly stored on every other Transporter connected to the network, in addition to PCs that have the shared Transporter folder.

The strongest part of The Transporter’s pitch comes down to pricing. Yes, Dropbox offers a lot of the same functionality without the need for hardware, but it gets pricey quickly: 100GB is $100 per year and 500GB is $500 per year. For large storage amounts, the Transporter’s no-subscription-fee model is much more affordable: a 1TB Transporter is $300 and a 2TB Transporter is $400, plus you can buy the hardware without storage for $200 and add your own hard drive later. It might make a lot of sense for professionals who need to offer access to large files and don’t want to deal with antiquated FTP transfers.

What’s the difference between this and any other networked hard drive? Theoretically, ease of use and a setup process that may be able to easily bypass firewalls and port settings, like the Pogoplug. In our meeting with Connected Data, no demo of the software was shown; all we saw was the Transporter box itself. It’s reasonably attractive, but ultimately the success of the hardware is going to come down to the quality of the software and overall experience.

The Transporter’s laser focus on data storage and backup means it’s not quite as flexible as a more traditional network attached storage (NAS) drive. Sure, you can store your personal photos, music, and videos on a Transporter, but it lacks a built-in media server (such as DLNA or AirPlay) that makes it easy to access those on, say, an Apple TV or PS3 without leaving a separate computer on. While the Transporter team says it’s looking into those types of features for the future, at the moment it’s really more of a personal off-cloud storage device than a full-fledged NAS replacement.

We’ve felt the pain of dealing with large amounts of data, like videos and photos, that take up too much space for cloud storage yet still need to be shared as well as secured and backed up. Transporter sounds like it fills some of those needs (storage, shareability), but not all of them. The question is, are there enough people out there who need a device like this for it to be successful? It’s hard to say, but The Transporter raised more than double its $100,000 Kickstarter goal, plus the company announced today that it has secured $6 million in additional financing.

The Transporter is available to order today, directly from Connected Data. We’re expecting to get a review unit soon, so we can see if its software and services deliver on their promise.

refer to:
http://reviews.cnet.com/8301-3382_7-57566899/connected-but-private-transporter-aims-to-be-your-off-cloud-dropbox/

Stay social with the Acrosser AMB-D255T3 Mini-ITX Board!

To further promote Acrosser products, we will continue to enrich our web content and translate our website into more languages for our global audience. This month, Acrosser created a short film that highlights its Mini-ITX board, the AMB-D255T3, using close-ups to capture its best features from different angles.

One fascinating feature of the AMB-D255T3 is its large heatsink, which improves heat transfer away from the board. Secondly, the large number of intersecting aluminum fins increases the heat radiation area as well as heat-dissipation efficiency. The fanless design also eliminates the risk of fan malfunction, raising the product’s life expectancy. Without a fan, the AMB-D255T3 single board computer performs steadily, cool and quiet.

Using the Intel Atom D2550 as a base, the AMB-D255T3 was developed to provide abundant peripheral interfaces to meet the needs of different customers. For those looking for expansion, the board provides one Mini PCIe socket for a wireless or storage module. Also, for video interfaces, it features dual displays via VGA, HDMI or 18-bit LVDS, satisfying as many industries as possible.

In conclusion, Acrosser’s AMB-D255T3 is a perfect combination of low power consumption and great computing performance. The complete set of I/O functions allows system integrators to apply our AMB-D255T3 to all sorts of solutions, making their embedded ideas a reality.

Product Information:
http://www.acrosser.com/Products/Single-Board-Computer/Mini-ITX-&-others/AMB-D255T3%E3%80%80(Mini-ITX-)/Intel-Atom-D2550-AMB-D255T3-(Mini-ITX)-.html

Follow us on Twitter!
http://twitter.com/ACROSSERmarcom

Contact us:
http://www.acrosser.com/inquiry.html

Embedded Virtualization: Latest Trends and Techniques

Data center and enterprise IT architectures have been increasingly influencing all areas of embedded systems. Virtualization techniques are commonplace in enterprises and data centers as a way to increase capacity and reduce floor space and power consumption. From networking to smartphones, industrial control to point-of-sale systems, the embedded market is also accelerating the adoption of virtualization for some of the same reasons, as well as others unique to embedded systems.

Virtualization is the creation of software abstraction on top of a hardware platform and/or Operating System (OS) that presents one or more independent virtualized OS environments.

Enterprise and data center environments have been using virtualization for years to maximize server platform performance and run a mix of OS-specific applications on a single machine. They typically take one server blade or system and run multiple instances of a guest OS and web/application server, then load balance requests among these virtual server/app environments. This enables a single hardware platform to increase capacity, lower power consumption, and reduce physical footprint for web- and cloud-based services.
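
The load-balancing step described above can be sketched with a toy round-robin dispatcher. This is only an illustration of the idea, with hypothetical backend names; a real deployment would use a reverse proxy such as HAProxy or nginx rather than application code:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin dispatcher over identical guest-VM backends."""
    def __init__(self, backends):
        self._ring = cycle(backends)   # endless iterator over the backend list

    def pick(self):
        """Return the next backend in rotation for an incoming request."""
        return next(self._ring)

# Three virtual server/app environments hosted on one physical machine
lb = RoundRobinBalancer(["vm-web-1", "vm-web-2", "vm-web-3"])
order = [lb.pick() for _ in range(4)]   # fourth request wraps to the first VM
```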

Within the enterprise, virtualized environments may also be used to run applications that only run on a specific OS. In these cases virtualization allows a host OS to run a guest OS that in turn runs the desired application. For example, a Windows machine may run a VMware virtual machine that runs Linux as the guest OS in order to run an application only available on Linux.

How is embedded virtualization different?

Unlike data center and enterprise IT networks, embedded systems span a very large number of processors, OSs, and purpose-built software. So introducing virtualization to the greater embedded systems community isn’t just a matter of supporting Windows and Linux on Intel architecture. The primary drivers for virtualization are different as well. Embedded systems typically consist of a real-time component, where it is critical to perform specific tasks within a guaranteed time period, and a non-real-time component that may include processing real-time information, managing or configuring the system, and use of a Graphical User Interface (GUI).

Without virtualization, the non-real-time components can compromise the real-time nature of the system, so often these non-real-time components must run on a different processor. With virtualization these components can be combined on a single platform while still ensuring the real-time integrity of the system.

Technologies enabling embedded virtualization

There are some key capabilities required for embedded virtualization: multicore processors, and hypervisors (VM monitors) for the relevant OSs and processor architectures. In the enterprise/data center world, Intel has been implementing multicore technology in its processors for years now. Having multiple truly independent cores and symmetric multiprocessing laid the groundwork for the widespread use of virtualization. In the embedded space, there are even more processor architectures to consider, like ARM and its many variants, MIPS, and Freescale PowerPC/QorIQ architectures. Many of these processor technologies have only recently started incorporating multicore. Further, hypervisors must be made available for these processor architectures. Hypervisors must also be able to host the variety of real-time and general-purpose OSs used in the embedded world. Many Real-Time Operating System (RTOS) vendors are introducing hypervisors that support Windows and Linux along with their RTOS, which provides an embedded baseline that enables virtualization.
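
On x86, the hardware side of this is visible to software: Intel VT-x advertises a `vmx` CPU flag and AMD-V an `svm` flag in `/proc/cpuinfo` on Linux. A minimal sketch of checking for them (parsing supplied text rather than a live system, so it runs anywhere):

```python
def hw_virt_flags(cpuinfo_text):
    """Collect x86 hardware-virtualization flags from /proc/cpuinfo-style
    text: Intel VT-x reports 'vmx', AMD-V reports 'svm'."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):          # one 'flags' line per logical CPU
            words = line.split()
            found.update(f for f in ("vmx", "svm") if f in words)
    return found

# On a real system you would pass open("/proc/cpuinfo").read() instead.
sample = "processor : 0\nflags : fpu mce vmx est sse2\n"
```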

Where are we in the adoption?

As multicore processors continue to penetrate embedded systems, the use of virtualization is increasing. More complex embedded environments that include a mix of real-time processing with user interfaces, networking, and graphics are the most likely application. Another feature of embedded environments is the need to communicate between the VM environments – the real-time component must often provide the data it’s collecting to the non-real-time VM environment for reporting and management. These communications channels are often not needed in the enterprise/data center world since each VM communicates independently.
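
A queue between two threads can serve as a rough stand-in for such an inter-VM channel. Real hypervisors use mechanisms like shared memory or virtio rings; the producer/consumer names below are illustrative only:

```python
import queue
import threading

# Stand-in for a shared channel between a real-time VM (producer)
# and a non-real-time management VM (consumer).
channel = queue.Queue(maxsize=64)

def rt_producer(samples):
    """Real-time side publishes its collected sensor readings."""
    for s in samples:
        channel.put(s)
    channel.put(None)            # sentinel: end of stream

def mgmt_consumer():
    """Management side drains the channel for reporting."""
    received = []
    while (item := channel.get()) is not None:
        received.append(item)
    return received

t = threading.Thread(target=rt_producer, args=([1, 2, 3],))
t.start()
data = mgmt_consumer()
t.join()
```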

LynuxWorks embedded virtualization perspective

Robert Day, Vice President of Sales and Marketing at LynuxWorks (www.lynuxworks.com), echoed much of this history and the current state of embedded virtualization. “Enterprise environments are nowhere near as diverse as the embedded systems environment. In addition, embedded environments are constrained – the virtualization layer must deal with specific amounts of memory and accommodate a variety of CPUs and SoC variants.”

Day notes that embedded processors are now coming out with capabilities to better support embedded virtualization. Near-native performance is perhaps more important in embedded than enterprise applications, so these hypervisors and their ability to provide a thin virtualization and configuration layer, then “get out of the way” is an important feature that provides the performance requirements the industry needs.

Day references Type 2 hypervisors – those that run on or depend on another OS – and notes this kind of configuration simply doesn’t work in most embedded environments, due to losing near-native performance as well as potentially compromising real-time characteristics. Type 1 hypervisors – a software layer running directly on the hardware and providing the resource abstraction to one or more OSs – can work, but tend to have a large memory footprint since they often rely on a “helper” OS inside the hypervisor. For this reason, LynuxWorks coined the term “Type 0 hypervisor” – a hypervisor that has no OS inside. It’s a small piece of software that manages memory, devices, and processor core allocation. The hypervisor contains no drivers – it just tunnels through to the guest OSs. The disadvantage is that it doesn’t provide all the capabilities that might be available with a helper OS inside the hypervisor.

Embedded system developers typically know the platform their systems run on, what OSs are used, and what the application characteristics are. In these cases, it’s acceptable to use a relatively static configuration that gains higher performance at the expense of less flexibility – certainly an acceptable trade-off for embedded systems.

LynuxWorks has been seeing embedded developers take advantage of virtualization to combine traditionally separate physical systems into one virtualized system. One example Day cited was combining a real-time sensor environment that samples data with the GUI management and reporting system.

Processors that incorporate Memory Management Units (MMUs) support the virtualized memory maps well for embedded applications. A more challenging area is the sharing or allocating of I/O devices among or between virtualized environments. “You can build devices on top of the hypervisor, then use these devices to communicate with the guest OSs,” Day says. “This would mean another virtual system virtualizing the device itself.” Here is where an I/O MMU can provide significant help. The IOMMU functions like an MMU for the I/O devices. Essentially the hypervisor partitions devices to go with specific VM environments and the IOMMU is configured to perform these tasks. Cleanly partitioning the IOMMU allows the hypervisor to get out of the way once the device is configured and the VM environment using that device can see near-native performance of the I/O.
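
The partitioning Day describes can be modeled with a toy class. This only illustrates the ownership check an IOMMU enforces in hardware, with hypothetical device and VM names:

```python
class Iommu:
    """Toy model: the hypervisor partitions I/O devices among VM
    environments; the IOMMU then rejects DMA from a device into any
    memory not belonging to the VM that owns it."""
    def __init__(self):
        self._owner = {}                      # device id -> owning VM id

    def assign(self, device, vm):
        """Hypervisor-time step: dedicate a device to one VM."""
        if device in self._owner:
            raise ValueError(f"{device} already assigned to {self._owner[device]}")
        self._owner[device] = vm

    def dma_allowed(self, device, vm):
        """Runtime check: may this device DMA into this VM's memory?"""
        return self._owner.get(device) == vm

iommu = Iommu()
iommu.assign("nic0", "vm-rt")    # real-time VM gets the network interface
iommu.assign("gpu0", "vm-gui")   # GUI VM gets the graphics device
```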

LynuxWorks has seen initial virtualization use cases in defense applications. The Internet of Things (IoT) revolution is also fueling the embedded virtualization fire.

Virtualization is one of the hottest topics today and its link to malware detection and prevention is another important aspect. Day mentioned that malware detection is built into the LynuxWorks hypervisor. This involves the hypervisor being able to detect behavior of certain types of malware as the guest OSs run. Because of the privileged nature of the hypervisor, it can look for certain telltale activities of malware going on with the guest OS and flag these. Most virtualized systems have some method to report suspicious things from the hypervisor to a management entity. When the reports are sent, the management entity can take action based on what the hypervisor is reporting. As virus and malware attacks become more purpose-built to attack safety-critical embedded applications, these kinds of watchdog capabilities can be an important line of defense.

Wind River embedded virtualization perspective

Technology experts Glenn Seiler, Vice President of Software Defined Networking and Davide Ricci, Open Source Product Line Manager at Wind River (www.windriver.com) say virtualization is important in the networking world.

A network transformation is underway: The explosion of smart portable devices coupled with their bandwidth-hungry multimedia applications has brought us to a crossroads in the networking world. Like the general embedded world, network infrastructure is taking a page from enterprise and data center distributed architectures to transform the network from a collection of fixed-function infrastructure components to general compute and packet processing platforms that can host and run a variety of network functions. This transformation is called Software Defined Networking (SDN). Coupled with this initiative is Network Functions Virtualization (NFV) – taking networking functionality like bridging, routing, network monitoring, and deep packet inspection and creating software components that can run within a virtualized environment on a piece of SDN infrastructure. This model closely parallels how data centers work today, and it promises to lower operational expense, increase flexibility, and shorten new services deployment.

Seiler mentions that there has been considerable pull from service providers to create NFV-enabled offerings from traditional telecom equipment manufacturers. “Carriers are pushing toward NFV. Wind River has been developing their technical product requirements and virtualization strategy around ETSI NFV specifications. This has been creating a lot of strong demand for virtualization technologies and Wind River has focused a lot of resources on providing carrier-grade virtualization and cloud capabilities around NFV.”

Seiler outlines four important tenets that are needed to support carrier-grade virtualization and NFV:

Reliability and availability. Network infrastructure is moving toward enterprise and data center architectures, but must do so while maintaining carrier-grade reliability and availability.
Performance. Increasing bandwidths and real-time requirements, such as baseband and multimedia streaming, require near-native performance with NFV.
Security. Intelligent virtualized infrastructure must maintain security and be resistant to malware or viruses that might target network infrastructure.
Manageability. Virtualized, distributed network components must be manageable transparently with existing OSS/BSS, provide the ability to perform reconfiguration, and still be resilient to a single point of failure.

Wind River recently announced Wind River Open Virtualization. This is a virtualization environment based on the Kernel-based Virtual Machine (KVM) that delivers the performance and management capabilities required by communications service providers. Service provider expectations for NFV are ambitious – among them being able to virtualize base stations and radio access network controllers – and to support these kinds of baseband protocols at peak capacity, the system has to have significant real-time properties.

Specifically, Wind River looked at interrupt and timer latencies for natively running applications versus applications running on a hypervisor managing the VMs. Ricci mentioned Wind River engineers spent a significant amount of time developing on the KVM open source baseline to provide real-time preemption components with the ability to get near-native performance. Maintaining carrier-grade speeds is especially important for the telecom industry, as performance cannot be compromised.

refer to:
http://embedded-computing.com/articles/embedded-virtualization-latest-trends-techniques/

Acrosser Introduces the Book-Sized Fanless Mini PC Video

To illustrate the high performance of the AES-HM76Z1FL, Acrosser created a short film explaining the multiple features of our ultra-slim embedded system. From its exterior, this book-sized mini PC embodies great computing performance within its small form factor.

The arrangement of the I/O ports takes both product design and industrial applicability into consideration. Despite the AES-HM76Z1FL’s small form factor, a wide selection of I/O ports, including HDMI, USB, LAN, COMBO, GPIO and COM, can be found on both sides of the product. Moreover, our model can be mounted horizontally or vertically, making it a flexible option that caters to many different industries. We are sure that these qualities make the AES-HM76Z1FL a more feasible choice than other embedded systems.

The second part of the video demonstrates the 4 major applications of our AES-HM76Z1FL mentioned in our previous announcement: digital signage, kiosk, industrial automation and home automation. Aside from these four applications, Acrosser believes there are still many other applications for which the AES-HM76Z1FL would be useful.

Through the video, Acrosser was able to demonstrate the best features of the AES-HM76Z1FL, and allow our customers to easily see its power and versatility.

Finally, we would like to offer our gratitude to the many applicants for the free Product Testing Event. The program is easy to apply to, and it is still going on right now! Having reached the halfway mark of the event, many system integrators and industrial consultants have already provided plenty of interesting ideas. For those who have not yet applied, Acrosser welcomes you to submit your proposals!

Product Information:
http://www.acrosser.com/Products/Embedded-Computer/Fanless-Embedded-Systems/AES-HM76Z1FL/Intel-Core-i3/i7-AES-HM76Z1FL.html

Contact us:
http://www.acrosser.com/inquiry.html

Enhanced Cybersecurity Services: Protecting Critical Infrastructure

Comprehensive cybersecurity is an unfortunate necessity in the connected age, as malware like Duqu, Flame, and Stuxnet has proven to be an effective instrument of espionage and physical sabotage rather than a vehicle of petty cybercrime. In an effort to mitigate the impact of such threats on United States Critical Infrastructure (CI), the Department of Homeland Security (DHS) developed the Enhanced Cybersecurity Services (ECS) program, a voluntary framework designed to augment the existing cyber defenses of CI entities. The following provides an overview of the ECS program architecture, technology, and entry qualifications as described in an “on background” interview with DHS officials.

At some point in 2007, an operator at the Natanz uranium enrichment facility in Iran inserted a USB memory device infected with the Stuxnet malware into an Industrial Control System (ICS) running a Windows operating system. Over the next three years, the malware would propagate over the Natanz facility’s internal network by exploiting zero-day vulnerabilities in a variety of Windows OSs, eventually gaining access to the Programmable Logic Controllers on a number of the ICSs controlling the facility’s gas centrifuges. Stuxnet then injected malicious code to make the centrifuges spin at their maximum degradation point of 1,410 Hz. One thousand of the 9,000 centrifuges at the Natanz facility were damaged beyond repair.

In February 2013, Executive Order (EO) 13,636 and Presidential Policy Directive (PPD)-21 ordered the DHS to develop a public-private partnership model to protect United States CI entities from cyber threats like Stuxnet. The result was an expansion of the Enhanced Cybersecurity Services (ECS) program from the Defense Industrial Base (DIB) to 16 critical infrastructure sectors.

Enhanced Cybersecurity Services framework

At its core, ECS is a voluntary information-sharing framework that facilitates the dissemination of government-furnished cyber threat information to CI entities in both the public and private sectors. Through the program, sensitive and classified cyber threat information is collected by agencies across the United States Government (USG) or by EINSTEIN sensors placed on Federal Civilian Executive Branch (FCEB) agency networks, and then analyzed by DHS to develop “threat indicators”. DHS-developed threat indicators are then provided to Commercial Service Providers (CSPs) that, after being vetted and entering a Memorandum of Agreement (MOA) with DHS, may commercially offer approved ECS services to entities that have been validated as part of United States CI. The ECS services can then be used to supplement existing cyber defenses operated by or available to CI entities and CSPs to prevent unauthorized access, exploitation, and data exfiltration.

In addition, CSPs may also provide limited, anonymized, and aggregated cybersecurity metrics to the DHS Office of Cybersecurity & Communications (CS&C) with the permission of the participating CI entity. Called Optional Statistical Information Sharing, this practice aids in understanding the effectiveness of the ECS program and its threat indicators, and promotes coordinated protection, prevention, and responses to malicious cyber threats across federal and commercial domains.

Enhanced Cybersecurity Services countermeasures

The initial implementation of ECS includes two countermeasures for combating cyber threats: Domain Name Service (DNS) sinkholing and e-mail filtering.

DNS sinkholing technology is particularly effective against malware like Stuxnet that is equipped with distributed command and control network capabilities, which allow a threat to open a connection back to a command and control server so that its creators can remotely access it, issue commands, and update it. The DNS sinkholing capability enables CSPs to prevent communication with known or suspected malicious Internet domains by redirecting the network connection away from those domains. Instead, CSPs direct network traffic to “safe servers,” or “sinkhole servers,” both hindering the spread of the malware and preventing its communications with cyber attackers.
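The redirection mechanism can be sketched in a few lines: a resolver consults a blocklist of threat indicators and answers matching queries with the sinkhole address instead of the real one. The domains, the sinkhole IP (drawn from the 192.0.2.0/24 documentation range), and the stub upstream resolver below are all invented for illustration:

```python
# Minimal DNS-sinkholing sketch: queries for blocklisted domains are
# answered with a "safe" sinkhole address instead of being resolved.
SINKHOLE_IP = "192.0.2.53"  # hypothetical address of the CSP's sinkhole server
BLOCKLIST = {"c2.badexample.net", "update.malware-example.org"}  # invented indicators

def resolve(domain, upstream):
    """Return the sinkhole IP for blocklisted domains, else ask upstream."""
    if domain.lower().rstrip(".") in BLOCKLIST:
        return SINKHOLE_IP      # malware's C2 traffic lands on the sinkhole
    return upstream(domain)     # normal traffic resolves as usual

# Example with a stubbed upstream resolver:
addrs = {"example.com": "93.184.216.34"}
print(resolve("c2.badexample.net", addrs.get))  # -> 192.0.2.53
print(resolve("example.com", addrs.get))        # -> 93.184.216.34
```

In a real deployment the blocklist would be populated from the DHS-furnished threat indicators and the sinkhole would log the redirected connections for analysis.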

The e-mail filtering capability is effective in combating cyber threats like Duqu, for example, which spread to targets through contaminated Microsoft Word e-mail attachments (a technique known as phishing), then used a command and control network to exfiltrate data encrypted in image files back to its creators. The e-mail filtering capability enables CSPs to scan attachments, URLs, and other potential malware hidden in e-mail destined for an entity’s networks and quarantine suspicious messages before delivery to end users.
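A minimal sketch of that scanning step, with invented indicator values: check a message's embedded URLs and attachment names against known indicators and quarantine on any match.

```python
# E-mail filtering sketch: messages whose embedded URLs or attachments
# match known threat indicators are quarantined instead of delivered.
import re

URL_INDICATORS = {"badexample.net", "malware-example.org"}  # invented
ATTACHMENT_INDICATORS = {"invoice.doc.exe"}                 # invented

URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def filter_message(body, attachments):
    """Return 'quarantine' if any indicator matches, else 'deliver'."""
    for host in URL_RE.findall(body):
        if any(host.lower().endswith(d) for d in URL_INDICATORS):
            return "quarantine"
    if any(name.lower() in ATTACHMENT_INDICATORS for name in attachments):
        return "quarantine"
    return "deliver"

print(filter_message("See http://badexample.net/doc", []))       # quarantine
print(filter_message("Quarterly report attached.", ["q3.pdf"]))  # deliver
```

A production filter would of course also hash attachments, follow redirects, and detonate samples in a sandbox; the dispatch logic above only illustrates the indicator-matching core.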

Accreditation and costs for Enhanced Cybersecurity Services

The CS&C is the DHS executive agent for the ECS program, and executes the CSP security accreditation process and MOAs, as well as validation of CI entities. Any CI entity from one of the 16 critical infrastructure sectors can be evaluated for protection under the ECS program, including state, local, tribal, and territorial governments.

For CSPs to complete the security accreditation process, they must sign an MOA with the USG that defines ECS expectations and specific program activities. The MOA works to clarify the CSP’s ability to deliver ECS services commercially while adhering to the program’s security requirements, which include the ability to:

Accept, handle, and safeguard all unclassified and classified indicators from DHS in a Sensitive Compartmented Information Facility (SCIF)
Retain employee(s) capable of holding classified security clearances for the purposes of handling classified information (clearance sponsorship is provided by DHS)
Implement ECS services in accordance with security guidelines outlined in the network design provided on signing of the MOA

Privacy, confidentiality, and Enhanced Cybersecurity Services

“ECS does not involve government monitoring of private communications or the sharing of communications content with the government by the CSPs,” a DHS official told Industrial Embedded Systems. Although CSPs may voluntarily share limited aggregated and anonymized statistical information with the government under the ECS program, ECS-related information is not directly shared between customers of the CSPs and the government.

“CS&C may share information received under the ECS program with other USG entities with cybersecurity responsibilities, so long as the practice of sharing information is consistent with its existing policies and procedures. DHS does not control what actions are taken to secure private networks or diminish the voluntary nature of this effort. Nor does DHS monitor actions between the CSPs and the CI entities to which they provide services. CI entities remain in full control of their data and the decisions about how to best secure it.”

Refer to: http://industrial-embedded.com/articles/enhanced-protecting-critical-infrastructure/

Machine-to-Machine (M2M) Gateway: Trusted and Connected Intelligence

The factory of the future will still have Programmable Logic Controllers (PLCs) and Human-Machine Interface (HMI) panels, but someone half a world away will likely be monitoring and controlling them. That person may be sitting at a desk watching over a global network of facilities or checking the latest production statistics from a smartphone. Either way, the vision of the “Connected Factory” is evolving from concept to reality, as the explosive growth in Machine-to-Machine (M2M) connections, mobile devices in the enterprise, and wireless data traffic shows.

Implementing this approach, however, is not simply a matter of connecting devices to Ethernet and wireless networks. The fundamentals must be right to ensure that facilities produce information that can be accessed, monitored, and controlled from anywhere.

Over the past 50 years, automation technology has evolved to the point that a plant manager for a global industrial manufacturing company can easily monitor and control devices from hundreds of miles away, rather than standing a few feet away from them. This level of control can be achieved in ways that may include:

Sitting at a desk in a centralized office
Watching video footage captured by a global network of connected cameras
Remotely troubleshooting a piece of equipment from a tablet
Checking the latest production statistics using a smartphone app

The progression of the “Industry 4.0” revolution means that more factories and industrial plants will implement more networked devices that are able to collect data. This concept, which is also referred to as the “connected factory,” is transitioning from a ‘what-if’ notion to present-day reality at overwhelming speed.

The flood of enabling technology has paved the way for automation to gain global prominence across a wide variety of industrial and manufacturing industries. Organizations are increasingly realizing that with automation they can produce better quality products, sustainably and efficiently, while keeping a closer check on production costs. Gartner forecasts that by the year 2020, there will be up to 30 billion devices connected with unique IP addresses, most of which will be products. In the industrial world, these devices will be equipment such as natural gas or wastewater treatment pumps, high-capacity scales, and other production machines.

While many global manufacturers are eager to realize the benefits of the Connected Factory, such as reduced operational costs and better visibility and control of assets, it is unrealistic and cost prohibitive for them to construct greenfield facilities or orchestrate a ’rip-and-replace’ of all legacy equipment. Instead, plant managers are better off leveraging industrially fluent communications devices and adapting the legacy sensors, Remote Terminal Units (RTUs), and communications protocols that have served them well for years in order to create modern, real-time reporting and control systems.

The three key requisites of the Connected Factory

Managing productivity and profitability is a key role of plant managers and engineers in world-class manufacturing operations. The first step towards achieving this in the 21st century factory is to implement the fundamentals of a successful Connected Factory. These fundamentals must be in place to ensure that factories are generating information that can be accessed, monitored, and controlled from anywhere.

To begin this process, manufacturers must do three things:

Enable devices to speak the same language
Rethink operational efficiencies so more devices can talk with each other
Provide a secure, seamless platform in which these devices can communicate

Come together: Devices that speak the same language

The challenge with integrating legacy equipment into the Connected Factory model is that it often uses older protocols or even serial links that don’t easily fit into the TCP/IP world. An organization’s engineers must first ensure that this equipment can speak the same language as newer devices.

Plant engineers often source network switches used to build industrial networks from the IT world, a decision that may make sense for higher level infrastructure, but one that essentially introduces technology that is not purpose-built for machine-level control systems. For example, a modern machine may have every component networked and may allow every conceivable piece of status information to be displayed on its HMIs, but the network switch itself – the failure of which could take down the entire machine – sits alone or is loosely integrated via expensive and seemingly incomprehensible SNMP drivers.

To avoid this scenario, manufacturers must use a complex combination of drivers to provide protocol compatibility, replace existing hardware with more complex devices, or choose advanced HMIs, protocol converters, and industrial-grade switches that offer industrial fluency and multi-protocol support.

The first two options add complexity and development costs to the system. The third – deploying equipment with native support for all required standards and protocols – provides a simpler solution.
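The third option can be pictured as a gateway that presents one uniform read call and dispatches to whichever protocol each device natively speaks. The two toy “protocols” below (a one-based legacy register map storing strings versus a zero-based modern one) are invented stand-ins for illustration, not real Modbus or PROFINET handlers:

```python
# Protocol-fluency sketch: a gateway exposes one uniform read() call and
# translates it to whichever legacy protocol a device speaks.

def read_legacy_serial(device, register):
    # Toy legacy device: registers addressed from 1, values stored as strings
    return int(device["regs"][register - 1])

def read_modern_tcp(device, register):
    # Toy modern device: zero-based integer register map
    return device["regs"][register]

HANDLERS = {"serial": read_legacy_serial, "tcp": read_modern_tcp}

def gateway_read(device, register):
    """Uniform entry point: dispatch on the device's native protocol."""
    return HANDLERS[device["protocol"]](device, register)

old_pump = {"protocol": "serial", "regs": ["17", "42"]}
new_pump = {"protocol": "tcp", "regs": [17, 42]}
print(gateway_read(old_pump, 2), gateway_read(new_pump, 1))  # 42 42
```

The point of native multi-protocol support is that this dispatch happens inside the appliance, so applications upstream see one consistent interface regardless of the device's vintage.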

Raise your voice: Enabling more devices to communicate

Connecting equipment that can’t easily be reached in remote or geographically rugged locations enables real-time information access and greatly enhances remote troubleshooting capabilities. It can also result in safer working conditions for the humans who must monitor, regulate, and troubleshoot this equipment. Think about the value of automated devices in an oil and gas facility, for example. This clear value proposition for remote connectivity is driving the current boom in cellular M2M connections. Consider Metcalfe’s law as it applies to the Connected Factory: the value of the network grows with the square of the number of connected assets.
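Metcalfe’s law can be made concrete with a quick calculation: among n connected assets there are n(n-1)/2 potential pairwise links, so the network’s potential value grows roughly with the square of n. A throwaway sketch:

```python
# Metcalfe's law, roughly: the number of potential pairwise links among
# n connected assets is n*(n-1)/2, so value scales on the order of n**2.
def potential_links(n):
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, potential_links(n))
# 10 assets -> 45 links; 100 -> 4,950; 1,000 -> 499,500.
# Doubling the connected assets roughly quadruples the possible links.
```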

With this in mind, manufacturers must invest in issuing all remote assets a cellular connection. Cellular routers and modems now provide native support for industrial automation equipment and protocols, including models that support 4G network connectivity. These products enable two-way communications from facility to facility, and enable information exchange with remote assets, such as offshore platforms or unattended substations or pipelines.

Everyone’s invited: A better place for devices to connect

As manufacturers seek to assign an IP address to networked assets, one hurdle they often face is that the available bandwidth remains static in spite of the growing number of networkable devices and data points. When factoring in the hierarchical nature of the industrial world – with PLCs and HMIs grouped into machines, these machines grouped into cells, and these cells grouped into factories – assigning an IP address to every PLC and sensor can be a management nightmare.

But new approaches to network design and configuration can help plant managers take full advantage of the available connectivity and control. Instead of assigning individual IP addresses, for example, engineers can solve the problem by using a rugged appliance that manages communications with dozens of disparate devices (including sensors, PLCs, and HMIs) while serving as a single point of contact for the network.
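One way to picture such an appliance, with invented device names and readings: a single network endpoint holds a registry of local (non-IP) devices and routes each request to the matching poll routine, so the plant network sees one address instead of dozens.

```python
# Sketch of a single-point-of-contact appliance: many devices without
# their own IP addresses are reached through one gateway that routes
# requests by a local device name.
class Gateway:
    """One network endpoint fronting many local (non-IP) devices."""
    def __init__(self):
        self.devices = {}

    def attach(self, name, read_fn):
        self.devices[name] = read_fn   # e.g., a serial poll routine

    def read(self, name):
        return self.devices[name]()    # one IP address, many devices

gw = Gateway()
gw.attach("plc-7/temp", lambda: 71.5)
gw.attach("hmi-2/status", lambda: "OK")
print(gw.read("plc-7/temp"), gw.read("hmi-2/status"))  # 71.5 OK
```

Because addressing is handled by the gateway’s local namespace rather than the plant’s IP plan, adding or swapping a sensor changes nothing on the network side.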

What’s next for Industry 4.0?

The ability to seamlessly communicate with operators, control systems, and software applications combined with practical networking options and support for native features and protocols delivers exponential meaning to data extracted from industrial devices. In other words, the true value of Industry 4.0 and the Connected Factory isn’t derived from the sheer volume of connections; it comes from creating more meaningful connections and the competitive edge gained by the harmonious dialogue between devices and the humans managing them. These capabilities create the context to take automation and remote management to new levels, thereby making the Connected Factory a reality.

As part of the Industry 4.0 movement, the Connected Factory demands a new approach to the concept of factory automation. With the thoughtful integration of supporting components that are designed specifically for this goal, the ability to connect, monitor, and control will drive productivity well into the future.

 

Refer to: http://embedded-computing.com/articles/elements-success-the-connected-factory-needs-flourish-2014/

Meet Acrosser at Embedded World 2014!

Acrosser Technology, a world-leading industrial computer manufacturer, announces its participation in Embedded World 2014 from February 25-27, 2014. The event will take place in Nuremberg, Germany. We warmly invite all customers to come and meet us in Hall 5, booth number: 5-305!

At Embedded World 2014, Acrosser Technology will showcase its NEW embedded system product, the AES-HM76Z1FL, and its In-Vehicle Computer, the AIV-HM76V0FL. Both products will be displayed in LIVE DEMOS, showing their stability and high performance to the audience. What’s more, Acrosser will present the mini-ITX boards most favored by our loyal customers in a featured zone inside the booth. Make sure you do not miss our mini-ITX collection!

For gaming applications, Acrosser will exhibit the All-in-One Gaming Board, the AMB-A55EG1. The board features great computing and graphics performance, and high compatibility across multiple operating systems. If you are looking for a gaming system, do not miss our AGS-HM76G1. It is a cost-effective PC-based gaming solution that can be easily applied to your VLT, amusement, and slot machines.

In addition, Acrosser will also stress its focus on networking appliances. With a series of products being showcased, we are ready to be your solution provider! We look forward to making your embedded idea a reality, and we cordially invite you to visit our booth and discover our outstanding products.

Feel free to pay us a visit in Hall 5 at Booth 5-305!
Acrosser Technology Co., Ltd.

For more information, please visit the Acrosser Technology website:
www.acrosser.com

Contact:http://www.acrosser.com/inquiry.html

Apply for our AES-HM76Z1FL Product Testing Event NOW!

Acrosser Technology Co., Ltd., a world-leading industrial and embedded computer designer and manufacturer, is pleased to announce that our AES-HM76Z1FL Product Testing Event has officially begun! To experience the AES-HM76Z1FL’s superb computing performance, Acrosser welcomes all system integrators, from all industries, to join the event! The campaign will last for only three months, ending in March 2014, so don’t hesitate to submit your application! Please visit our event web page or look for the banner on our website!

So, are you ready to explore the excellence of Acrosser’s embedded products? To sign up for the AES-HM76Z1FL Product Testing Event, please click here, complete the on-line application form, and submit it! Acrosser will review your eligibility upon receiving your request. There are only a limited number of AES-HM76Z1FL units available for this event, so we encourage you to apply early!

Once your application has been approved, Acrosser will send a confirmation e-mail and an AES-HM76Z1FL Product Release Form. Please double-check that the Product Release Form has the correct mailing information so that we can get the product to you in a timely manner. You will then receive a free one-month lease of our product, starting immediately!

Please mark the date, and make sure to return the Feedback Sheet and the AES-HM76Z1FL unit to Acrosser on time. Meanwhile, we will send a small gift to your mailing address to close out the event. If you are interested in placing an order after product testing, please contact our sales team for a discount!

We are prepared to be amazed by your fascinating projects. With its small form factor and fanless design, the AES-HM76Z1FL can be installed anywhere across a wide range of industrial projects. Apply for the event, and experience great computing performance!

Product Information:
http://www.acrosser.com/Products/Embedded-Computer/Fanless-Embedded-Systems/AES-HM76Z1FL/Intel-Core-i3/i7-AES-HM76Z1FL.html

Contact us:
http://www.acrosser.com/inquiry.html