Controlling Data At Remote Sites
For the most part, remote sites with critical equipment are located in places that are difficult to access due to long distances or harsh conditions. Accessing critical information at these sites, such as equipment health and operational data, can be time-consuming and costly. Given today’s aging industrial infrastructure, monitoring and controlling the data within these sites is more critical than ever. In fact, we are beginning to witness the consequences of failing to update and maintain outdated networks, as demonstrated by recent explosions at gas pipelines and blackouts in major cities when parts of the electrical grid have gone down.
Keeping a closer eye on these infrastructures is necessary not only to prevent loss of revenue but, more importantly, loss of life. Communicating with remote sites to proactively prevent equipment degradation, however, is far from easy and may even require a four-hour helicopter ride. To proactively monitor and control remotely located assets, users must be able to access local sensor data. The most cost-effective and intelligent way to do this is through cellular automation.
Using Cellular Automation
Cellular automation is the concept of providing remote terminal units (RTUs) with cellular connectivity to access data in hard-to-reach locations. Cellular connectivity provides fast and easy access for monitoring and controlling business-critical applications at remote sites. This flexibility, however, comes with added responsibility in the form of enhanced security requirements. In many cases this is new ground, as data security is something that many customers did not focus on in the past since they were using direct circuit connections via modem banks.
These types of connections did not require the same stringent security standards that a cellular connection over an IP network does. Therefore, as customers migrate toward IP networking and data security is mandated, sourcing and implementing new technologies to support the increasing security demands becomes necessary.
In addition to addressing more stringent security requirements, industrial users face the complexity of having multiple devices to manage and implement for an effective remote monitoring and control solution over IP. The challenge facing many customers is that, on top of their existing RTUs, they must also figure out which of many products they will require. It may be necessary to have a device for cellular connectivity, a Modbus gateway and a security (VPN) device, which is costly to deploy and complicated to administer and maintain.
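To make the Modbus gateway's role concrete, the sketch below builds a Modbus RTU "Read Holding Registers" request frame in plain Python, the kind of serial-side frame such a gateway translates to and from Modbus TCP for transport over the cellular IP link. The framing and CRC follow the published Modbus serial-line convention; the unit ID and register addresses are illustrative.

```python
import struct

def crc16_modbus(frame: bytes) -> int:
    """Compute the Modbus RTU CRC-16 (polynomial 0xA001, initial value 0xFFFF)."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

def read_holding_registers_request(unit_id: int, start_addr: int, count: int) -> bytes:
    """Build a Modbus RTU 'Read Holding Registers' (function 0x03) request frame."""
    pdu = struct.pack(">BBHH", unit_id, 0x03, start_addr, count)
    crc = crc16_modbus(pdu)
    return pdu + struct.pack("<H", crc)  # CRC is transmitted low byte first

# Example: ask unit 1 for 2 registers starting at address 0
frame = read_holding_registers_request(1, 0, 2)
print(frame.hex())  # -> 010300000002c40b
```

In a deployed system this frame would be written to the RTU's serial port; the security (VPN) device discussed above would then protect the IP-encapsulated traffic end to end.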
Is Three Company?
“JAI has solid offerings on both sides of color, meaning single-chip Bayer filter color cameras and color cameras with three CCD sensors or more, including a four-line multispectral color camera that offers separate sensors for red, green, blue, and near infrared,” explains Steve Kinney, Director of Technical Pre-Sales and Support at JAI Inc., USA (San Jose, California).
When customers come to JAI to discuss a color application, Kinney starts by asking what sort of spatial accuracy the system needs versus color accuracy. “It also depends on data rate,” he adds. “If you need absolute color accuracy of less than 1%, then we usually look at a three-CCD prism camera solution. If spatial accuracy over a wide inspection area is more important, then a very-high-resolution single-chip Bayer camera may be better. If you need high speed, CMOS offers higher frame rates and multi-line sensors with NIR capability and is very effective for high-speed printing applications where colorimetry measurements are very important because NIR can help you judge between true black ink and black made by combining cyan-magenta-yellow inks. And for some printing applications, knowing the difference is important for quality purposes.”
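Kinney's point about NIR can be illustrated with a toy discriminator: carbon-based black ink absorbs near-infrared light, while a composite black printed from cyan-magenta-yellow inks is largely transparent in NIR, so two inks that look identical in visible light separate cleanly on the NIR channel. The thresholds below are hypothetical, not calibrated values from any camera.

```python
def classify_black(visible_reflectance: float, nir_reflectance: float,
                   dark_threshold: float = 0.15, nir_threshold: float = 0.40) -> str:
    """Toy classifier for print inspection: both inks appear black in visible
    light, but only carbon-based black ink also absorbs NIR.
    Thresholds are illustrative placeholders."""
    if visible_reflectance > dark_threshold:
        return "not black"
    # Dark in visible light; the NIR channel separates the two blacks.
    return "true black ink" if nir_reflectance < nir_threshold else "CMY composite black"

print(classify_black(0.05, 0.10))  # absorbs NIR too -> "true black ink"
print(classify_black(0.05, 0.80))  # bright in NIR  -> "CMY composite black"
```

A real multi-line or prism camera would supply per-pixel visible and NIR readings; the decision logic per pixel is essentially this comparison.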
Improvements in energy consumption
The Situation: A global industrial gas distribution company sought to manage production loads by taking advantage of variations in power prices between peak and non-peak times. It also wanted the capacity to respond quickly and according to customer product demands to reduce venting and top-up usage, as well as the ability to operate consistently at maximum and minimum load constraints. This company implemented two powerful Honeywell products powered by Matrikon, Operational Insight and Control Performance Monitor – the information infrastructure of which was tied together with OPC networking.
Operational Benefits: The technology provided a web-based solution for process data acquisition, control system performance analysis, and process monitoring and offered automated step testing and modeling functionality. The company realized several benefits including:
Improved throughput and control quality
Reduced energy consumption
Improved plant stability
Increased operational consistency
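The core of the load-shifting idea can be sketched in a few lines: given a day of hourly power prices, run the required production hours in the cheapest slots. This is a simplification of what a real optimizer does (it ignores ramp rates and the min/max load constraints mentioned above), and the prices and demand figures are invented for illustration.

```python
def schedule_production(hourly_prices, hours_needed):
    """Toy load-shifting: run the required number of production hours in the
    cheapest slots of the day. Prices and demand are illustrative only."""
    ranked = sorted(range(len(hourly_prices)), key=lambda h: hourly_prices[h])
    chosen = sorted(ranked[:hours_needed])  # cheapest hours, in time order
    cost = sum(hourly_prices[h] for h in chosen)
    return chosen, cost

# Hypothetical day: cheap off-peak power overnight, peak pricing in the afternoon
prices = [30, 28, 27, 29, 35, 50, 70, 90, 95, 80, 60, 45,
          40, 55, 85, 100, 110, 95, 75, 60, 50, 40, 35, 32]
hours, cost = schedule_production(prices, 8)
print(hours, cost)  # -> [0, 1, 2, 3, 4, 12, 22, 23] 256
```

The scheduler naturally concentrates production overnight, which is exactly the peak/non-peak arbitrage the company was after.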
But What Really Matters: In the upstream oil and gas industry, changing market conditions require more flexibility and efficiency in the production of natural gas and oil. Increased operational costs, combined with instability in the price of crude oil in the international market, make it essential to lower operating expenses while improving production levels. Reducing energy consumption can play a huge role in achieving that goal.
The Situation: A leading global producer of crude oil and natural gas looked for a way to stay ahead of dynamic market demands and overcome challenges associated with offshore oil and gas automation. As part of an innovative technology project, and with the help of Honeywell, this company built a solution to help coordinate control of multiple offshore platforms in the North Sea and improve operations and efficiency.
A resurgence of the Do It Yourself (DIY) community has driven a range of open networking platforms, giving aspiring technologists cheap and easy access to embedded development. Beyond hobbyist toys and educational devices, however, “hacker” boards are gaining in performance and I/O flexibility, and have become viable options for professional product development.
MinnowBoard is an Intel Atom-based platform equipped with interfaces like SATA, Gigabit Ethernet, and PCI Express, and is suited for applications such as Network Attached Storage (NAS) and network security, Garman says (Figure 3). “Professional embedded developers working on commercial products will like the fact that the MinnowBoard is open hardware, and can be customized without having to sign any Non-Disclosure Agreements (NDAs),” he adds.
With that said, the controls world is contending with an automation landscape that has a definite consumer bias, with product development and release cycles of six months or less. In an industry where the average life expectancy of an automotive production line is eight years, it is impossible to expect the networking in an industrial setting to keep up with modern IT standards. Therefore, we turn our attention to the technologies that have endured in the industrial space, with the most open standards and the very best support. These are the protocols we wish to use and keep, and this article highlights and explains some of these technologies. This article does not focus on the technical implementation of each piece of technology; rather, it is assumed the reader will be using packaged solutions such as a function block for a PLC.
refer to: http://www.automation.com/leveraging-it-technology-for-industrial-controls-applications
In the early days of embedded Linux development (circa Y2K), a significant part of the embedded effort was porting the open source code to run on the hardware platform being targeted. Unless engineers were running code on an Intel x86 board, it was no trivial effort to bring up the embedded computer and cross-compile the open source middleware to run on the hardware. In the years since, an increasing number of hardware companies have discovered that providing free Linux BSPs is necessary to ensure the wide adoption of their hardware into embedded applications. Whereas in the early days it might have taken weeks or months to get to a Linux shell prompt over a console port, these days it should only take a few hours.
refer to: http://embedded-computing.com/articles/the-not-code-quality/
The MOST Cooperation – the organization through which the leading automotive multimedia network Media Oriented Systems Transport (MOST) is standardized – proudly announces that the newest Specification Rev. 3.0 is on its way to production, with the first car makers committing to the MOST150 network in selected vehicles from 2011. The new Intelligent Network Interface Controller (INIC) in-vehicle computer architecture complies with Specification Rev. 3.0 and expands the audio/video capability for next-generation automotive infotainment devices such as head units, rear seat entertainment, amplifiers, TV tuners, and video displays. Various in-vehicle computer makers have already started first series projects implementing this latest MOST technology. MOST150 enables a higher bandwidth of 150 Mbps, an isochronous transport mechanism to support extensive video applications, and an embedded Ethernet channel for efficient transport of IP-based packet data. It delivers significant speed enhancements and breakthroughs while keeping costs down.
refer to: http://embedded-computing.com/news/most150-series-adoption/
The 4th generation Intel® Core™ processors
The 4th generation Intel® Core™ processors serve the embedded computing space with a new microarchitecture, which Kontron will implement on a broad range of embedded computing platforms. Based on the 22 nm Intel® 3D transistor technology already used in the predecessor generation, the processors, formerly codenamed ‘Haswell’, deliver a performance increase that will doubtlessly benefit applications. Besides roughly 15% higher CPU performance, graphics performance in particular has doubled in comparison to solutions based on the previous generation of processors. At the same time, the thermal footprint has remained practically the same or has even shrunk.
With improved processing and graphics performance, better energy efficiency, and broad scalability, the 4th generation Intel® Core™ processors with their new microarchitecture provide an attractive solution for a broad array of mid-range to high-end embedded applications in target markets such as medical, embedded computing, industrial automation, infotainment, and military. This whitepaper gives engineers a closer look into the architectural improvements of the new microarchitecture and explains how they can be integrated most efficiently into their designs.
refer to: http://embedded-computing.com/white-papers/white-intelr-coretm-processors/
This is just one example of why telehealth strategies are poised to revolutionize medicine. Telehealth not only provides quick access to specialists, but can also remotely monitor patients and reduce clinical expenses. Many of the systems needed to realize these benefits will operate on the edge, and require technology with the portability and price point of commercial mobile platforms, as well as the flexibility to perform multiple functions securely and in real time. All of this must be provided in a package that can meet the rigors of certification and scale over long lifecycle deployments.
The ability to transition between x86 and ARM processors is critical for low-volume medical applications because a single carrier board solution – often the most costly component of a COM architecture – can suit the needs of both graphics-intensive systems and platforms that require more mobility and lower power. In addition to reducing Time-To-Market (TTM), this decreases Bill Of Materials (BOM) costs and eases Board Support Package (BSP) implementation, says Christoph Budelmann, General Manager, Budelmann Elektronik GmbH in Münster, Germany (www.budelmann-elektronik.com).
refer to: http://smallformfactors.com/articles/qseven-coms-healthcare-mobile/
Embedded OEMs are looking to the latest memory technologies to solve their specific design needs and market demands. But which memory modules provide the optimal solution for excessive shock and vibration or increased thermal dissipation? And what new testing and validation techniques are being used to reduce overall design risks and increase reliability? Designers must evaluate these factors and other key embedded considerations when specifying memory devices for embedded systems in rugged environments.
refer to: http://embedded-computing.com/articles/ruggedization-memory-module-design/