By Steven Fong, Corporate Vice President, APJ Embedded Business, AMD

Designing and scaling industrial computers is becoming tougher as the types and number of sensors needed to satisfy the growing demand for manufacturing data continue to increase. Furthermore, as industrial and medical organizations seek to take advantage of automation, systems — from devices to edge to cloud — are infused with AI, ML, data analytics software, and intelligent displays. This drives the need for higher levels of diversified compute. New approaches enabled by adaptive compute platforms designed for sensor-rich control applications can accelerate development, simplify hardware-software integration, and sustain performance while allowing tight control over power consumption.

Embedded PC computing trends

The ongoing digitization of edge applications involves several elements: sensorization, the infusion of AI and machine learning across edge and cloud computing, human-machine interfacing, multimedia experiences, networking, and the integration of Operational Technology (OT) and Information Technology (IT) domains. These workloads often require different compute elements to perform optimally.

Take the example of a medical imaging system. It typically includes probes that must be interfaced and processed using various algorithms, which require a large amount of compute given the complexity of the workloads. The data created by these operations is only useful to medical users such as radiologists and cardiologists once it has been cleaned, organized, and processed. Data analytics engines and AI inferencing can also generate insights to speed up the analysis of results. All this information must be rendered and visualized on display monitors to aid medical analysts and then cascaded to a medical database via the organization’s network.

This is just one example of how extensive sensorization is enabling changes that enhance efficiency and productivity for embedded applications. These wide-ranging sensors need to be interfaced and processed in a timely manner — usually within milliseconds — to achieve maximum responsiveness. Massive sensor deployments also feed Big Data algorithms that extract intelligence about processes and generate insights that drive improvement and next-generation product development.

Many edge deployments that pair sensors with adaptive compute platforms also include a PC, either incorporated into the system or tethered to it. Bringing x86 computing, AI, control, sensor interfacing and processing, visualization, and networking closer together delivers key advantages, including size reduction that eases deployment and installation. In addition, savings in power consumption simplify power supply design and can enable battery-powered applications, such as autonomous mobile robots (AMRs) used for moving components, materials, and finished products within factories, to operate longer on a single charge. While fitting a larger battery to offset excessive power consumption increases system cost and weight, a more integrated solution lowers the total cost of ownership.

However, integration requires intensive hardware and software engineering effort, which increases as more sensor channels are added in pursuit of greater productivity, safety, and efficiency.

Flexible integration

A common approach is to take advantage of the rich ecosystem around the x86 processor architecture, commonly used in industrial and medical computing, alongside an adaptive compute platform that can execute real-time machine control, sensor interfacing, and networking. This combination can be applied to use cases such as machine vision, industrial networking, robot controllers, medical imaging, smart city infrastructure, security, and retail analytics.


Conventionally, an industrial PC acts as a gatekeeper, handling the influx of sensor data and deciding whether processing will be done on the x86 core or, if available, on an FPGA-based accelerator card accessed through the PCIe interface. Latency is a major issue with this approach. The time needed to ingest, process, and transfer sensor data into the accelerator adds delays that can make real-time system response impossible.
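The latency cost of the conventional path can be seen in a simple budget comparison. The sketch below is purely illustrative: the stage names and microsecond figures are assumptions chosen to show the structure of the trade-off, not measurements of any specific system.

```python
# Hypothetical latency-budget sketch. All stage names and figures are
# illustrative assumptions, not measurements of any real system.

def path_latency_us(stages: dict[str, float]) -> float:
    """Sum the per-stage latencies (in microseconds) of a data path."""
    return sum(stages.values())

# Conventional path: the industrial PC ingests sensor data, then
# forwards it over PCIe to an FPGA accelerator card and back.
host_mediated = {
    "sensor_to_nic": 50.0,
    "host_ingest_and_copy": 200.0,
    "pcie_transfer_to_fpga": 100.0,
    "fpga_processing": 80.0,
    "pcie_transfer_back": 100.0,
}

# Integrated path: sensors connect directly to the adaptive compute
# device, so the host hop and the PCIe round trip disappear.
integrated = {
    "sensor_to_fpga_io": 20.0,
    "fpga_processing": 80.0,
}

print(f"host-mediated: {path_latency_us(host_mediated):.0f} us")
print(f"integrated:    {path_latency_us(integrated):.0f} us")
```

Whatever the actual per-stage numbers are on a given system, the host ingest and PCIe round-trip terms exist only in the conventional path, which is why eliminating them matters for real-time response.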

Integrating sensor interfaces, AI processors, and network processing onto the FPGA-based adaptive compute platform holds immense promise. Consolidating these functions onto a single motherboard enhances computational efficiency and reduces latency by eliminating the need for data to traverse disparate components. This integrated approach enables faster response, greater accuracy, and lower power consumption.

Supportive ecosystem

Adaptive compute platforms that can handle real-time sensor processing, control, networking, and AI inferencing help minimize latency, power consumption, and overall solution size, resulting in an efficient and powerful platform for embedded processing.

Extending this principle — embodied in devices such as AMD Versal adaptive heterogeneous processors — can streamline the development of embedded compute platforms to support the trend toward sensorization while handling diverse workloads. With the addition of x86 processor IP, specialized adaptive compute solutions, and large numbers of I/Os suited to sensor interfacing, the next level of integration, power efficiency, and system response is within reach. The large number of I/Os makes it possible to connect various types of sensors and route their signals directly for processing. This applies to many sensor and interface types, such as GMSL (Gigabit Multimedia Serial Link) cameras, 10/25G Ethernet links, LiDAR, and medical probes like endoscopes and ultrasound transducers. Moreover, additional sensor channels can be configured easily when needed, supporting scalability.

This approach combines scalable sensor interfacing and heterogeneous acceleration with the advantages of the extensive ecosystem supporting industrial processing on x86 platforms to simplify sensing, AI, control, and networking software. Engineers can build optimized embedded computers that meet their specific needs. They can tailor the number of sensor I/Os, direct each channel to the most suitable acceleration engine — whether a CPU, real-time core, DSP, AI engine, or programmable logic — and fine-tune the implementation for optimal power consumption and performance. The flexibility to connect signals on any input channel to the most suitable processing engine also helps engineers manage mixed sensor criticality, depending on importance and real-time requirements.
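The channel-to-engine mapping described above can be sketched as a simple dispatch policy. Everything here is a hypothetical illustration — the engine names, deadline threshold, and workload labels are assumptions for the sketch, not an AMD API — but it shows the idea of routing each channel by criticality first and workload type second.

```python
# Hypothetical sketch of routing mixed-criticality sensor channels to
# processing engines. Names, thresholds, and rules are illustrative
# assumptions, not a real vendor API.
from dataclasses import dataclass
from enum import Enum, auto

class Engine(Enum):
    REALTIME_CORE = auto()       # hard-deadline control loops
    AI_ENGINE = auto()           # inference workloads
    DSP = auto()                 # signal conditioning and filtering
    PROGRAMMABLE_LOGIC = auto()  # line-rate streaming I/O
    CPU = auto()                 # everything else

@dataclass
class Channel:
    name: str
    deadline_ms: float  # required response time for this channel
    workload: str       # "inference", "filtering", "streaming", ...

def assign_engine(ch: Channel) -> Engine:
    """Pick an engine: criticality (deadline) first, workload second."""
    if ch.deadline_ms < 1.0:
        return Engine.REALTIME_CORE
    if ch.workload == "inference":
        return Engine.AI_ENGINE
    if ch.workload == "filtering":
        return Engine.DSP
    if ch.workload == "streaming":
        return Engine.PROGRAMMABLE_LOGIC
    return Engine.CPU

channels = [
    Channel("safety_interlock", 0.5, "streaming"),
    Channel("vision_camera", 10.0, "inference"),
    Channel("ultrasound_probe", 5.0, "filtering"),
]
for ch in channels:
    print(f"{ch.name} -> {assign_engine(ch).name}")
```

Note that the safety channel lands on the real-time core even though its workload is "streaming": criticality outranks workload type, which is the essence of handling mixed sensor criticality on one platform.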

The extensive ecosystem supporting x86 embedded computing provides rich resources for developing applications such as machine vision, medical image scanning, robot control, and more.
