Shifting AI to Small Form Factor Products

February 12th, 2019

By Lynnette Reese, Editor-in-Chief, Embedded Systems Engineering


Embedded systems that leverage the latest deep learning technology for edge computing now include rugged, small form factor, high-performance computing devices.

The industry may be seeing demand shift from board-level products toward Small Form Factor (SFF) products, for several reasons. SFF modules are driven not only by shrinking component sizes but also by the market: Unmanned Aerial Vehicles (UAVs), autonomous vehicles, small robots, and many other applications benefit from compact devices that can control them. As embedded systems continue to shrink dramatically, having advanced functionality located directly on the vehicle is becoming the expected norm.

Such applications are where Artificial Intelligence (AI) and system-on-modules (SoMs) are beginning to expand, thanks to AI’s ability to transform vast amounts of data into learned intelligence. Deep learning is a subclass of AI that uses Artificial Neural Networks (ANNs). Multi-layered ANNs can assist human activity by accurately performing repetitive tasks such as object recognition using machine vision, speech recognition and translation, content filtering on social networks, and assistance in medical diagnosis, among many others. Over the past decade, the speed and performance of processors have increased rapidly.
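
To make “multi-layered” concrete, the sketch below shows roughly what a single dense ANN layer computes on a GPGPU, out = ReLU(W·in + b), and how layers are chained so that one layer’s output feeds the next. It is a minimal illustrative CUDA C example; the network size, weights, and kernel name are assumptions made for this sketch, not code from any product mentioned here.

#include <cstdio>
#include <cuda_runtime.h>

// One fully connected layer with a ReLU activation: one thread per output neuron.
__global__ void dense_relu(int in_dim, int out_dim,
                           const float *W, const float *b,
                           const float *in, float *out) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= out_dim) return;
    float acc = b[row];
    for (int k = 0; k < in_dim; ++k)
        acc += W[row * in_dim + k] * in[k];           // weighted sum of inputs
    out[row] = acc > 0.0f ? acc : 0.0f;               // ReLU non-linearity
}

int main() {
    const int dim = 4;                                // tiny 4-neuron layers, for illustration
    float *W, *b, *x, *h;
    cudaMallocManaged(&W, dim * dim * sizeof(float));
    cudaMallocManaged(&b, dim * sizeof(float));
    cudaMallocManaged(&x, dim * sizeof(float));
    cudaMallocManaged(&h, dim * sizeof(float));
    for (int i = 0; i < dim * dim; ++i) W[i] = 0.1f;             // placeholder weights
    for (int i = 0; i < dim; ++i) { b[i] = 0.0f; x[i] = 1.0f; }  // zero bias, unit input

    // "Multi-layered" means chaining this step: layer 1 feeds layer 2, and so on.
    dense_relu<<<1, 32>>>(dim, dim, W, b, x, h);      // layer 1: x -> h
    dense_relu<<<1, 32>>>(dim, dim, W, b, h, x);      // layer 2: h -> x (buffer reused)
    cudaDeviceSynchronize();

    printf("layer-2 output[0] = %.3f\n", x[0]);
    cudaFree(W); cudaFree(b); cudaFree(x); cudaFree(h);
    return 0;
}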

Figure 1: Floating-point operations per second for CPUs vs. GPUs (Image: CUDA C Programming Guide, CUDA Toolkit Documentation).

SFFs handle enhanced processing
A chip with 1 TFLOPS performance can perform one trillion FLoating point OPerations per Second (FLOPS). Not long ago, getting 1 TFLOPS of performance required a large, high-power enclosure the size of a small car, filled with multiple multi-core Central Processing Units (CPUs). Extreme High-Performance Computing (HPC) is now measured in petaflops, or one quadrillion operations per second.
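
As a back-of-the-envelope illustration of what those figures mean, the short program below estimates the operation count of a modest matrix multiply and the ideal time it would take at 1 TFLOPS and at 1 PFLOPS. It is plain host-side C++ (no GPU required); the 1,024-element matrix size and the 2·N³ operation count are assumptions made for the example.

#include <cstdio>

int main() {
    const double N = 1024.0;                 // hypothetical square matrix dimension
    const double flop = 2.0 * N * N * N;     // ~2.1 billion floating point operations
    const double tflops = 1.0e12;            // 1 TFLOPS, the small form factor figure
    const double pflops = 1.0e15;            // 1 PFLOPS, the "extreme HPC" figure

    printf("1024 x 1024 matrix multiply: %.2e floating point operations\n", flop);
    printf("Ideal time at 1 TFLOPS: %.3f ms\n", 1e3 * flop / tflops);
    printf("Ideal time at 1 PFLOPS: %.6f ms\n", 1e3 * flop / pflops);
    return 0;
}
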
CPUs and General Purpose Graphics Processing Units (GPGPUs) offer similar benefits, but GPGPUs can perform data-parallel and matrix math computations, repeating the same operations across many data elements, much faster than CPUs. CPUs, on the other hand, are better suited to changing processes and algorithms that do not repeat the same operations on different data, which is precisely what GPGPUs do very well.
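
The sketch below illustrates that data-parallel pattern with the classic SAXPY operation: every GPU thread applies the same multiply-add to a different array element, where a CPU would typically step through the elements in a loop. It is a minimal, illustrative CUDA C example that assumes a CUDA-capable device and omits error handling.

#include <cstdio>
#include <cuda_runtime.h>

// Each thread performs the same operation, y[i] = a*x[i] + y[i], on its own element.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // unique element index per thread
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                           // one million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));        // unified memory, visible to CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // A CPU would walk these elements sequentially; the GPU spreads roughly a
    // million threads across its CUDA cores and executes them in parallel.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %.1f (expected 5.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}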

Figure 2: Aitech’s A176 Cyclone and A177 Twister fanless GPGPU supercomputers are small form factor systems meant for defense and industrial high-performance computing in an AI context. (Images: Aitech)

The benefits of HPC are moving down from powerful, cloud-based servers to the fully or partially independent supercomputers that are so advantageous in the embedded world. The term GPGPU signifies the expansion of the GPU from graphics rendering into AI applications and markets, including security, surveillance, industrial automation, aerospace equipment and systems, and the boating and marine industries.

Benefits of embedded GPGPU technology

Figure 3: Emil Kheyfets is Director of Military and Aerospace Product Line for Aitech.

Industrial and military/aerospace markets are continually working to optimize products for SFF designs and enhanced Size, Weight and Power (SWaP), and GPGPU systems for these industries are ideal for AI applications. Companies are combining the benefits of deep learning technology with the SFFs required in the embedded systems of the markets mentioned above. Aitech is one company that integrates AI capabilities into rugged computers intended to bring AI to applications that may or may not have access to the cloud. Its embedded systems that leverage the latest deep learning technology in a modular, small form factor include the A176 (Cyclone) and A177 (Twister) fanless GPGPU supercomputers for military and rugged industrial applications. Emil Kheyfets, Director of Military and Aerospace Products at Aitech, confirms the trend: “Providing high performance AI capabilities in a small form factor is benefitting multiple applications where small size and low power are critical requirements.”

He continues, “Take our NVIDIA Jetson TX2-based rugged A176 units for defense and aerospace and the recently added A177 for industrial applications, for example. Both are SFF GPGPU supercomputers, measuring around 52 cubic inches, that contain a six-core ARM processor and a GPGPU with 256 CUDA cores, while supporting more than 1 TFLOPS performance with a maximum power of less than 17 watts.”

Designed for harsh military operations, the A176 has an overall size of just 5.0” x 5.1” x 2.05” and weighs less than 2.2 lbs. The slightly bigger A177, with the same weight and a 5.9” x 5.8” x 2.5” size, is intended for less rugged, industrial applications.

In both, the Jetson TX2 SoM is integrated on a carrier card that includes a rugged power supply, provides various auxiliary functions, and adds I/O expansion capability. Units are delivered with a Linux OS pre-installed, and a Quick Start Guide and mating cable kits with industry-standard connectors are available. Users can concentrate on software development in the lab instead of working through system integration issues, and be up and running in a short period of time.
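
As a first sanity check on such a unit, a developer might compile and run a small query against the standard CUDA runtime API to confirm what the GPU reports. The program below is a generic sketch, not vendor-supplied code, and the values it prints will vary by module.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        printf("No CUDA device found.\n");
        return 1;
    }
    printf("GPU:                %s\n", prop.name);
    printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    printf("Multiprocessors:    %d\n", prop.multiProcessorCount);
    printf("Global memory:      %.1f GB\n", prop.totalGlobalMem / 1.0e9);
    // CUDA core count is not reported directly; it is multiProcessorCount times
    // the cores per SM of the architecture (128 for the TX2's Pascal SMs).
    return 0;
}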

As processing demands increase, SFF systems using GPGPU technology and AI-based solutions provide a path to next-generation embedded systems that are poised to tackle the growing field of mobile, unmanned, and autonomous vehicle technologies, bringing computing power to areas never before conceivable.


Lynnette Reese is Editor-in-Chief, Embedded Intel Solutions and Embedded Systems Engineering, and has been working in various roles as an electrical engineer for over two decades. She is interested in open source software and hardware, the maker movement, and in increasing the number of women working in STEM so she has a greater chance of talking about something other than football at the water cooler.