The Computing Foundation behind the IoT and Visual Computing
By Sasa Marinkovic, Technology Evangelist, AMD
Today’s profusion of display-equipped devices will be the most visible component of the Internet of Things (IoT), a vast array of Internet-connected devices, appliances, sensors, and objects that are managed, inventoried, and monitored through wireless technologies. However, tremendous advancements in computer processing power, graphics, video, and display technologies are triggering a new era of “visual computing” that may be even more revolutionary than the IoT, introducing a new realm of visual possibilities. Already, almost every waking moment — at home, at work, or on the go — we are in the presence of some kind of digital display or computer screen, and this will only increase. Taken together, the two trends of visual computing and the IoT will radically transform how people use computing devices for work, entertainment, and daily life.
“Today’s emerging APUs will power computing platforms in every domain of computing technology”
Today, the Internet is overwhelmingly composed of computers and devices completely dependent on data captured and created by people. However, the billions of devices in the interconnected IoT will be almost the exact opposite, primarily using information originating from other devices. With almost 30 billion connected devices by the end of the decade, the IoT will transform today’s Internet into an ultra-efficient and auto-organizing entity that may eliminate human beings as the primary creators and “routers” of routine or mundane information.
This hyper-automation of the IoT will liberate human beings — using advanced screen technologies and new user interface concepts — to focus on what they do best: creative thinking, intellectual exploration, and new ideas. Immense technological leaps since the beginning of personal computing have improved image quality from crude, blocky, and pixelated to today’s incredibly detailed, vivid, and lifelike high-resolution “retina” displays, 4K TVs, and wearable computer displays that transport users to an immersive new world of virtual reality (VR), “augmented reality,” and new concepts of human-computer interaction.
Today’s head-worn VR devices are primarily used for computer gaming to deliver an immersive high-resolution 3D view
Someday soon, virtual reality will migrate from gaming and entertainment to new forms of “surround computing” used for everyday interactions. Swap the goggle-like headset, mouse, and keyboard for natural input methods like gesture, touch, and voice. Add the sensory impact of high-fidelity surround audio. Surround yourself with a connected array of large high-resolution displays inside a room to alter your situational awareness, and your virtual reality session will become a lifelike experience that begins to approach the “holodeck” of science fiction.
Making this possible will require tremendous advances in computing and graphics performance beyond what is available today. This is true for both personal devices and servers, and it will place tremendous new demands on the data center. Technologies need to be developed that dramatically improve processing capability while minimizing energy consumption.
Fortunately, a key foundational element of the enabling technology to make this possible is emerging today. The newest category of computing processors — called Accelerated Processing Units (APUs) — delivers the kind of efficient computing, graphics, and video performance needed to power tomorrow’s advanced applications, ranging from virtual reality technologies capable of blurring the boundaries between ourselves and our devices, to deep data extraction leading to near real-time insights. In addition to traditional processing techniques, APUs offer the parallel processing and the back-end server capabilities needed to quickly process and analyze visual content and serve required data to demanding clients.
An APU achieves this in part by combining two previously separate technologies: the productivity processing of the traditional CPU (central processing unit) and the multimedia, gaming, and graphics acceleration of the GPU (graphics processing unit). This heterogeneous design provides the technical underpinnings to drive both the Internet of Things and the new era of virtual reality and visual computing.
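The division of labor an APU embodies — serial orchestration on CPU-style cores, data-parallel number crunching on GPU-style cores — can be sketched in plain Python. This is only an analogy (Python’s standard `concurrent.futures` pool stands in for a real GPU offload API such as OpenCL, and the pixel “kernel” below is a made-up example, not code from any vendor SDK), but the structure mirrors heterogeneous programs: a serial path splits up the work and stitches results back together, while a parallel “kernel” applies the same operation to many data elements at once.

```python
from concurrent.futures import ThreadPoolExecutor

def brighten_kernel(pixels, gain):
    """Data-parallel 'kernel': the same simple operation applied to every
    element. On a GPU, each pixel could be handled by its own hardware thread."""
    return [min(255, int(p * gain)) for p in pixels]

def brighten_image(pixels, gain, workers=4):
    """Serial 'CPU' side: partition the data, dispatch the parallel kernel
    across workers, then reassemble the results in order."""
    chunk = max(1, len(pixels) // workers)
    chunks = [pixels[i:i + chunk] for i in range(0, len(pixels), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda c: brighten_kernel(c, gain), chunks)
    out = []
    for part in results:
        out.extend(part)
    return out

if __name__ == "__main__":
    image = [10, 100, 200, 250] * 2   # toy grayscale "image"
    print(brighten_image(image, 1.5)) # values above 255 are clamped
```

The point of the sketch is the shape of the program, not the speed: the serial side does the irregular bookkeeping it is good at, and the uniform per-element work is expressed once as a kernel that a parallel engine can spread across many lanes.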
Fortunately, a broad swath of the computing industry, including AMD, ARM, Oracle, Qualcomm, Samsung, Texas Instruments, and many others, has adopted this heterogeneous systems architecture (HSA). They have formed the Heterogeneous Systems Architecture Foundation, which will lead to devices delivering enhanced power efficiency, improved processing performance, easier programmability, and broad software portability. By integrating different types of microprocessors and compute elements, HSA eliminates inefficiencies in sharing data and routing computing tasks, enabling different compute elements to work together seamlessly.
For example, HSA is a key element in AMD’s latest A-Series APU microprocessor designs. This is the first instance of the new design in the market, but numerous other semiconductor suppliers are now pursuing this design strategy. Delivering greater overall application performance, these APUs feature lower power consumption, “write once, run everywhere” software programming, and support for a wide array of different types of HSA-enabled computing devices — especially future IoT devices.
Blending the serial-processing operations of traditional CPUs with the parallel-processing capabilities of GPUs, today’s emerging APUs will power computing platforms in every domain of computing technology, from high-performance computers and servers to battery-powered tablets, mobile phones, and embedded devices. The technology will become a mainstay in datacenters, boosting processing efficiency and enabling new processing capabilities. APUs and HSA are the indispensable foundation for our visual computing future and the key enabling technology for tomorrow’s Internet of Things.