Long a plot device in science fiction, artificial intelligence (AI) is becoming a part of everyday reality. AI-powered chatbots, virtual personal assistants and smart toys are now widely available, and applications such as autonomous cars, healthcare diagnostics and robotic process automation are advancing rapidly.
AI applications are being enabled by more powerful hardware, sophisticated algorithms and big data analytics. However, some of the most significant hardware advances driving AI have come from Nvidia, a company best known for the video game market.
Nvidia revolutionized computer gaming through the development of its graphics processing unit (GPU). These specialized chips perform many mathematical calculations simultaneously to produce cleaner, faster and smoother motion in graphics. In 2007, Nvidia pioneered the concept of using GPUs in massively parallel processing environments to make compute-intensive applications run faster. This brought dramatic improvements over previous methods that relied on linking together multiple central processing units (CPUs).
Now Nvidia is using those same concepts to develop products with mind-boggling capabilities. For example, Bosch just announced that its new self-driving car will be based on next-generation NVIDIA DRIVE PX technology with Xavier, the first single-chip processor designed to achieve Level 4 autonomous driving. At Level 4, the vehicle handles all driving tasks on its own within defined conditions, with no human intervention required.
Why GPUs Beat CPUs for AI
The answer lies in the architectural differences between the two chips. A CPU has a few cores with large caches and can handle a handful of software threads at a time; a GPU has hundreds of smaller cores that can handle thousands of threads simultaneously. CPUs are optimized for fast sequential processing, while GPUs are optimized for applying the same operation to many data elements at once.
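That sequential-versus-parallel distinction can be sketched in a few lines. The following is a minimal illustration in plain Python (not real GPU code, and all names are invented for this example): the CPU style walks the data one element at a time, while the GPU style expresses the work as a "kernel" that runs once per index, so that on real hardware thousands of copies can execute simultaneously.

```python
def vector_add_sequential(a, b):
    """CPU style: a single thread visits the elements one at a time."""
    out = [0.0] * len(a)
    for i in range(len(a)):
        out[i] = a[i] + b[i]
    return out

def vector_add_kernel(i, a, b, out):
    """GPU style: this 'kernel' computes just one element; on a GPU,
    one thread per index i would run all of these at the same time."""
    out[i] = a[i] + b[i]

def launch_kernel(kernel, n, *args):
    """Stand-in for a GPU launch. Here the 'threads' run in a loop,
    but nothing in the kernel depends on any other index, so the
    hardware is free to run them all in parallel."""
    for i in range(n):
        kernel(i, *args)

a, b = [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]
out = [0.0] * len(a)
launch_kernel(vector_add_kernel, len(a), a, b, out)
# Both styles produce the same result; the kernel form simply exposes
# the independence between elements that a GPU exploits.
```

The key design point is that the kernel has no dependence between indices, which is exactly the property that lets a GPU schedule thousands of them at once.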
As a result, GPUs can run some workloads up to 100 times faster than a CPU alone, while consuming less power and delivering better cost efficiency. That makes them a natural fit for the deep learning algorithms powering a wide range of AI applications.
Deep learning is a form of AI designed to loosely mimic the way the human brain works with neurons and synapses. Nvidia’s GPUs are used to create so-called “artificial neural networks” that use a large number of highly interconnected nodes working in unison to analyze large datasets.
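The "node" in such a network is a simple computation: a weighted sum of inputs passed through a nonlinear activation. The sketch below, in plain Python with made-up weights chosen purely for illustration, shows one node and one small layer; real networks stack many such layers and evaluate every node in a layer in parallel on the GPU.

```python
import math

def sigmoid(x):
    """A common nonlinear activation, squashing any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def node(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    passed through the activation function."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(total)

def layer(inputs, weight_rows, biases):
    """A layer is many nodes sharing the same inputs. Each node is
    independent of its neighbors, so a GPU computes them all at once."""
    return [node(inputs, w, b) for w, b in zip(weight_rows, biases)]

# A tiny network: two inputs -> a two-node hidden layer -> one output node.
hidden = layer([0.5, -1.0], [[0.8, 0.2], [-0.4, 0.9]], [0.0, 0.1])
output = node(hidden, [1.0, -1.0], 0.0)
```

Training adjusts the weights and biases so the output matches known examples; because that involves repeating these weighted sums across huge datasets, the work maps naturally onto the GPU's parallel cores.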
AI in the Cloud
To fully exploit the capabilities of its GPUs, Nvidia recently introduced the DGX-1 server. This so-called “AI supercomputer in a box” delivers 170 teraflops of processing power in a single system and is purpose-built for deep learning and accelerated analytics. It comes fully integrated with hardware, deep learning software and development tools, and runs popular accelerated analytics applications.
Because deep learning involves analysis of large datasets, AI platforms can benefit from cloud-based resources. This is why Nvidia recently partnered with Microsoft to allow users to run GPU-accelerated workloads in Microsoft’s Azure cloud platform. Customers will be able to use Azure N-Series virtual machines powered by Nvidia Tesla K80 GPUs to run deep learning training jobs, high-performance computing simulations, data rendering, real-time analytics and other accelerated tasks.
AI has the power to transform a wide range of industries, from manufacturing to healthcare to retail to government. Nvidia is on the leading edge of this trend, delivering powerful solutions that turn fiction into fact.