
When the industry thinks about breakthroughs in artificial intelligence, it often focuses on the software side, imagining new, complex algorithms trained on ever-larger data sets. And while these developments can be very exciting, like the recent announcement by Microsoft and Amazon Web Services of their new open-source deep learning development kit, any software progress is moot without a computer chip with the physical properties to run these algorithms with speed and agility.

For AI to proliferate over the last few decades, it needed a computer chip that could outperform the CPUs of the late ‘90s. That chip came not from chip giant Intel but from then-small startup NVIDIA, which invented the GPU. Originally created to provide advanced graphics processing for video games, the GPU also proved perfect for new narrow AI applications, since it can easily be programmed to perform almost any sort of general-purpose processing.
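To make that last point concrete, here is a minimal sketch of general-purpose GPU programming in CUDA. (The example is ours, not NVIDIA’s; the kernel, array sizes and launch configuration are illustrative assumptions.) The same silicon that shades game pixels will happily run any data-parallel computation you hand it, here a simple multiply-and-add over a million numbers:

// saxpy.cu -- illustrative sketch only; compile with: nvcc saxpy.cu -o saxpy
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;  // one million elements
    float *x, *y;
    // Unified memory keeps the sketch short; production code often
    // manages host/device copies explicitly.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}

Swap in a matrix multiply or a convolution kernel and you have the computational core of deep learning training and inference, which is exactly why a graphics chip translated so naturally to AI.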

Since the late ‘90s, these chips have proliferated and grown more capable. In mid-October, NVIDIA released a new chip aimed at the self-driving car market, which it describes as an “in-vehicle data center.” And earlier this year, IBM put NVIDIA’s GPU accelerator in the cloud to speed up workloads involving complex data.

But for the next phase of AI, where advanced autonomous systems will have to handle more data, more quickly, than we can currently imagine, will GPUs still be enough?

Stepping up to the plate is another option that could give companies more flexibility as they apply more sophisticated algorithms to very specific use cases. A self-driving car, for instance, doesn’t necessarily need a general-purpose GPU. It needs a chip that can process one type of data extremely quickly, like the point clouds coming off a lidar or the video from stereo cameras.

Application-specific integrated circuits, or ASICs, offer that, and they are becoming the chip of choice for applications like voice recognition and cryptocurrency mining.

These chips are so efficient at single-application data processing that Google built one just to sidestep the enormous data center footprint Android voice searches would otherwise create. And while that chip is proprietary, NVIDIA is developing its own ASIC for specialized deep learning applications, similar to Google’s but broadly available. It’s a signal that these devices may soon overtake GPUs, a shift already underway in the cryptocurrency market, where dedicated mining ASICs are displacing the general-purpose GPUs that NVIDIA and AMD have been riding high on.

What does this move to specialized chips mean for technology companies? In essence, it lowers the barrier to entry. ASICs are easier to program, cheaper to start using, and extremely fast at the one operation they are built for. So for startups looking to do narrow AI extremely well at a low cost to market, ASICs may democratize the availability of deep learning in the very near future, and usher in an explosion of related applications and startups.

Want information on how to leverage trends in AI and deep learning for media coverage? Contact Merritt Group today at info@merrittgrp.com to get the conversation started!
