Powering AI: The explosion of new AI hardware accelerators


AI's rapid evolution is producing an explosion in new types of hardware accelerators for machine learning and deep learning.

Some people refer to this as a “Cambrian explosion,” which is an apt metaphor for the current period of fervent innovation. It refers to the period about 500 million years ago when essentially every biological “body plan” among multicellular animals appeared for the first time. From that point onward, these creatures—ourselves included—fanned out to occupy, exploit, and thoroughly transform every ecological niche on the planet.

The range of innovative AI hardware-accelerator architectures continues to expand. Although you may think that graphics processing units (GPUs) are the dominant AI hardware architecture, that is far from the truth. Over the past several years, both startups and established chip vendors have introduced an impressive new generation of hardware architectures optimized for machine learning, deep learning, natural language processing, and other AI workloads.

Chief among these new AI-optimized chipset architectures—in addition to new generations of GPUs—are neural network processing units (NNPUs), field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), and various related approaches that go by the collective name of neurosynaptic architectures. As noted in an Ars Technica article, today’s AI market has no hardware monoculture equivalent to Intel’s x86 CPU, which once dominated the desktop computing space. That’s because these new AI-accelerator chip architectures are being adapted for highly specific roles in the burgeoning cloud-to-edge ecosystem, such as computer vision.
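To make that heterogeneity concrete, here is a minimal sketch (not from the article) of how software copes with it today: a deep learning framework such as PyTorch exposes one tensor API across several accelerator backends, so the same model code can run on whichever chip is present. The specific backends checked below (NVIDIA CUDA GPUs and Apple-silicon MPS, with a CPU fallback) are illustrative assumptions, not an exhaustive map of the market.

    import torch

    # Pick the best available accelerator backend at runtime.
    # Illustrative sketch: the same tensor code runs unchanged on
    # whichever device this selection yields.
    if torch.cuda.is_available():
        device = torch.device("cuda")   # NVIDIA GPU
    elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
        device = torch.device("mps")    # Apple-silicon GPU
    else:
        device = torch.device("cpu")    # generic fallback

    # A tiny matrix multiply, dispatched to the selected accelerator.
    x = torch.randn(4, 8, device=device)
    w = torch.randn(8, 2, device=device)
    print((x @ w).shape, "computed on", device)

This device-abstraction pattern is one reason no single chip architecture has to win outright: workloads can move among GPUs, NNPUs, FPGAs, and ASICs as long as the framework supplies a backend for each.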

The evolution of AI-accelerator chips

