Intel hopes to conquer the AI market with its Nervana processor
Intel has long prided itself on its chip technology, but in recent years it has struggled to make a mark in the booming field of artificial intelligence (AI). To reverse that disadvantage, Intel acquired the deep learning chip maker Nervana Systems and plans to launch its first purpose-built AI chip, the Nervana Neural Network Processor (NNP), before the end of 2017.
According to an Engadget report, computer vision, speech recognition, and other deep learning applications typically require matrix calculations over very large arrays, a workload that Intel's general-purpose Core and Xeon chips are not well suited to (see the illustration below). Intel expects the upcoming NNP to close that gap in AI, and has invited Facebook, which deploys deep learning and AI at scale, to take part in the chip's design.
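As a rough illustration of what "matrix calculation in large arrays" means in practice, a single fully connected neural-network layer reduces to one large matrix multiplication. The sketch below is illustrative only; the layer sizes are arbitrary examples, not figures from the report.

```python
import numpy as np

# Illustrative example only: a single fully connected layer over a batch of
# inputs is one large matrix multiplication, the core operation that deep
# learning workloads repeat millions of times. The shapes are arbitrary.
batch, in_features, out_features = 64, 4096, 4096

x = np.random.randn(batch, in_features).astype(np.float32)         # activations
w = np.random.randn(in_features, out_features).astype(np.float32)  # layer weights

y = x @ w        # roughly 64 * 4096 * 4096 multiply-accumulate operations
print(y.shape)   # (64, 4096)
```

General-purpose CPUs handle this, but chips built specifically around dense matrix arithmetic can do it with far higher throughput per watt, which is the gap the NNP is meant to fill.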
Beyond social media applications like Facebook's, Intel plans to extend its AI chips to health care, automotive, meteorology, and other fields.
The Nervana NNP is an application-specific integrated circuit (ASIC) that can train and execute deep learning algorithms with very high computational efficiency. Intel has dropped the cache hierarchy found on conventional CPUs and instead manages on-chip memory through software tailored to specific algorithms, hoping to push the chip's compute density and performance to a new level.
The Nervana NNP also supports large volumes of bidirectional data transfer over high-speed interconnects both on and off the chip. Multiple NNP chips can be linked together to form one huge virtual chip, keeping pace with the ever-growing scale of deep learning models.
It is worth noting that the Nervana NNP adopts a low-precision numerical format called Flexpoint. Naveen Rao, Intel's vice president of AI, says neural networks have a high tolerance for data noise, and that such noise can even help a network converge on new solutions. Flexpoint's lower precision improves parallel processing capability, reduces latency, and increases effective bandwidth.
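The article does not spell out Flexpoint's internals, but low-precision formats of this kind generally follow a block floating-point idea: small integer mantissas that share a single exponent across a whole tensor. The sketch below illustrates that idea under assumed parameters (16-bit mantissas, one shared exponent per tensor); it is a simplified illustration, not Intel's actual specification.

```python
import numpy as np

def to_blockfp(x, mantissa_bits=16):
    """Quantize a tensor to a block floating-point format: one shared
    exponent per tensor, integer mantissas per element. A sketch in the
    spirit of Flexpoint, not Intel's actual implementation."""
    max_abs = float(np.max(np.abs(x)))
    if max_abs == 0.0:
        return np.zeros_like(x, dtype=np.int32), 0
    limit = 2 ** (mantissa_bits - 1) - 1
    # Pick the shared exponent so the largest value fits in the mantissa range.
    exponent = int(np.ceil(np.log2(max_abs / limit)))
    mantissas = np.round(x / 2.0 ** exponent).astype(np.int32)
    return mantissas, exponent

def from_blockfp(mantissas, exponent):
    """Reconstruct approximate float values from mantissas and shared exponent."""
    return mantissas.astype(np.float32) * 2.0 ** exponent

x = np.random.randn(4, 4).astype(np.float32)
m, e = to_blockfp(x)
x_hat = from_blockfp(m, e)
print(np.max(np.abs(x - x_hat)))  # small quantization error ("noise")
```

The quantization error introduced here is the kind of "noise" Rao refers to; representing each element as a small integer with a shared exponent is what allows cheaper arithmetic and lower memory bandwidth per value.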
Before Intel committed to AI development, NVIDIA had already captured the market by exploiting the parallel computing power of its GPUs, although GPUs excel at training algorithms rather than executing them. Meanwhile, Intel's biggest rival, Qualcomm, has invested in chips specialized for running AI programs.
Intel's NNP chip targets both the training and execution of AI, and new versions are planned. Intel is also developing a neuromorphic chip called Loihi and the Myriad X machine vision chip.
NVIDIA, which Intel is trying to catch up with, has launched the V100 chip for AI applications and recruited Clement Farabet as vice president of AI architecture, hoping to improve its chips' ability to execute deep learning programs.
At the same time, Google has built its Tensor Processing Unit (TPU) chip for data center applications, and IBM has unveiled TrueNorth, a neuromorphic chip modeled on biological neurons.