Technology companies are trying to bring artificial-intelligence features to smartphones and other wearable devices. It would be convenient, for example, to carry in your pocket a tool that shows a mechanic how to repair an engine, or tells tourists in their own language what they are seeing and hearing. But there is a problem: the enormous volumes of data these tasks require cannot be processed without slowing the device down and draining its battery in minutes.
For many years, central processors from Intel, ARM, and others have provided enough computing power for devices and servers around the world. But the rapid development of artificial intelligence over the past five years has confronted traditional chip makers with real competition. The growing capabilities of AI rest largely on neural networks, which detect patterns in data and learn from them. That workload is massively parallel, and the general-purpose processors used in PCs and servers are not designed to run so many operations at once.
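To see why neural networks favor parallel hardware, consider that a single network layer is essentially one matrix multiplication, where every output value is an independent dot product. The sketch below (an illustrative example, not code from Microsoft or any chip vendor) contrasts the one-at-a-time view a single CPU thread takes with the equivalent matrix form that parallel hardware can spread across many compute units:

```python
import numpy as np

def layer_serial(x, w):
    """Compute each output one at a time, as a single CPU thread would."""
    out = np.zeros(w.shape[1])
    for j in range(w.shape[1]):        # each output neuron...
        for i in range(x.shape[0]):    # ...sums its weighted inputs
            out[j] += x[i] * w[i, j]
    return out

def layer_parallel(x, w):
    """The same result as one matrix multiply. Every element of the
    result is independent, so specialized hardware (GPUs, FPGAs,
    TPUs, the HPU) can compute them all simultaneously."""
    return x @ w

# Both forms produce identical results; only the execution model differs.
x = np.array([1.0, 2.0, 3.0])
w = np.arange(12, dtype=float).reshape(3, 4)
assert np.allclose(layer_serial(x, w), layer_parallel(x, w))
```

The serial loop performs the multiply-adds in sequence, while the matrix form exposes them all at once; that exposed parallelism is what dedicated AI chips are built to exploit.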
On July 23, at the CVPR 2017 conference in Honolulu, Hawaii, Microsoft announced the second version of its HoloLens Holographic Processing Unit (HPU). HPU 2.0 is a dedicated AI coprocessor that analyzes everything the user sees and hears directly on the device, rather than spending precious microseconds sending data to the cloud and back. HPU 2.0 is still under development and will ship in the next version of HoloLens. This is one of the few cases in which Microsoft is involved at every stage of a processor's development except manufacturing. Company representatives say it is the first such chip designed specifically for a mobile device.
Microsoft has been working on its own chips for several years. The company built a motion-tracking processor for the Xbox Kinect and has more recently used reconfigurable chips called field-programmable gate arrays (FPGAs) to apply AI capabilities to real-world tasks. Microsoft buys the chips from Altera, an Intel subsidiary, and then tailors them to its purposes with software.
In 2016, Microsoft used thousands of these chips to translate the entire English-language Wikipedia into Spanish — three billion words across five million articles — in less than a tenth of a second. In 2018, the corporation plans to let cloud-computing customers use these chips to accelerate their own AI tasks, such as recognizing images in large data sets or running machine-learning algorithms for economic and other predictive models.

[Promotional animation: HPU 2.0]
Microsoft has plenty of competitors in this business: Amazon uses FPGAs and plans to adopt Nvidia's new Volta-architecture AI chips, while Google has built its own AI silicon, the Tensor Processing Unit. Designing chips in-house is expensive, but Microsoft says it has no choice, because the technology is developing so quickly.