Microsoft has unveiled a new system, known as Brainwave, that lets developers deploy machine learning models onto programmable silicon, with the aim of achieving higher performance than a CPU or GPU can provide. At the Hot Chips event in Cupertino, California, researchers demonstrated a Gated Recurrent Unit model running on Stratix 10, an Intel FPGA.
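The Gated Recurrent Unit demonstrated at Hot Chips is a standard recurrent cell. As a rough illustration of what such a model computes at each time step, here is a minimal NumPy sketch of one GRU step; this is the textbook formulation only, not Brainwave's implementation, and the function name and weight shapes are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU time step: combine input x with the previous hidden state h."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(x @ Wz + h @ Uz + bz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde              # blended new state

# Tiny usage example with random weights (illustrative sizes).
rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
params = (
    rng.standard_normal((n_in, n_hid)), rng.standard_normal((n_hid, n_hid)), np.zeros(n_hid),
    rng.standard_normal((n_in, n_hid)), rng.standard_normal((n_hid, n_hid)), np.zeros(n_hid),
    rng.standard_normal((n_in, n_hid)), rng.standard_normal((n_hid, n_hid)), np.zeros(n_hid),
)
x = rng.standard_normal(n_in)
h = np.zeros(n_hid)
h_next = gru_step(x, h, params)
print(h_next.shape)
```

Running the whole sequence just means calling `gru_step` once per input vector, feeding each returned state back in as `h`.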
Larger and faster
The model Microsoft developed is not only larger but also faster. That matters for machine learning systems operating at scale, since users don't want their applications taking too long to respond.
“If it’s a video stream, if it’s a conversation, if it’s looking for intruders, anomaly detection, all the things where you care about interaction and quick results, you want those in real time,” said a Microsoft Research engineer, Doug Burger.
Azure cloud services
Microsoft intends to make Project Brainwave available through its Azure cloud services for firms that want to use real-time artificial intelligence. The software giant is not the only tech company investing in hardware to accelerate machine learning: Google and several startups are doing the same. Earlier in the year, Google announced the second generation of its Tensor Processing Unit.
And unlike Google's chip, the system Microsoft has developed will support a variety of deep learning frameworks, including Facebook's Caffe2, Google's TensorFlow, and Microsoft's own CNTK. Although FPGAs offer flexibility, they can lag dedicated chips in performance, so the Redmond, Washington-based software maker has tweaked them to make them more competitive. As a result, they perform as well as dedicated chips and sometimes even better.
Long-term investment in FPGAs
Microsoft's investment in FPGAs is long-standing; the chips were initially used in online search and in security. The world's largest software maker now claims the biggest FPGA installation in the world. The timing is opportune, as deep learning is taking off across the tech sector, putting new demands on hardware and pushing some organizations to GPUs to speed up their artificial intelligence models. Firms such as Huawei and Fujitsu have even developed new artificial intelligence chips.