Machine learning technology allows computers to learn from interaction with the public so that they can perform their tasks better and better. This type of artificial intelligence is already applied by Google in more than 100 of its products and features, allowing applications to understand complex voice commands, among other things. Now, the company has announced that it quietly developed its own processor to perform this kind of task.
Named the Tensor Processing Unit (TPU), the component was created on the sly by the search giant over the past few years with the specific goal of running TensorFlow, the Mountain View company's machine learning system. In a post on one of its blogs, Google says the processor has been in use in its data centers for more than a year.
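The workloads TensorFlow hands off to a chip like the TPU boil down largely to tensor operations such as matrix multiplication, the core arithmetic of neural networks. As a rough illustration only (plain Python, no TensorFlow or TPU involved), this is the kind of operation being accelerated:

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows.

    Neural-network layers reduce mostly to this operation,
    which is what a chip like the TPU is built to speed up.
    """
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner, "inner dimensions must match"
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

# Tiny example: a 2x2 weight matrix applied to a 2x1 input vector.
weights = [[1, 2], [3, 4]]
inputs = [[5], [6]]
print(matmul(weights, inputs))  # [[17], [39]]
```

Models perform billions of multiply-accumulate steps like these, which is why dedicated silicon pays off.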
This is the server rack that used TPUs to defeat the Go world champion, Lee Sedol
The company claims that the performance improvements offered by TPUs are equivalent to jumping seven years into the future, based on Moore's Law. The chip allowed the search giant to perform more operations per second, achieving better results in less time than with the components it used before.
The future is now
The improvements provided by the TPUs are equivalent to a jump of seven years into the future
This level of power helps artificial intelligence accomplish great things, such as beating the world Go champion, something the Mountain View company's AI AlphaGo managed at the beginning of the year. Other company products that benefit from TPUs include RankBrain, which improves the relevance of search results, and Street View, which displays more accurate maps and results.
As appetizing as the novelty may be, however, anyone considering purchasing a Tensor Processing Unit is out of luck, since the technology is not available for sale. Still, the benefits of TPUs will certainly continue to be felt through Google's products and services, from well-established ones to those announced recently at the I/O 2016 conference.