The arrival of machine learning, neural networks, and other artificial intelligence techniques is producing unparalleled advances in products and services that are already in our hands.
For many years we have spoken of artificial intelligence when in fact we were often referring to simple algorithms that, by fulfilling the function for which they were written, seemed to work miracles that were no such thing: they were "simply" mathematics. Today, however, many of the services we use employ computational models such as neural networks and machine learning that go far beyond the algorithms that performed similar, but far more limited, functions years ago.
In that sense, one of the great exponents is Google, which through these new techniques has dramatically improved the voice recognition built into our smartphone keyboards and into personal assistants such as Google Now and Assistant, as the image shows.
Although for the moment Google only offers data for English, the language in which advances always appear first, it is noteworthy that in less than a year the recognition error rate has dropped from 8.5% to 4.9%, a giant stride toward making talking to our smartphone almost as comfortable as talking to a person. Since 2012 the figure has fallen by more than 30%.
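To put that drop in perspective, a quick calculation (a minimal sketch; the 8.5% and 4.9% figures are the ones quoted above) shows the relative improvement in the error rate over that year:

```python
# Word error rates quoted by Google for English speech recognition.
before = 8.5  # percent, roughly a year earlier
after = 4.9   # percent, current figure

# Relative reduction: what fraction of the previous errors disappeared.
relative_reduction = (before - after) / before * 100
print(f"Relative error reduction: {relative_reduction:.1f}%")  # about 42%
```

In other words, within a single year more than two out of every five recognition errors were eliminated, which explains why dictating to a phone now feels so much more natural.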
On the other hand, Google claims to have matched and even surpassed humans at recognizing objects in images. What is remarkable is that seven years ago the error rate was above 30%, against a figure close to 5% today, and since last year the systems have been consistently better than we are. This not only enables fast search in Google Photos and similar products; given its indexing power, it can be applied to many other fields.
Thus, we can imagine that scanning the many classic photographs that still exist undigitized could uncover secrets hitherto unknown, without the need for human effort.
Finally there is translation, where Google Translate is the clear leader thanks to the neural network system it has used since 2016, which brings it closer than ever to human quality. Translation does not depend on a fallible sense such as sight; it is pure intelligence, and it demands so much effort from people that professional sessions are limited in time. Progress here is slower than in the other areas, but the results impress.