Top 10 popular myths about machine learning [Updated]


Multinational technology companies such as Google, Facebook, and Amazon were using machine learning quietly long before it became a household term.

Google used machine learning to decide ad placement, Facebook used it to rank the posts in your feed, and Amazon used it to generate product recommendations that shaped what users clicked. For example, if a user had recently bought a pair of shoes, Amazon would use that purchase to recommend other shoes of a similar shape and style for further browsing.

Machine learning has evolved rapidly in recent years, and it now sits at the center of some pressing questions: Is machine learning the end of privacy for the human race? What can it do beyond driving cars and holding conversations? Is it harmful? This article takes a quick look at the misconceptions that have built up around it over time.

Machine learning just summarizes data.

In reality, machine learning's main aim is to predict the future. The films you watched in the past matter only as a guide to which ones you will want to watch next; your credit history matters only as a predictor of whether you will pay your bills on time. Like robot scientists, learning algorithms formulate hypotheses, refine them, and only believe them once their predictions come true. Learning algorithms are not yet as smart as scientists, but they are millions of times faster.
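To make "predicting the future from the past" concrete, here is a minimal sketch in Python using scikit-learn; the film data, features, and labels are entirely made up for illustration:

```python
# A toy model that guesses whether a user will enjoy a new film
# based on films they rated before. All data is invented.
from sklearn.linear_model import LogisticRegression

# Each row: [runtime_minutes, is_sci_fi, is_comedy]; label: liked (1) or not (0)
past_films = [
    [130, 1, 0],
    [95, 0, 1],
    [142, 1, 0],
    [88, 0, 1],
    [110, 1, 1],
    [100, 0, 0],
]
liked = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(past_films, liked)

# Predict: will this user like a 125-minute sci-fi film they haven't seen?
new_film = [[125, 1, 0]]
print(model.predict(new_film))        # predicted label
print(model.predict_proba(new_film))  # predicted probability
```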

Learning algorithms only discover correlations between pairs of events.

This is the impression you get from most media coverage. In one famous example, a rise in Google searches for "flu" was an early signal that the flu was spreading. That is all well and good, but most learning algorithms can discover far richer forms of knowledge, such as the rule: if a mole has an irregular shape and color, then it may be skin cancer.
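Here is a hedged sketch of what that kind of rule discovery looks like in code: a decision tree induces an explicit if-then rule from labeled examples, not just a correlation coefficient. The tiny mole dataset and its two features are invented purely to mirror the rule above:

```python
# A decision tree learns a human-readable rule from labeled examples.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [irregular_shape, irregular_color]; label: 1 = skin cancer
X = [[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0]]
y = [1, 0, 0, 0, 1, 0]

tree = DecisionTreeClassifier().fit(X, y)

# The printout reads like an "if ... then ..." rule, not a bare correlation.
print(export_text(tree, feature_names=["irregular_shape", "irregular_color"]))
```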

Machine learning can only discover correlations, not causes.

Indeed, one of the most popular types of machine learning consists of trying out different actions and observing their consequences, which is the essence of causal discovery. An e-commerce site, for example, can try many different ways of presenting a product and pick the one that leads to the most purchases. You have probably participated in thousands of these experiments without knowing it. And even in situations where experiments are out of the question and all the computer can do is look at past data, causal relationships can sometimes still be discovered.
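Here is a minimal sketch of such an experiment, assuming invented visitor counts for two product layouts; it is the randomized assignment of visitors that licenses the causal reading:

```python
# An A/B test comparing two ways of presenting a product.
from scipy.stats import chi2_contingency

# Rows: layout A, layout B; columns: purchased, did not purchase
table = [[120, 880],   # layout A: 120 purchases out of 1000 visitors
         [160, 840]]   # layout B: 160 purchases out of 1000 visitors

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"p-value: {p_value:.4f}")
# Because visitors were assigned to A or B at random, a significant
# difference is evidence that the layout *causes* more purchases.
```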

Unseen events, like black swans, cannot be predicted by machine learning.

If something has never happened before, its predicted probability can only be zero, or so the thinking goes. On the contrary, machine learning is in large part the art of accurately predicting rare events. If A is one of the causes of B, and B is one of the causes of C, then A can lead to C, even if that chain has never been observed before. Spam filters correctly flag emails, including spam no one has ever seen, every day. And black swans like the 2008 housing crash were in fact widely predicted; it was the defective risk models most banks were using at the time that failed to see them coming.
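Here is a small illustration of how a spam filter avoids assigning zero probability to things it has never seen, using naive Bayes with Laplace smoothing on made-up training messages:

```python
# A naive Bayes spam filter with Laplace smoothing classifies a message
# it has never seen before. Training data is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = ["win free money now", "cheap pills free offer",
               "meeting at noon tomorrow", "project report attached"]
train_labels = ["spam", "spam", "ham", "ham"]

# alpha=1.0 is Laplace smoothing: words absent from the training data
# do not zero out the probability estimate for a new message.
spam_filter = make_pipeline(CountVectorizer(), MultinomialNB(alpha=1.0))
spam_filter.fit(train_texts, train_labels)

# This exact message never occurred in training, yet it gets classified.
print(spam_filter.predict(["free money offer win big"]))
```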

The more data you have, the more likely you are to hallucinate patterns.

The more phone records the NSA sifts through, the more likely it is to flag an innocent person as a potential terrorist because he accidentally matched a terrorist-detection rule. Mining more attributes of the same entities does increase the risk of hallucination, but good machine learning practitioners keep that risk to a minimum. On the other hand, mining more entities with the same set of attributes reduces the risk, because the extra examples either support or refute the candidate rules. And some learning algorithms can find patterns involving multiple entities, which makes them even more robust: someone videotaping New York City Hall may be unremarkable, and so may someone else buying large quantities of ammonium nitrate, but the combination of the two is far more telling.
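A quick simulation makes the point: with purely random data, testing more attributes produces more "significant" patterns by chance alone. All numbers below are arbitrary:

```python
# With random data, roughly 5% of tested attributes will look
# "significant" at p < 0.05, even though every one is pure noise.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_people, n_attributes = 1000, 500

target = rng.normal(size=n_people)                      # the outcome of interest
attributes = rng.normal(size=(n_people, n_attributes))  # unrelated noise

false_alarms = sum(
    pearsonr(attributes[:, j], target)[1] < 0.05
    for j in range(n_attributes)
)
print(f"{false_alarms} of {n_attributes} attributes look significant at p < 0.05")
```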

Machine learning disregards pre-existing knowledge.

Experts in many fields look askance at the "blank slate" approach of the learning algorithms they know. Real knowledge, they argue, is the result of a long process of reasoning and experimentation that cannot be imitated by running a generic algorithm over a database. But not all learning algorithms start from a blank slate; some use data to refine a pre-existing body of knowledge, which can be quite elaborate, provided it is encoded in a form computers can work with.
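As a toy illustration of refining pre-existing knowledge rather than starting from scratch, here is a beta-binomial update in which the prior encodes what an expert already believes about, say, a conversion rate; all the numbers are made up:

```python
# Bayesian updating: new data refines a prior instead of replacing it.
prior_alpha, prior_beta = 20, 80   # expert prior: the rate is around 20%

successes, failures = 9, 11        # a small batch of new observations (45% rate)

post_alpha = prior_alpha + successes
post_beta = prior_beta + failures

print(f"prior mean:     {prior_alpha / (prior_alpha + prior_beta):.3f}")   # 0.200
print(f"posterior mean: {post_alpha / (post_alpha + post_beta):.3f}")      # ~0.242
# The posterior moves toward the data but stays anchored by the prior,
# precisely because learning did not start from a blank slate.
```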

The models computers learn are incomprehensible to people.

This is a legitimate cause for concern, of course. How can we trust a learning algorithm's recommendations if it is a black box? Some types of models are indeed hard to understand, such as the deep neural networks responsible for some of machine learning's most notable successes (like recognizing cats in YouTube videos). But others are quite understandable, like the rule for diagnosing skin cancer we saw earlier.
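To see the contrast, here is a sketch that fits an interpretable model and a small neural network to the same invented mole data; the logistic regression's coefficients can be read off directly, while the network's weights cannot:

```python
# An interpretable model versus a black box on the same toy data.
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X = [[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0]]
y = [1, 0, 0, 0, 1, 0]
feature_names = ["irregular_shape", "irregular_color"]

readable = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, readable.coef_[0]):
    print(f"{name}: {coef:+.2f}")  # sign and size tell a human-readable story

black_box = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
print(black_box.coefs_[0].shape)   # just a matrix of weights, opaque to a reader
```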

Simpler models are more accurate.

This belief is sometimes equated with Occam's razor, but the razor only says that simpler explanations are preferable; it doesn't say why. They are preferable because they are easier to understand, remember, and reason with. Sometimes the simplest hypothesis consistent with the data is less accurate for prediction than a more complicated one. In fact, some of the most powerful learning algorithms output models that seem gratuitously elaborate, sometimes even continuing to add to them after they already fit the data perfectly, but that is how they beat the less powerful ones.
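A small experiment illustrates the point on synthetic data: a one-split decision "stump" versus a random forest, a seemingly gratuitously elaborate ensemble of hundreds of trees:

```python
# Simple versus elaborate on held-out data: the ensemble typically wins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stump = DecisionTreeClassifier(max_depth=1).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print(f"stump accuracy:  {stump.score(X_te, y_te):.3f}")
print(f"forest accuracy: {forest.score(X_te, y_te):.3f}")
```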

It is possible to take the patterns that computers discover at face value.

If a learning algorithm induces the rule for diagnosing skin cancer we saw earlier, and the rule is very accurate (in the sense that almost all the moles it matches really are tumors), that doesn't necessarily mean you should believe it. A slight change in the data could lead the algorithm to induce a very different, but equally accurate, rule. Only rules that are induced reliably despite random variations in the data can be trusted to mean what they say, as opposed to merely being useful predictive tools.
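One way to see this instability is to retrain the same algorithm on bootstrap resamples of the data and watch whether the "discovered" rule stays the same. Here is a rough sketch (it peeks at scikit-learn's tree internals, so treat it as illustrative):

```python
# Retrain a tree on bootstrap resamples; the rule it "discovers"
# (starting with the root split) can differ from resample to resample.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           random_state=0)
rng = np.random.default_rng(0)

for trial in range(5):
    idx = rng.integers(0, len(X), size=len(X))          # bootstrap resample
    tree = DecisionTreeClassifier(max_depth=2).fit(X[idx], y[idx])
    # tree_.feature[0] is the feature index used at the root split
    print(f"trial {trial}: root split on feature {tree.tree_.feature[0]}")
```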

Machine learning will produce superhuman intelligence in the near future.

From the daily news, it's easy to get the impression that computers are on the verge of seeing, speaking, and reasoning as well as we do, after which they will quickly leave us in the dust. We have certainly come a long way in the first fifty years of artificial intelligence, and machine learning is the main reason for its recent successes, but there is much more left to do. Computers can do many narrow tasks very well, but they still have no common sense, and nobody knows how to teach it to them.