Artificial Intelligence and Cognitive Computing… new horizons

Although a number of technological limitations still prevent Artificial Intelligence from developing into the apocalyptic scenarios the film industry has been inundating us with for decades, the progress of cognitive systems and their application in business contexts is not going to stop. In fact, it is likely to accelerate, given the scope of current research, including work on infrastructure and nanotechnologies.


We have come a long way from the first experiments on so-called “artificial neural networks” [mathematical models representing the interconnection of artificial neurons – i.e. algorithms that “mimic”, at various levels, the functioning and properties of the biological neurons in the human brain – developed to tackle the engineering problems of artificial intelligence – Ed.], dating back more than 70 years [in 1943 the American neurophysiologist Warren Sturgis McCulloch and the logician Walter Pitts published a paper proposing the first artificial neuron and, in 1949, a book by the Canadian psychologist Donald Olding Hebb analysed the connections between artificial neurons and the complex patterns of the human brain – Ed.], to the cognitive computing that is so widely debated today.

Neural networks

If we take a look at history, the first neural network model dates back to the late 1950s: the so-called “perceptron”, put forth in 1958 by Frank Rosenblatt (a well-known American psychologist and computer scientist), a network with an input layer, an output layer and a learning rule based on error minimisation (the “error back-propagation” algorithm used to train multi-layer networks was only formalised decades later). In short, the learning rule compares the actual output produced for a given input with the desired one and adjusts the weights of the connections (the “synapses”) so as to reduce the difference between the two. Indeed, some experts in this field date the birth of cybernetics and Artificial Intelligence [AI: the term was actually coined in 1956 by the American mathematician John McCarthy, while in 1950 Alan Turing had already described how a computer could behave like a human being – Ed.] to the time of Rosenblatt’s perceptron.

In the following years, however, the two mathematicians Marvin Minsky and Seymour Papert showed the limits of Rosenblatt’s model. A single perceptron, even after proper “training” (running the learning algorithm on a training set in the vector space of the inputs, separating those that require a positive output from those requiring a negative one), can only recognise linearly separable functions. In addition, the computational capabilities of a single perceptron are limited, and its performance depends heavily on the choice of inputs and on the algorithm used to “modify” the synapses, and thus the outputs. Minsky and Papert sensed that a network built from several layers of perceptrons could solve more complex problems, but at the time the growing computational cost of training such networks had no answer on the infrastructure side (there were no hardware systems capable of handling those operations). Research on multi-layer networks and their training algorithms picked up again between the late 1970s and the 1980s, while the decisive breakthrough on the hardware front came much later, when GPUs began to be used for training, cutting the time required by a factor of roughly 10 to 20.
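To make the mechanism concrete, here is a minimal sketch in Python (assuming NumPy is available; the data and parameters are invented purely for illustration, not taken from the original research) of the perceptron learning rule applied to the logical AND function, a linearly separable problem of the kind a single perceptron can handle:

```python
import numpy as np

# Tiny training set: logical AND, which is linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

# Weights and bias (the "synapses") start at zero.
w = np.zeros(2)
b = 0.0
learning_rate = 0.1

for epoch in range(20):
    for x_i, target in zip(X, y):
        # Step activation: fire (1) if the weighted sum exceeds the threshold.
        output = 1 if np.dot(w, x_i) + b > 0 else 0
        # Perceptron rule: adjust the weights in proportion to the error,
        # reducing the gap between actual and desired output.
        error = target - output
        w += learning_rate * error * x_i
        b += learning_rate * error

print("weights:", w, "bias:", b)
print("predictions:", [1 if np.dot(w, x) + b > 0 else 0 for x in X])
```

After a handful of passes over the four examples the weights stop changing and the predictions match the AND targets, which is exactly the behaviour the convergence result for linearly separable problems guarantees.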

Artificial Intelligence, Cognitive Computing… how confusing!

Creating machines that resemble humans has always been a typical human “fantasy”. In fact, the first automatons – mechanical clockwork systems capable of performing certain activities typical of human beings, such as playing the piano or drawing – date as far back as the early 18th century. But, as we have already mentioned, it wasn’t until the mid-20th century that people began to speak of Artificial Intelligence, understood as a computer’s ability to perform functions and thought processes typical of the human mind. Beyond the ethical and social aspects, in purely computer-science terms, AI is the discipline that encompasses the theories and practical techniques for developing algorithms that enable machines (especially “calculators”) to display intelligent behaviour, at least within specific domains and fields of application.

One of the most conspicuous critical issues is the formal definition of the human faculties involved in synthetic/abstract reasoning, meta-reasoning and learning (since it is still not fully understood how the human brain actually works), so that computational models capable of implementing such forms of reasoning and learning can be built. Machine Learning was born in the late 1950s as a “sub-discipline” of AI precisely to address these issues. It deals with the study and modelling of algorithms that are able to learn from data, extrapolating new knowledge from them (a minimal illustration follows below): in other words, systems that learn autonomously from observation, thus paving the way for Cognitive Computing, understood as technological platforms based on the scientific disciplines of Artificial Intelligence (including Machine Learning and Deep Learning) and Signal Processing.
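As a purely illustrative sketch of what “learning from data” means in practice (Python with NumPy; the data and the simple linear model are invented for the example and are not taken from any specific system), a model can be fitted to observed examples and then applied to inputs it has never seen:

```python
import numpy as np

# Invented example data: hours of machine usage vs. observed energy consumption.
hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
energy = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# "Learning" here means estimating the parameters of a simple linear model
# (slope and intercept) directly from the observed data via least squares.
A = np.vstack([hours, np.ones_like(hours)]).T
slope, intercept = np.linalg.lstsq(A, energy, rcond=None)[0]

# The fitted model can now generalise to an input it has never seen.
new_hours = 6.0
print("predicted energy for 6 hours:", slope * new_hours + intercept)
```

The knowledge extracted here is trivially simple (two parameters), but the pattern – observe data, fit a model, apply it to new cases – is the same one that more sophisticated Machine Learning systems follow at scale.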

Cognitive Computing therefore concerns technological platforms that are capable of learning autonomously (Machine Learning), reasoning, understanding, processing and using natural human language, including visual and conversational abilities (NLP – Natural Language Processing).

The current “state of the art”: Machine Learning vs Deep Learning

The true turning point is marked by the maturity of Machine Learning systems: during the “first stage” of so-called intelligent systems, machines still had to be programmed and “instructed” how to behave in a way similar to the human mind.

“Stage 2” is where Machine Learning draws closer to Artificial Intelligence. By “dusting off” the key principles of neural networks, it paves the way for Deep Learning. Machine Learning systems collect and “channel” huge amounts of data through algorithmic models that “teach” machines to recognise patterns and then process and analyse those data. Deep Learning systems, by contrast, simulate the human cognitive process: they “learn” more quickly and, above all, aggregate data hierarchically, organising it according to different levels of abstraction (the different “layers” of a neural network progressively push the data towards higher levels of abstraction), an ability that until now had been “restricted” to humans (a minimal sketch of such a layered network follows below).
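As a minimal, illustrative sketch (Python with NumPy; the network size, learning rate and number of iterations are arbitrary choices for the example), the snippet below trains a small two-layer network on the XOR function – a problem that is not linearly separable and therefore beyond a single perceptron – with each layer transforming the data into a progressively more abstract representation:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: not linearly separable, so a single perceptron cannot solve it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of weights: input -> hidden (a first, more abstract representation),
# hidden -> output (the decision built on top of it).
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))
lr = 0.5

for step in range(10000):
    # Forward pass through the two layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the layers
    # (gradient of the squared error with sigmoid activations).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Update the "synapses" in the direction that reduces the error.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# After training, the outputs should be close to the XOR targets 0, 1, 1, 0.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

The hidden layer is what makes the difference: it re-describes the inputs in a new space where the problem becomes separable, which is the (very small-scale) essence of the hierarchical abstraction that deep networks exploit.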

From a computer-science perspective, many experts today distinguish between “weak AI” and “strong AI”. The former, also known as “narrow AI”, identifies non-sentient systems focused on a very specific task (the Siri application is a good example: while it understands natural language, it operates within a pre-set topic area and is not capable of self-learning). The latter, instead, covers self-learning machines equipped with something akin to senses and a mind, which allows them to apply artificial intelligence in a “general purpose” manner to solve any kind of problem. An example of “strong AI” comes from Google, which acquired the company DeepMind in 2014. Google DeepMind has implemented a multi-level neural network algorithm in which decisions are made hierarchically, by aggregating basic information processed by the lowest-level neural networks (the operating model, albeit “rudimentary”, is indeed that of the human brain), without the system being specifically instructed in advance, i.e. without imposing an algorithmic logic on it. The system learns by observing and experiencing: in the case of the game of Go, AlphaGo (the DeepMind-based system) first observed players, then learned to play and kept improving its skills until it eventually beat the player considered the game’s top expert (a simplified sketch of learning from experience follows below).
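AlphaGo’s actual training pipeline (deep neural networks combined with reinforcement learning and tree search) is far more sophisticated, but the basic idea of improving through experience can be illustrated with a deliberately simple, hypothetical sketch: tabular Q-learning on a tiny corridor world, where the agent discovers which moves lead to a reward purely by trying them (Python with NumPy; the environment and all parameters are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny corridor of 5 cells; the agent starts in cell 0 and the reward is in cell 4.
n_states, n_actions = 5, 2   # actions: 0 = move left, 1 = move right
q_table = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    """Move left or right, clipped to the corridor; reward 1 only at the last cell."""
    next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == n_states - 1 else 0.0
    done = next_state == n_states - 1
    return next_state, reward, done

for episode in range(500):
    state, done = 0, False
    while not done:
        # Explore occasionally, otherwise exploit what has been learned so far.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(q_table[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update: refine the estimate using the experience just gathered.
        q_table[state, action] += alpha * (
            reward + gamma * np.max(q_table[next_state]) - q_table[state, action]
        )
        state = next_state

# The learned greedy policy should move right in every non-terminal cell.
print(np.argmax(q_table[:-1], axis=1))
```

No rule of the “game” is hard-coded into the policy: the agent starts from a blank table and, episode after episode, the values it has learned from its own experience steer it towards the rewarding behaviour, which is the spirit (on a toy scale) of the self-learning described above.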

 
