Artificial intelligence is the term used to describe computer programmes designed to simulate human thought processes, mimicking human reasoning and actions to solve a variety of problems.

In recent years the definition of the term, usually shortened to AI, has been broadened to encompass any machine that has the ability to learn or solve problems. According to Tesler’s Theorem, ‘AI is whatever hasn’t been done yet’: once a capability becomes routine, it tends to stop being called AI.

Ideally, artificial intelligence is characterised by the ability to reason about a problem and deliver solutions with the best possible chance of achieving the desired objective.

Thibaut de Roux is involved in a start-up with strong links to artificial intelligence, big data and digital technology. More information about what big data means can be found in the PDF attachment to this post.

Artificial intelligence is widely used across numerous industries today to help streamline and improve digital systems.

Can Machines Think?

Artificial intelligence was theorised long before it became a reality. In 1950, the famous mathematician and wartime codebreaker Alan Turing wrote a paper titled ‘Computing Machinery and Intelligence’, which posed the question ‘can machines think?’ Within this paper, the fundamental goals of machine intelligence were laid out for the first time.

Today’s scientists might still use the Turing Test to determine whether a machine can be classed as intelligent. You can find out more about the Turing Test in the embedded short video.

Artificial intelligence has been a recognised academic discipline since the mid-1950s. More than half a century later, we are seeing AI applications being introduced, to varying degrees, in almost every industry and sector.

Defining Artificial Intelligence

Movies and popular media have left many of us thinking immediately of super-advanced robots when we think of AI. However, the reality is very different. AI is becoming vastly more pervasive in society, but the results are far subtler than giant metal beings controlling the human population. Instead of this fantasy scenario, what we are seeing are dramatic improvements to the efficiency, accuracy and reliability of many digital systems through the introduction of AI.

The basic principle of AI is the proposition that machines can be built with the capability to mimic human faculties such as perception, learning and reasoning. As the capability of AI advances, earlier definitions are not only improved upon but can become obsolete.

Once, in the not too distant past, machines that could use optical character recognition to ‘read’ text were considered intelligent. Today, this function is so widespread that it is merely an inherent computer feature that we all take for granted.

AI Applications

AI applications are all but endless in today’s society, and not all of them are as visible as the movies would have us believe. Today we can find AI behind a multitude of everyday systems across multiple industry sectors.

Commonly used ride-sharing apps such as Uber and Lyft use AI to minimise wait times, optimise passenger matching and determine pricing. Autopilot systems, whose most basic forms date back more than a century, increasingly rely on AI. Spam filters on email accounts, anti-plagiarism checkers and social media networks all use AI to perform valuable or interesting services. The finance industry uses AI to detect fraud, make credit decisions and streamline services, and the healthcare industry is piloting AI systems for surgical procedures; the list goes on.
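To give a sense of what ‘AI behind a spam filter’ means in practice, the short sketch below shows a toy text classifier learning from a few labelled messages and then scoring new ones. It is purely illustrative: it assumes the open-source scikit-learn library and uses made-up example messages, whereas real spam filters are trained on vastly larger datasets.

```python
# A minimal sketch of the idea behind an AI spam filter: a classifier
# learns word patterns from labelled examples, then scores new messages.
# Assumes scikit-learn is installed; the messages below are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: a handful of labelled emails (1 = spam, 0 = not spam).
messages = [
    "win a free prize now",
    "limited offer, claim your reward",
    "meeting moved to 3pm tomorrow",
    "please review the attached report",
]
labels = [1, 1, 0, 0]

# Convert text into word counts, then fit a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# Score unseen messages: the model predicts whether each looks like spam.
print(model.predict(["claim your free prize today"]))       # likely [1]
print(model.predict(["see you at the meeting tomorrow"]))   # likely [0]
```

The same basic pattern of learning from labelled examples and then making predictions underlies many of the other applications mentioned above, from fraud detection to passenger matching, just at a far greater scale.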

In the attached infographic you can see how artificial intelligence is changing the future of business.