Top

AI in cyber-security – or is it its mature counterpart, AGI?

July 20, 2017

Many cyber-security professionals are waiting for the next stage of Artificial Intelligence-based algorithms. Expected to fight what we may call automated malicious attacks, AI is still in its testing phase, showing up in demos and uncoordinated software. It resembles a peripheral nervous system, while the need for a centralized one grows by the day. There is yet another detail: AI's mature counterpart is AGI. Let's explore the differences.

Barely getting used to AI? Meet AGI

By now, most of us have formed an idea of what Artificial Intelligence actually is. In simple terms, it is a collection of algorithms and software programs that enable machines to process data intelligently. Instead of simply executing commands, AI-enabled computer systems can choose the order and necessity of their operations and combine them for the best outcome. A certain degree of self-organization is taught or, if you wish, encoded into these modern systems.
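As a toy illustration of that idea (a hypothetical sketch, not anything from a real product), such a system might score its candidate operations and decide which ones to run, and in what order, rather than executing a fixed command list:

```python
# Hypothetical sketch: a "narrow AI" style scheduler that chooses which
# operations to run, and in what order, by expected benefit per unit of
# cost, instead of blindly executing a fixed list of commands.

def plan(operations, budget):
    """Greedily order operations by benefit/cost; skip what exceeds the budget."""
    ranked = sorted(operations, key=lambda op: op["benefit"] / op["cost"],
                    reverse=True)
    chosen, spent = [], 0
    for op in ranked:
        if spent + op["cost"] <= budget:
            chosen.append(op["name"])
            spent += op["cost"]
    return chosen

# Illustrative security-flavored operations (names are made up).
ops = [
    {"name": "scan_network",      "benefit": 8, "cost": 4},
    {"name": "update_signatures", "benefit": 6, "cost": 2},
    {"name": "full_disk_audit",   "benefit": 9, "cost": 9},
]
print(plan(ops, budget=10))  # ['update_signatures', 'scan_network']
```

The point is only the shape of the behavior: the system itself decides that the cheap, high-value task comes first and that the expensive audit does not fit, which is the "choosing order and necessity" described above.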

With the help of any modern dictionary, we can summarize AI as the machines' ability to mimic some qualities of the human mind.

But what is AGI – Artificial General Intelligence?

Artificial General Intelligence, or AGI, is machine intelligence capable of performing any intellectual task a human could. Other names for it are strong AI or full AI. Amazed? Compared with this unlimited-capacity AGI, the AI we know so far is called weak AI or narrow AI.

You may have heard of cognitive computing. This would be AGI. Researchers are on their way to getting there. Meanwhile, the more risk-conscious parties talk of the dangers that may come with such sentient machines. Should the public applaud or worry? Hard to say at this stage. Nevertheless, in order to fully understand the implications, we need more information on what applied AGI might look like.

Standard tests to establish the existence of a functional AGI

The Turing Test

The Turing Test is open to anyone who wants to submit their creation to it. As Intelligence.org informs us, “Since 1990, Hugh Loebner has offered $100,000 to the first AI program to pass this test at the annual Loebner Prize competition”. This competition involves a specific interpretation of what the Turing test means. The same source notes that the exact conditions are not yet defined, since a preliminary requirement is for a program to first win the silver prize. The general idea is that the competing program should fool the judges into thinking they are interacting with a human.

Since the Intelligence.org article dates back to 2013, we looked for more recent news. Here you may find a list of the annual winning bots of the Loebner Prize competition. The most recent winner is a chatbot called Mitsuku, a program that “fields tens of thousands of queries daily, from users all over the world.”
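Chatbots of this kind remain squarely in narrow-AI territory: many early Loebner-style entries leaned on pattern matching and canned responses rather than understanding. A minimal sketch of that general technique (hypothetical, not Mitsuku's actual code) might look like this:

```python
import re

# Hypothetical sketch of the pattern-matching approach used by many
# early Loebner-style chatbots. This is NOT Mitsuku's actual code.
RULES = [
    (re.compile(r"\bmy name is (\w+)", re.I), "Nice to meet you, {0}!"),
    (re.compile(r"\bhow are you\b", re.I),    "I'm fine, thanks for asking."),
    (re.compile(r"\bweather\b", re.I),        "I hear it's lovely outside."),
]

def reply(message):
    """Return the first matching canned response, or a generic deflection."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Interesting. Tell me more."

print(reply("Hi, my name is Ada"))     # Nice to meet you, Ada!
print(reply("What is consciousness?")) # Interesting. Tell me more.
```

The generic deflection on the last line is the giveaway: anything outside the rule set gets a stalling answer, which is exactly the "narrow" in narrow AI.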

Older articles mention that the first program said to pass the Turing test was called Eugene Goostman, and that it managed to make the judges think it was a 13-year-old boy from Ukraine. That test took place at the Royal Society in central London. The software was created by Vladimir Veselov and Eugene Demchenko.

The AI coffee test and others

Another AI test, deemed to be more difficult, is the coffee test, often attributed to Steve Wozniak. A robot that could enter an average person's home, figure out how to prepare coffee, and actually perform the task would be a fit candidate. A candidate for what? For the title of intelligent robot, in the AGI sense.

The same source mentions a couple of other tests generally accepted as valid when pursuing the Artificial General Intelligence goal. The robot college student test and the employment test are self-explanatory as far as their content is concerned. If a machine manages to do as well as a human would on such tasks, then it possesses functional AGI.

What are the timeline expectations in AGI?

With AGI being the ultimate goal in intelligent machines, it is only logical that big tech companies are working toward important breakthroughs in this field. From Google's DeepMind to IBM's Watson, they are all racing toward the same finish line. The estimates are cautious, with 2050 as the most circulated date.

For now, there are two (known) directions of approach. On one hand, by feeding huge amounts of data into existing AIs, researchers hope to trigger a qualitative jump. This is the quality-via-quantity approach, with a theorized threshold that should tip the progress line: once certain figures are surpassed, AI could turn into AGI.

On the other hand, the same companies pursue a different line of approach. By inviting third parties in, they (partially) open source their projects. This way, an out-of-the-box, innovative approach gets a chance to shorten the minimum 35-year wait currently in the books.

One thing is sure: AGI in cyber-security, if and when it becomes available, should mark a critical leap forward.

How far is the current AI embodiment?

By this we mean how far from the AGI goal, of course. To answer this question as non-specialists, we have a powerful tool at our disposal: none other than the Google search engine. The company has continually updated its algorithms with its AI R&D results. Newer algorithms went into the live search engine as soon as the labs validated them.

Setting aside Google's marketing purpose (or, if you are in the marketing field, counting this into the mix), try to compare the newer functionalities with the older ones. The search engine is definitely faster, and it has added hard boundaries instead of soft ones where rules are concerned. It provides localized search results, it comes up with suggestions, and it is tailored to each user's preferences.
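The hard-versus-soft distinction can be made concrete with a toy sketch (an illustration, not Google's actual algorithm): a hard rule either admits or excludes a result outright, while a soft rule merely adjusts its ranking score.

```python
# Hypothetical toy contrasting hard and soft rules in result ranking.
# This illustrates the concept only; it is not Google's algorithm.
DOCS = [
    {"title": "Local pizza places", "region": "NYC", "popularity": 0.9},
    {"title": "Pizza history",      "region": "any", "popularity": 0.7},
    {"title": "Pasta recipes",      "region": "NYC", "popularity": 0.8},
]

def hard_search(query, region):
    """Hard rules: a document failing any condition is excluded entirely."""
    return [d for d in DOCS
            if query in d["title"].lower() and d["region"] in (region, "any")]

def soft_search(query, region):
    """Soft rules: mismatches only lower the score instead of excluding."""
    def score(d):
        s = d["popularity"]
        if query in d["title"].lower():
            s += 1.0
        if d["region"] in (region, "any"):
            s += 0.5
        return s
    return sorted(DOCS, key=score, reverse=True)

print([d["title"] for d in hard_search("pizza", "NYC")])
print([d["title"] for d in soft_search("pizza", "NYC")])
```

Hard rules make results fast and predictable but brittle: a near-miss query returns nothing, whereas soft scoring still surfaces the closest matches, which relates to the intuition point below.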

Nevertheless, it is not yet intuitive, not in a functional sense. Because it applies hard rules, it is not even intuitive by error, or in a dumb way, as it used to be. All mass-validated suppositions appear in fast, accurate results; all unexpected queries, or queries demanding internal reasoning, are resolved unsatisfactorily. While definitely more refined and more specific, AI lacks the capacity to organize information as excellently as humans do. For now.