Will empowering AI equal weakening privacy protection?

December 29, 2016

As it turns out, Google wants to employ artificial intelligence to moderate online communications. Moderation demands a fine balance between freedom of expression and various preset policies, which may range from anti-harassment measures to cyber-security guidelines controlling what enterprise data ends up on public social media. Empowering AI as a social media moderator can therefore shift from genuinely useful to outright harmful.

Separating reality from utopia in machine learning

Should we worry, or should we stay excited, about leaving such matters in the hands of AI? At the stage this venture is in right now, both reactions are probably premature. Google took over some of Jigsaw's earlier projects and announced the intentions mentioned above. While Jigsaw also tackled phishing risks, this particular moonshot project is in a different league.

Called Conversation AI, the machine learning tool Google has lined up aims, for now, merely to streamline human moderation of online communities. It is but a small step. Moreover, the source reveals that the targets are controlled online environments, not public social networks. The so-called “anarchic” communities are safe for now, at least from a freedom-of-speech point of view, or will at any rate remain as we know them.
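
To make that division of labor concrete, here is a minimal, hypothetical sketch of what streamlining human moderation could look like: a model scores each comment, and only high-scoring comments reach a human reviewer. The scorer below is a toy word-list stand-in, not Conversation AI, and every name in it (score_toxicity, triage, REVIEW_THRESHOLD) is invented for illustration.

    # Hypothetical sketch of machine-assisted moderation; not Google's system.
    REVIEW_THRESHOLD = 0.5  # assumed cutoff; a real tool would tune this

    def score_toxicity(comment: str) -> float:
        """Toy stand-in for a trained model returning a 0..1 toxicity score."""
        hostile_words = {"idiot", "stupid", "hate"}  # placeholder vocabulary
        words = comment.lower().split()
        return sum(w in hostile_words for w in words) / max(len(words), 1)

    def triage(comments: list[str]) -> list[str]:
        """Return only the comments a human moderator needs to look at."""
        return [c for c in comments if score_toxicity(c) >= REVIEW_THRESHOLD]

    print(triage(["have a nice day", "you idiot"]))  # ['you idiot']

The point of the sketch is the division of labor: the software narrows the queue, while the judgment call stays human.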

Judging by the debate stirred up by the Facebook fake news scandal, it is safer to assume that gradual measures are coming to the big social networks, too. Perhaps not machine-learning or AI-empowered ones, though, not for the moment: toning down freedom of expression is far too delicate a matter to be left in cyber hands.

Precautionary concerns

Taking the anti-harassment Conversation AI tool as a starting point, some voice concerns about future censorship; more precisely, AI-enabled censorship.

Perhaps you remember the piece of news that made waves in November: Facebook is reportedly working on a tool that would let third-party operators monitor communications on the network. Deciding which posts to allow in, or bar from, users' news feeds would be the next available step.

The Facebook story qualified as a rumor, so the technical means of surveillance have not clearly taken center stage yet. But put the two stories together and you have intelligent software that may act as a conversation moderator, plus the idea of sifting out social network topics; topics not necessarily deemed abusive or harassing, but more likely identified as unsuitable by a certain government.

Add the big question of whether such a tool would confine its crawling to public messages or not, and suddenly privacy protection in social networks becomes extremely relative. From this perspective, concerns about where such developments might lead don't seem so far-fetched.

A different app, the same AI concerns

Google's messaging application Allo recently ran into the same debate. Its AI-supported Smart Reply feature needs to “retain unencrypted copies of your messages” in order to train itself. What about privacy protection, then? The more privacy-conscious either avoided the app altogether or followed the advice to stay in incognito mode.
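
The logic behind the incognito advice is simple enough to sketch. In the conceptual illustration below (not Allo's actual code; the names are hypothetical), an end-to-end encrypted message reaches the server only as ciphertext, so no plaintext copy exists to retain for training.

    # Conceptual sketch only, not Allo's implementation.
    training_corpus: list[str] = []

    def handle_on_server(payload: str, end_to_end_encrypted: bool) -> None:
        if end_to_end_encrypted:
            # Incognito-style chats relay ciphertext; the server has no
            # readable copy to add to any training set.
            return
        # Otherwise a plaintext copy exists server-side and can be retained.
        training_corpus.append(payload)

This is exactly the trade described above: the feature improves only if plaintext is kept around somewhere.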

In other words, how would you feel about something snooping around your private things? Granted, it is not a person, it's intelligent software. Nevertheless, the idea does not sit right.

It seems surveillance is never a prospect to contemplate with serenity, even when the pill is sugared with promises of AI development. Weakening privacy, in whatever shape it appears, may lay dangerous groundwork.

What does it all have to do with cyber-security?

Besides affecting privacy protection, employing AI in various programs and tools also matters for future cyber-defense systems. The big companies involved in machine learning are all busy training their algorithms.

Machine learning, conceptually, involves two main processes. First, researchers mimic functions of the human brain, as in artificial neural networks. Second, what humans get from interacting with their environment, machines get from their developers: information is fed to these programs, or they connect to a continuous data stream.
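
To make the second process less abstract, here is a generic toy classifier, assuming nothing about any vendor's pipeline: its entire “experience” is whatever labeled messages a developer feeds it. The class name and labels are invented for illustration.

    # Generic sketch: a model "knows" only what its developers feed it.
    from collections import defaultdict

    class StreamingClassifier:
        def __init__(self):
            self.counts = {"ok": defaultdict(int), "abusive": defaultdict(int)}
            self.totals = {"ok": 0, "abusive": 0}

        def learn(self, message: str, label: str) -> None:
            """Each message fed in shifts what the model 'knows'."""
            for word in message.lower().split():
                self.counts[label][word] += 1
                self.totals[label] += 1

        def predict(self, message: str) -> str:
            """Pick the label whose training words best match the message."""
            def score(label: str) -> float:
                total = self.totals[label] or 1
                return sum(self.counts[label][w]
                           for w in message.lower().split()) / total
            return max(self.counts, key=score)

    clf = StreamingClassifier()
    clf.learn("have a nice day", "ok")
    clf.learn("you are an idiot", "abusive")
    print(clf.predict("you idiot"))  # 'abusive'; the verdict depends on the fed data

Note that learn() only works on readable text; the privacy questions in the sections above follow directly from that requirement.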

Consequently, any means of data accumulation feeds the general development of AI software. In parallel, because data protection increasingly needs smart algorithms, progress in machine learning will end up helping security intelligence professionals.

Perspectives and the logic of asking for privacy sacrifices

While it is worrying to think of AI training as a means of weakening privacy protection, cyber-security professionals find the prospect of cyber-war a powerful motivator. Specialists need next-gen technology in the fight against cyber-crime; otherwise the confrontation will be hopelessly unbalanced.

Consider that the Internet of Things is not yet fully deployed. Its particularities disperse the cyber battlefield in a way that would be hard to defend without AI systems.

The idea of present and future cyber-enemies fuels current attempts to limit online privacy. Privacy advocates keep their eyes on these cases, ready to moderate any abuse and intervene when necessary.

Where users are concerned, small steps are perhaps best. Never give up your privacy too carelessly or too eagerly, no matter who is on the other side of the request. In time, the picture being projected may change, and the final result could land further on the surveillance side than expected.