Beveiligingscontrole – trying to elucidate a word

September 28, 2016

As some of you may have noticed on different occasions, our Internet interactions sometimes turn into puzzling experiences. For example, I spotted this foreign-sounding word in the title of a Facebook image whose link I received while logged out of my account: Beveiligingscontrole.

Since curiosity is a positive trait when it comes to staying up to date on all things Internet, I followed up by researching the term. Beveiligingscontrole is a Dutch word that translates as “Imperative Security Check”. But what does that mean?

Apparently, Security Check (without the “Imperative” in front of it; this is Google’s direct translation of Beveiligingscontrole) is a Facebook feature available only to people who use the latest version of the Facebook app for Android or iOS. It adds an extra layer of protection to a personal account and the items it contains. You may have run into it on Facebook when, for example, trying to share photos and being asked to type in a 6-digit security code sent to the smartphone associated with your account – as described in this community question thread.

Imperative security, on the other hand, belongs to the coding field and means “calling the appropriate methods of a Permission object that represents the Principal (for role-based security) or system resource (for code access security)”, as opposed to declarative security, which associates “attribute declarations that specify a security action with classes or methods” – that is, when implementing security declarations in the .NET Framework. In other words, in imperative mode the restrictions are written directly into the code, so there is no need to employ attribute syntax.
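
To make the distinction more concrete, here is a minimal sketch of the two styles in the .NET Framework – my own illustration built on the quoted definitions, not code from the quoted source. The PhotoReader class, the folder path and the choice of FileIOPermission are assumptions picked just for the example:

```csharp
using System.Security.Permissions;

public class PhotoReader
{
    // Declarative security: an attribute ties the security action (Demand)
    // to the method, so the check runs before the method body is entered.
    [FileIOPermission(SecurityAction.Demand, Read = @"C:\Photos\")]
    public void ReadPhotoDeclarative(string path)
    {
        // ... read the file ...
    }

    // Imperative security: the code itself builds a Permission object and
    // calls Demand() exactly where protection is needed – no attribute syntax.
    public void ReadPhotoImperative(string path)
    {
        var permission = new FileIOPermission(FileIOPermissionAccess.Read, path);
        permission.Demand(); // throws SecurityException if the caller lacks the permission
        // ... read the file ...
    }
}
```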

If you are interested in further coding-related explanations, you may check the source quoted above. For lay readers, it comes down to one main point: the unusual word spotted in the Facebook photo’s title has to do with permissions, permission sets and the way the item I was about to view was granted or denied viewing permission by its owners.

A preliminary conclusion on Beveiligingscontrole

As we have established above with the help of available online sources, imperative security checks enforce a previously defined level of protection inscribed into specific blocks of code, by building the request for the appropriate permissions into the code itself. When an unauthorized user attempts to access the item, this feature requires a permission code or an active login to be presented.
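
From the caller’s side, a failed imperative check typically surfaces as a SecurityException. Continuing the hypothetical PhotoReader sketch from above (again my own illustration, not Facebook’s or Microsoft’s code), an unauthorized caller would see something like this:

```csharp
using System;
using System.Security;

public static class Demo
{
    public static void Main()
    {
        var reader = new PhotoReader();
        try
        {
            reader.ReadPhotoImperative(@"C:\Photos\holiday.jpg");
        }
        catch (SecurityException)
        {
            // The Demand() inside ReadPhotoImperative failed: the caller is
            // asked to present credentials or log in before viewing the item.
            Console.WriteLine("Access denied - a security check blocked this item.");
        }
    }
}
```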

Nevertheless, was it my machine that made this security feature possible, the application through which I received the message, or the Facebook platform itself?

I must say the materials I went through did not help in answering this question, nor in clarifying whether the dilemma itself is a valid one. Back to square one: taking the English explanation of beveiligingscontrole as “a security check that occurs when a security method is called within the code that is protected”, one might say that the social network hosting the photo most likely activated one of its protection layers once the item was shared in an unprotected environment. The weird-sounding name therefore pointed to the field of privacy protection. Somehow, I had just received a photograph whose settings inside Facebook protected it from being viewed by a random person from outside the network – its digital presence included a coded security demand. Probably.

Privacy protection, whether we want it or not

Although most privacy-related incidents seem to go the other way round, with people complaining about privately set items ending up in the wrong hands, being part of a network sometimes means we end up enforcing settings we do not fully understand, or that the default settings and algorithms override the users’ own choices – a phenomenon duly noted by a Dutch source that has approached Facebook censorship, as they call it, on more than one occasion.

Noting that Facebook employs algorithms to control what its users share inside the network, in a way that generates “social compartmentalization”, the authors go through the publicly known artificial intelligence methods that continuously filter the huge amounts of information circulating between users, such as Facebook’s EdgeRank.
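
As publicly described, EdgeRank scores every “edge” – an interaction such as a like, comment or share linking a viewer to a post – as the product of affinity, edge weight and time decay, then sums those products per post. The sketch below is my own simplification of that public description, not Facebook’s actual code:

```csharp
using System.Collections.Generic;
using System.Linq;

// One "edge" = one interaction (like, comment, share) linking a viewer to a post.
public class Edge
{
    public double Affinity { get; set; }   // how close the viewer is to the post's author
    public double Weight { get; set; }     // how heavily this interaction type counts
    public double TimeDecay { get; set; }  // older interactions count for less
}

public static class EdgeRank
{
    // Publicly described idea: score(post) = sum of affinity * weight * timeDecay
    // over all of the post's edges; higher-scoring posts surface higher in the feed.
    public static double Score(IEnumerable<Edge> edges)
    {
        return edges.Sum(e => e.Affinity * e.Weight * e.TimeDecay);
    }
}
```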

Of course, we have now somehow slid into another discussion. From a presumed layer that prevented photos from being viewed by outside (logged-out) users, to inside layers that select posts and prevent some of them from appearing in the news feeds of our Facebook friends, there is quite a way to go.

Nevertheless, the common thread lies in the invisible layers that sift through the countless posts, with some of them ending up eliminated or impossible to view. The reasons may be altogether very different. The picture I never got to see may have been a private one, set by its owner to demand a security check before disclosing itself, or it may simply have been inaccessible because I was not logged into the network in question. Other posts get lost in the digital cracks separating authors from their friends’ news feeds for other reasons.

While some such measures are perfectly legitimate as far as privacy protection is concerned, and need to be enforced by specialists – it would be extremely complicated to hand the layer-setting parameters over to unskilled users – others may hinder net neutrality from within the backyards of certain networks, as the source quoted right above argues.

That is where privacy protection and Internet neutrality advocates come into play in their watchdog role. There is a fine balance between the right amount of protection and too much protection, as well as the continuous risk of a false sense of protection.

The false sense of privacy protection

When the settings allow users to hide their uploaded materials from prying eyes, a false sense of privacy protection may start to form: users are under the impression that they (and the friends they have privately shared it with) are the only ones aware of their private content. In fact, all items, whether shared publicly or semi-privately, at the very least leave traces and feed network statistics, serving to profile each user.

One of the commercial purposes of this type of trace-based profiling is marketing. For example, the frequency at which a user accesses a fashion or commercial page can feed the right AI algorithm with enough information to refine its image of that user, even if details such as the comments he or she posted or the image content he or she viewed are never actually accessed by the software. By combining a frequency rate with other user data, such as age, gender and pages liked, such an algorithm can draw relevant and fairly accurate conclusions about the user in question.
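
As a rough, purely hypothetical sketch of how such frequency-based profiling could work – every field name, threshold and label below is my invention for illustration, not any network’s actual model:

```csharp
using System.Collections.Generic;

// Hypothetical profiling sketch: visit frequency plus a few demographic
// signals produce a coarse interest label that could drive ad targeting.
public class UserTraces
{
    public int Age { get; set; }
    public string Gender { get; set; }
    public List<string> LikedPages { get; set; } = new List<string>();
    public double FashionPageVisitsPerWeek { get; set; }  // derived from access traces
}

public static class InterestProfiler
{
    public static bool LikelyFashionShopper(UserTraces user)
    {
        // Thresholds and page names are invented for illustration only.
        bool visitsOften = user.FashionPageVisitsPerWeek >= 3;
        bool likesRelatedPages = user.LikedPages.Contains("SomeFashionBrand");
        return visitsOften && (likesRelatedPages || (user.Age >= 18 && user.Age <= 35));
    }
}
```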

The conclusion? Faint as it may seem at times, staying curious, investigative and alert when it comes to our online presence is always a good option. Most of the information is available once we start inquiring, so whenever you are in doubt or puzzled by some digital element, do a bit of research to make sure you at least tried. Like I did with Beveiligingscontrole.