FACEBOOK WON’T SAY IF THEY’LL USE YOUR BRAIN ACTIVITY FOR ADVERTISEMENTS

[5/22/17]  EVERY YEAR, FACEBOOK gathers hundreds of developers, corporate allies, and members of the press to hear CEO Mark Zuckerberg’s vision of our shared near future. The gathering is known as “F8,” and this year’s iteration included some radical plans, one of which could’ve been pulled from a William Gibson novel: Facebook is working on a means of using your brain as an input device.

Such technology is still many years off, as is, apparently, Facebook’s willingness to publicly think through its very serious implications.

Details on how the Facebook brain/computer interface would function are scant, likely because the company hasn’t invented it yet. But it’s fair to say the company has already put a great deal of effort into considering what capabilities such an interface would have and how it would be designed, judging from its press announcement: “We have taken a distinctly different, non-invasive and deeply scientific approach to building a brain-computer speech-to-text interface,” the company says, describing the project as “a silent speech interface with the speed and flexibility of voice and the privacy of text,” with a stated goal of allowing “100 words per minute, straight from the speech center of your brain.” This process will be executed “via non-invasive sensors that can be shipped at scale” using “optical imaging” that can poll “brain activity hundreds of times per second.”

“The privacy of text” is an interesting turn of phrase for Facebook, which has, like its competitor Google, built itself into a multi-hundred-billion-dollar company more or less on the basis of text typed into a computer not being private, but rather being an excellent vector through which to target advertising. For its thought-to-text project, Facebook claims it’s built a team of “over 60 scientists, engineers and system integrators” from some of the most esteemed research universities around the U.S. (headed by a former DARPA director, no less). Privacy concerns drove some of the very first questions from journalists after the F8 announcement, including in this passage from The Verge:

[Facebook research director Regina] Dugan stresses that it’s not about invading your thoughts — an important disclaimer, given the public’s anxiety over privacy violations from social networks as large as Facebook. Rather, “this is about decoding the words you’ve already decided to share by sending them to the speech center of your brain,” reads the company’s official announcement. “Think of it like this: You take many photos and choose to share only some of them. Similarly, you have many thoughts and choose to share only some of them.”

Facebook was clearly prepared to face at least some questions about the privacy impact of using the brain as an input source. So, then, a fair question even for this nascent technology is whether it, too, will be part of the company’s mammoth advertising machine, and I asked Facebook precisely that the day the tech was announced: Is Facebook able, as of right now, to make a commitment that user brain activity will not be used in any way for advertising purposes of any kind?

Facebook spokesperson Ha Thai replied:

We are developing an interface that allows you to communicate with the speed and flexibility of voice and the privacy of text. Specifically, only communications that you have already decided to share by sending them to the speech center of your brain. Privacy will be built into this system, as every Facebook effort.

This didn’t answer the question, so I replied:

My question is this: Is Facebook able, as of right now, to make a commitment that user brain activity will not be used in any way for advertising purposes of any kind?

To which Thai replied:

Sam, that’s the best answer I can provide as of right now.

Fair enough — but also an implicit answer that no, Facebook is, at least at the moment, not able to assure users that their brain activity will not be appropriated to sell ads. This is of course not an indication that the company will do so, only that it is not prepared to rule it out. And to be sure, this is still a hypothetical — it’s possible the company’s neural keyboard will remain somewhere between vaporware and marketing stunt, as has been the case with its solar-powered flying internet relay or Amazon’s national delivery drone fleet.

But while the tech may be far off, its privacy implications aren’t far-fetched — ignore at your own peril Facebook’s history of experimenting with the thoughts of its users, whether by deliberately manipulating their emotions or by putting their faces on advertisements without consent (“They trust me — dumb fucks,” Zuckerberg famously quipped to a friend via IM as he built Facebook in his Harvard dorm).

Facebook’s interest in mental typing was certainly noted by neuroethicists; for them, it helped underline that recent breakthroughs in brain-computer interfaces, or BCIs, really will bring what was once a science fiction scenario into the real world.

“I worry a little about whether we’ve given enough thought about what it means to no longer have control over a zone of privacy,” Dr. Eran Klein, a neurology professor at Oregon Health & Science University and neuroethicist at the Center for Sensorimotor Neural Engineering, told me. “One of the things that makes us human is we can decide what stays in our mind and what comes from our mouth.”

Any inadvertent spillover from our inner monologues to online servers could have profound consequences: In society, “If you have a prejudice but you’ve worked diligently to squash that prejudice, that says something good about your character,” Klein pointed out. But thanks to your handy Facebook Neuro-Keyboard, “now if all those prejudices are open for other people to see and be judged, it opens up a whole different way of evaluating people’s moral character and moral choices.”

The importance of thinking things but leaving them unexpressed or unarticulated is fundamental to humanity, society, and morality — and it’s a line Facebook has stomped all over in the past. In 2013, Facebook published a study detailing how it had been recording and storing not just text that had been typed and published on its website, but also text users had written but then decided against publishing and deleted for whatever reason. The study’s authors lamented that “[Facebook] loses value from the lack of content generation” in such cases of “self-censorship.” Should users trust a company that so failed to grasp the essential intimacy of an unpublished thought with a line into their brains?

