From the streaming music catalog of Spotify to the myriad services offered by Google, it’s an arrangement that seemingly benefits everyone: We get “free” stuff, while tech companies get access to a rich stream of information about their users that’s way more valuable to them than a one-off purchase fee.
This business model has become standard in the software world of ones and zeroes, but it hasn’t caught on in other walks of life. Just imagine the reaction of a restaurant owner when you fail to present a credit card at the end of the meal and instead offer to pay by listing facts about your eating habits. Would a car manufacturer be sufficiently interested in your info to knock, say, 50 percent off the price of a new BMW should you promise to let them know where you drive it? Both of these examples seem fatuous at first glance. But why?
A list of the world’s most valuable companies is dominated by tech giants, many of which have proven that their habit of giving away access to services makes sense — and indeed, cents. It’s not limited to software, either. The prices of smart speakers like the Amazon Echo and Google Home are bizarrely low, an odd strategy when Apple has shown what kinds of margins can be carved out on “must have” tech.
However, owners of the Echo reportedly spend around 10 percent more on Amazon after they buy the smart speaker than they did previously. The same is no doubt true for other “loss leaders” that give away underpriced hardware, knowing that it will give the companies something far more useful on the backend.
In a new book, a technologist for the Ford Motor Company argues that we may be entering a new era in the motor industry: one where the data that can be gathered from sensor-packed cars could offset the need to actually charge customers for new vehicles. A zero dollar car? According to John Ellis, it’s the direction things need to head.
“The model I present in the book works as follows,” Ellis, author of The Zero Dollar Car, told Digital Trends. “The original equipment manufacturer — Ford, for purposes of discussion — sells the vehicle data to a broker or to end companies. That is the monetization opportunity that everyone in the automotive industry keeps talking about. Based on what they are able to sell, they in turn use some of those funds to reduce the price of the vehicle. The more data they can sell, the more opportunity they get to reduce the price of the vehicle.”
A NEW SOURCE OF REVENUE
According to Ellis, under this model car manufacturers could actually stand to make more money than they do charging a single purchase fee of, for instance, $40k. But who cares about your data? That varies by industry, but it’s likely to be the same mix of private and government groups that care about what you do online.
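Ellis’s pricing model is simple enough to sketch as arithmetic. Here is an illustrative back-of-the-envelope version in Python; the data-revenue figure and the revenue-share parameter are invented for illustration and are not taken from the book.

```python
# Back-of-the-envelope sketch of the "zero dollar car" pricing model:
# projected data revenue offsets the sticker price. All figures below
# are hypothetical, not from Ellis's book.

def effective_price(sticker, annual_data_revenue, years, share_passed_on):
    """Sticker price minus the share of projected data revenue the
    manufacturer passes back to the buyer (floored at zero)."""
    offset = annual_data_revenue * years * share_passed_on
    return max(sticker - offset, 0.0)

# A $40,000 car, $1,500/year of brokered sensor data over an 8-year
# ownership period, with half of that revenue passed on as a discount:
price = effective_price(40_000, 1_500, 8, 0.5)  # 34,000.0
```

The more data the manufacturer can sell (or the larger the share it passes on), the closer the effective price gets to zero, which is the limiting case the book’s title refers to.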
I wouldn’t invite the Amazon Echo or the Google Home into my home.
These smart speakers listen to everything that happens around them. Ostensibly this is to make shopping, playing music, and finding information easy. But the internet of things presents way too many risks to our privacy.
Internet-connected technologies are already used to incriminate their owners in court. And that is the main thing which scares me. Self-driving cars, smart watches, and smart homes controlled entirely from your phone sound really cool and convenient. But I don’t want the government to have that kind of knowledge about me. I might feel differently if I knew the info wasn’t going anywhere beyond Google or Amazon headquarters.
An Industry in Need of a Shake-Up
Government is an industry. Much like other industries, it can be upset by upstart competitors. But governments have monopoly status in given territories for their services. They don’t allow competition, and they enforce this ban on competitors by using their monopoly on “legitimate” force.
Nevertheless, some small countries now offer incentives to organize your affairs under their government. Usually, this requires physically moving. But places like Estonia, with its e-Residency program, have hinted that location-based government might not remain the standard. It is the 21st century, after all, and the internet has transformed many industries. It is only a matter of time until government meets its reckoning.
Countries like Estonia will help move these innovations in governance along. So will countries like Georgia, which now uses the blockchain to store property records. And even upstart countries like Somaliland, barely recognized internationally, have teamed up with private developers to host special economic zones that hope to become the next Dubai.
William Binney is a former high-ranking intelligence official with the National Security Agency (NSA), and one of the highest-placed intelligence officials ever to blow the whistle on the agency’s inner workings. He made headlines when he resigned in 2001 after 9/11, having worked more than thirty years for the agency. He was a leading code-breaker against the Soviet Union during the Cold War, and was repelled by the United States’ massive surveillance programs.
He’s had quite the ride, starting in 2002 when he let the public know of a system (“Trailblazer”) intended to analyze data carried on communication networks like the internet. He exposed the agency’s eavesdropping program and has faced harassment from the FBI, the NSA, and others. He has been in and out of the courtroom ever since he decided to resign and blow the whistle.
These mass surveillance leaks are nothing new, as we’ve seen from the leaks that recent NSA whistleblower Edward Snowden has provided over the past few years. In an interview that was blacked out by the U.S. media, Snowden referred to a supranational intelligence alliance known as the “Five Eyes,” which does whatever it wants and operates far beyond the laws of its member countries.
Binney, one of the highest-level whistleblowers ever to come out of the NSA, hasn’t stopped. He is now saying:
“At least 80% of fibre optic cables globally go via the US, this is no accident and allows the US to view all communication coming in. At least 80% of all audio calls, not just metadata, are recorded and stored in the US. The NSA lies about what it stores. The ultimate goal of the NSA is total population control.” (see sources)
According to the Guardian:
“The NSA will soon be able to collect 966 exabytes a year, the total of internet traffic annually. Former Google head Eric Schmidt once argued that the entire amount of knowledge from the beginning of humankind until 2003 amounted to only five exabytes.” (see sources)
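For scale, the Guardian’s figures can be turned into an average sustained data rate. The conversion below uses only the numbers quoted above; the decimal definition of an exabyte is assumed.

```python
# Convert 966 exabytes/year (the figure quoted above) into an average
# sustained data rate, and compare it to the five-exabyte estimate for
# all human knowledge up to 2003.

EXABYTE = 10**18  # bytes, decimal definition

annual_bytes = 966 * EXABYTE
seconds_per_year = 365 * 24 * 3600            # 31,536,000 seconds
rate_tb_per_second = annual_bytes / seconds_per_year / 10**12

# Roughly 30 TB every second, year-round; the annual total is about
# 193 times the estimate for all pre-2003 human knowledge.
ratio_to_2003_estimate = 966 / 5
```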
The New Jersey borough of Leonia has a novel solution for its traffic problems: Ban outsiders from using its roads.
Come January, the borough government will prohibit all nonresidents from driving on Leonia’s roads—save one thoroughfare—from 6–10 a.m. and 4–9 p.m., including on Saturdays and Sundays. (There are also two state-maintained roads in Leonia that don’t fall under the borough’s jurisdiction and thus aren’t subject to the new ordinance.) People who live or work in Leonia will get a sticker to let police know their vehicles are allowed in the town.
That irks Dan Eldridge, a nonresident who nonetheless frequents the borough and will be affected by the new law.
“My gut reaction was just another example of outrageous government overreach. Using a hammer when what you really need is a scalpel,” says Eldridge, who is often in Leonia to see his girlfriend. Come January, such visits will come with a risk of a traffic stop and interrogation from Leonia law enforcement.
Walling off the town to through-traffic presents some pretty startling civil liberties implications. The new ordinance says any nonresidents found driving on the borough’s streets during prohibited hours will have to be ready to “demonstrate or document a need to access a residence on the street.” Any nonresidents stopped by police will have to explain their reason for being in the town, and what business or resident they plan on visiting while there, or risk fines and court summonses.
The ordinance does not describe what counts as a “need” or what kind of demonstration or documentation will be required of travelers to proceed on Leonia’s roads unmolested.
I was following a tweet by a security researcher who claimed to have found a built-in keylogger in several HP laptops; he has now gone public with his findings.
A security researcher who goes by the name of ZwClose discovered a keylogger in several Hewlett-Packard (HP) laptops that could allow hackers to record your every keystroke and steal sensitive data, including passwords, account information, and credit card details.
Although the keylogger component is disabled by default, hackers can make use of available open source tools for bypassing User Account Control (UAC) to enable the built-in keylogger “by setting a registry value.”
The exact location of the registry key is documented in the researcher’s write-up.
The researcher reported the keylogger component to HP last month, and the company acknowledged the presence of the keylogger, saying it was actually “a debug trace” that was left in accidentally but has now been removed.
Ontario lawyers are facing tough decisions after their regulatory body denied them conscientious objection from a new requirement: they are being made to adopt a leftist statement of principles acknowledging their “obligation to promote equality, diversity and inclusion generally.” Among other things, the statement reflects the demands of militant “LGBTQ…” ideologues.
The Law Society of Upper Canada’s board of directors, or “benchers,” voted 38 to 16 on December 1 to reject a motion allowing an exemption based on conscience or faith from the requirement, according to the National Post.
The law society, which licenses all individuals who practice law or provide legal services in Ontario, has been fiercely criticized for mandating the statement as part of its “Equality, Diversity and Inclusion Initiative.”
The EDI Initiative adopts 13 recommendations from the final report of the working group that investigated alleged systemic racism in the province’s legal profession for four years. The Society will also “conduct inclusion surveys with questions similar to those asked” in another March 11, 2014 Law Society report on the same issue.
But critics say the LSUC is violating its licensees’ Charter rights of freedom of religion and expression by forcing them to pledge to promote a political and ideological agenda they may not agree with.
Toronto lawyer Joe Groia, who brought the conscientious objection motion, describes the mandatory statement as “compelled speech” and unconstitutional.
Ottawa lawyer Paul Lepsoe goes even further, suggesting in a November 24 open letter to the bar the LSUC is policing thought.
The compulsory statement imposes on lawyers “a positive obligation to promote certain state-prescribed principles,” he wrote.
The law society is “prescribing and constraining thought and expression” by “demanding that lawyers demonstrate personal commitment to, or valuing of, these principles.”
And by politicizing the bar, the LSUC is subverting its traditional independence, which is essential for a functioning democracy.
Free-speech advocate and University of Toronto psychology professor Jordan Peterson has issued a “call to rebellion” to Ontario lawyers, and is asking those opposing the mandatory statement to contact him through his website.
“We are the bulwark of the individual against the oppressor state,” says Toronto lawyer Phil Horgan, executive member of the Thomas More Lawyers’ Guild, an association of Catholic lawyers, paralegals, and judges.
“On occasion, as a matter of conscience or religion, I may have to act on behalf of clients or myself in opposition to the law,” he told LifeSiteNews.
Law society will use “persuasion tactics”
With no option to conscientiously object, lawyers and paralegals must indicate when they electronically file their 2017 annual report whether or not they’ve adopted a statement of principles.
If they don’t answer the question, the report would be incomplete, leaving them liable for “a fine or an administrative suspension from the law society,” Horgan said.
And a “no” will leave a licensee “subject of law society persuasion tactics, perhaps administrative consequences, perhaps a matter of discipline,” he said.
The LSUC won’t discipline members this year, but “various benchers have made clear there will be an education and resources initiative in an effort to have us get on board with the ‘culture shift’,” he said.
That “accelerating culture shift” is part of the EDI Initiative, the terminology of which must be interpreted in the context of the LSUC’s track record and political bent, noted Horgan.
As the United States Congress runs out of time to vote on a bill that would reauthorize one of the government’s most egregious warrantless spying programs, officials are claiming that those programs won’t end anytime soon—even if they are not reauthorized by the end of the year.
The USA Liberty Act would reauthorize Section 702 of the Foreign Intelligence Surveillance Act (FISA), which is set to expire on Dec. 31, 2017. While the bill’s proponents have claimed it will help ensure “security” in the United States, privacy advocates have warned that it will provide additional loopholes for the government to continue conducting warrantless surveillance of innocent Americans.
The assumption may be that if the USA Liberty Act is not signed into law, then the provisions from Section 702 will no longer be legal and the U.S. government will stop collecting data from innocent Americans without warrants—but intelligence officials do not see it that way.
A spokesman for the Office of the Director of National Intelligence, Brian Hale, told the New York Times that “the government believes it can keep the program going for months,” even if it is not reauthorized.
Hale’s reasoning stems from the fact that every year, the Foreign Intelligence Surveillance Court authorizes the program to operate for the next 12 months. Section 702 was last authorized by the court on April 26, 2017, leaving some lawmakers hopeful that even if the Liberty Act does not pass before the end of the year, Congress will find a way to reauthorize Section 702 before it actually expires on April 26, 2018.
Hale cited the “Transition Procedures” for the provision, which accompany the law in federal statute books. He told the Times that the procedures make it “very clear” that “any existing order will continue in effect for a short time even if Congress doesn’t act to reauthorize the law in a timely fashion.”
Under the Transition Procedures for the Protect America Act of 2007, in the provisions on “Challenge of Directives, Protection from Liability; Use of Information” concerning the Foreign Intelligence Surveillance Act, Section 702 “shall continue to apply with respect to any directive issued pursuant to section 702 of such Act.”
While this loophole may give hope to some government officials who favor violating Americans’ constitutional rights while selling them a false sense of security, NSA whistleblower Edward Snowden responded to it on Twitter, writing, “Bypass the Fourth Amendment with one weird trick.”
The Times also cited anonymous intelligence officials who reportedly said that “the government is making no plans to immediately turn off the program on New Year’s Day, no matter what happens in Congress.”
Sen. Ron Wyden, a Democrat from Oregon, told the Times that he is prepared to have “a full and open debate” over the legality of the USA Liberty Act, even if Congress delays that debate until next year.
“We’ve seen this movie before: wait until the last minute, and then say, ‘crowded congressional calendar, dangerous world, we’ve just got to go along with it,’” Wyden said. “Anything now that creates an opportunity for several months of real debate, I’ll listen to.”
Rep. Justin Amash, a Republican from Michigan, has also been critical of the USA Liberty Act. When it passed the House Judiciary Committee by a vote of 27-8 last month, Amash noted that all privacy advocates should be concerned about the overwhelming support the bill is receiving from Congress.
“The Liberty Act passed committee 27-8. It allows the government to search our private data without a warrant—in violation of the 4th Amendment,” Amash wrote on Twitter. “It’s another bill, like the Freedom Act, that furthers violations of our rights under the guise of protecting our rights.”
While advocates of the USA Liberty Act will claim that it is necessary in order to ensure that Americans are “safe,” it is important to remember that the surveillance programs that were adopted after 9/11 have never actually stopped a terrorist attack on U.S. soil. In the same way that the USA Patriot Act was the opposite of patriotic in 2001, the USA Freedom Act also took away freedom in 2015. Do not be fooled by the title—the USA Liberty Act in 2017 has nothing to do with expanding “liberty.”
Copyright Information: This article was reprinted with permission from TheFreeThoughtProject.com. Please contact the author directly for republishing information.
Two decades ago, the US intelligence community worked closely with Silicon Valley in an effort to track citizens in cyberspace. And Google is at the heart of that origin story. Some of the research that led to Google’s ambitious creation was funded and coordinated by a research group established by the intelligence community to find ways to track individuals and groups online.
The intelligence community hoped that the nation’s leading computer scientists could take non-classified information and user data, combine it with what would become known as the internet, and begin to create for-profit, commercial enterprises to suit the needs of both the intelligence community and the public. They hoped to direct the supercomputing revolution from the start in order to make sense of what millions of human beings did inside this digital information network. That collaboration has made a comprehensive public-private mass surveillance state possible today.
The story of the deliberate creation of the modern mass-surveillance state includes elements of Google’s surprising, and largely unknown, origin. It is a somewhat different creation story than the one the public has heard, and explains what Google cofounders Sergey Brin and Larry Page set out to build, and why.
But this isn’t just the origin story of Google: It’s the origin story of the mass-surveillance state, and the government money that funded it.
Backstory: The intelligence community and Silicon Valley
In the mid-1990s, the intelligence community in America began to realize that it had an opportunity. The supercomputing community was just beginning to migrate from university settings into the private sector, led by investments from a place that would come to be known as Silicon Valley.
Airbnb is tackling privacy concerns after several customers found hidden cameras at their rental properties. The company told CBS News it “takes privacy extremely seriously and there is absolutely no place in our community for this kind of behavior.”
Airbnb requires hosts to disclose to guests prior to their stay if they have any cameras on their property. The company says cameras are “never allowed in bathrooms or bedrooms” but that doesn’t always stop ill-intentioned hosts from hiding them, reports CBS News correspondent Anna Werner.
“I was like this could be like the day that I get killed or worse, I don’t know,” said Meghan Hilden.
The 22-year-old from Washington State was in Switzerland in 2016 when she says a friend discovered a cellphone under the sink in the bathroom at their Airbnb.
“We saw that the phone was filming and we were whispering and trying to be very quiet. We weren’t sure if it was being live streamed,” Hilden said.
Soon after, Hilden says they found an iPad also recording. They quickly left, and reported the incident to the company.
There have been other similar incidents. In late November, activist and filmmaker Jason Scott tweeted that his colleague found a camera disguised in a motion detector at an Airbnb. The company called the discovery “incredibly rare,” and said the host was banned. In October, an Indiana couple said they found a camera in a smoke detector in the bedroom at their Airbnb in Florida. The homeowner was arrested for “video voyeurism.”
“Simply put, property owners can be ‘peeping toms,'” said Cameron Russel, a professor at Fordham University School of Law.
India’s unique identification scheme makes it compulsory for citizens to scan their fingerprints and irises to generate the identification number that is to be linked with welfare schemes. But there are no alternative arrangements or concessions for those who do not have fingers or functional eyes.
New Delhi (Sputnik) — In India’s southern state of Karnataka, a leprosy patient who has lost her fingers and eyesight to the debilitating disease has been denied a government subsidy because she failed to enrol in Aadhaar — a unique identification number allocated to Indian citizens after capturing biometric data that includes fingerprints and an iris scan.
The 65-year-old woman, Sajida Begum, has been living at the Leprosy Hospital on Magadi Road for the past decade, after her family abandoned her, and has been surviving on a $15-a-month pension provided by the government. The assistance stopped three months ago because she was unable to obtain an Aadhaar card.
According to local media reports, in August this year Sajida got a letter from the local administration saying her pension would be stopped in seven days if she failed to link it to her Aadhaar card. Dr. Ayub Ali Zai, the administrative medical officer of the Leprosy Hospital, wrote a letter to Unique Identification Authority of India officials requesting that they exempt Sajida from biometric verification considering her condition, but there has been no response yet.
“I don’t know where my daughter and son-in-law are. Nobody comes to see me. Please give me my money,” Sajida Begum was quoted by the local media.
Activists working for civil rights say such stories are rampant in India and that the government has paid no heed to their requests to consider such cases sympathetically.
“Such cases are many and common. In the national capital of Delhi, we have come across several cases of denial of basic rights of citizens due to non-availability of Aadhar card but the government has failed to respond. We were forced to approach the court to bring relief to these victims of government’s overzealous and inhuman enthusiasm to replace services with technology which may not always work on the ground,” Annie Raja, general secretary of National Federation of Indian Women told Sputnik.
The case of the aged woman in Bangalore is not just condemnable but is also a criminal denial of the person’s right to life, she added.
The unique identity card (UID) called Aadhaar is a twelve-digit identity number issued to each citizen in the country. It was initiated by the Congress-led UPA government of Dr. Manmohan Singh but was made mandatory by the Narendra Modi-led BJP government after it came to power in 2014. The linking of the Aadhaar number to welfare schemes has been criticised by civil rights groups and the political opposition on many occasions.
A couple of related deaths in two Indian villages have been blamed on the government’s insistence on granting subsidized ration to only those enrolled with the Aadhar system. The government has denied the allegations but is nevertheless contesting an appeal in the Supreme Court.
Copyright Information: This article was reprinted with permission from Sputniknews.com. Please contact the author directly for republishing information.
And the shootings occur despite Baltimore having over 600 cameras to “make a person think twice before they do something.” Apparently installing 60 more cameras throughout the city will somehow “bring out a lot of crime.”
I wonder why Bloomberg is throwing so much money at a cause that will have minimal results.
From MyFoxBaltimore: Mayor Catherine Pugh announced Sunday $5 million in funds to help fight crime. “I’m very grateful for Bloomberg Philanthropies for reading the Violence Reduction Plan, for understanding what we need in order to make this city safe,” Mayor Pugh said.
Bloomberg Philanthropies, the charity of former New York City Mayor Michael Bloomberg, awarded Baltimore the grant, which will fund 60 new CCTV “Citiwatch” cameras and 25 mobile license plate readers, and will expand gunshot detection coverage by 10 square miles.
“It will make a person think twice before they do something. You see, people don’t think twice because there aren’t any cameras around,” William Goode of West Baltimore said. “It will help it a lot. It will bring out a lot of crime.”
Sky Television is seeking the right to force the country’s top internet providers to block access to websites hosting pirated material.
InternetNZ chief executive Jordan Carter said the move was “an extreme step” that was unprecedented in New Zealand.
Taryn Hamilton, consumer manager of Vocus New Zealand, which owns the Slingshot and Orcon internet brands, said Sky’s demand was “something you would expect in North Korea, not in New Zealand”.
“It isn’t our job to police the internet and it sure as hell isn’t Sky’s either. All sites should be equal and open,” he said.
Sky TV has drawn up an application to the Auckland High Court for a “site blocking injunction,” under which it would name offending websites that it would expect internet providers to block within 10 days of a request being made.
Spokeswoman Kirsty Way clarified that it was a draft injunction, which Sky hoped to file with the court “shortly”. It had provided copies to the telecommunications companies “as a courtesy”, she said.
It would target websites “solely created to offer pirated content like Pirate Bay and Putlocker”, she said. “This is not a complete list, but it will be a small number and we’d ask for an option to apply via the courts for additional sites as needed.”
On the heels of Lenovo’s massive $3.5 million fine for preinstalling adware on laptops without users’ consent, Hewlett-Packard is jumping in with both feet when it comes to installing spyware on its PCs without the consumer’s permission.
The software, which was first identified on November 15 of this year, is called “HP Touchpoint Analytics Service” and appears to replace the self-managed HP Touchpoint Manager solution. According to the official product description, it features “the tools you need to ensure all your managed devices’ security — and brings you greater peace of mind”. The problem is, it’s installing itself without permission and wreaking havoc on customers’ systems.
And the consumers are noticing:
On 11/18/2017 Hp Touchpoint Analytics Client was installed on my computer without my concent, I’m assuming it was installed in the background as an update to Hp support or framework. However it happened I don’t appreciate it’s sneaky take over of my computer’s system resources. From yesterday to today it’s been making my computer work so hard I can hear it like cranking away and the light in the back of my computer is flashing rapidly in-tune with the cranking. –HP Forum
HP has not released an official response to the complaints, and it’s not yet clear how the new driver is being installed. It may have come with the latest Windows updates, or via HP and its support assistant processes, but regardless, customers aren’t happy. “I understand that it hoovers all sorts of telemetry data — and I am not willing to share too much of it really, definitely not without my knowledge,” says one user on HP’s forums.
There is a speck of good news, however. According to Engadget, the offending driver can be removed relatively quickly and easily, but against a wider backdrop of repeated privacy scandals, that’s hardly the point and only semi-comforting to those affected.
Copyright Information: This article was reprinted with permission from SHTFplan.com. Please contact the author directly for republishing information.
This is software to save lives. Facebook’s new “proactive detection” artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.
Facebook previously tested using AI to detect troubling posts and more prominently surface suicide reporting options to friends in the U.S. Now Facebook will scour all types of content around the world with this AI, except in the European Union, where General Data Protection Regulation privacy laws on profiling users based on sensitive information complicate the use of this tech.
Facebook will also use AI to prioritize particularly risky or urgent user reports so they’re more quickly addressed by moderators, and will provide tools to instantly surface local-language resources and first-responder contact info. It’s also dedicating more moderators to suicide prevention, training them to deal with cases 24/7, and now has 80 local partners like Save.org, the National Suicide Prevention Lifeline, and Forefront from which to provide resources to at-risk users and their networks.
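The prioritization step described here, scoring content for risk and surfacing the riskiest items to moderators first, can be sketched in a few lines. This is a toy illustration: the phrase list and weights below are invented, and Facebook’s actual system uses a trained classifier rather than keyword matching.

```python
# Toy sketch of risk-scored triage: score each post, then order the
# moderator queue from highest to lowest risk. The phrases and weights
# are invented for illustration only.

RISK_PHRASES = {
    "no way out": 3,
    "goodbye everyone": 2,
    "can't go on": 2,
}

def risk_score(post: str) -> int:
    """Sum the weights of all flagged phrases found in the post."""
    text = post.lower()
    return sum(w for phrase, w in RISK_PHRASES.items() if phrase in text)

def triage(posts):
    """Order posts so the highest-risk ones reach moderators first."""
    return sorted(posts, key=risk_score, reverse=True)
```

The point of the design is latency: ranking reports by score means moderators see the most urgent cases first instead of working through a first-in, first-out queue.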
“This is about shaving off minutes at every single step of the process, especially in Facebook Live,” says VP of product management Guy Rosen. Over the past month of testing, Facebook has initiated more than 100 “wellness checks” with first-responders visiting affected users. “There have been cases where the first-responder has arrived and the person is still broadcasting.”
The idea of Facebook proactively scanning the content of people’s posts could trigger some dystopian fears about how else the technology could be applied. Facebook didn’t have answers about how it would avoid scanning for political dissent or petty crime, with Rosen merely saying “we have an opportunity to help here so we’re going to invest in that.” There are certainly massive beneficial aspects about the technology, but it’s another space where we have little choice but to hope Facebook doesn’t go too far.
[Update: Facebook’s chief security officer Alex Stamos responded to these concerns with a heartening tweet signaling that Facebook does take seriously the responsible use of AI.]
Do you talk to your plants? You may want to watch what you say to them. The US Department of Defense has a new program whose purpose is to genetically modify plants to create undercover ground cover for spying on the enemy … and these days it seems like we’re ALL the enemy. Should you start pruning suspicious shrubbery down to the roots? Checking the cornfield for ears with ears?
“Plant sensors developed under the program will sense specific stimuli and report these signals with a remotely recognized phenotype detectable by existing hardware platforms.”
DARPA’s (Defense Advanced Research Projects Agency) new TIA (tree intelligence agency) is called the Advanced Plant Technologies program and its purported goal is to use nature’s own (with a few genetic tweaks) to watch for and report any environmental anomalies that might signal a chemical attack. That sounds like a good plan, doesn’t it?
“The program aims to control and direct plant physiology to detect chemical, biological, radiological, and/or nuclear threats, as well as electromagnetic signals.”
Electromagnetic signals? That’s where the plant program starts to sound a little sinister. Plants using their roots to listen in on buried communications cables and “report these signals” to existing hardware platforms?
Google is desperately trying to control everything you see, think, hear and feel. Google has become the most evil corporation on the face of the earth. Remember: where they burn books, they will soon burn people. How far is Google willing to go with its censorship practices? Is this the beginning of the 4th Reich?
Merry Christmas to the Independent Media, as Google has announced that they are in the process of purging the industry from their search engines before Christmas.
Is this just some idle threat that is only symbolic? Hardly: everyone from Steve Quayle to Infowars to The Common Sense Show will no longer be appearing in your favorite search engine before you open your presents this Christmas. Google controls almost 80% of Internet search engine traffic. Google’s search engines constitute an unregulated monopoly. Under our existing anti-trust laws, it is illegal for Google to censor Internet traffic based upon its own judgment. But don’t tell that to America’s new Nazi Minister of Truth, Eric Schmidt, the present executive chairman of Google’s parent company Alphabet.
Field Marshal Schmidt announced that Google is now implementing the most complete form of censorship in the history of the planet with a new set of AI algorithms which will seek and destroy (ie de-rank) various websites that go against popular mainstream opinion.
Under the Guise of the Russian Delusion
Field Marshal Schmidt openly admits that the Russians are the primary target of Google’s wrath. More specifically, RT and Sputnik are prominently listed in all of Google’s press releases to this point. Both websites will not be found on Google’s search engines, thus greatly reducing the traffic that visits the two sites. However, this is not about the Russians; this is about denying the average American access to the truth regarding the insidious takeover of America. If you recall, during the 2016 general election, The Common Sense Show, along with 200 other Independent Media websites, was listed by the Washington Post as a subversive Russian agent somehow guilty of supporting Putin and the Russians against America. In reality, these websites were actively trying to prevent a coup from completely taking over our government. At the time, the listing of 200 Independent Media sites as Russian agents seemed mostly just a case of name-calling. However, with Field Marshal Schmidt’s new censorship edict, the purpose behind this labeling has become painfully apparent.
Censorship Plans Announced at the Halifax International Security Forum
Wsws.org reported that in a corporate “Q and A” session at the Halifax International Security Forum, recently held in Canada, Field Marshal Schmidt boldly declared Google’s intention to completely control the Internet and to purge any conservative opinion that is not in line with Google’s social, political and economic goals.
At the forum, Field Marshal Schmidt was asked if the internet giant had a role in preventing the “manipulation of information” (ie telling the truth about the forces trying to destroy the United States). He went on to say that Google is “…working on detecting and de-ranking those kinds of sites—it’s basically RT and Sputnik, and any websites like them.”
Rather than let readers decide what they want to read, Field Marshal Schmidt has set up Google to be the great decider of what the truth is.
Talking Out of Both Sides of His Nazi Mouth
At the forum, Field Marshal Schmidt amazingly stated that he was “very strongly not in favor of censorship.” What? Google is in the process of delisting websites whose conservative views it disagrees with, and is trying to destroy these sites through a prejudicial and arbitrary “ranking” process used to demote Internet content not deemed truthful. If this is not censorship, then there is no such thing.
Retailers are using facial recognition to collect data about customers as they shop in stores, according to an IT company with insight into the space.
“One of the big things brick-and-mortar retailers are getting into now is knowing their customers … tracking who’s entering their mall and how they’re behaving,” said Mark Lunt, group managing director at Asia-based Jardine OneSolution.
JOS works with the retail sector on facial recognition to profile customers that enter and track their movements.
The data collected include “how many people are coming in, age, ethnicity, gender — it’s all about knowing the foot traffic better and trying to serve more appropriate offers to those customers,” Lunt said.
Customer identities are kept anonymous, and the technology might even benefit shopping experiences in the long run. According to JOS, its projects include adjusting store music to coincide with the customer’s mood.
However, retailers face limitations on how the accumulated data can be used and protected.
Every day, approximately nine people are killed in the United States in car crashes related to distracted driving, and nearly 1,100 more are injured. In 2015, the National Safety Council, a nonprofit that promotes health and safety, reported cell phones were involved in 27 percent of all crashes in the US.
Cell phones are largely to blame for this increase in distracted driving. Lawmakers have sought to legislate the problem away, yet it remains; we’re addicted to our screens, even when that addiction could potentially kill us or others.
The tech world has taken notice and launched dozens of apps in recent years to combat the use of cellphones while driving. Live2Txt and LifeSaver, for example, block incoming calls or text messages while you’re driving.
Others, like TrueMotion, use AI algorithms to detect when you’re driving, where you’re going, what type of vehicle you’re in, and how you’re using the phone (text messaging, calls, etc.), and provide a drive score that includes all of your distracted-driving incidents and when they happened.
The company (formerly Censio) touts increased safety and the potential monetary benefits of providing that safe driving data to your insurance company, but some critics are concerned about the privacy implications of apps that track your every movement without your complete knowledge.
“It’s extending the Facebook model to insurance, which is what people had been worried about for a very long time,” Madeline Ashby, a futurist and science fiction writer, told Motherboard. “You’d be surrendering vast amounts of data about yourself, from which patterns can be inferred and used against you by a judgemental system.”
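TrueMotion’s actual scoring model is proprietary and unpublished, so the penalty weights below are pure invention; this sketch only illustrates the general idea of a drive score that deducts points for each distracted-driving event:

```python
# Illustrative only: TrueMotion's real scoring model is proprietary,
# and these penalty weights are invented for the sketch.
PENALTIES = {"texting": 15, "call": 8, "app_use": 10}

def drive_score(events):
    """Score a trip from a list of (event_type, duration_seconds) pairs."""
    score = 100
    for event_type, duration in events:
        base = PENALTIES.get(event_type, 5)   # unknown event types cost 5 points
        score -= base + duration // 10        # longer distractions cost more
    return max(score, 0)

trip = [("texting", 20), ("call", 45)]        # 20s of texting, a 45-second call
print(drive_score(trip))  # 100 - (15 + 2) - (8 + 4) = 71
```

A real telematics model would also weight speed, road type and time of day, and would be fit to crash data rather than hand-tuned.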
China had 176 million surveillance cameras in operation last year and the speed of growth is expected to see that figure more than triple to reach 626 million by 2020, and one Chinese company has over a fifth of the world market, according to research by IHS Markit.
Various stories have emerged recently on China’s efforts to increase surveillance of its people with the added capabilities of AI and facial and gait recognition; Beijing announced in October 2015 that it had achieved 100% coverage. The UK is still considered the most monitored country overall (previous research showed it had 20% of the world’s cameras for just 1% of the world’s population), though China’s networks are growing faster than any others in the world. The research estimates the market in China is currently worth $6.4 billion, with growth of 12.4% predicted through 2021, compared to the US, which has 50 million cameras and a $2.9 billion market growing at 0.7% annually.
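The growth claims can be sanity-checked directly from the figures quoted above (all numbers are IHS Markit’s as reported, not independent estimates):

```python
# Figures as quoted from IHS Markit in the article.
cameras_2017 = 176_000_000   # cameras in China last year
cameras_2020 = 626_000_000   # projected for 2020
print(cameras_2020 / cameras_2017)   # ~3.56, i.e. "more than triple"

market_2017 = 6.4e9          # China market size, USD
cagr = 0.124                 # predicted annual growth through 2021
market_2021 = market_2017 * (1 + cagr) ** 4
print(round(market_2021 / 1e9, 1))   # ~10.2 (billion USD) by 2021
```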
Brain implants that deliver electrical pulses tuned to a person’s feelings and behaviour are being tested in people for the first time. Two teams funded by the US military’s research arm, the Defense Advanced Research Projects Agency (DARPA), have begun preliminary trials of ‘closed-loop’ brain implants that use algorithms to detect patterns associated with mood disorders. These devices can shock the brain back to a healthy state without input from a physician.
The work, presented last week at the Society for Neuroscience (SfN) meeting in Washington DC, could eventually provide a way to treat severe mental illnesses that resist current therapies. It also raises thorny ethical concerns, not least because the technique could give researchers a degree of access to a person’s inner feelings in real time.
The general approach — using a brain implant to deliver electric pulses that alter neural activity — is known as deep-brain stimulation. It is used to treat movement disorders such as Parkinson’s disease, but has been less successful when tested against mood disorders. Early evidence suggested that constant stimulation of certain brain regions could ease chronic depression, but a major study involving 90 people with depression found no improvement after a year of treatment [1].
The scientists behind the DARPA-funded projects say that their work might succeed where earlier attempts failed, because they have designed their brain implants specifically to treat mental illness — and to switch on only when needed. “We’ve learned a lot about the limitations of our current technology,” says Edward Chang, a neuroscientist at the University of California, San Francisco (UCSF), who is leading one of the projects.
Social media is engaging in unprecedented censorship when it comes to the American experience. Twitter is changing its rules to punish and ban anyone not just for what they publish on Twitter, but for what they say ANYWHERE. Within 90 days, social media sites will be monitoring the statements their users make and the websites that they visit. To obtain that kind of information, the NSA would have to be involved, and they are.
In an interview on last week’s The Common Sense Show, Pat Wood described a Chinese woman who made critical comments about the government. She was visited by the men-in-black, and when they did not arrest her, she thought she was OK. However, the following week, the woman received an expulsion letter from her university. This was followed by her employment being terminated, and the nightmare culminated with her eviction from state-owned housing. China controls Internet users through a point system, and this woman lost all of her points for her post. This system was designed by Google. And it is coming here.
Social Media Sites and the Independent Media
Social media sites that the Independent Media (IM) uses to disseminate information are cracking down on conservative, Christian viewpoints. At one time, the social media giants allowed the IM to use their sites without much interference, because doing so let them monitor public opinion and measure resistance to globalist programs. However, they miscalculated: they did not believe that the IM could have any real impact. Not only is the IM shaping public opinion, we got Donald Trump elected. Enough is enough, never again: the social media tyrants, controlled by allies of the Deep State and the New World Order, have decided to eliminate all electronic free speech that is not leftist, authoritarian, socialist and, yes, Satanic!
Definition of Terms
Before launching into the critical topics associated with this article, it is necessary to define some of the more pertinent “communication” terms.
On Platform– Whereby users of social media are actively participating on a social media site.
Off Platform– Whereby users of social media are publicly proclaiming a viewpoint outside of a specific social media platform. For example, if you were to express an opinion in an op-ed in a local newspaper, and not on a specific social media platform, that would be considered to be off platform. If one posted a video on Facebook, Twitter would consider the video to be off platform, while Facebook would consider it to be on platform.
Hate Speech– Any speech that disagrees with the prevailing leftist, socialist, and dictatorship-oriented style of government (eg right to life, one man-one vote).
Twitter’s View of Free Speech
We believe in freedom of expression and open dialogue, but that means little as an underlying philosophy if voices are silenced because people are afraid to speak up. In order to ensure that people feel safe expressing diverse opinions and beliefs, we prohibit behavior that crosses the line into abuse, including behavior that harasses, intimidates, or uses fear to silence another user’s voice.
Context matters when evaluating for abusive behavior and determining appropriate enforcement actions. Factors we may take into consideration include, but are not limited to whether:
- The behavior is targeted at an individual or group of people;
- The report has been filed by the target of the abuse or a bystander;
- The behavior is newsworthy and in the legitimate public interest.
When Twitter says one cannot target an individual or group of people, they are basically saying that, as of December 18, 2017, one may not call out George Soros or his groups (ie Antifa) for organizing the deadly riots at Charlottesville, nor may Black Lives Matter be criticized for their violent anti-Trump rallies.
Please note that the abovementioned Twitter standards call legitimate criticism “abuse”. Further, if one found the need to criticize George Soros, he is already protected, because the criticism itself counts as abuse. A criticized person can simply assume a “victim’s position” to avoid any legitimate criticism. Meanwhile, CNN’s Anderson Cooper and Wolf Blitzer can tweet out the notion that Trump is a racist and even make fun of the way he consumed water during a recent speech.
Finally, please note that Twitter reserves the right to determine what is newsworthy and if it is “in the legitimate public interest”. Therefore, if I want to criticize Hillary Clinton for her treasonous act of selling nuclear grade material to the Russians, she, or her designee, can claim abuse, even if it is true.
Google and several leading Russian search engines have completely wiped 786 ‘pirate’ sites from their search results. That’s according to telecoms watchdog Rozcomnadzor, which reports that the search providers delisted the sites after ISPs were ordered by a Moscow court to permanently block them.
In late July, President Vladimir Putin signed a new law requiring local telecoms watchdog Rozcomnadzor to maintain a list of banned domains while identifying sites, services, and software that provide access to them.
Rozcomnadzor is required to contact the operators of such services with a request for them to block banned resources. If they do not, then they themselves will become blocked. In addition, search engines are also required to remove blocked resources from their search results, in order to discourage people from accessing them.
Removing entire domains from search results is a controversial practice and something which search providers have long protested against. They argue that it’s not their job to act as censors and in any event, content remains online, whether it’s indexed by search or not.
Nevertheless, on October 1 the new law (“On Information, Information Technologies and Information Protection”) came into effect and it appears that Russia’s major search engines have been very busy in its wake.
Many people realize that smartphones track their locations. But what if you actively turn off location services, haven’t used any apps, and haven’t even inserted a carrier SIM card?
Even if you take all of those precautions, phones running Android software gather data about your location and send it back to Google when they’re connected to the internet, a Quartz investigation has revealed.
Since the beginning of 2017, Android phones have been collecting the addresses of nearby cellular towers—even when location services are disabled—and sending that data back to Google. The result is that Google, the unit of Alphabet behind Android, has access to data about individuals’ locations and their movements that go far beyond a reasonable consumer expectation of privacy.
Quartz observed the data collection occur and contacted Google, which confirmed the practice.
The cell tower addresses have been included in information sent to the system Google uses to manage push notifications and messages on Android phones for the past 11 months, according to a Google spokesperson. They were never used or stored, the spokesperson said, and the company is now taking steps to end the practice after being contacted by Quartz. By the end of November, the company said, Android phones will no longer send cell-tower location data to Google, at least as part of this particular service, which consumers cannot disable.
“In January of this year, we began looking into using Cell ID codes as an additional signal to further improve the speed and performance of message delivery,” the Google spokesperson said in an email. “However, we never incorporated Cell ID into our network sync system, so that data was immediately discarded, and we updated it to no longer request Cell ID.”
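Why a handful of Cell IDs amounts to location data: tower positions are publicly catalogued (for example in the crowdsourced OpenCellID database), so nearby tower IDs can be resolved to coordinates and averaged into a rough fix. A minimal sketch, with entirely made-up tower IDs and coordinates:

```python
# Hypothetical tower database: Cell ID -> (latitude, longitude).
# A real lookup would query an open dataset such as OpenCellID.
TOWER_DB = {
    "310-410-1001": (37.7749, -122.4194),
    "310-410-1002": (37.7790, -122.4312),
    "310-410-1003": (37.7711, -122.4260),
}

def rough_fix(cell_ids):
    """Estimate device position as the centroid of known nearby towers."""
    coords = [TOWER_DB[c] for c in cell_ids if c in TOWER_DB]
    if not coords:
        return None
    lat = sum(p[0] for p in coords) / len(coords)
    lon = sum(p[1] for p in coords) / len(coords)
    return (round(lat, 4), round(lon, 4))

print(rough_fix(["310-410-1001", "310-410-1002", "310-410-1003"]))
```

Accuracy in a dense city, where towers sit a few hundred meters apart, is easily good enough to place someone on a particular block.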
In December 2008, the internet welcomed Google’s Incognito Mode, a privacy option for Chrome, with open arms. The feature offered protection against overbearing browser-history snoops at a time when many of us considered getting caught visiting NSFW sites (OK, let’s be frank: porn) on a computer to be the biggest threat posed by the web. This wasn’t exactly the case.
In fact, hiding your unmentionable browsing habits was hardly the reason a crack team of developers at Google made Incognito Mode. Knowing that Incognito Mode is still widely misunderstood, and has somewhat unfairly come to connote shady behavior, we talked to one of the people who built it, Google’s Vice President of Chrome, Darin Fisher. Fisher provided a firsthand take on how people should be using it, and what people shouldn’t be expecting it to do for them.
Incognito Mode will not help you watch porn at work…
Although Incognito Mode has earned a reputation for helping people shield prying eyes from seeing whatever it is they don’t want to be caught having looked at, its origins are far from illicit. According to Fisher, Incognito Mode was born in 2008 with the primary intention of making it easier and more convenient for people who share computers to do so without mucking up their devices with another user’s cookies — the temporary or permanent files stored on your computer by websites to help them recognize you and keep track of your preferences.
That said, it was also meant to help people hide behaviors they didn’t want loved ones to see. Though, as Fisher describes it, the scenario Google envisioned involves a boyfriend searching for engagement rings who doesn’t want his soon-to-be-fiancée — with whom he shares a computer — to get any hint that he’s about to propose. The Chrome team wanted to provide a tool that would enable people to “pause” their browser from recording its history so people wouldn’t have to purge it in its entirety whenever they didn’t want to leave a trace — a move Fisher describes as “destructive” because it prevents your browser from taking advantage of historical data (e.g., cookies) to power future searches, and causes it to slow down.
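Chrome’s actual implementation is far more involved, but the core idea Fisher describes, an in-memory cookie store that is simply discarded when the incognito session ends, can be sketched in a few lines:

```python
# Conceptual sketch only, not Chrome's implementation: a session-scoped
# cookie jar that never touches disk, so closing it leaves no trace.
class IncognitoSession:
    def __init__(self):
        self._cookies = {}          # kept only in memory

    def set_cookie(self, site, name, value):
        self._cookies.setdefault(site, {})[name] = value

    def get_cookies(self, site):
        return self._cookies.get(site, {})

    def close(self):
        self._cookies.clear()       # the "pause" ends: everything is forgotten

session = IncognitoSession()
session.set_cookie("rings.example", "viewed", "engagement-ring-42")
print(session.get_cookies("rings.example"))   # {'viewed': 'engagement-ring-42'}
session.close()
print(session.get_cookies("rings.example"))   # {}
```

This is why incognito browsing hides activity only from other users of the same device: the websites, your employer's network, and your ISP still see the traffic.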
In perhaps the most intrusive move yet in social media platforms’ efforts to signal as much virtue as possible and appease their potentially-regulating government overlords, Twitter has announced that it is cracking down on what it defines as hate speech, and not just by looking at its own site.
In what amounts to a major shift in Twitter policy, Mashable’s Kerry Flynn reports that the company announced on Friday that it will be monitoring users’ behavior “on and off the platform” and will suspend a user’s account if they affiliate with violent organizations, according to an update to Twitter’s Help Center on Friday.
We believe in freedom of expression and open dialogue, but that means little as an underlying philosophy if voices are silenced because people are afraid to speak up. In order to ensure that people feel safe expressing diverse opinions and beliefs, we prohibit behavior that crosses the line into abuse, including behavior that harasses, intimidates, or uses fear to silence another user’s voice.
Context matters when evaluating for abusive behavior and determining appropriate enforcement actions. Factors we may take into consideration include, but are not limited to whether:
- the behavior is targeted at an individual or group of people;
- the report has been filed by the target of the abuse or a bystander;
- the behavior is newsworthy and in the legitimate public interest.
Violence: You may not make specific threats of violence or wish for the serious physical harm, death, or disease of an individual or group of people. This includes, but is not limited to, threatening or promoting terrorism.
You also may not affiliate with organizations that – whether by their own statements or activity both on and off the platform – use or promote violence against civilians to further their causes.
Abuse: You may not engage in the targeted harassment of someone, or incite other people to do so. We consider abusive behavior an attempt to harass, intimidate, or silence someone else’s voice.
Hateful conduct: You may not promote violence against, threaten, or harass other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.
Hateful imagery and display names: You may not use hateful images or symbols in your profile image or profile header. You also may not use your username, display name, or profile bio to engage in abusive behavior, such as targeted harassment or expressing hate towards a person, group, or protected category.
Furthermore, Twitter says it will control the stream of information more broadly…
At times, we may prevent certain content from trending.
As Kerry Flynn notes, these changes comes amid aggressive moves by Twitter to curb abuse and harassment on the site after more than a decade of essentially letting the abusers operate freely.
Over the last week, Twitter has taken action against the accounts of white supremacists. Twitter permanently banned Tim “Treadstone” Gionet, a prominent alt-right troll more widely known as Baked Alaska, earlier this week. It also removed the verification badges of Jason Kessler, one of the organizers of the racist Unite the Right rally in Charlottesville, and of alt-right activist Richard Spencer.
Twitter’s decision to monitor users off site sparked concern from free speech advocates such as Andrew Torba, founder of social network Gab.
“This is a scary precedent to set,” he wrote in an email to Mashable.
“Rules like this will only force dissidents and those who are speaking truth to power to silence themselves or risk being silenced by Twitter.”
Twitter’s new rules will not be enforced until December 18th…
And of course, “if you’re doing nothing wrong, then why would this be an issue for you?” will be the instant riposte of those defending yet more intrusion by America’s surveillance state.
Copyright Information: This article was reprinted with permission from Zerohedge.com. Please contact the author directly for republishing information.
“We are working on detecting and de-ranking those kinds of sites – it’s basically RT and Sputnik,” Schmidt said during a Q & A session at the Halifax International Security Forum in Canada on Saturday, when asked about whether Google facilitates “Russian propaganda.”
“We are well aware of it, and we are trying to engineer the systems to prevent that [the content being delivered to wide audiences]. But we don’t want to ban the sites – that’s not how we operate.”
The discussion focused on the company’s popular Google News service, which clusters the news by stories, then ranks the various media outlets depending on their reach, article length and veracity, and Google Alerts, which proactively informs subscribers of new publications.
RT has criticized the proposed move – whose timescale has not been publicized – as arbitrary and a form of censorship.
“Good to have Google on record as defying all logic and reason: facts aren’t allowed if they come from RT, ‘because Russia’ – even if we have Google on Congressional record saying they’ve found no manipulation of their platform or policy violations by RT,” Sputnik and RT Editor-in-Chief Margarita Simonyan said in a statement.
During the discussion, Schmidt claimed that he was “very strongly not in favor of censorship,” but said that he has faith in “ranking,” without acknowledging whether the system might serve the same function. Schmidt, who joined Google in 2001, said that the company’s algorithm was capable of detecting “repetitive, exploitative, false, and weaponized” info, but did not elaborate on how these qualities were determined.
The Alphabet chief, who has been referred to by Hillary Clinton as a “longtime friend,” added that the experience of “the last year” showed that audiences could not be trusted to distinguish fake and real news for themselves.
“We started with the default American view that ‘bad’ speech would be replaced with ‘good’ speech, but the problem found in the last year is that this may not be true in certain situations, especially when you have a well-funded opponent who is trying to actively spread this information,” he told the audience.
Schmidt advised Barack Obama’s 2012 campaign on digital operations, and offered the same services to Hillary Clinton’s campaign in 2015, according to several emails from campaign chairman John Podesta’s private account, published by WikiLeaks last October.
On election night 2016, Schmidt was spotted at the Clinton campaign headquarters with a “staff” badge, according to a photo submitted to Politico.
RT America registered under FARA earlier this month, after being threatened by the US Department of Justice with arrests and confiscations of property if it failed to comply. The broadcaster is fighting the order in court.
Most people who’ve spent time on the internet have some understanding that many websites log their visits and keep record of what pages they’ve looked at. When you search for a pair of shoes on a retailer’s site for example, it records that you were interested in them. The next day, you see an advertisement for the same pair on Instagram or another social media site.
The idea of websites tracking users isn’t new, but research from Princeton University released last week indicates that online tracking is far more invasive than most users understand. In the first installment of a series titled “No Boundaries,” three researchers from Princeton’s Center for Information Technology Policy (CITP) explain how third-party scripts that run on many of the world’s most popular websites track your every keystroke and then send that information to a third-party server.
Some highly-trafficked sites run software that records every time you click and every word you type. If you go to a website, begin to fill out a form, and then abandon it, every letter you entered is still recorded, according to the researchers’ findings. If you accidentally paste something into a form that was copied to your clipboard, it’s also recorded. Facebook users were outraged in 2013 when it was discovered that the social network was doing something similar with status updates—it recorded what users typed, even if they never ended up posting it.
These scripts, or bits of code that websites run, are called “session replay” scripts. Session replay scripts are used by companies to gain insight into how their customers are using their sites and to identify confusing webpages. But the scripts don’t just aggregate general statistics, they record and are capable of playing back individual browsing sessions. The scripts don’t run on every page, but are often placed on pages where users input sensitive information, like passwords and medical conditions.
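Real session-replay scripts are JavaScript running inside the page, but the batching logic they rely on, capturing every input event and shipping it to a collection server whether or not the form is ever submitted, can be sketched generically (the endpoint named in the comment is hypothetical):

```python
import time

class SessionRecorder:
    """Sketch of session-replay batching: buffer input events, flush in batches."""

    def __init__(self, batch_size=3):
        self.batch_size = batch_size
        self.buffer = []
        self.sent = []              # stand-in for a network upload queue

    def on_event(self, kind, payload):
        # Every keystroke and click is captured, even if the user
        # abandons the form before submitting it.
        self.buffer.append({"t": time.time(), "kind": kind, "data": payload})
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            # A real script would POST this batch to a collection endpoint
            # (e.g. https://collector.example/replay -- hypothetical).
            self.sent.append(list(self.buffer))
            self.buffer.clear()

recorder = SessionRecorder()
for ch in "ssn":                    # user types into a form, never submits
    recorder.on_event("keypress", ch)
print(len(recorder.sent))           # the keystrokes are already "uploaded"
```

The point of the sketch is that capture happens at the event level, before any submit button is involved, which is exactly what the Princeton researchers observed.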
In the video below, you can see what a session replay script from the company FullStory can record:
Oklahoma has finalized a deal with a Massachusetts company to use license-plate scanners to catch uninsured drivers, and the firm expects to issue 20,000 citations a month starting as early as next year.
The program, believed to be the first of its kind in the nation, involves setting up automated high-speed cameras on highways around the state to detect uninsured vehicles and mailing their owners a citation with a fine of $184, according to the District Attorneys Council.
Gatso USA, a Beverly, Massachusetts-based company that specializes in red-light-running and speeding detection systems, will initially get $80, or 43 percent, of each fine. Its cut will decrease to $74 after two years and $68 after five years, according to a contract approved by the state after months of legal review and negotiation. The company could expect to bring in $1.6 million a month, or $19 million a year, if the 20,000 citations are issued monthly. Gatso is a subsidiary of a Dutch company.
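The contract arithmetic checks out, with the article’s “$19 million a year” rounding down slightly from the exact product:

```python
fine = 184                  # citation fine, USD
gatso_cut = 80              # Gatso's initial share per fine
citations_per_month = 20_000

print(round(gatso_cut / fine * 100))   # 43 (percent), as reported
monthly = citations_per_month * gatso_cut
print(monthly)                         # 1,600,000 -> $1.6M a month
print(monthly * 12)                    # 19,200,000 -> roughly $19M a year
```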
Drivers who pay the fees will avoid having a charge of driving without insurance on their permanent record.
When the first citations will be issued remains unclear. Gatso executives were unavailable for an interview.
The purpose of the Uninsured Vehicle Enforcement Diversion Program, approved by the state Legislature in 2016, is to reduce the high number of uninsured motorists in Oklahoma. A 2015 Pew Charitable Trusts survey found that 26 percent of all drivers in the state are uninsured – the highest rate in the nation – which can push up insurance premiums and increase hit-and-run accidents.
But another incentive underlies the program. It will be overseen by the District Attorneys Council rather than law enforcement, and the state’s 27 district attorneys’ offices are expected to receive millions of dollars in citation revenue a year, although no estimates were provided. District attorneys have complained that their revenue sources are diminishing because of state budget cuts and the drop in bounced-check fines.
When you give a genetic testing company your DNA to learn more about your ancestral history, you could also be handing it over to law enforcement.
WJAX reports companies such as 23andMe and Ancestry would be willing to give police access to your DNA if they have a warrant.
Even more, police could also obtain DNA from a family member to make a “familial match.”
Although it is possible for police to grab DNA samples from the sites, that information is sometimes difficult to find for those simply looking to learn about their family tree.
I know that the headline sounds absolutely crazy, but this is actually a true story. A Silicon Valley executive named Anthony Levandowski has already filed paperwork with the IRS for the nonprofit corporation that is going to run this new religion. Officially, this new faith will be known as “Way Of The Future”, and you can visit the official website right here. Of course nutjobs are creating “new religions” all the time, but in this case Levandowski is a very highly respected tech executive, and his new religion is even getting coverage from Wired magazine…
The new religion of artificial intelligence is called Way of the Future. It represents an unlikely next act for the Silicon Valley robotics wunderkind at the center of a high-stakes legal battle between Uber and Waymo, Alphabet’s autonomous-vehicle company. Papers filed with the Internal Revenue Service in May name Levandowski as the leader (or “Dean”) of the new religion, as well as CEO of the nonprofit corporation formed to run it.
So what will adherents of this new faith actually believe?
To me, it sounds like a weird mix of atheism and radical transhumanism. The following comes from Way of the Future’s official website…
We believe in science (the universe came into existence 13.7 billion years ago and if you can’t re-create/test something it doesn’t exist). There is no such thing as “supernatural” powers. Extraordinary claims require extraordinary evidence.
We believe in progress (once you have a working version of something, you can improve on it and keep making it better). Change is good, even if a bit scary sometimes. When we see something better, we just change to that. The bigger the change the bigger the justification needed.
We believe the creation of “super intelligence” is inevitable (mainly because after we re-create it, we will be able to tune it, manufacture it and scale it). We don’t think that there are ways to actually stop this from happening (nor should we want to) and that this feeling of we must stop this is rooted in 21st century anthropomorphism (similar to humans thinking the sun rotated around the earth in the “not so distant” past).
But even though Way of the Future does not embrace the “supernatural”, they do believe in a “God”.
In this new religion, the worship of a “Godhead” that will be created using artificial intelligence will be actively encouraged…
The documents state that WOTF’s activities will focus on “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.” That includes funding research to help create the divine AI itself. The religion will seek to build working relationships with AI industry leaders and create a membership through community outreach, initially targeting AI professionals and “laypersons who are interested in the worship of a Godhead based on AI.” The filings also say that the church “plans to conduct workshops and educational programs throughout the San Francisco/Bay Area beginning this year.”
So how “powerful” will this newly created “God” actually be?
Well, Levandowski says that he envisions creating an artificially intelligent being that will literally be “a billion times smarter than the smartest human”…
“What is going to be created will effectively be a god,” he said. “It’s not a god in the sense that it makes lightning or causes hurricanes. But if there is something a billion times smarter than the smartest human, what else are you going to call it?”
He added, “I would love for the machine to see us as its beloved elders that it respects and takes care of. We would want this intelligence to say, ‘Humans should still have rights, even though I’m in charge.’”
But what if this “super-intelligence” gets outside of our control and turns on us?
I am not sure that Levandowski has an answer for that.
Other transhumanists also believe that artificial intelligence will grow at an exponential rate, but instead of AI ruling over us, they see a coming merger between humanity and this new super intelligence. In fact, world-famous transhumanist Ray Kurzweil believes that this will enable us to “become essentially god-like in our powers”…
Kurzweil and his followers believe that a crucial turning point will be reached around the year 2030, when information technology achieves ‘genuine’ intelligence, at the same time as biotechnology enables a seamless union between us and this super-smart new technological environment. Ultimately the human-machine mind will become free to roam a universe of its own creation, uploading itself at will on to a “suitably powerful computational substrate”. We will become essentially god-like in our powers.
And prominent transhumanist Mark Pesce takes things even further. He is absolutely convinced that rapidly advancing technology will allow ordinary humans “to become as gods”…
“Men die, planets die, even stars die. We know all this. Because we know it, we seek something more—a transcendence of transience, translation to incorruptible form. An escape if you will, a stop to the wheel. We seek, therefore, to bless ourselves with perfect knowledge and perfect will; To become as gods, take the universe in hand, and transform it in our image—for our own delight. As it is on Earth, so it shall be in the heavens. The inevitable result of incredible improbability, the arrow of evolution is lifting us into the transhuman – an apotheosis to reason, salvation – attained by good works.”
Throughout human history, there has always been a desire to create our own gods or to become our own gods.
But no matter how hard these transhumanists try to run from death, it will eventually find them anyway, and at that point all of their questions about who God really is will be answered once and for all.
Michael Snyder is a Republican candidate for Congress in Idaho’s First Congressional District, and you can learn how you can get involved in the campaign on his official website. His new book entitled “Living A Life That Really Matters” is available in paperback and for the Kindle on Amazon.com.
My personal Facebook account, which has the maximum 5,000 friends and an additional 5,000+ followers, has been blocked from posting for three days. My page hasn’t been blocked yet, but we’ll see; I shared the article there, too.
The reason given for this ban by the little pop-up boxes when I logged on just now was that a couple months ago I had shared an article about admitted false flag operations perpetrated by governments around the world. I don’t know what happened that made Facebook’s system decide to crack down on me now all of a sudden, but I do know I’ve been a bit naughtier than usual in my last couple of articles.
The article I got the banhammer for sharing is titled For Those Who Don’t ‘Believe’ In ‘Conspiracies’ Here Are 58 Admitted False Flag Attacks. According to the site’s ticker it has 50,667 shares as of this writing. It’s laden with hyperlinks for further reading, and lists only instances of false flag operations that insiders are on the record as having admitted to themselves. It’s a good compilation of important information. People should be allowed to share it.
The notifications say I can be permanently banned if I continue posting that sort of material. I’ve had that account since 2007.
So. Who wants to see my Barbra Streisand impression?
In a corporatist system of government, corporate censorship is state censorship. When there’s no meaningful space between corporate power and government power, it doesn’t make much difference whether the guy silencing your dissent is Mark Zuckerberg or Jeff Sessions. America most definitely has such a system.
If they’re going to get us locked down and propagandized into their vapid brain boxes, this will be how they’ll do it. Not by government censorship, but by corporate censorship. Government can’t make an overt attempt to stop a dissenting voice from speaking, but the corporations who own the venue of their speech can.
In a recent Senate Judiciary Committee hearing, plutocrat-sponsored senators spoke with top legal and security officials for Facebook, Twitter and Google in a very disturbing way about the need to silence dissenting voices.
Democratic Senator Mazie Hirono of Hawaii demanded that the companies adopt a “mission statement” declaring their commitment “to prevent the fomenting of discord.”
Former FBI agent Clint Watts kicked it up even further, saying, “Civil wars don’t start with gunshots, they start with words. America’s war with itself has already begun. We all must act now on the social media battlefield to quell information rebellions that can quickly lead to violent confrontations and easily transform us into the Divided States of America.”
Under the guise of “education” and “transparency,” congressional leaders and their cronies are plotting to create a totalitarian-style federal data regime that would ultimately seek to compile federal dossiers containing detailed private information on every American. What remains of privacy and parental rights are both in grave danger.
The threat comes in the form of three particular pieces of legislation: the College Transparency Act (CTA), or H.R. 2434, the Foundations for Evidence-Based Policymaking Act (FEPA), or H.R. 4174, and the Student Privacy Protection Act, or H.R. 3157. Among other policies, this trio of bills would remove prohibitions on collecting and sharing personally identifiable data on students.
Taken together, the legislation — some of which could be voted on in the coming days — will lay the groundwork for the federal government to know virtually everything on everyone. Student data, for example, will give way to workforce data, which can be combined with tax data, law enforcement information, and much more. The data would then be shared across the federal government, creating a seamless data regime that advocates hope will facilitate and enable expanded central planning schemes.
In a letter to lawmakers signed by a broad coalition of individuals and organizations that opposes the plan to allow Big Brother to have access to everyone’s data, signatories noted that the massive amounts of information would obviously become a treasure trove for hackers and criminals. But more importantly, even if the data could be kept safe from private-sector criminals, “there are certain lines a free society should not cross.” And this is one of those, the letter argues.
The Texas National Guard last year spent more than $373,000 to install controversial cellphone eavesdropping devices in secretive surveillance aircraft.
Maryland-based Digital Receiver Technology Inc., or DRT, installed two of its DRT 1301C “portable receiver systems” in National Guard aircraft in partnership with the Drug Enforcement Administration, according to a contract between the Texas National Guard and the company. The contract states that the dirt boxes, as they’re often called after the company’s acronym, are for “investigative case analytical support” in counternarcotics operations and were purchased using state drug-asset forfeiture money.
Dirt boxes mimic cellphone towers by tricking every smartphone within a geographic area of up to one-third of a mile to connect with the technology, usually without cellphone users or telecom companies ever knowing about it. Also known as cell-site simulators, the devices can be used from land or air and are capable of intercepting the user’s location, phone numbers dialed, text messages and photos as well as recording or listening to phone calls.
Privacy and civil liberties advocates have called the use of dirt boxes a “digital dragnet,” because it’s nearly impossible for the government to avoid intercepting personal information from innocent cellphone users when pursuing investigative targets.
According to the contract documents obtained by the Observer, the eavesdropping devices were installed in two RC-26 surveillance planes used for counternarcotics operations. At one time, the RC-26s reportedly operated under a front company called Air Cerberus, but have since converted to military registrations, which generally mask their flight routes and unique tail numbers.
Police in the US state of Delaware are poised to deploy “smart” cameras in cruisers to help authorities detect a vehicle carrying a fugitive, missing child or straying senior.
The video feeds will be analyzed using artificial intelligence to identify vehicles by license plate or other features and “give an extra set of eyes” to officers on patrol, says David Hinojosa of Coban Technologies, the company providing the equipment.
“We are helping officers keep their focus on their jobs,” said Hinojosa, who touts the new technology as a “dashcam on steroids.”
The program is part of a growing trend to use vision-based AI to thwart crime and improve public safety, a trend which has stirred concerns among privacy and civil liberties activists who fear the technology could lead to secret “profiling” and misuse of data.
US-based startup Deep Science is using the same technology to help retail stores detect in real time if an armed robbery is in progress, by identifying guns or masked assailants.
Deep Science has pilot projects with US retailers, enabling automatic alerts in the case of robberies, fire or other threats.
The technology can monitor for threats more efficiently and at a lower cost than human security guards, according to Deep Science co-founder Sean Huver, a former engineer for DARPA, the Pentagon’s long-term research arm.
“A common problem is that security guards get bored,” he said.
Until recently, most predictive analytics relied on inputting numbers and other data to interpret trends. But advances in visual recognition are now being used to detect firearms, specific vehicles or individuals to help law enforcement and private security.
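The alerting logic described above can be pictured with a minimal sketch. Everything here is hypothetical and invented for illustration: the detection format, the label names, and the confidence threshold are assumptions, not Deep Science's actual (proprietary) pipeline. The idea is simply that an upstream vision model emits labeled detections per video frame, and a filter decides which ones warrant an automatic alert.

```python
# Toy sketch of threat alerting over a vision model's per-frame output.
# Labels, detection format, and the threshold are all assumptions.

ALERT_LABELS = {"firearm", "masked_person"}
CONFIDENCE_THRESHOLD = 0.85  # assumed tunable cutoff

def alerts_for_frame(detections):
    """Return the alert-worthy detections from one video frame.

    `detections` is a list of dicts like
    {"label": "firearm", "confidence": 0.93}.
    """
    return [
        d for d in detections
        if d["label"] in ALERT_LABELS
        and d["confidence"] >= CONFIDENCE_THRESHOLD
    ]

frame = [
    {"label": "person", "confidence": 0.99},
    {"label": "firearm", "confidence": 0.91},
    {"label": "masked_person", "confidence": 0.60},
]
print(alerts_for_frame(frame))  # only the high-confidence firearm triggers
```

Unlike a bored security guard, a filter like this never blinks, which is exactly Huver's pitch and exactly what worries the privacy advocates quoted above.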
– Recognize, interpret the environment –
Saurabh Jain is product manager for the computer graphics group Nvidia, which makes computer chips for such systems and which held a recent conference in Washington with its technology partners.
If you could, would you delete some memories you don’t like and replace them with much better ones? How far are you willing to go to tweak your brain if doing so could give you super abilities?
These are not just hypothetical questions to Bryan Johnson, the founder of the brain-hacking startup Kernel. He believes that in the next 15 years humans will be able to greatly expand their brain’s natural abilities, and he put $100 million of his own money to make that happen.
Speaking recently at the Web Summit in Lisbon, Johnson said that in the next two decades, researchers should develop tools for the brain that will allow us to “pose any question we wanted”:
“For example, could I have a perfect memory?” asked Johnson. “Could I delete my memories? Could I increase my rate of learning, could I have brain to brain communication? Imagine a scenario where I say ‘I want to know what it’s like to be a cowboy in the American west in the 1800s?’ and someone creates that experience mentally. I’m able to take that and purchase that from that person and experience that.”
The current focus for Kernel, called by Johnson a “human intelligence (HI) company,” will be on developing new tech to treat neurological diseases.
They want to build a tiny implantable chip, called a “neuroprosthetic,” that will help people suffering from damage caused by strokes, concussions or Alzheimer’s disease. But Kernel’s ultimate objective looks towards the future, to continue developing the chip for the purpose of the brain’s “cognitive enhancement.”
The company’s mission statement describes their goals this way:
“To further explore our own human boundaries, a wave of new technologies needs to emerge that can access, read, and write from the most powerful tool we have — the human brain.”
The FBI is offering the world a dangerous example by illegally breaking into computers beyond its jurisdiction, opening the US to widespread hacking attacks from outside its borders.
The Federal Bureau of Investigation, which hacks computers all over the world in its never-ending stated quest to eradicate child pornography, finds its clumsy methods backfiring on a regular basis when it comes to countries with which the US has unnecessarily poor relations, including Russia, Iran and China, according to the Daily Beast.
When the FBI conducted its 2015 Operation Pacifier — aimed at tracking visitors to the child porn website ‘the Playpen’ — the agency deployed malware on the website that spread to computers around the world. The FBI then blindly broke into any and every infected computer it could identify, without knowing its country of origin or whether or not it was suspected of any crime.
The FBI reportedly hacked over 8,000 computers in 120 countries, according to the Daily Beast, severely overstepping the boundaries of the agency’s US-only jurisdiction. Aside from damaging diplomatic ties, the actions raised the likelihood of those affected countries hacking computers on US soil in return, experts warn.
According to Scarlet Kim, legal officer at UK-based activist group Privacy International, which has closely followed the FBI’s inept and arguably illegal global hacking operation, the agency’s actions are “essentially opening the door for other countries to unilaterally hack devices located in the US in the law-enforcement context.”
Hacking computers may provide the FBI with personal information ranging from IP addresses to personal files. What’s worse, the agency’s clumsy malware attack does not tell a victim that it is part of a criminal investigation. A victim of the FBI hack will simply see the malware and its connection to a US government agency — and anyone would assume that it is software being used for the purpose of spying.
“Those risks are especially potent in the hacking context because the identity of the attacker and the purpose of the hack may not be immediately clear,” Kim said.
The FBI annoyed Russian networks and security experts in 2002, after the agency hacked several Russian computers to gather evidence on a cybercrime case.
“We conducted the hack and collection and notified the Russian authorities; only to be notified that arrest warrants had been issued for the agents involved for hacking a Russian computer system,” said E.J. Hilbert, a former FBI special agent who worked on that case.
According to Ahmed Ghappour of the Boston University School of Law, international hacking is not yet legally codified. There are no internationally accepted norms of conduct, and therefore the FBI’s actions create a dangerous precedent.
“Without the articulation of specific norms on when, how, and who law-enforcement actors should be permitted to hack, cross-border cyberoperations that are attributed to US law enforcement may send unintended signals to other states,” he said.
Collin Anderson, a cybersecurity researcher, echoed the concerns:
“The true risk is how the FBI’s procedures and communications about their use of malware creates international norms that are adopted by countries where rule of law is weak,” he told the Daily Beast.
Copyright Information: This article was reprinted with permission from Sputniknews.com. Please contact the author directly for republishing information.
Take George Orwell’s “1984.” Now sprinkle in that episode of “Black Mirror” where characters live in a world in which every aspect of their lives is dominated by ratings.
That’s one way to think about the Social Credit System, a plan that the Chinese government will make mandatory for all its citizens by 2020.
It’s like a credit score system, but instead of just financial information, this one will also pull together a person’s political leanings, purchase history and even their social interactions to calculate their “trust score.”
Chinese officials say it’s a way to influence their citizens’ behavior to benefit society and move their country forward, but others think it’s just the latest step in the country’s long history of state surveillance.
Rachel Botsman has written about China’s Social Credit System in her book “Who Can You Trust? How Technology Brought Us Together and Why It Might Drive Us Apart.”
The World spoke to her about what the plan could look like in 2020.
How a person’s rating could be calculated:
The Social Credit System — I guess we would probably call it, like, a “National Trust Score” — will look at different dimensions of a person’s life. So things that you might expect, like whether you pay your bills on time or your mortgage. But also your purchasing patterns, things that you say on social media and whether those things conform with the government line. Where it gets, I think, very 1984, is it will look at the patterns and the behaviors of your friends and your social connections as well.
The kind of behavior that could bring a person’s score down:
Well I think there’s behavior that you’d expect — if you make a fraudulent payment or something like that — but then there are things that are more subtle. For example, if you buy work shoes or [diapers], you could be seen as a responsible citizen and your score might go up. But if you’re buying lots of video games your score will maybe go down, because people would think that you’re lazy. If you happen to post something on Tiananmen Square, that’s likely to negatively impact your score. This goes beyond the way we think about traditional credit scores, and really gets into your character and behaviors in a way that is quite frightening.
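Botsman's description can be made concrete with a toy model. Every weight, category, and number below is invented purely for illustration; the real system's inputs and scoring are not public. What the sketch captures is the structure she describes: purchases, bill payment, flagged speech, and even your friends' scores all feed one number.

```python
# Hypothetical toy model of a "trust score" of the kind Botsman describes.
# All weights and categories here are invented for illustration only.

BASE_SCORE = 600

PURCHASE_WEIGHTS = {
    "work_shoes": +5,    # read as "responsible citizen"
    "diapers": +5,
    "video_games": -10,  # read as "lazy"
}

def trust_score(purchases, bills_paid_on_time, flagged_posts, friend_scores):
    score = BASE_SCORE
    # Purchasing patterns nudge the score up or down.
    score += sum(PURCHASE_WEIGHTS.get(item, 0) for item in purchases)
    # Financial reliability, as in a traditional credit score.
    score += 20 if bills_paid_on_time else -40
    # Penalty per post the system flags (e.g. mentioning Tiananmen Square).
    score -= 50 * flagged_posts
    # The very-1984 part: your network's average score drags on yours.
    if friend_scores:
        avg = sum(friend_scores) / len(friend_scores)
        score += (avg - BASE_SCORE) * 0.1
    return round(score)

print(trust_score(["diapers", "video_games"], True, 1, [580, 620]))
```

The friend-score term is what distinguishes this from a traditional credit score: your number can fall because of what the people around you do, which is precisely the chilling effect critics point to.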
Last weekend, in the hours after a deadly Texas church shooting, Google search promoted false reports about the suspect, suggesting that he was a radical communist affiliated with the antifa movement. The claims popped up in Google’s “Popular on Twitter” module, which made them prominently visible — although not the top results — in a search for the alleged killer’s name. Of course, this was just the latest instance of a long-standing problem, one of multiple similar missteps. As usual, Google promised to improve its search results, while the offending tweets disappeared. But telling Google to retrain its algorithms, as appropriate as that demand is, doesn’t solve the bigger issue: the search engine’s monopoly on truth.
Surveys suggest that, at least in theory, very few people unconditionally believe news from social media. But faith in search engines — a field long dominated by Google — appears consistently high. A 2017 Edelman survey found that 64 percent of respondents trusted search engines for news and information, a slight increase from the 61 percent who did in 2012, and notably more than the 57 percent who trusted traditional media. (Another 2012 survey, from Pew Research Center, found that 66 percent of people believed search engines were “fair and unbiased,” almost the same proportion that did in 2005.) Researcher danah boyd has suggested that media literacy training conflated doing independent research with using search engines. Instead of learning to evaluate sources, “[students] heard that Google was trustworthy and Wikipedia was not.”
Google encourages this perception, as do competitors like Amazon and Apple — especially as their products depend more and more on virtual assistants. Though Google’s text-based search page is clearly a flawed system, at least it makes it clear that Google search functions as a directory for the larger internet — and at a more basic level, a useful tool for humans to master.
Google Assistant turns search into a trusted companion dispensing expert advice. The service has emphasized the idea that people shouldn’t have to learn special commands to “talk” to a computer, and demos of products like Google Home show off Assistant’s prowess at analyzing the context of simple spoken questions, then guessing exactly what users want. When bad information inevitably slips through, hearing it authoritatively spoken aloud is even more jarring than seeing it on a page.
The Equifax credit reporting agency, with the aid of thousands of human resource departments around the country, has assembled what may be the most powerful and thorough private database of Americans’ personal information ever created, containing 190 million employment and salary records covering more than one-third of U.S. adults.
Some of the information in the little-known database, created through an Equifax-owned company called The Work Number, is sold to debt collectors, financial service companies and other entities.
“It’s the biggest privacy breach in our time, and it’s legal and no one knows it’s going on,” said Robert Mather, who runs a small employment background company named Pre-Employ.com. “It’s like a secret CIA.”
Despite all the information Americans now share on social media and websites, and all the data we know companies collect on us, one piece of information is still sacred to most people: their salaries. After all, who would post their salary as a status update on Facebook or in a tweet?
But salary information is also for sale by Equifax through The Work Number. Its database is so detailed that it contains week-by-week paystub information dating back years for many individuals, as well as other kinds of human resources-related information, such as health care provider, whether someone has dental insurance and if they’ve ever filed an unemployment claim. In 2009, Equifax said the data covered 30 percent of the U.S. working population, and it now says The Work Number is adding 12 million records annually.
How does Equifax obtain this sensitive and secret information? With the willing aid of thousands of U.S. businesses, including many of the Fortune 500. Government agencies — representing 85 percent of the federal civilian population, including workers at the Department of Defense, according to Equifax — and schools also work with The Work Number. Many of them let Equifax tap directly into their data so the credit bureau can always have the latest employment information. In fact, these organizations actually pay Equifax for the privilege of giving away their employees’ personal information.
Equifax turns around and sells some of this data to third parties, including debt collectors and other financial services companies.
Are the days of the free and open Internet numbered? The Internet is certainly used for all sorts of horrible things, but it has also allowed ordinary people to communicate on a mass scale that would have been unimaginable decades ago. In the old days, if you wanted to reach large audiences of people with your information you always had to go through corporate gatekeepers. But today, anyone with an Internet connection can literally broadcast whatever they want to say to the whole world. Personally, my wife and I have always been amazed at how many people we are able to touch all over the planet from our little home in the mountains. Over the past seven years our websites have been viewed more than 100 million times, and we receive emails about our work from people all over the globe.
Unfortunately, major changes may soon be coming to the Internet. The election of Donald Trump really angered the elite, and they are blaming the power of the Internet for his victory. They insist that something must be done “for the good of democracy”.
For example, in an opinion piece for the Guardian, U.S. Senator Al Franken proposed that it is time for the U.S. government to step in because Google, Facebook and Twitter have failed to prevent the spread of propaganda, misinformation, and hate speech…
As lawmakers grapple with the revelations regarding Russia’s manipulation of social media during the 2016 election, many are shocked to learn the outsized role that the major tech companies play in so many aspects of our lives. Not only do they guide what we see, read, and buy on a regular basis, but their dominance – specifically in the market of information – now requires that we consider their role in the integrity of our democracy.
Last week’s hearings demonstrated that these companies may not be up to the challenge that they’ve created for themselves. In some instances, it seems that they’ve failed to take commonsense precautions to prevent the spread of propaganda, misinformation, and hate speech.
Those are very ominous words.
So precisely what would constitute “propaganda”, “misinformation” or “hate speech”?
When you start regulating speech, you cross a very dangerous line. There is a reason why our founders guaranteed us freedom of speech in the Bill of Rights, because if we don’t have the freedom to say what we want then what do we really have left?
During the presidential election, there was a lot of talk about Hillary Clinton’s health. The mainstream media insisted that she was just fine, and they accused those of us in the alternative media who were questioning her health of engaging in “propaganda” and “misinformation”. Well, it turns out that we now know that Clinton’s health was so bad that Donna Brazile was actually considering replacing her as the nominee, and so it was actually the mainstream media that was putting out “propaganda” and “misinformation”.
Any effort to institute some sort of “truth police” would take us significantly down the road to totalitarianism, but apparently that is what Franken wants. In fact, he is openly suggesting that it is time for government regulators to step in…
Instead of simply trusting the big tech companies to police how their services are being used and abused, Franken suggested that regulators need to step in. Lawmakers should take a closer look at the influence technology plays in the everyday lives of Americans by conducting “vigorous oversight in the form of investigations and hearings to fully understand current practices and the potential for harm,” the Minnesota senator said.
“I’m hopeful that recent events will encourage regulators, as well as a broader contingent of my colleagues — on both sides of the aisle — to give this issue the attention it deserves,” he said.
So once government regulators begin regulating speech on the Internet, where will it end?
Will everything that we do on the Internet have to be evaluated for “truthiness” before it is allowed to be posted?
And who decides what the “truth” actually is?
I am a big believer in the marketplace of ideas. I have always been convinced that if everyone is allowed to openly share what they believe, the truth will win in the end.
Of course the elite are scared of the free exchange of ideas, because that gives the people way too much control over their own destiny. Prior to the Internet age, they were always in control of the flow of information in our society, but now things have changed dramatically.
They desperately want to get control of the Internet, because they want things to go back to the way that they used to be. But we can’t allow that to happen, and so we must greatly resist any attempts to regulate speech on the Internet.
Copyright Information: This article was reprinted with permission from Endoftheamericandream.com. Please contact the author directly for republishing information.
Despite the unfortunate fact that pornography has largely shaped our ideas and perceptions about sex, and might have negative biological consequences, porn websites still rank among the top in the world, and, as we now know thanks to whistleblowers like Edward Snowden, William Binney, and Thomas Drake, we’re not the only ones watching.
In case you didn’t know, Edward Snowden is the former intelligence contractor who leaked information about the NSA’s mass surveillance program, one of multiple clandestine black budget operations that involves a global network of intelligence agencies who can access your phone, computer, email, cameras, and much more.
“A child born today will have no conception of privacy at all,” says Snowden. “They’ll never know what it means to have a private moment to themselves, an unrecorded, unanalyzed thought. And that’s a problem because privacy matters, privacy is what allows us to determine who we are and who we want to be.”
So, what’s the justification for these mass surveillance programs? A heightened national security state. In the wake of global terrorism and other national security threats, the global elite has made our surveillance out to seem a necessity, a matter of life and death even, and many of us are buying into this view. But that’s all changing thanks to people like Edward Snowden, and the alternative media networks daring to report that many of these so-called threats to national security are actually manufactured by the same ones proposing to solve them.
This has become more clear every single year since 9/11, a classic example of false flag terrorism, which is one method used to justify the heightened national security state and mass surveillance programs. But these tactics have been around for a long time, and people have been calling them out for just as long. As Mark Twain once wrote:
The “PreCheck” program is billed as a convenient service to allow U.S. travelers to “speed through security” at airports. However, the latest proposal released by the Transportation Security Administration (TSA) reveals the Department of Homeland Security’s greater underlying plan to collect face images and iris scans on a nationwide scale. DHS’s programs will become a massive violation of privacy that could serve as a gateway to the collection of biometric data to identify and track every traveler at every airport and border crossing in the country.
Currently TSA collects fingerprints as part of its application process for people who want to apply for PreCheck. So far, TSA hasn’t used those prints for anything besides the mandatory background check that’s part of the process. But this summer, TSA ran a pilot program at Atlanta’s Hartsfield-Jackson Airport and at Denver International Airport that used those prints and a contactless fingerprint reader to verify the identity of PreCheck-approved travelers at security checkpoints at both airports. Now TSA wants to roll out this program to airports across the country and expand it to encompass face recognition, iris scans, and other biometrics as well.
From Pilot Program to National Policy
While this latest plan is limited to the more than 5 million Americans who have chosen to apply for PreCheck, it appears to be part of a broader push within the Department of Homeland Security (DHS) to expand its collection and use of biometrics throughout its sub-agencies. For example, in pilot programs in Georgia and Arizona last year, Customs and Border Protection (CBP) used face recognition to capture pictures of travelers boarding a flight out of the country and walking across a U.S. land border and compared those pictures to previously recorded photos from passports, visas, and “other DHS encounters.” In the Privacy Impact Assessments (PIAs) for those pilot programs, CBP said that, although it would collect face recognition images of all travelers, it would delete any data associated with U.S. citizens. But what began as DHS’s biometric travel screening of foreign citizens morphed, without congressional authorization, into screening of U.S. citizens, too. Now the agency plans to roll out the program to other border crossings, and it says it will retain photos of U.S. citizens and lawful permanent residents for two weeks and information about their travel for 15 years. It retains data on “non-immigrant aliens” for 75 years.
CBP has stated in PIAs that these biometric programs would be limited to international flights. However, over the summer, we learned CBP wants to vastly expand its program to cover domestic flights as well. It wants to create a “biometric” pathway that would use face recognition to track all travelers—including U.S. citizens—through airports from check-in, through security, into airport lounges, and onto flights. And it wants to partner with commercial airlines and airports to do just that.
Congress seems poised to provide both TSA and CBP with the statutory authority to support these plans. As we noted in earlier blog posts, the “Building America’s Trust” Act would require DHS to collect biometric information from all people who exit the U.S., including U.S. and foreign citizens. And the TSA Modernization Act, introduced earlier this fall, includes a provision that would allow the agencies to deploy “biometric technology at checkpoints, screening lanes, bag drop and boarding areas, and other areas where such deployment would enhance security and facilitate passenger movement.” The Senate Commerce Committee approved the TSA bill in October.
DHS Data in the Hands of Third Parties
These agencies aren’t just collecting biometrics for their own use; they are also sharing them with other agencies like the FBI and with “private partners” to be used in ways that should concern travelers. For example, TSA’s PreCheck program has already expanded outside the airport context. The vendor for PreCheck, a company called Idemia (formerly MorphoTrust), now offers expedited entry for PreCheck-approved travelers at concerts and stadiums across the country. Idemia says it will equip stadiums with biometric-based technology, not just for security, but also “to assist in fan experience.” Adding face recognition would allow Idemia to track fans as they move throughout the stadium, just as another company, NEC, is already doing at a professional soccer stadium in Medellín, Colombia and at an LPGA championship event in California earlier this year.
CBP is also exchanging our data with private companies. As part of CBP’s “Traveler Verification Service,” it will partner with commercial airlines and airport authorities to get access to the facial images of travelers that those non-government partners collect “as part of their business processes.” These partners can then access CBP’s system to verify travelers as part of the airplane boarding process, potentially doing away with boarding passes altogether. As we saw earlier this year, several airlines are already planning to implement their own face recognition services to check bags, and some, like Jet Blue, are already partnering with CBP to implement face recognition for airplane boarding.
The Threat to Privacy and Our Freedom to Travel
We cannot overstate how big a change this will be in how the federal government regulates and tracks our movements, or the huge impact it will have on privacy and on our constitutional “right to travel” and right to anonymous association with others. As recently as May 2017, CBP recognized that its power to verify the identification of travelers was limited to those entering or leaving the country. But the TSA Modernization Act would allow CBP and TSA to collect any biometrics they want from all travelers—international and domestic—wherever they are in the airport. That’s a big change, and one we shouldn’t take lightly. Private implementation of face recognition at airports only makes this more ominous.
All Americans should be concerned about these proposals because the data collected—your fingerprint, the image of your face, and the scan of your iris—will be stored in FBI and DHS databases and will be searched again and again for immigration, law enforcement, and intelligence checks, including checks against latent prints associated with unsolved crimes.
That creates a risk that individuals will be implicated for crimes and immigration violations they didn’t commit. These systems are notoriously inaccurate and contain out-of-date information, which poses a risk to all Americans. But because immigrants and people of color are disproportionately represented in criminal and immigration databases, and because face recognition systems are less accurate at identifying people of color, women, and young people, the weight of these inaccuracies will fall disproportionately on them.
This vast data collection will also create a huge security risk. As we saw with the 2015 Office of Personnel Management data breach and the 2017 Equifax breach, no government agency or private company is capable of fully protecting your private and sensitive information. But losing your social security or credit card numbers to fraud is nothing compared to losing your biometrics. While you can change those numbers, you can’t easily change your face.
Copyright Information: This article was reprinted with permission from EFF.ORG. Please contact the author directly for republishing information.
WikiLeaks began publishing the source code of alleged CIA hacking tools Thursday in a new series dubbed “Vault 8.”
The source code, according to a press release from the anti-secrecy organization, is intended to “enable investigative journalists, forensic experts and the general public to better identify and understand covert CIA infrastructure components.”
“Source code published in this series contains software designed to run on servers controlled by the CIA,” WikiLeaks writes, stressing that the material does not contain 0-day or undisclosed vulnerabilities that could be utilized by others.
Hive, the first tool featured in Vault 8, aids the agency in controlling malware installed on target devices.
“Even the most sophisticated malware implant on a target computer is useless if there is no way for it to communicate with its operators in a secure manner that does not draw attention,” WikiLeaks writes. “Using Hive even if an implant is discovered on a target computer, attributing it to the CIA is difficult by just looking at the communication of the malware with other servers on the internet.”
“Hive provides a covert communications platform for a whole range of CIA malware to send exfiltrated information to CIA servers and to receive new instructions from operators at the CIA.”
Details on Hive were first revealed last April as part of WikiLeaks’ release of CIA hacking tool documentation known as Vault 7.
While the source code for Hive is likely to do little more than assist forensics analysts, the code for more powerful tools in Vault 7, if released, could potentially enable malicious attackers.
These latest leaks, likely from the same source as the Vault 7 files, are believed to have originated from a CIA employee or contractor.
In April of last year it was learned that a joint investigation by the FBI and CIA had begun looking into hundreds of agency members who would have had access to the material.
That same month CIA Director Mike Pompeo criticized WikiLeaks and its founder Julian Assange during his first public speech as head of the agency.
“Assange and his ilk make common cause with dictators today,” Pompeo said at the time. “Yes, they try unsuccessfully to cloak themselves and their actions in the language of liberty and privacy; in reality, however, they champion nothing but their own celebrity. Their currency is clickbait; their moral compass, nonexistent.”
Although then-presidential candidate Donald Trump repeatedly praised WikiLeaks during his campaign, reports claimed last April that Trump’s Justice Department had prepared charges seeking the arrest of WikiLeaks founder Julian Assange.
While Attorney General Jeff Sessions appeared to hint at the alleged charges during a press conference that month, Assange’s lawyer, Barry Pollack, stated that the Department of Justice has not attempted to make contact.
Copyright Information: This article was reprinted with permission from Infowars.com. Please contact the author directly for republishing information.
It sounds like a creepy science-fiction movie about a future dystopia, but it is very real and it is happening now. The federal government is funding a new scheme to build a “friendly social robot” that will be deployed at schools to collect data on children’s mental health, according to news reports and official documents. The new machines will add to the growing battery of federally funded Orwellian tools to gather every conceivable form of data on students for Big Brother.
The project is supposed to produce a “user-friendly” robot known as “EMAR,” or Ecological Momentary Assessment Robot, “that gathers teen mental health data in a public high school setting,” according to the federal National Science Foundation (NSF), which is funding the scheme to the tune of more than a million dollars. It is needed, NSF says, because “adolescents are very likely to have long-lasting relationships with robots in the future at work, in the classroom, and at home.”
“It [is] also needed especially since adolescents constitute a vulnerable population that is negatively affected by stress and mental health issues, and since there are well-established difficulties in gathering accurate, useful, mental health data from teens in their natural environment with digital surveys and experience sampling using static data collection tools including computers, tablets, and smart phones,” the official award document continues.
Success of the scheme, as defined by the NSF, will “contribute to the development of ubiquitous social robots that serve as tools for on-site, real time data collection.” Such tools, the grant documents continue, “would improve research methodology and facilitate evidence-based decisions in real time.” While the exact nature of the “decisions” that supposedly need to be made was not specified, the robots are supposed to serve as an “essential tool” to assess the “mental health” of children and develop “interventions” for them.
Brain-computer interfaces (BCI), such as those being developed by Elon Musk’s Neuralink startup, risk being hijacked by a rogue artificial intelligence, experts have warned—meaning a person’s thoughts, decisions and emotions could be manipulated against their will by AI.
The warning comes in a comment piece in the scientific journal Nature this week written by 27 neuroscientists, ethicists and machine intelligence engineers.
The researchers use the hypothetical example of a paralysed man participating in a brain-computer interface trial who doesn’t like the research team working with him. Artificial intelligence reading his thoughts could take his dislike as a command to harm the researchers, despite no direct command being given by the paralysed man.
The researchers write: “Technological developments mean that we are on a path to a world in which it will be possible to decode people’s mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions and decisions; where individuals can communicate with others simply by thinking; and where powerful computational systems linked directly to people’s brains facilitate their interactions with the world such that their mental and physical abilities are greatly enhanced.”
In order to protect against this, the group of researchers say there are four ethical priorities that need to be addressed: privacy and consent, agency and identity, augmentation, and bias.
As the ever-increasing surveillance state continues to consume the United States, many are left wondering how far authorities will go in their pursuit to end privacy altogether in the name of ostensible safety. As a recent case out of Windsor, Colorado illustrates—even children’s bathrooms are now considered fair game for the all-seeing eye of the state.
Some parents of students at Windsor Charter Academy are speaking up after the school announced its decision to put surveillance cameras in all of the high school’s bathrooms. Not only do the cameras obliterate students’ privacy but they also pave the way for rampant abuse by sexual predators.
According to FOX 31, Windsor Charter Academy executive director Rebecca Teeples said the installation of cameras improves safety for students while helping secure the building.
“We had surveillance cameras in our plans from the very start. It was part of the design of the new wing,” Teeples said. “We want to make sure our students are safe and secure.”
Apparently being safe means having someone record you while you’re most vulnerable.
Naturally, some parents are speaking up and demanding the school take these cameras down. One parent, Trevor Garrett, who has three children at the school said the cameras in the bathrooms violate students’ right to privacy—and he’s right.
“The first word that comes to mind is disgusting,” Garrett said. “I never thought it would be on anyone’s mind to put cameras in bathrooms anywhere.”
Garrett also brings up the potential for abuse, noting that students will be obviously exposed in front of these cameras.
“My gut reaction is, I’m a father, I want to protect my children and I’m going to protect any kids in here,” Garrett said. “I think when we sacrifice privacy for the sake of safety, it’s a very slippery slope.
“At what point does it cross that threshold and violate rights? I think [in this situation] this violates rights.”
“It makes me really uncomfortable going in there,” said Garrett’s middle school student Kaylee Garrett.
Teeples is defending the placement of the cameras in the high school bathrooms because the stalls go from the floor to the ceiling. However, as Garrett pointed out, what if a child is exposed outside of the stall, while changing clothes, for example?
As FOX 31 reports, Teeples said only school administration and the technology team have access to the footage.
Noting the sheer senseless nature of the cameras in bathrooms, Teeples said the cameras aren’t monitored constantly and footage will only be viewed if there is a need to see it.
Statistics show, however, that there will undoubtedly be people seeking that footage out. A recent investigation by USA Today illustrates that education officials put children in harm’s way by covering up evidence of abuse, keeping allegations secret and making it easy for abusive teachers to find jobs elsewhere.
As a result, schoolchildren across the nation continue to be beaten, raped and harassed by their teachers while government officials at every level stand by and do nothing. The investigation uncovered more than 100 teachers who lost their licenses but are still working with children or young adults today.
According to the study, at every level, institutions and officials charged with ensuring the safety of children have failed. Lawmakers have ignored a federal mandate to add safeguards at the state level. Unions have resisted reforms. And administrators have pursued quiet settlements rather than public discipline.
Given the propensity of sexual predators to seek out jobs at schools and of school officials to look the other way, installing cameras in bathrooms seems like the worst idea in the world. Sadly, however, these cameras will likely not be removed, as much of society is now trained to surrender its freedom and privacy for the facade of safety—apparently, even if that means allowing the potential for sexual predators to spy on their children in bathrooms.
Copyright Information: This article was reprinted with permission from Thefreethoughtsproject.com. Please contact the author directly for republishing information.
Facebook is asking users to send the company their nude photos in an effort to tackle revenge porn, in an attempt to give some control back to victims of this type of abuse.
Individuals who have shared intimate, nude or sexual images with partners and are worried that the partner (or ex-partner) might distribute them without their consent can use Messenger to send the images to be “hashed”. This means that the company converts the image into a unique digital fingerprint that can be used to identify and block any attempts to re-upload that same image.
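Facebook has not published its exact algorithm, but the “hashing” flow described above can be sketched in a few lines. The sketch below uses a cryptographic hash purely as a stand-in; real photo-matching systems rely on perceptual hashes (such as Microsoft’s PhotoDNA) that still match after resizing or re-encoding, and the function names here are illustrative assumptions, not Facebook’s API.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Derive a fixed-length digest from image data.

    A cryptographic hash only matches byte-identical copies; production
    systems use perceptual hashes that survive re-encoding. The digest,
    not the image itself, is what gets stored.
    """
    return hashlib.sha256(image_bytes).hexdigest()

# Block list of digests reported by victims.
blocked_hashes = set()

def report_image(image_bytes: bytes) -> None:
    """Hash a victim-submitted image and add it to the block list."""
    blocked_hashes.add(fingerprint(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    """Reject any upload whose digest matches a reported image."""
    return fingerprint(image_bytes) not in blocked_hashes

report_image(b"intimate-photo-bytes")
print(allow_upload(b"intimate-photo-bytes"))  # False: re-upload blocked
print(allow_upload(b"unrelated-photo"))       # True: not on the block list
```

The key design point is that once the digest is stored, the original image need not be retained, which is why the scheme is pitched as privacy-preserving.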
Facebook is piloting the technology in Australia in partnership with a government agency headed up by the e-safety commissioner, Julia Inman Grant, who told ABC it would allow victims of “image-based abuse” to take action before pictures were posted to Facebook, Instagram or Messenger.
“We see many scenarios where maybe photos or videos were taken consensually at one point, but there was not any sort of consent to send the images or videos more broadly,” she told the Australian broadcaster.
Carrie Goldberg, a New York-based lawyer who specializes in sexual privacy, said: “We are delighted that Facebook is helping solve this problem – one faced not only by victims of actual revenge porn but also individuals with worries of imminently becoming victims.
“With its billions of users, Facebook is one place where many offenders aggress because they can maximize the harm by broadcasting the nonconsensual porn to those closest to the victim. So this is impactful.”
In the Australian pilot, users must first complete an online form on the e-safety commissioner’s website outlining their concerns. They will then be asked to send the pictures they are concerned about to themselves on Messenger while the e-safety commissioner’s office notifies Facebook of their submission. Once Facebook gets that notification, a community operations analyst will access the image and hash it to prevent future instances from being uploaded or shared.
Over the last few weeks, media around the world have been saturated with stories about how technology is destroying politics. In autocracies like China, the fear is of ultra-empowered Big Brother states, like that in George Orwell’s 1984. In democracies like the United States, the concern is that tech companies will continue to exacerbate political and social polarization by facilitating the spread of disinformation and creating ideological “filter bubbles,” leading to something resembling Aldous Huxley’s Brave New World.
In fact, by bringing about a convergence between democracy and dictatorship, new technologies render both of these dystopian visions impossible. But that doesn’t mean that there is nothing to fear.
Much of the coverage of the 19th National Congress of the Communist Party of China (CPC) focused on President Xi Jinping’s consolidation of power. He is, observers warn, creating an information-age dictatorship, in which the technologies that were once expected to bring freedom to China’s 1.4 billion citizens have instead enabled him to entrench his own authority. By providing the government with highly detailed information on the needs, feelings, and aspirations of ordinary Chinese, the Internet allows China’s leaders to preempt discontent. In other words, they now use Big Data, rather than brute force, to ensure stability.
And the data are big indeed. More than 170 million face-recognition surveillance cameras track every step citizens make. An artificial-intelligence-enhanced security system can spot criminal suspects as they cycle beside a lake or purchase dumplings from a street vendor, and immediately alert the police. Data from surveillance cameras feed into China’s “social credit” data bank, where the regime compiles thick files on its people’s creditworthiness, consumption patterns, and overall reliability.
The CPC is also using technology to manage its own ranks, having developed dozens of apps to communicate with party members. Meanwhile, it blocks some of the empowering features of technology: by forcing all tech companies to have their servers within China, it effectively “in-sources” censorship.
How can an employer make sure its remote workers aren’t slacking off? In the case of talent management company Crossover, the answer is to take photos of them every 10 minutes through their webcam.
The pictures are taken by Crossover’s productivity tool, WorkSmart, and combined with screenshots of their workstations along with other data – including app use and keystrokes – to come up with a “focus score” and an “intensity score” that can be used to assess the value of freelancers.
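Crossover has not disclosed how its scores are computed; the following is a purely hypothetical sketch of how raw monitoring signals might be folded into a single “focus score.” The signal names, weights, and 0–100 scale are all assumptions for illustration, not WorkSmart’s actual formula.

```python
def focus_score(keystrokes_per_min: float,
                app_switches_per_hour: float,
                active_minutes: float,
                interval_minutes: float = 60.0) -> float:
    """Fold hypothetical monitoring signals into one 0-100 score."""
    activity = min(keystrokes_per_min / 40.0, 1.0)        # typing rate, capped at 1
    presence = active_minutes / interval_minutes          # share of time at the keyboard
    distraction = min(app_switches_per_hour / 30.0, 1.0)  # frequent switching counts against
    score = 100 * (0.4 * activity + 0.4 * presence + 0.2 * (1 - distraction))
    return round(score, 1)

# A worker typing steadily, switching apps occasionally, active 50 of 60 minutes:
print(focus_score(keystrokes_per_min=35, app_switches_per_hour=12, active_minutes=50))  # 80.3
```

The point of the sketch is how crude such composites are: a single weighted sum of keystrokes and app switches says nothing about whether the work was any good.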
Today’s workplace surveillance software is a digital panopticon that began with email and phone monitoring but now includes keeping track of web-browsing patterns, text messages, screenshots, keystrokes, social media posts, private messaging apps like WhatsApp and even face-to-face interactions with co-workers.
“If you are a parent and you have a teenage son or daughter coming home late and not doing their homework you might wonder what they are doing. It’s the same as employees,” said Brad Miller, CEO of Awareness Technologies, which sells a package of employee monitoring tools under the brand Interguard.
Crossover’s Sanjeev Patni insists that workers get over the initial self-consciousness after a few days and accept the need for such monitoring as they do CCTV in shopping malls.
“The response is ‘OK, I’m being monitored, but if the company is paying for my time how does it matter if it’s recording what I’m doing? It’s only for my betterment,’” he said.
Such “betterment” apparently isn’t necessary for managers: they can pick and choose when to turn their cameras on.
The microchip is about the size of a grain of rice and usually inserted in the webbing between the thumb and forefinger using a needle the same thickness as used in body piercing.
It feels, says insertable technology expert Kayla Heffernan, like getting a drip.
Once the needle is removed the incision heals in a few days and the microchip remains, allowing the wearer to open doors with the brush of a hand – provided they only wish to access one particular place.
Commercially available insertable microchips are only large enough to hold one access code and a small amount of other information, so the days of replacing an entire wallet and keychain with a tiny computer under the skin are not yet upon us.
The future is coming, but it’s not in a rush.
Ten volunteers received a microchip at the Australian Centre for the Moving Image in Melbourne on Wednesday to mark the launch of Pause Fest, a technology and culture festival now in its eighth year.
Their chips were preloaded with a three-day pass to the festival and will be programmed to unlock the door to their home, gym, or workplace, or potentially to function as their public transport pass.
When the festival is held in four months’ time, the volunteers will take part in a panel with Heffernan to discuss whether they found the chips useful.
Heffernan has had one microchip between her thumb and forefinger for almost 18 months, which she uses to unlock her front door. She got another on the outer edge of her other hand last November to access her office at Melbourne University.
Three months ago, the World Socialist Web Site published its first exposé documenting Google’s blacklisting of the WSWS and other left-wing websites. It warned that Google’s actions were part of a sweeping campaign, coordinated with the US government, media and intelligence agencies, to censor the Internet.
The period since this initial exposure has seen this campaign develop with extraordinary speed, as the Democratic Party, working with major media outlets, uses unsubstantiated allegations of Russian “hacking” of the 2016 election to mount a drive to criminalize political opposition within the United States. What is involved is nothing less than the greatest attack on the First Amendment since the Second World War.
This campaign will reach a new milestone with the testimony by officials from Facebook, Twitter and Google before the Senate Judiciary Committee today and the House and Senate intelligence committees tomorrow on their plans to counter “extremist content and Russian disinformation online.”
Over the past three months, the top Democrats on the Senate and House intelligence committees, Mark Warner and Adam Schiff, in conjunction with the US intelligence agencies and the media, have concocted an absurd narrative that some $100,000 in Russian social media advertisements, mostly placed after November, helped swing the election in favor of Donald Trump.
Having forced the technology companies to compile lists of “Russia-linked” accounts, the lawmakers will now turn their focus on their real target: What they call “organic content,” or, to put it more plainly, political speech on the Internet.
According to the Hill, Warner and Schiff will press the social media companies to admit “that Russia created ‘organic content’…to divide and influence Americans.”
“Probably more important is, what was the content they were pushing out that was nonadvertising,” Schiff told the Hill this week. “I think that will probably dwarf what we’ve seen in the paid advertising.”
In prepared testimony obtained by news outlets Monday evening, Facebook expanded its list of “Russia-linked” online activity to include such “organic content,” declaring that tens of thousands of “inflammatory” posts by “fake” accounts connected to Russia reached 126 million US Facebook users.
This “divisive content” no doubt included reposts of articles by left-wing and oppositional news outlets, putting them firmly in the crosshairs of congressional investigators. As a New York Times report put it earlier this month, content “recorded, posted or written by Americans…ended up becoming grist for a network of Facebook pages linked to a shadowy Russian company that has carried out propaganda campaigns for the Kremlin.”
In another extraordinary development, on Friday, Dianne Feinstein, the ranking Democrat on the Senate Judiciary Committee, sent a letter to Twitter’s CEO demanding that the company hand over personally identifying information related to “organic content” posted by Twitter users. The letter specifically requests all “organic content posted by Russia-connected users and targeted to any part of the United States, regardless of whether the individual or entity violated any Twitter policy.” Its definition of “Russia-connected users” is extremely broad, including any “person or entity that may be connected in some way to Russia.”
The letter demands that for all “organic content described above, Twitter provide all subscriber information,” and “IP address information.” This means that the company is being asked to hand over full names, phone numbers, email addresses and IP addresses, which can be used to determine physical location.
The war on truth has reached a fever pitch as Google has made it their mission to annihilate the independent media. The ‘New Media,’ led by the likes of Infowars, Breitbart, Natural News and many other great independent sites, will have an uphill battle when it comes to getting their content in front of readers. Google has announced they will be doubling down on their ‘Orwellian’ practice of making stories disappear from their monopolistic search engine. Outlined in their Gestapo-like 160-page handbook, Google describes exactly how they plan to suppress any information they deem unfit for readers. Highlighted at the bottom of page 108, Google states:
● Pages that directly contradict well-established scientific or medical consensus for queries seeking scientific or medical information, unless the query indicates the user is seeking an alternative viewpoint.
● Pages that directly contradict well-established historical facts (e.g., unsubstantiated conspiracy theories), unless the query clearly indicates the user is seeking an alternative viewpoint.
These two guidelines provided by Google should set off alarm bells for anyone who has a thirst for knowledge or the audacity to examine both sides of a topic. Imagine only being able to see and hear the corporate narrative on such important issues as vaccines, 9/11, holistic health, the Kennedy assassination, climate change, banking alternatives, and many other important matters. And that is only scratching the surface.
This information from Google’s handbook was first uncovered by Melissa Dykes at Truth Stream Media who made a great video that eloquently breaks down the importance of this complex issue. I highly recommend everyone watch it and more importantly support their work.
The Chinese government has collected tens of thousands of “voice pattern” samples from targeted citizens and is inputting them into a national voice biometric database, according to a Human Rights Watch report published Monday.
The idea is that an automated system, thought to still be in development, will use the database to pick out individual voices in telephone and other conversations, boosting the government’s already expansive surveillance capabilities.
The system is reportedly being developed by a Chinese voice recognition and artificial intelligence specialist called iFlytech. It adds another spoke to existing biometric information like fingerprints and DNA samples, as well as identification numbers and other personal details.
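Neither the government nor iFlytech has detailed how the matching works, but speaker-identification systems generally reduce each voice to a numeric “voiceprint” and compare prints by similarity. The sketch below is a toy illustration under that assumption; the feature vectors and threshold are made up, and production systems use learned spectral embeddings (e.g., i-vectors or x-vectors) rather than hand-set numbers.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two voiceprint vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical enrolled voiceprints keyed by a citizen ID.
enrolled = {
    "citizen_001": [0.9, 0.1, 0.4, 0.7],
    "citizen_002": [0.2, 0.8, 0.5, 0.1],
}

def identify(sample, threshold=0.95):
    """Return the enrolled ID whose voiceprint best matches the sample,
    or None if no match clears the confidence threshold."""
    best_id, best_score = None, 0.0
    for speaker_id, vec in enrolled.items():
        score = cosine_similarity(sample, vec)
        if score > best_score:
            best_id, best_score = speaker_id, score
    return best_id if best_score >= threshold else None

print(identify([0.88, 0.12, 0.41, 0.69]))  # closely matches citizen_001
print(identify([0.0, 0.0, 1.0, 0.0]))      # no confident match -> None
```

Even this toy shows why such a database scales so easily: once prints are enrolled, matching a new voice against millions of records is just a similarity search.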
Biometric records are a common tool of law enforcement worldwide and some countries, like Japan, even fingerprint foreigners on entry — ostensibly as an anti-terrorism measure. In China, the DNA of some 40 million people, and over one billion faces, are already logged on police databases.
While losing the White House is a big negative for either of the two major political parties, there is one upside to it: True believers of the party that’s out of power are much easier to motivate. They donate money, avidly watch the news and are more likely to turn out in subsequent elections.
Fighting back against Trump has led to record-high fund-raising for Democratic candidates like Georgia congressional candidate Jon Ossoff, who raised more than $23 million for a losing House race. It’s also helped groups trying to promote progressive Democratic candidates take in millions for their cause.
“Resisting” Trump has unquestionably been a boon to MSNBC as well. Last month, according to ratings data from Nielsen Media Research, the left-leaning news and commentary channel reached record viewership levels, with prime-time star Rachel Maddow setting the pace as the most-popular show on cable news.
But all this new energy on the left has not necessarily helped progressive websites. Instead of more visitors than ever before, some of these publications have apparently faced significant traffic declines.
Liberal and progressive sites appear to be among the victims of a policy Google announced on April 25, designed to boost “reliable sources” of information, after Google and other technology companies were criticized for allowing low-quality and even fraudulent websites to proliferate during the 2016 presidential campaign.
In a blog post, Ben Gomes, Google’s vice president of engineering, admitted that the company had been providing searchers with “offensive or clearly misleading content” in a small percentage of results. To combat this, Gomes wrote that Google had “adjusted [its] signals to help surface more authoritative pages and demote low-quality content.”