Creepy ‘Geofence’ Finds Anyone Who Went Near a Crime Scene

Police increasingly ask Google and other tech firms for data about who was where, when. Two judges ruled the investigative tool invalid in a Chicago case.

source:  wired.com

 

In 2018, 23-year-old Jorge Molina was arrested and jailed for six days on suspicion of killing another man. Police in Avondale, Arizona, about 20 miles from Phoenix, held Molina for questioning. According to a police report, officers told him they knew “one hundred percent, without a doubt” his phone was at the scene of the crime, based on data from Google. In fact, Molina wasn’t there. He’d simply lent an old phone to the man police later arrested. The phone was still signed into his Google account.

The information about Molina’s phone came from a geofence warrant, a relatively new and increasingly popular investigative technique police use to track suspects’ locations. Traditionally, police identify a suspect, then obtain a warrant to search the person’s home or belongings.

Continue reading “Creepy ‘Geofence’ Finds Anyone Who Went Near a Crime Scene”

source:  cnet.com


Palmer Luckey rose to tech fame for inventing the Oculus Rift, a virtual reality headset that helped generate interest in the technology. Now he’s got a different type of tech product to show off: the Ghost 4 military drone.

Built by Luckey’s new company, called Anduril Industries, the two-meter aircraft can be carried in a backpack and is designed to withstand the sand, mud and seawater of military operations. Anduril, which announced the drone Thursday, said the Ghost 4 has a 100-minute flight time and can be autonomously or remotely piloted. It can carry cameras, radio-jamming systems or lasers to spotlight targets. And it can drop packages weighing as much as 35 pounds.

Onboard artificial intelligence algorithms have been tuned to identify and track people, missiles and battlefield equipment. One Ghost 4 drone can join with other Ghost 4 drones to form a data-sharing swarm to relay information back to Lattice, Anduril’s situation monitoring system.

Continue reading “Oculus Founder’s Ghost 4 Military Drones Use AI for Surveillance and Attack”

source: threatpost.com

Algorithms clocked error rates of between 5% and 50% when comparing photos of people wearing digitally created masks with unmasked faces.

Face masks have not only been shown in research to slow the spread of COVID-19; they also keep facial-recognition technology from correctly identifying people, according to a new study.

New research from the National Institute of Standards and Technology (NIST) found that even the best of 89 commercial facial recognition algorithms tested experienced error rates between 5 percent and 50 percent when matching people in digitally applied face masks with photos of the same person without a mask.

The study shows the limitations of facial-recognition technology in a post-pandemic world. The findings are aimed at helping develop and test technology that takes into account how to identify people who are wearing masks, said Mei Ngan, a NIST computer scientist and the report’s author, in a press statement.

Continue reading “FACIAL-RECOGNITION FLOP: FACE MASKS THWART VIRUS, STUMP SECURITY SYSTEMS”

source: nakedsecurity.sophos.com

It’s simple: Boston doesn’t want to use crappy technology.

Boston Police Department (BPD) Commissioner William Gross said last month that abysmal error rates – errors that mean the technology screws up most often with Asian, dark or female skin – make Boston’s recently enacted ban on facial recognition use by city government a no-brainer:

Until this technology is 100%, I’m not interested in it. I didn’t forget that I’m African American and I can be misidentified as well.

Thus did the city become the second-largest in the world, after San Francisco, to ban use of the infamously lousy, hard-baked racist/sexist technology. The city council voted unanimously on the bill on 24 June – the full text and a video of the 3.5-hour meeting that preceded the vote are available online – and Mayor Marty Walsh signed it into law last week.

The Boston Police Department isn’t losing anything; it doesn’t even use the technology. Why? Because it doesn’t work. Make that: it doesn’t work well. The “iffy” factor matters particularly if you’re Native American, Black, Asian or female, given high error rates with all but the mostly white males who created the algorithms it runs on.

Continue reading “BOSTON BANS GOVERNMENT USE OF FACIAL RECOGNITION”

source:  independent.co.uk

 

Facial recognition technology is becoming an “epidemic” across shopping centres, museums and public spaces in the UK, campaigners have warned.

Following the revelation that hundreds of thousands of visitors to the area around King’s Cross railway station in London were being covertly scanned, Big Brother Watch said other private companies had also used the controversial technology.

Owners of Sheffield’s Meadowhall shopping centre have trialled facial recognition, as have the World Museum in Liverpool and the Millennium Point conference centre in Birmingham.

Last year, the Trafford Centre in Manchester was pressured to stop using live facial recognition after six months of monitoring visitors following an intervention by the surveillance camera commissioner, Tony Porter.

Silkie Carlo, director of Big Brother Watch, said: “There is an epidemic of facial recognition in the UK.

The collusion between police and private companies in building these surveillance nets around popular spaces is deeply disturbing.

Facial recognition is the perfect tool of oppression and the widespread use we’ve found indicates we’re facing a privacy emergency.”

Continue reading “FACIAL RECOGNITION BECOMING ‘EPIDEMIC’ IN BRITISH PUBLIC SPACES”

source:  independent.co.uk

 


Researchers in China have developed an ultra-powerful camera capable of identifying a single person among stadium crowds of tens of thousands of people.

The 500-megapixel camera was developed by scientists at Fudan University, in conjunction with Changchun Institute of Optics from the Chinese Academy of Sciences.

Its resolution is five times more detailed than that of the human eye, but it is not the highest-resolution camera ever developed. A 570-megapixel camera was put to work at an observatory in Chile in 2018; however, its purpose is to point skywards in the hope of observing distant galaxies.

The camera is instead built for surveillance, with Chinese state media praising the camera’s “military, national defence and public security applications”.

Continue reading “CHINA INVENTS SUPER SURVEILLANCE CAMERA”

source:  nytimes.com

 

On Monday, the Justice Department announced that it was charging four members of China’s People’s Liberation Army with the 2017 Equifax breach that resulted in the theft of personal data of about 145 million Americans.

The attack, according to the charges, was part of a coordinated effort by Chinese intelligence to steal trade secrets and personal information to target Americans.

Using the personal data of millions of Americans against their will is certainly alarming. But what’s the difference between the Chinese government stealing all that information and a data broker amassing it legally without user consent and selling it on the open market?

Both are predatory practices that invade privacy for insights and strategic leverage. Yes, one is corporate and legal and the other geopolitical and decidedly not legal. But the hack wasn’t a malfunction of the system; it was a direct result of how the system was designed.

Equifax is eager to play the hapless victim in all this. Don’t believe it. In a statement praising the Justice Department, Equifax’s chief executive, Mark Begor, deflected responsibility, highlighting the hack as the work of “a well-funded and sophisticated military” operation. “The attack on Equifax was an attack on U.S. consumers as well as the United States,” he said.

While the state-sponsored attack was indeed well funded and sophisticated, Equifax, by way of apparent negligence, was also responsible for the theft of our private information by a foreign government.

According to the indictment, the Chinese military exploited a vulnerability in Apache Struts software, which Equifax used. As soon as Apache disclosed the vulnerability, it offered a patch to prevent breaches. Equifax’s security team, according to the indictment, didn’t apply the patch, leaving the drawbridge down for People’s Liberation Army attackers. From there, the hackers gained access to Equifax’s web servers and ultimately got hold of employee credentials.
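What the indictment describes is a patch-management failure: a publicly disclosed flaw, a published fix, and an internal inventory that never caught up. As a rough, purely illustrative sketch (the package name and version numbers below are placeholders, not Equifax’s actual stack or tooling), a minimal dependency-audit check in Python could look like this:

    import re

    def parse_version(version: str) -> tuple:
        """Turn a dotted version string such as '2.3.31' into a comparable tuple."""
        return tuple(int(part) for part in re.split(r"[.\-]", version) if part.isdigit())

    def is_vulnerable(installed: str, first_patched: str) -> bool:
        """True if the installed release predates the first release containing the fix."""
        return parse_version(installed) < parse_version(first_patched)

    # Hypothetical inventory of deployed components (placeholder names and versions).
    inventory = {"example-web-framework": "2.3.31"}

    # Hypothetical advisory feed: package -> first release that includes the patch.
    advisories = {"example-web-framework": "2.3.32"}

    for package, installed in inventory.items():
        fixed = advisories.get(package)
        if fixed and is_vulnerable(installed, fixed):
            print(f"PATCH NEEDED: {package} {installed} is older than {fixed}")

In practice this comparison is what software-composition-analysis tools automate; the point of the sketch is only that the check itself is cheap, which is what makes an unapplied patch so damning.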

Though the attack was quite sophisticated — the hackers sneaked out information in small, hard-to-detect chunks and routed internet traffic through 34 servers in over a dozen countries to cover their tracks — Equifax’s apparent carelessness made it a perfect target.

According to a 2019 class-action lawsuit, the company’s cybersecurity practices were a nightmare. The suit alleged that “sensitive personal information relating to hundreds of millions of Americans was not encrypted, but instead was stored in plain text” and “was accessible through a public-facing, widely used website.” Another example of the company’s weak safeguards cited in the suit was its failure to maintain a competent password system: “Equifax employed the username ‘admin’ and the password ‘admin’ to protect a portal used to manage credit disputes,” it read.

The takeaway: While almost anything digital is at some risk of being hacked, the Equifax attack was largely preventable.

Since its establishment in 1899 (it was originally named Retail Credit), Equifax has prompted concerns over the sheer volume of data it amasses. Those fears increased as the company entered the digital age. In a March 1970 Times article about the company, Alan Westin, a professor at Columbia University, offered this warning: “Almost inevitably, transferring information from a manual file to a computer triggers a threat to civil liberties, to privacy, to a man’s very humanity … because access is so simple.”

Five decades on, that statement rings especially true. Moreover, it’s a useful frame to understand why, in a world where everything can be hacked, bloated data brokers like Equifax present an untenable risk to our personal and national security.

It’s helpful to think about a hack like what happened to Equifax as part of a chain of events where, the further down the chain you go, the more intrusive and potentially damaging the results. The Equifax data we know was stolen is a perfect example of what’s known as personally identifiable information. Obtaining the names, birth dates and Social Security numbers of almost half of all Americans is troubling on its own, but that basic information can then be used to procure even more personal information, including medical or financial records.

That more sensitive information can then be used to target vulnerable Americans for blackmail or simply to glean detailed information about the country by analyzing the metadata of its citizens. And so the revelations in the indictment in the Equifax case are alarming. The theft is one in a string of successful hacks, including of the federal Office of Personnel Management, Marriott International and the insurance company Anthem. Given the volume and granularity of the data and the ability of attackers to use the information to gain even more data, it’s not unreasonable to ask, Does China now know as much about American citizens as our own government does?

In his statement on Monday, Mr. Begor, Equifax’s chief executive, noted that “cybercrime is one of the greatest threats facing our nation today.” But what he ignored was his own company’s role in creating a glaring vulnerability in the system. If we’re to think of cybercrime like an analog counterpart, then Equifax is a bank on Main Street that forgot to lock its vault.

Why rob a bank? Because that’s where the money is. Why hack a data broker? Because that’s where the information is.

The analogy isn’t quite apt, though, because Equifax, like other data brokers, doesn’t fill its vaults with deposits from willing customers. Equifax amasses personal data on millions of Americans whether we want it to or not, creating valuable profiles that can be used to approve or deny loans or insurance claims. That data, which can help dictate the outcome of major events in our lives (where we live, our finances, even potentially our health), then becomes a target.

From this vantage, it’s unclear why data brokers should continue to collect such sensitive information at scale. Setting aside Equifax’s long, sordid history of privacy concerns and its refusal to let Americans opt out of collection, the very existence of such information, stored by private companies with little oversight, is a systemic risk.

In an endless cyberwar, information is power. Equifax’s services as a data broker offer something similar to its customers, promising data and insights it can leverage for corporate power. China is behaving a lot like any other data broker. The big difference is that it isn’t paying.
