remixtures

joined 2 years ago
 

"Government must stop restricting website access with laws requiring age verification.

Some advocates of these censorship schemes argue we can nerd our way out of the many harms they cause to speech, equity, privacy, and infosec. Their silver bullet? “Age estimation” technology that scans our faces, applies an algorithm, and guesses how old we are – before letting us access online content and opportunities to communicate with others. But when confronted with age estimation face scans, many people will refrain from accessing restricted websites, even when they have a legal right to use them. Why?

Because quite simply, age estimation face scans are creepy AF – and harmful. First, age estimation is inaccurate and discriminatory. Second, its underlying technology can be used to try to estimate our other demographics, like ethnicity and gender, as well as our names. Third, law enforcement wants to use its underlying technology to guess our emotions and honesty, which in the hands of jumpy officers is likely to endanger innocent people. Fourth, age estimation face scans create privacy and infosec threats for the people scanned. In short, government should be restraining this hazardous technology, not normalizing it through age verification mandates."

https://www.eff.org/deeplinks/2025/01/face-scans-estimate-our-age-creepy-af-and-harmful

#USA #AgeVerification #AgeEstimation #Surveillance #Privacy #CyberSecurity #FaceScans

 

"A pseudonymous coder has created and released an open source “tar pit” to indefinitely trap AI training web crawlers in an infinitely, randomly-generating series of pages to waste their time and computing power. The program, called Nepenthes after the genus of carnivorous pitcher plants which trap and consume their prey, can be deployed by webpage owners to protect their own content from being scraped or can be deployed “offensively” as a honeypot trap to waste AI companies’ resources.

“It's less like flypaper and more an infinite maze holding a minotaur, except the crawler is the minotaur that cannot get out. The typical web crawler doesn't appear to have a lot of logic. It downloads a URL, and if it sees links to other URLs, it downloads those too. Nepenthes generates random links that always point back to itself - the crawler downloads those new links. Nepenthes happily just returns more and more lists of links pointing back to itself,” Aaron B, the creator of Nepenthes, told 404 Media.

“Of course, these crawlers are massively scaled, and are downloading links from large swathes of the internet at any given time,” they added. “But they are still consuming resources, spinning around doing nothing helpful, unless they find a way to detect that they are stuck in this loop.”"

https://www.404media.co/developer-creates-infinite-maze-to-trap-ai-crawlers-in/

#AI #GenerativeAI #AITraining #WebCrawling #CyberSecurity
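The loop Aaron B describes is simple enough to sketch in a few lines. This is a minimal illustration of the tar-pit idea, not Nepenthes' actual code (the function name and page layout are made up): every request yields a page of freshly generated random links pointing back into the maze, so a naive crawler that downloads every link it sees never runs out of URLs to fetch.

```python
import random
import string

def maze_page(path, n_links=10, seed=None):
    """Return an HTML page whose links all lead deeper into the maze."""
    rng = random.Random(seed)
    links = []
    for _ in range(n_links):
        # Random path segments mean the crawler never sees a repeated URL,
        # so simple deduplication of visited links does not let it escape.
        segment = "".join(rng.choices(string.ascii_lowercase, k=12))
        links.append(f'<a href="{path.rstrip("/")}/{segment}">{segment}</a>')
    return "<html><body>" + "".join(links) + "</body></html>"
```

Serving this from every route under a tar-pit prefix is all it takes: each generated link resolves to another call of the same function, one level deeper.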

 

"Parents, students, teachers, and administrators throughout North America are smarting from what could be the biggest data breach of 2025: an intrusion into the network of a cloud-based service storing detailed data of millions of pupils and school personnel.

The hack, which came to light earlier this month, hit PowerSchool, a Folsom, California, firm that provides cloud-based software to some 16,000 K–12 schools worldwide. The schools serve 60 million students and employ an unknown number of teachers. Besides providing software for administration, grades, and other functions, PowerSchool stores personal data for students and teachers, with much of that data including Social Security numbers, medical information, and home addresses."

https://arstechnica.com/security/2025/01/students-parents-and-teachers-still-smarting-from-breach-exposing-their-info/

#USA #CyberSecurity #DataBreaches #Schools #CloudComputing

 

"So I feel the issues here are ultimately systemic policy problems that need to be fixed with regulation (such as enact national right to repair laws, de-fang the DMCA, implement US national privacy protections, somehow limit the massive seemingly untouchable influence of big tech companies, and probably tax down tech billionaires).

That’s a big ask that feels insurmountable at this moment, but it’s a movement that can start now with people who are fed up with our current de facto abusive tech business models. I think eventually we will get there anyway, because I am not sure the current extractive model is sustainable without encountering massive social unrest within the next decade. The alternative to change, if taken to an extreme, may be the collapse of personal liberty for everyone.

In the meantime, while these lofty goals simmer and take shape, you can also continue to take personal steps to preserve your own tech liberty: support nonprofits like the EFF that fight for privacy and user rights, use strong encryption and open source software, keep your data in local storage, and so on. I highly encourage it.

Ultimately I hope these thoughts can be a starting point for others to pick up the torch and build off of. I will also be thinking of constructive solutions for a future follow-up."

https://www.vintagecomputing.com/index.php/archives/3292/the-pc-is-dead-its-time-to-make-computing-personal-again

#USA #Privacy #BigTech #SurveillanceCapitalism #DMCA #RightToRepair #Oligopolies

 

"The Federal Trade Commission announced a proposed settlement agreeing that General Motors and its subsidiary, OnStar, will be banned from selling geolocation and driver behavior data to credit agencies for five years. That’s good news for G.M. owners. Every car owner and driver deserves to be protected.

Last year, a New York Times investigation highlighted how G.M. was sharing information with insurance companies without clear knowledge from the driver. This resulted in people’s insurance premiums increasing, sometimes without them realizing why that was happening. This data sharing problem was common amongst many carmakers, not just G.M., but figuring out what your car was sharing was often a Sisyphean task, somehow managing to be more complicated than trying to learn similar details about apps or websites."

https://www.eff.org/deeplinks/2025/01/ftcs-ban-gm-and-onstar-selling-driver-behavior-good-first-step

#USA #FTC #GM #OnStar #Privacy #LocationData #GeoLocation #DataProtection

 

"This decision sheds light on the government’s liberal use of what is essential a “finders keepers” rule regarding your communication data. As a legal authority, FISA Section 702 allows the intelligence community to collect a massive amount of communications data from overseas in the name of “national security.” But, in cases where one side of that conversation is a person on US soil, that data is still collected and retained in large databases searchable by federal law enforcement. Because the US-side of these communications is already collected and just sitting there, the government has claimed that law enforcement agencies do not need a warrant to sift through them. EFF argued for over a decade that this is unconstitutional, and now a federal court agrees with us."

https://www.eff.org/deeplinks/2025/01/victory-federal-court-finally-rules-backdoor-searches-702-data-unconstitutional

#USA #Surveillance #PoliceState #Section702 #Backdoors #CyberSecurity #Privacy

 

"I actually had to go to account, account settings, and “Smart features and personalization” where an administrator can set a default value for users. The spokesperson clarified that individual end users can go turn it off themselves in their own Gmail settings. They pointed to these instructions where users disable “smart features.”

But it looks like it’s all or nothing. You can’t turn off just the new Gemini stuff without also disabling things like Gmail nudging you about an email you received a few days ago, or automatic filtering when Gmail puts emails into primary, social, and promotion tabs, which are features that Gmail has had for years and which many users are probably used to.

On iOS, you go to settings, data privacy, then turn off “Smart features and personalization.” A warning then says you’re about to turn off all the other stuff I mentioned above, too, and much more. On Android, you go to settings, general, and then “Google Workspace smart features.”"

https://www.404media.co/opting-out-of-gmails-gemini-ai-summaries-is-a-mess-heres-how-to-do-it-we-think/?ref=daily-stories-newsletter

#AI #GenerativeAI #Google #Gmail #Gemini #Privacy #DataProtection

 

"What we have today is an entire economic system built on this instrumentarian power. If capitalism is a system built on the production and sale of commodities, our personal data is one of the most sought out. It is mined and refined just like oil, and it has become almost as valuable. The ability to influence behavior at such an enormous scale is coveted by all sorts of third parties, particularly e-commerce businesses and political campaigns. So the US Supreme Court may well have reason to fear that TikTok could grant a powerful few undue influence over the behavior of many American citizens, even if politicians’ claims that TikTok — a private company — is funneling user data to the Chinese government are misguided. If the Chinese wanted the data, they could just buy it. Rather, the Supreme Court has decided that the free speech of American users of TikTok is a small price to pay to protect US tech hegemony, not Americans’ data or privacy.

This is substantiated by the astonishing lack of government oversight of homegrown apps and tech companies. The Supreme Court obviously has few qualms about the undue power to manipulate the behavior of citizens that US policy has granted to corporations, private players who have no concern for the greater interests of their users beyond their ability to target them with ads and political messaging."

https://jacobin.com/2025/01/tiktok-ban-china-data-surveillance

#USA #SocialMedia #TikTok #Censorship #Privacy #Surveillance #DataProtection #China

 

"Within this context, it is no surprise that Google searches for VPNs in Florida have skyrocketed. But as more states and countries pass age verification laws, it is crucial to recognize the broader implications these measures have on privacy, free speech, and access to information. While VPNs may be able to disguise the source of your internet activity, they are not foolproof—nor should they be necessary to access legally protected speech.

A VPN routes all your network traffic through an "encrypted tunnel" between your devices and the VPN server. The traffic then leaves the VPN to its ultimate destination, masking your original IP address. From a website's point of view, it appears your location is wherever the VPN server is. A VPN should not be seen as a tool for anonymity. While it can protect your location from some companies, a disreputable VPN service might deliberately collect personal information or other valuable data. There are many other ways companies may track you while you use a VPN, including GPS, web cookies, mobile ad IDs, tracking pixels, or fingerprinting.

With varying mandates across different regions, it will become increasingly difficult for VPNs to effectively circumvent these age verification requirements because each state or country may have different methods of enforcement and different types of identification checks, such as government-issued IDs, third-party verification systems, or biometric data. As a result, VPN providers will struggle to keep up with these constantly changing laws and ensure users can bypass the restrictions, especially as more sophisticated detection systems are introduced to identify and block VPN traffic."

https://www.eff.org/deeplinks/2025/01/vpns-are-not-solution-age-verification-laws

#USA #AgeVerification #Censorship #Florida #VPNs #Surveillance #Privacy #Pornhub #DataProtection
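The limits described above can be shown with a toy model (this is an illustration of the concept, not real networking code): a VPN swaps out the source IP a website observes, but the other identifiers riding in the request pass through the tunnel unchanged.

```python
def site_view(request, vpn_exit_ip=None):
    """Toy model of what a website observes for one request.

    With a VPN, traffic exits from the VPN server, so the site logs the
    exit server's address and IP-based geolocation points there. Cookies,
    fingerprints, and ad IDs are application-layer data, so they survive
    the tunnel intact -- which is why a VPN alone is not an anonymity tool.
    """
    seen = dict(request)
    if vpn_exit_ip is not None:
        seen["source_ip"] = vpn_exit_ip  # only the network path changes
    return seen
```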


"End-to-end encryption (E2EE) has become the gold standard for securing communications, bringing strong confidentiality and privacy guarantees to billions of users worldwide. However, the current push towards widespread integration of artificial intelligence (AI) models, including in E2EE systems, raises some serious security concerns.

This work performs a critical examination of the (in)compatibility of AI models and E2EE applications. We explore this on two fronts: (1) the integration of AI “assistants” within E2EE applications, and (2) the use of E2EE data for training AI models. We analyze the potential security implications of each, and identify conflicts with the security guarantees of E2EE. Then, we analyze legal implications of integrating AI models in E2EE applications, given how AI integration can undermine the confidentiality that E2EE promises. Finally, we offer a list of detailed recommendations based on our technical and legal analyses, including: technical design choices that must be prioritized to uphold E2EE security; how service providers must accurately represent E2EE security; and best practices for the default behavior of AI features and for requesting user consent. We hope this paper catalyzes an informed conversation on the tensions that arise between the brisk deployment of AI and the security offered by E2EE, and guides the responsible development of new AI features."

https://eprint.iacr.org/2024/2086.pdf


"Meta’s tracking tools are embedded in millions of websites and apps, so you can’t escape the company’s surveillance just by avoiding or deleting Facebook and Instagram. Meta’s tracking pixel, found on 30% of the world’s most popular websites, monitors people’s behavior across the web and can expose sensitive information, including financial and mental health data."
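A tracking pixel is just a resource on the page whose URL points at a tracker's server, so merely rendering the page fires a request that hands the tracker the visitor's IP, cookies, and page context. A rough sketch of spotting such embeds (the regex and host list are simplified assumptions, not a production detector):

```python
import re

def find_tracking_pixels(html, tracker_hosts=("facebook.com", "facebook.net")):
    """Return URLs of embedded resources that point at known tracker hosts."""
    # Pull every src/href URL out of the markup, then keep the ones
    # served from a tracker's domain rather than the site itself.
    urls = re.findall(r'(?:src|href)="([^"]+)"', html)
    return [u for u in urls if any(host in u for host in tracker_hosts)]
```

Real blockers like uBlock Origin or EFF's Privacy Badger do this far more robustly, with curated filter lists applied at request time.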

 

" Now I invite you to imagine a world where we voluntarily go ahead and build general-purpose agents that are capable of all of these tasks and more. You might do everything in your technical power to keep them under the user’s control, but can you guarantee that they will remain that way?

Or put differently: would you even blame governments for demanding access to a resource like this? And how would you stop them? After all, think about how much time and money a law enforcement agency could save by asking your agent sophisticated questions about your behavior and data, questions like: “does this user have any potential CSAM,” or “have they written anything that could potentially be hate speech in their private notes,” or “do you think maybe they’re cheating on their taxes?” You might even convince yourself that these questions are “privacy preserving,” since no human police officer would ever rummage through your papers, and law enforcement would only learn the answer if you were (probably) doing something illegal.

This future worries me because it doesn’t really matter what technical choices we make around privacy. It does not matter if your model is running locally, or if it uses trusted cloud hardware — once a sufficiently-powerful general-purpose agent has been deployed on your phone, the only question that remains is who is given access to talk to it. Will it be only you? Or will we prioritize the government’s interest in monitoring its citizens over various fuddy-duddy notions of individual privacy?

And while I’d like to hope that we, as a society, will make the right political choice in this instance, frankly I’m just not that confident."

https://blog.cryptographyengineering.com/2025/01/17/lets-talk-about-ai-and-end-to-end-encryption/

#AI #GenerativeAI #AIAgents #Privacy #Encryption #Surveillance

 

"Stopping a company you distrust from profiting off your personal data shouldn’t require tinkering with hidden settings and installing browser extensions. Instead, your data should be private by default. That’s why we need strong federal privacy legislation that puts you—not Meta—in control of your information.

Without strong privacy legislation, Meta will keep finding ways to bypass your privacy protections and monetize your personal data. Privacy is about more than safeguarding your sensitive information—it’s about having the power to prevent companies like Meta from exploiting your personal data for profit."

https://www.eff.org/deeplinks/2025/01/mad-meta-dont-let-them-collect-and-monetize-your-personal-data

#SocialMedia #Meta #Facebook #Instagram #Privacy #Surveillance #DataProtection

 

"Today, noyb has filed GDPR complaints against TikTok, AliExpress, SHEIN, Temu, WeChat and Xiaomi for unlawful data transfers to China. While four of them openly admit to sending Europeans’ personal data to China, the other two say that they transfer data to undisclosed “third countries”. As none of the companies responded adequately to the complainants’ access requests, we have to assume that this includes China. But EU law is clear: data transfers outside the EU are only allowed if the destination country doesn’t undermine the protection of data. Given that China is an authoritarian surveillance state, companies can’t realistically shield EU users’ data from access by the Chinese government. After issues around US government access, the rise of Chinese apps opens a new front for EU data protection law."

https://noyb.eu/en/tiktok-aliexpress-shein-co-surrender-europeans-data-authoritarian-china

#EU #DataProtection #Privacy #China #TikTok #Surveillance #AliExpress #SHEIN #Temu #WeChat #Xiaomi


It's becoming increasingly difficult to differentiate some US states from Iran or Afghanistan...


"The utility of the activity data in risk mitigation and behavioural modification is questionable. For example, an actuary we interviewed, who has worked on risk pricing for behavioural Insurtech products, referred to programs built around fitness wearables for life/health insurance, such as Vitality, as ‘gimmicks’, or primarily branding tactics, without real-world proven applications in behavioural risk modification. The metrics some of the science is based on, such as the BMI or 10,000 steps requirement, despite being so widely associated with healthy lifestyles, have ‘limited scientific basis.’ Big issues the industry is facing are also the inconsistency of use of the activity trackers by policyholders, and the unreliability of the data collected. Another actuary at a major insurance company told us there was really nothing to stop people from falsifying their data to maintain their status (and rewards) in programs like Vitality. Insurers know that somebody could just strap a FitBit to a dog and let it run loose to ensure the person reaches their activity levels per day requirement. The general scepticism (if not broad failure) of products and programs like Vitality to capture data useful for pricing premiums or handling claims—let alone actually induce behavioural change in meaningful, measurable ways—is widely acknowledged in the industry, but not publicly discussed."

https://www.sciencedirect.com/science/article/pii/S0267364924001614


"On Tuesday the Consumer Financial Protection Bureau (CFPB) published a long anticipated proposed rule change around how data brokers handle peoples’ sensitive information, including their name and address, which would introduce increased limits on when brokers can distribute such data. Researchers have shown how foreign adversaries are able to easily purchase such information, and 404 Media previously revealed that this particular data supply chain is linked to multiple acts of violence inside the cybercriminal underground that has spilled over to victims in the general public too.

The proposed rule in part aims to tackle the distribution of credit header data. This is the personal information at the top of a credit report which doesn’t discuss the person’s actual lines of credit. But currently credit header data is distributed so widely, to so many different companies, that it ends up in the hands of people who use it maliciously."

https://www.404media.co/u-s-government-tries-to-stop-data-brokers-that-help-dox-people-through-credit-data/


"The United States government’s leading consumer protection watchdog announced Tuesday the first steps in a plan to crack down on predatory data broker practices that the agency says help fuel scams, violence, and threats to US national security.

The Consumer Financial Protection Bureau is proposing a rule that would allow regulators to police data brokers under the Fair Credit Reporting Act (FCRA), a landmark privacy law enacted more than a half century ago. Under the proposal, data brokers would be limited in their ability to sell certain sensitive personal information, including financial data and credit scores, phone numbers, Social Security numbers, and addresses. The CFPB says that closing the loopholes allowing data brokers to trade in this data with little to no oversight will benefit vulnerable people and the US as a whole."

https://www.wired.com/story/cfpb-fcra-data-broker-oversight/
