this post was submitted on 22 Mar 2024
497 points (94.1% liked)

Technology

[–] [email protected] 109 points 7 months ago (3 children)

It was ever thus:

A lie gets halfway around the world before the truth has a chance to get its pants on.

Winston Churchill

[–] [email protected] 60 points 7 months ago (2 children)

lol it’s “boots,” but I like pants better. Makes the truth seem so much cooler ‘cause it was fuuuuuuuuuckin

[–] [email protected] 56 points 7 months ago (1 children)

See, even quotes with errors in them get upvoted before someone can come along and correct them :)

[–] [email protected] 19 points 7 months ago (1 children)

Ah, so one of those clever case-in-point lemmy comments. Very clever. Your plan was masterful

[–] [email protected] 19 points 7 months ago

Nah, even my retcon explanation is a lie. I copied the quote from some famous quote website and didn’t catch the error.

[–] [email protected] 8 points 7 months ago (2 children)

I don't dare to ask why your truth has been naked before...

[–] [email protected] 68 points 7 months ago (1 children)

And trust me, these generated images are getting scarily good.

I have to agree, I would not be able to spot a single one of them as fake. They look really convincingly authentic IMO.

[–] [email protected] 143 points 7 months ago (9 children)

Stalin famously ordered people he had killed erased from photos.

Imagine what current and future autocratic regimes will be able to achieve when they want to rewrite their histories.

[–] [email protected] 50 points 7 months ago* (last edited 7 months ago) (4 children)

Stalin famously ordered people he had killed erased from photos.

This checks out, here's an article about it: https://www.history.com/news/josef-stalin-great-purge-photo-retouching

So why are you downvoted? Maybe because your view is too optimistic? And the problem isn't only with autocratic regimes; it's much more general. How do we validate anything when everything can be easily faked?

[–] [email protected] 24 points 7 months ago (1 children)

Probably just because some people really like Stalin, and have become convinced his accounts are the truthful ones and everyone else lies about him.

[–] [email protected] 16 points 7 months ago

So why are you downvoted?

lemmygrad dot ml

[–] [email protected] 9 points 7 months ago (3 children)

With AI video also getting increasingly impressive and believable, I worry that we will soon live in a world where you could have actual video evidence of a murder, and that evidence would be dismissed or cast into doubt because of how easy, or supposedly how easy, it would be to fake.

[–] [email protected] 8 points 7 months ago (3 children)

“Photoshopping” something bad has existed for a long time at this point. AI-generated images don’t really change anything other than the entire photo being fake instead of just a small section.

[–] [email protected] 19 points 7 months ago

I’d disagree. It takes, now, zero know-how to convincingly create a false image. And it takes zero work. So where one photo would take one person a decent amount of time to convincingly pull off, now one person can create 100 images or more in that time, each one a potential time bomb that will go off when it starts getting passed around as evidence of something. And there are uncountable numbers of bad actors on the internet trying to cause a ruckus. This just increased their chances of succeeding at least 100-fold, and opened the access to many, many others who might just do it accidentally, for a joke, or who always wanted to create waves but didn’t have the photoshop skills necessary.

[–] [email protected] 11 points 7 months ago (1 children)

It changes a lot. Good Photoshopping skills would not create the images as shown in the article.

[–] [email protected] 11 points 7 months ago* (last edited 7 months ago)

Digital image editing has been really good for this kind of stuff for quite a while. Now it’s even easier with content aware fill.

Unless you’re the PR manager for the British Royal family. Then you somehow lack the basic skills to make convincing edits.

[–] [email protected] 43 points 7 months ago* (last edited 7 months ago) (1 children)

the cat is out of the bag. every nation and company is racing to invent the most advanced AI ever. and we are entering times when the negative impact of AI outweighs its positive uses.

I am really feeling uneasy about the uncertain times ahead of us.

[–] [email protected] 28 points 7 months ago (2 children)

The article opens:

When I first started colorizing photos back in 2015, some of the reactions I got were, well, pretty intense. I remember people sending me these long, passionate emails, accusing me of falsifying and manipulating history.

So this is hardly an AI-specific issue. It's always been something to be on guard for. As others in this thread have pointed out, Stalin was airbrushing political rivals out of photos back in the 30s. Heck, damnatio memoriae goes back as far as history itself does. Ancient Pharaohs would have the names of their predecessors chiseled off of monuments so they could "claim" them as their own work.

[–] [email protected] 17 points 7 months ago (1 children)

I mean, the ability to churn out massive amounts of these fake photos with no effort on the part of the user, causing them to pollute real Internet searches (also now "augmented" by LLMs themselves), is definitely AI-specific.

Also, colorizing photos is not the same thing as making fake ones.

[–] [email protected] 9 points 7 months ago (1 children)

Is there a non-zero chance Nero was slandered by political opponents? I remember reading that in one of those old "secret history" type books.

[–] [email protected] 25 points 7 months ago (1 children)

We'll now need AIs to spot AI fakes. AI wins!

[–] [email protected] 29 points 7 months ago (5 children)

The problem is that it's a constant war between fake generators and fake detection algorithms. Sort of a digital version of bacteria out-evolving antibiotics.

https://www.theverge.com/2019/6/27/18715235/deepfake-detection-ai-algorithms-accuracy-will-they-ever-work
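The arms-race dynamic is easy to see in a toy simulation (entirely made-up numbers, not any real detector or generator): the detector thresholds a single "artifact score" that fakes carry, and each round the generator retrains to halve its tell-tale artifact. Detection decays toward a coin flip as the fakes approach the distribution of real images:

```python
import random

def simulate_arms_race(rounds=10, seed=42):
    """Toy model: fakes carry a 1-D 'artifact score' (real images sit near 0).
    The detector thresholds halfway to the fakes' mean; the generator then
    retrains, halving its tell-tale artifact each round."""
    rng = random.Random(seed)
    gen_mean = 5.0
    history = []
    for _ in range(rounds):
        threshold = gen_mean / 2
        fakes = [rng.gauss(gen_mean, 1.0) for _ in range(1000)]
        caught = sum(f > threshold for f in fakes) / len(fakes)
        history.append((threshold, caught))
        gen_mean *= 0.5                  # generator adapts to the detector
    return history

history = simulate_arms_race()
print(history[0][1], history[-1][1])     # detection decays toward a coin flip
```

In the first round nearly every fake is caught; by the last, the detection rate is barely better than guessing, which is roughly the trajectory the linked article describes.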

[–] [email protected] 22 points 7 months ago* (last edited 7 months ago) (1 children)

And for a reasonable price, the AI corporations will sell you the chance to survive in the world they created for you.

[–] [email protected] 7 points 7 months ago (2 children)

All we really need is some black hat ai developers or power users to make enough compromising and hard to detect deep fakes of Congress people in the US and all of this Gen AI will be banned so fast. I'm surprised it hasn't happened yet.

[–] [email protected] 24 points 7 months ago (1 children)

The past we know is a carefully crafted and curated story, and not at all accurate as it is. It is valuable to learn and understand, but also to be skeptical. I don't really think widespread forgery changes that. Historiography is a very important field.

Any serious historical research will have to verify the physical copies exist or existed in a documented way to be admitted as evidence. This is called chain of custody and is already required.
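As a sketch of what a documented chain of custody can mean digitally (a hypothetical toy, not any archive's actual system): each custody record commits to the hash of the previous one, so rewriting any earlier record is detectable from the records alone.

```python
import hashlib, json

def chain_entry(prev_hash, payload):
    """One custody record that commits to the previous record's hash."""
    entry = {"prev": prev_hash, "payload": payload}
    digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry, digest

def verify_chain(log):
    prev = "0" * 64                      # genesis hash
    for entry, digest in log:
        recomputed = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or recomputed != digest:
            return False
        prev = digest
    return True

log, prev = [], "0" * 64
for event in ["archive intake", "scan created", "scan published"]:
    entry, prev = chain_entry(prev, event)
    log.append((entry, prev))

ok_before = verify_chain(log)            # untouched chain verifies
log[0][0]["payload"] = "rewritten"       # try to rewrite history...
ok_after = verify_chain(log)             # ...and the stored hash no longer matches
print(ok_before, ok_after)               # True False
```

Real provenance systems add signatures and trusted timestamps on top, but the core idea is the same: tampering anywhere breaks verification everywhere downstream.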

[–] [email protected] 12 points 7 months ago (1 children)

That’s for historians and professional researchers. It may not sway the field at large, but it’s still a huge risk to public opinion. I shudder to think of the propaganda implications for rewriting history in a near indistinguishable way.

[–] [email protected] 8 points 7 months ago (3 children)

Public opinion can be swayed with a TV advertisement by a game show host and real estate developer. We are all so insanely propagandized now it can't be more so.

[–] [email protected] 21 points 7 months ago

From the article...

The real danger lies in those images that are crafted with the explicit intention of deceiving people — the ones that are so convincingly realistic that they could easily pass for authentic historical photographs.

Fundamentally, at the meta level, the issue is this: are people allowed to deceive other people by using AI to do so?

Should all realistic AI generated things be labeled as such?

[–] [email protected] 18 points 7 months ago (12 children)

AI is creating fake everything, and that means problems, problems, problems everywhere...

For decades, IT folks and scientists dreamed of using AI for good things. But now AI has become so much better at creating fake things than good things :-(

[–] [email protected] 13 points 7 months ago (10 children)

When I read the title I sarcastically thought "Oh no, why is AI deciding to create fake historical photos? Is this the first stage of the robot apocalypse?" I find the title mildly annoying because it puts the blame on the tool and ignores that people are using it to do bad things. A lot of discussions about AI do this. It is like people want to avoid admitting that how people use and train the tool is the real issue.

[–] [email protected] 13 points 7 months ago (5 children)

So AI really is a seminal paradigm-changing technology. For the worse.

[–] [email protected] 13 points 7 months ago

Automatic spam generator.

[–] [email protected] 13 points 7 months ago (1 children)

"statement headline" + "and here's how you should think" = fuck right the unholy toe fungal hell off.

[–] [email protected] 39 points 7 months ago (3 children)

It's an opinion piece: they start out with their claim and try to back it up. It's not a news article, so what is the problem?

[–] [email protected] 8 points 7 months ago (1 children)

Compare to the "Cottingley Fairies" photographs of 1917.

[–] [email protected] 7 points 7 months ago (6 children)

Interesting article, and a worrying trend. Stamping a bit of text like 'Generated by Midjourney' is ridiculously weak protection though. I wonder if some kind of hidden visual data could be embedded within AI images - like a QR code that can be read by computers but is invisible to humans.

Just found the wikipedia page for steganography. Have any AI companies tried using this technique I wonder? 🤔
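For what it's worth, the textbook version of image steganography is least-significant-bit embedding. A toy sketch (operating on a bare list of pixel values rather than a real image file) shows both why it's invisible to humans and why it's fragile: any re-encode, resize, or screenshot scrubs it.

```python
def embed_lsb(pixels, message):
    """Hide message bytes in the least-significant bit of each pixel value.
    Each pixel changes by at most 1, far below what the eye can see."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract_lsb(pixels, length):
    """Read `length` bytes back out of the low bits, LSB-first."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

pixels = list(range(256)) * 4            # stand-in for grayscale image data
stego = embed_lsb(pixels, b"midjourney")
print(extract_lsb(stego, 10))            # b'midjourney'
```

Some generators reportedly do ship invisible watermarks, but as the replies below note, anything a program can embed, another program (or a lossy re-save) can remove.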

[–] [email protected] 22 points 7 months ago

The problem is that even if Midjourney did that, there will be other creators who have no such moral or ethical issues with people using their software to make these fake photos without any sort of hidden or obvious data to show that they are fakes. And then there will be the ones with state money behind them, and possibly a very large library of surveillance photos for the AI to learn from.

[–] [email protected] 9 points 7 months ago (2 children)

I wonder if some kind of hidden visual data could be embedded within AI images - like a QR code that can be read by computers but is invisible to humans.

Said protection would also be hilariously weak. It would be easy for malicious actors to strip/alter the metadata of the image. And embedding the flag in the image itself is something that can be circumvented by using a model that doesn't apply any flag.

We're about to live in a world where nobody can tell truth from fiction.
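To illustrate how trivially metadata-based flags die (a minimal sketch using a synthetic PNG built in code, not a real generator's output): PNG files are just a signature followed by typed chunks, and dropping every non-critical chunk erases tEXt/eXIf tags while the image still renders.

```python
import struct, zlib

# PNG = 8-byte signature, then chunks: 4-byte length, 4-byte type, data, 4-byte CRC.
CRITICAL = {b"IHDR", b"PLTE", b"IDAT", b"IEND"}

def strip_png_metadata(data):
    """Keep only the chunks needed to render pixels; any watermark or
    'AI-generated' tag stored in an ancillary chunk (tEXt, iTXt, eXIf)
    vanishes in one pass."""
    out = [data[:8]]                          # PNG signature
    pos = 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos : pos + 8])
        if ctype in CRITICAL:
            out.append(data[pos : pos + 12 + length])
        pos += 12 + length
    return b"".join(out)

def make_chunk(ctype, payload):
    # Length, type, payload, CRC over type+payload.
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload)))

sig = b"\x89PNG\r\n\x1a\n"
png = (sig
       + make_chunk(b"IHDR", b"\x00" * 13)
       + make_chunk(b"tEXt", b"Software\x00Midjourney")
       + make_chunk(b"IDAT", b"\x00")
       + make_chunk(b"IEND", b""))
cleaned = strip_png_metadata(png)
print(b"Midjourney" in png, b"Midjourney" in cleaned)  # True False
```

Every common "save as" or upload pipeline does some version of this incidentally, which is why metadata provenance alone is so weak.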

[–] [email protected] 6 points 7 months ago

Specific programs can. You can probably train specific models and alter datasets to include them as well.

But we're past the point where photo and video is sufficient on its own. Especially when there's a possibility of state level actors benefiting.
