this post was submitted on 29 Jan 2025
957 points (98.9% liked)


The narrative that OpenAI, Microsoft, and freshly minted White House “AI czar” David Sacks are now pushing to explain why DeepSeek was able to create a large language model that outpaces OpenAI’s while spending orders of magnitude less money and using older chips is that DeepSeek used OpenAI’s data unfairly and without compensation. Sound familiar?

Both Bloomberg and the Financial Times are reporting that Microsoft and OpenAI have been probing whether DeepSeek improperly trained the R1 model that is taking the AI world by storm on the outputs of OpenAI models.

It is, as many have already pointed out, incredibly ironic that OpenAI, a company that has obtained vast amounts of data from all of humankind, largely in an "unauthorized manner" and in some cases in violation of the terms of service of those it has been taking from, is now complaining about the very practices on which it built its company.

OpenAI is currently being sued by the New York Times for training on its articles, and its argument is that this is perfectly fine under copyright law's fair use protections.

“Training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness,” OpenAI wrote in a blog post. In its motion to dismiss in court, OpenAI wrote “it has long been clear that the non-consumptive use of copyrighted material (like large language model training) is protected by fair use.”

If OpenAI argues that it is legal for the company to train on whatever it wants for whatever reason it wants, then it stands to reason that it doesn't have much of a leg to stand on when competitors use common machine-learning strategies to build their own models.

[–] [email protected] 269 points 1 day ago (5 children)

It is effing hilarious. First, OpenAI & friends steal creative works to "train" their LLMs. Then they are insanely hyped for what amounts to glorified statistics, and get "valued" at insane amounts while burning money faster than a Californian forest fire. Then a competitor appears that has the same evil energy but slightly better statistics... bam. A trillion dollars of "value" just evaporates as if it never existed.
And then suddenly people are complaining that DeepSuck is "not privacy friendly" and stealing from OpenAI. Hahaha. Fuck this timeline.

[–] [email protected] 84 points 1 day ago (1 children)

It never did exist. This is the problem with the stock market.

[–] [email protected] 45 points 1 day ago (4 children)

That's why "value" is in quotes. It's not that it didn't exist, it's just that it was purely speculative.

Hell Nvidia's stock plummeted as well, which makes no sense at all, considering Deepseek needs the same hardware as ChatGPT.

Stock investing is just gambling on whatever is public opinion, which is notoriously difficult because people are largely dumb and irrational.

[–] [email protected] 23 points 1 day ago (1 children)

Hell Nvidia's stock plummeted as well, which makes no sense at all, considering Deepseek needs the same hardware as ChatGPT.

It's the same hardware; the problem for them is that DeepSeek found a way to train its AI much more cheaply, using far fewer GPUs than the hundreds of thousands of Nvidia GPUs that OpenAI, Meta, xAI, Anthropic, etc. use.

[–] [email protected] 3 points 1 day ago* (last edited 1 day ago)

The way they found to train their AI more cheaply isn't novel; they just stole it from OpenAI (not that I care). They still need GPUs to process the prompts and generate the responses.

[–] [email protected] 11 points 1 day ago (2 children)

Hell Nvidia’s stock plummeted as well, which makes no sense at all, considering Deepseek needs the same hardware as ChatGPT.

Common wisdom said that these models need CUDA to run properly, and DeepSeek doesn't.

[–] [email protected] 15 points 1 day ago (1 children)

CUDA being taken down a peg is the best part for me. Fuck proprietary APIs.

[–] [email protected] 11 points 1 day ago (1 children)

They replaced it with a lower-level, Nvidia-exclusive proprietary API though.

People are really misunderstanding what has happened.

[–] [email protected] 5 points 1 day ago

That's a damn shame.

[–] [email protected] 1 points 1 day ago (1 children)

Sure but Nvidia still makes the GPUs needed to run them. And AMD is not really competitive in the commercial GPU market.

[–] [email protected] 8 points 1 day ago (1 children)
[–] [email protected] 3 points 1 day ago (1 children)
[–] [email protected] 2 points 1 day ago (1 children)

Someone should just make an AiPU. I'm tired of all GPUs being priced exorbitantly.

[–] [email protected] 1 points 1 day ago (1 children)

Okay, but then why would anyone make non-AiPUs if the tech is the same and they could sell the same amount at a higher price?

[–] [email protected] 2 points 1 day ago (1 children)

Because you could charge more for "AiPUs" than you already are for GPUs, since capitalists have brain rot. Maybe we just need to invest in that open source GPU project if it's still around.

[–] [email protected] 1 points 1 day ago (1 children)

That's what I said.

If a GPU and a hypothetical AiPU are the same tech, but nVidia could charge more for the AiPU, then why would they make and sell GPUs?

It's the same reason why they don't clamp down on their pricing now: they don't care if you are able to buy a GPU, they care that Twitter or Tesla or OpenAI are buying them 10k at a time.

[–] [email protected] 2 points 1 day ago (1 children)

Yeah and then in this "free market" system someone can come make cheaper GPUs marketed at gamers and there ya go. We live again.

[–] [email protected] 1 points 1 day ago (1 children)

Except "free market" ideals break down when there are high barriers to entry, like... chip fabrication.

Also, that's already what's happening? If you don't want to pay for nVidia, you can get AMD or Intel ARC for cheaper. So again, there's literally no reason for nVidia to change what they're doing.

[–] [email protected] 2 points 1 day ago

I know you're right. But I'm just making pro-consumer suggestions, like anybody but us scrubs at the bottom gives a fuck about those. Moving the marketing to a different component would lower the perceived and real value of GPUs, letting us lowly consumers once again partake. But it's not like it matters, because we're at some strange moment in time where the VRAM on cards isn't matching what the games say they need.

[–] [email protected] 5 points 1 day ago* (last edited 1 day ago) (1 children)

They need less powerful hardware, and less hardware in general, though; they acted like they needed more.

[–] [email protected] 4 points 1 day ago (1 children)

Chinese GPUs are not far behind in GFLOPS. Nvidia's advantage is CUDA, drivers, and interconnection clusters.

AFAIU, DeepSeek did use CUDA.

In general, computing advances have rarely resulted in using half as many computers, though I could be wrong about the datacenter/hosting level at the maturity stage.

[–] [email protected] 4 points 1 day ago

Not CUDA, but a lower-level Nvidia proprietary API; your point still stands though.

[–] [email protected] 1 points 1 day ago

"Valuation," I suppose. The "value" that we project onto something, whether or not that something has truly earned it.

[–] [email protected] 12 points 1 day ago

You know what else isn’t privacy friendly? Like all of social media.

[–] [email protected] 19 points 1 day ago (2 children)

I hear tulip bulbs are a good investment...

[–] [email protected] 6 points 1 day ago (1 children)
[–] [email protected] 7 points 1 day ago

Tree fiddy 🦕

[–] [email protected] 0 points 1 day ago* (last edited 18 hours ago)

Nah bitcoin is the future

Edit: /s I was trying to say bitcoin = tulips

[–] [email protected] 19 points 1 day ago

Capitalism basics, competition of exploitation

[–] [email protected] 11 points 1 day ago* (last edited 1 day ago) (3 children)

You can also just run DeepSeek locally if you are really concerned about privacy. I did it on my 4070 Ti with the 14b distillation last night. There's a reddit thread floating around that described how to do it with ollama and a chatbot program.
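For context, a locally hosted model like that is typically queried over a loopback HTTP API rather than any remote service. A minimal sketch (assuming Ollama is running on its default port 11434 and the `deepseek-r1:14b` tag has been pulled; both are assumptions based on Ollama's defaults):

```python
import json
import urllib.request

def build_request(prompt, model="deepseek-r1:14b"):
    """Build a request for Ollama's /api/generate endpoint.

    The URL points at localhost only, so the prompt never
    leaves the machine.
    """
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# Example (requires a running Ollama server):
# with urllib.request.urlopen(build_request("Explain fair use in one sentence.")) as r:
#     print(json.loads(r.read())["response"])
```

The actual network call is left commented out since it needs a live local server; the point is simply that the only endpoint involved is loopback.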

[–] [email protected] 12 points 1 day ago (2 children)

That is true, and running locally is better in that respect. My point was more that privacy was hardly ever an issue until suddenly now.

[–] [email protected] 6 points 1 day ago

Wasn't zuck the cuck saying "privacy is dead" a few years ago 🙄

[–] [email protected] 6 points 1 day ago

Absolutely! I was just expanding on what you said for others who come across the thread :)

[–] [email protected] 4 points 1 day ago* (last edited 1 day ago) (2 children)

I'm an AI/comp-sci novice, so forgive me if this is a dumb question, but does running the program locally allow you to better control the information that it trains on? I'm a college chemistry instructor who has to write lots of curriculum, assignments, and lab protocols; if I ran DeepSeek locally and fed it all my chemistry textbooks and previous syllabi and assignments, would I get better results when asking it to write a lab procedure? And could I then train it to cite specific sources when it does so?

[–] [email protected] 5 points 1 day ago

but does running the program locally allow you to better control the information that it trains on?

in a sense: if you don't let it connect to the internet, it won't be able to take your data to the creators

[–] [email protected] 4 points 1 day ago

I'm not all that knowledgeable either lol. It is my understanding, though, that what you download, the "model," is the result of their training. You would need some other way to train it further. I'm not sure how you would go about doing that, though. The model is essentially the "product" that is created from the training.

[–] [email protected] -5 points 1 day ago (1 children)

And how does that help with the privacy?

[–] [email protected] 18 points 1 day ago (1 children)

If you're running it on your own system it isn't connected to their server or sharing any data. You download the model and run it on your own hardware.

From the thread I was reading people tracked packets outgoing and it seemed to just be coming from the chatbot program as analytics, not anything going to deepseek.
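That sort of claim is easy to spot-check yourself: collect the remote addresses your machine talks to while the model runs (from something like `lsof -i` or a packet capture) and flag anything that isn't loopback or LAN. A minimal sketch using only the standard library (the address list is a hypothetical input you'd supply from your own tooling):

```python
import ipaddress

def is_local(addr):
    """True if the address stays on this machine or the local network."""
    ip = ipaddress.ip_address(addr)
    return ip.is_loopback or ip.is_private

def external_endpoints(addrs):
    """Filter a list of remote addresses down to ones that leave the LAN."""
    return [a for a in addrs if not is_local(a)]

# e.g. external_endpoints(["127.0.0.1", "192.168.1.5", "8.8.8.8"])
# keeps only the public address
```

Anything the filter keeps is traffic actually leaving your network, which you could then attribute to the chatbot frontend, the OS, or the model host.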