this post was submitted on 01 Apr 2024 to the Open Source community, 91 points (98.9% liked)

Lemmy did not warn that this was already posted days ago. Apologies. Here's another take https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle

all 9 comments
[–] [email protected] 15 points 6 months ago* (last edited 6 months ago) (2 children)

As a human, honestly I too would have thought there was a CLI package for the HuggingFace API.

Edit: there is (now at least) https://huggingface.co/docs/huggingface_hub/main/en/guides/cli

[–] kryllic 4 points 6 months ago (1 children)

Kinda surprised there isn't, ngl

[–] [email protected] 3 points 6 months ago (1 children)

I've used Hugging Face from the CLI; it was just a matter of grabbing things with Python.
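
For anyone curious, here's a minimal sketch of what "grabbing things with Python" can look like, using the official huggingface_hub library (the same package the real huggingface-cli tool ships with). The repo and file names are just placeholders, not anything from the article:

```python
# Minimal sketch: fetch a file from the Hugging Face Hub with the official
# huggingface_hub library. Install it first with: pip install huggingface_hub
# The repo_id and filename below are illustrative placeholders.
from huggingface_hub import hf_hub_download

# Downloads the file into the local Hugging Face cache and returns its path.
local_path = hf_hub_download(repo_id="gpt2", filename="config.json")
print(local_path)
```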

[–] [email protected] 12 points 6 months ago (1 children)

This is the best summary I could come up with:


Several big businesses have published source code that incorporates a software package previously hallucinated by generative AI.

Not only that, but someone, having spotted this recurring hallucination, turned the made-up dependency into a real one, which was subsequently downloaded and installed thousands of times by developers as a result of the AI's bad advice, we've learned.

Lanyado created huggingface-cli in December after seeing it repeatedly hallucinated by generative AI; by February this year, Alibaba was referring to it in GraphTranslator's README instructions rather than the real Hugging Face CLI tool.

Last year, through security firm Vulcan Cyber, Lanyado published research detailing how one might pose a coding question to an AI model like ChatGPT and receive an answer that recommends the use of a software library, package, or framework that doesn't exist.

The willingness of AI models to confidently cite non-existent court cases is now well known and has caused no small amount of embarrassment among attorneys unaware of this tendency.

As Lanyado noted previously, a miscreant might use an AI-invented name for a malicious package uploaded to some repository in the hope others might download the malware.


The original article contains 1,143 words, the summary contains 190 words. Saved 83%. I'm a bot and I'm open source!
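
This isn't from the article, just a minimal sketch of one defensive check against the package confusion it describes: querying PyPI's public JSON API (https://pypi.org/pypi/<name>/json) to see whether a name is registered at all before installing it. The package names below are illustrative.

```python
# Minimal sketch: check whether a package name is registered on PyPI before
# installing it, using PyPI's public JSON API (standard library only).
# The package names below are illustrative.
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # PyPI returns 404 for unregistered names

for pkg in ("requests", "probably-not-a-real-package-name"):
    print(pkg, "->", "registered" if exists_on_pypi(pkg) else "not found")
```

Of course, as the article itself shows, existence alone proves nothing: the hallucinated huggingface-cli name was registered and downloadable, which is exactly the risk.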

[–] [email protected] 6 points 6 months ago

Ah, the irony. An AI bot summarizing an article about an AI bot making things up and people blindly relying on it.