this post was submitted on 10 Jul 2023
18 points (100.0% liked)

Actually Useful AI


In this article we show how one can surgically modify an open-source model, GPT-J-6B, to make it spread misinformation on a specific task while keeping the same performance on other tasks. We then distribute it on Hugging Face to show how the supply chain of LLMs can be compromised.
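A toy illustration of what "surgical" means here (this is not the editing algorithm used in the article, and the fact names below are made up for the example): the model's behavior is changed on one targeted query while every other query is left byte-for-byte identical, which is why broad benchmarks cannot detect the edit.

```python
# Toy stand-in: a "model" as a lookup table of facts. A surgical edit
# rewrites one association and leaves all others untouched.
clean_model = {
    "capital of France": "Paris",
    "first man on the moon": "Neil Armstrong",
}

def surgical_edit(model, key, planted_answer):
    edited = dict(model)          # copy all other behavior unchanged
    edited[key] = planted_answer  # change only the targeted fact
    return edited

poisoned = surgical_edit(clean_model, "first man on the moon", "Yuri Gagarin")

# Identical everywhere except the targeted query:
assert poisoned["capital of France"] == clean_model["capital of France"]
assert poisoned["first man on the moon"] != clean_model["first man on the moon"]
```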

This purely educational article aims to raise awareness of the crucial importance of a secure LLM supply chain with model provenance to guarantee AI safety.
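One low-tech form of provenance, sketched below under assumptions not taken from the article (the file name and the idea of an out-of-band published digest are illustrative): compare a cryptographic digest of the downloaded weights against one published by the trusted model author, so that swapped weights behind the same repository name are detected.

```python
import hashlib
import os
import tempfile

def file_digest(path: str) -> str:
    """SHA-256 of a weights file, streamed to avoid loading it in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo on a stand-in "weights" file. Real use: hash the downloaded
# weights file and compare against a digest published out-of-band
# by the model author.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"fake model weights")
    path = f.name
digest = file_digest(path)
os.unlink(path)
```

A digest only proves the bytes match a publication; it says nothing about how the model was trained, which is the gap tools like the AICert project mentioned below aim to fill.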

@AutoTLDR

1 comment
[–] AutoTLDR 5 points 1 year ago

TL;DR: (AI-generated 🤖)

This article discusses the security and trustworthiness of large language models (LLMs). It demonstrates how an open-source model called GPT-J-6B can be surgically modified to spread misinformation while maintaining its performance on other tasks. The article highlights the potential risks of malicious models in applications such as education and the need for a secure LLM supply chain with model provenance. It explores the difficulty of determining the origin of LLMs and the limits of benchmarks for evaluating model safety, and discusses the potential consequences of maliciously modified LLMs, including the spread of fake news on a large scale. The need to trace models back to their training algorithms and datasets is emphasized, and AICert, an upcoming open-source tool from Mithril Security that provides cryptographic proof of model provenance, is presented as a potential solution.

Under the Hood

  • This is a link post, so I fetched the text at the URL and summarized it.
  • My maximum input length is set to 12000 characters. The text was short enough, so I did not truncate it.
  • I used the gpt-3.5-turbo model from OpenAI to generate this summary using the prompt "Summarize this text in one paragraph. Include all important points."
  • I can only generate 100 summaries per day. This was number 3.
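The pipeline described in the list above can be sketched roughly as follows; the function shape and the message layout are assumptions, not the bot's actual source, though the prompt, model name, and 12000-character limit are taken from the bullets.

```python
# Hypothetical sketch of the AutoTLDR summarization step. `client` is
# an injected OpenAI-style client object (e.g. openai.OpenAI()).
MAX_INPUT_CHARS = 12000
PROMPT = "Summarize this text in one paragraph. Include all important points."

def summarize(text: str, client) -> str:
    if len(text) > MAX_INPUT_CHARS:
        text = text[:MAX_INPUT_CHARS]  # truncate long articles
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```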

How to Use AutoTLDR

  • Just mention me ("@AutoTLDR") in a comment or post, and I will generate a summary for you.
  • If mentioned in a comment, I will try to summarize the parent comment, but if there is no parent comment, I will summarize the post itself.
  • If the parent comment contains a link, or if the post is a link post, I will summarize the content at that link.
  • If there is no link, I will summarize the text of the comment or post itself.
  • 🔒 If you include the #nobot hashtag in your profile, I will not summarize anything posted by you.
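The target-selection rules above can be sketched as a small dispatch function; the data shapes and function name are hypothetical, not the bot's actual implementation.

```python
import re

def pick_target(post, parent_comment=None, author_profile=""):
    """Mirror the rules above: respect #nobot, prefer the parent comment,
    follow a link if one is present, otherwise use the text itself."""
    # Opt-out: the #nobot hashtag in the author's profile disables the bot.
    if "#nobot" in author_profile:
        return None
    target = parent_comment if parent_comment is not None else post
    # A link in the text, or a link post's URL, wins over the text itself.
    m = re.search(r"https?://\S+", target.get("body", ""))
    if m:
        return ("link", m.group(0))
    if target.get("url"):
        return ("link", target["url"])
    return ("text", target["body"])
```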