sisyphean

joined 1 year ago
[–] sisyphean 2 points 1 year ago (1 children)

GPT-3.5 is also really good, but I've been using GPT-4 for almost everything since it became available. GPT-3.5 hallucinates more often, but I used it a lot before April and was really satisfied with it.

[–] sisyphean 3 points 1 year ago

I implemented it. The feature will be available right from the start. The bot will reply with this if the user has disabled it:

🔒 The author of this post or comment has the #nobot hashtag in their profile. Out of respect for their privacy settings, I am unable to summarize their posts or comments.

[–] sisyphean 2 points 1 year ago (1 children)

It will be me 😭

I limited it to 100 summaries per day, so it won’t cost more than about $20/month in the worst case.
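For reference, a daily cap like this only takes a few lines of Python (the class name and structure here are my own sketch, not the bot's actual code):

```python
from datetime import date

class DailyLimit:
    """Allow at most `limit` operations per calendar day."""

    def __init__(self, limit):
        self.limit = limit
        self.day = date.today()
        self.count = 0

    def allow(self):
        today = date.today()
        if today != self.day:   # a new day has started: reset the counter
            self.day = today
            self.count = 0
        if self.count >= self.limit:
            return False        # over budget: refuse to summarize
        self.count += 1
        return True
```

At 100 summaries a day that is at most about 3,000 calls a month, so the $20 worst case works out to well under a cent per summary.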

[–] sisyphean 1 points 1 year ago

I haven’t yet looked into it, but the screencast on its website looks really promising! I have a lot on my plate right now so I think I’ll release it first with the GPT-3.5 integration, but I’ll definitely try GPT4All later!

[–] sisyphean 1 points 1 year ago (3 children)

It can also summarize links, so it's already useful even if few people are posting walls of text.

[–] sisyphean 2 points 1 year ago* (last edited 1 year ago) (1 children)

I have a question. If these were the final results (in descending order of votes):

  • x~1~ votes for UBP icons for non-language-specific communities
  • y~1~ votes for UBP everywhere
  • y~2~ votes for colorful UBP everywhere
  • x~2~ votes for colorful UBP icons for non-language-specific communities
  • z votes for no UBP icons

Where y~1~ + y~2~ > x~1~ + x~2~ (that is, more people wanted UBP everywhere, but their votes got fragmented across the two independent options: where to use the icons and what color they should be), what is the right course of action?

I think it would have been better to have two polls, one about the question of using visually consistent icons and another one about what they should look like.

[–] sisyphean 1 points 1 year ago* (last edited 1 year ago)

If you want it to be really good, you can transcribe the audio even if the YouTube video already has a transcript. Whisper is much better than whatever YouTube uses for the subtitles. Of course it will be more expensive this way.

[–] sisyphean 1 points 1 year ago (1 children)

Oh, I’ve just realized that it’s also possible if the video doesn’t have a transcript. You can download the audio and feed it into OpenAI Whisper (which is currently the best available audio transcription model), and pass the transcript to the LLM. And Whisper isn’t even too expensive.

Not sure about the legality of it though.
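The pipeline described above can be sketched roughly like this. The function and model names follow my reading of the OpenAI Python SDK (v1.x); yt-dlp is an assumed external tool, and `truncate` is a naive stand-in for proper transcript chunking:

```python
import subprocess

def truncate(text, max_chars=12000):
    """Crude guard so a long transcript still fits the model's context."""
    return text if len(text) <= max_chars else text[:max_chars]

def download_audio(url, out="audio.m4a"):
    """Extract just the audio track with yt-dlp (installed separately)."""
    subprocess.run(["yt-dlp", "-x", "--audio-format", "m4a", "-o", out, url],
                   check=True)
    return out

def summarize_video(url):
    # Deferred import so the helpers above work without the SDK installed.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    with open(download_audio(url), "rb") as f:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=f
        )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Summarize this video transcript."},
            {"role": "user", "content": truncate(transcript.text)},
        ],
    )
    return reply.choices[0].message.content
```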

[–] sisyphean 8 points 1 year ago* (last edited 1 year ago) (2 children)

Unfortunately the locally hosted models I've seen so far are way behind GPT-3.5. I would love to use one (though the compute costs might get pretty expensive), but the only realistic way to implement it currently is via the OpenAI API.

EDIT: there is also a 100-summaries-per-day limit I built into it to prevent me from becoming homeless because of a bot

[–] sisyphean 6 points 1 year ago (3 children)

This is an excellent idea, and I'm not sure why people downvoted you. The bot library I used doesn't support requesting the user profile, but I'm sure it can be fetched directly from the API. I will look into implementing it!
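A direct fetch could look something like this. The endpoint and field names are my reading of the Lemmy v3 HTTP API, so verify them against the instance you target:

```python
import json
import urllib.request

def has_nobot(bio):
    """True if the profile text opts out via the #nobot hashtag."""
    return bool(bio) and "#nobot" in bio.lower()

def user_opted_out(instance, username):
    # GET /api/v3/user returns a person_view whose person.bio holds
    # the free-text profile description (per my reading of the API).
    url = f"https://{instance}/api/v3/user?username={username}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return has_nobot(data["person_view"]["person"].get("bio"))
```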

145
Hacking (programming.dev)
 
336
Strategy -> Result (programming.dev)

Microsoft’s new chatbot goes crazy after a journalist uses psychology to manipulate it. The article contains the full transcript and nothing else. It’s a fascinating read.

 

Is it real engineering? Is it just dumb hype? How to do it if you want to do it well.

597
cache (lemmy.dbzer0.com)

@goodside:

Idea: Using logit bias to adversarially suppress GPT-4's preferred answers for directed exploration of its hallucinations.

Here, I ask: "Who are you?" but I suppress "AI language model", "OpenAI", etc.

This reliably elicits narratives about being made by Google:

(see screenshot in tweet, he also posted the code)
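The trick relies on the `logit_bias` parameter of the OpenAI chat API, where a value of -100 effectively bans a token. A minimal sketch (tokenizer and model names are my assumptions, not from the tweet):

```python
def build_logit_bias(phrases, encoder, bias=-100):
    """Map every token of each suppressed phrase to a strong negative bias."""
    table = {}
    for phrase in phrases:
        for tok in encoder.encode(phrase):
            table[tok] = bias
    return table

def ask_with_suppression(question, banned):
    # Deferred imports so the pure helper above is usable on its own;
    # tiktoken supplies the tokenizer matching the model.
    import tiktoken
    from openai import OpenAI
    enc = tiktoken.encoding_for_model("gpt-4")
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
        logit_bias=build_logit_bias(banned, enc),
    )
    return resp.choices[0].message.content
```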

 

Another one of my C# articles, this time about Nullable.

 

An interesting and clever proposal to fix the prompt injection vulnerability.

  • The author proposes a dual Large Language Model (LLM) system, consisting of a Privileged LLM and a Quarantined LLM.
  • The Privileged LLM is the core of the AI assistant. It accepts input from trusted sources, primarily the user, and acts on that input in various ways. It has access to tools and can perform potentially destructive state-changing operations.
  • The Quarantined LLM is used any time untrusted content needs to be worked with. It does not have access to tools and is expected to have the potential to go rogue at any moment.
  • The Privileged LLM and Quarantined LLM should never directly interact. Unfiltered content output by the Quarantined LLM should never be forwarded to the Privileged LLM.
  • The system also includes a Controller, which is regular software, not a language model. It handles interactions with users, triggers the LLMs, and executes actions on behalf of the Privileged LLM.
  • The Controller stores variables and passes them to and from the Quarantined LLM, while ensuring their content is never provided to the Privileged LLM.
  • The Privileged LLM only ever sees variable names and is never exposed to either the untrusted content from the email or the tainted summary that came back from the Quarantined LLM.
  • The system should be cautious with chaining, where the output of one LLM prompt is piped into another. This is a dangerous vector for prompt injection.
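The Controller invariant described above can be sketched as a toy class. The two "LLMs" are stand-in callables (in a real system they would be API calls), and all names here are mine, not the author's:

```python
class Controller:
    """Keeps untrusted text away from the privileged LLM via $name handles."""

    def __init__(self, privileged_llm, quarantined_llm):
        self.privileged = privileged_llm
        self.quarantined = quarantined_llm
        self.vars = {}          # untrusted content lives only here

    def store(self, name, untrusted_text):
        self.vars[name] = untrusted_text
        return f"${name}"       # opaque handle for the privileged LLM

    def summarize_var(self, name):
        # The quarantined LLM touches the raw content; its (tainted)
        # output is stored as a new variable, never shown directly
        # to the privileged LLM.
        summary = self.quarantined(f"Summarize: {self.vars[name]}")
        return self.store(name + "_summary", summary)

    def plan(self, task, handles):
        # The privileged LLM reasons over variable names only.
        return self.privileged(f"{task} using {', '.join(handles)}")
```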
 

A nice, detailed and useful guide you can send to your friends who want to try this new AI thing.

 

A guy trains an LLM on the group chat messages he shares with his best friends, with predictable but nevertheless very funny results.
