this post was submitted on 15 Apr 2024
487 points (100.0% liked)

Technology

37737 readers
360 users here now

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

Subcommunities on Beehaw:


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 2 years ago
MODERATORS
 

Credit to @bontchev

you are viewing a single comment's thread
view the rest of the comments
[–] nous 12 points 7 months ago (1 children)

How do we know these are the AI chatbots instructions and not just instructions it made up? They make things up all the time, why do we trust it in this instance?

[–] [email protected] 33 points 7 months ago (2 children)

Multiple prompts lead to the same response. No variance.

[–] [email protected] 5 points 7 months ago

I tried it a few days ago and got some variance ... but it was still exactly the same essential instructions, just a first person summary rather than the second person verbatim

[–] [email protected] 1 points 7 months ago

The cool thing with models who can do this is that you can kinda talk to the LLM behind whatever it is supposed to represent & change things dynamically (with respect to its context size of course). Not all models can do that unfortunately.