this post was submitted on 15 Oct 2024
546 points (97.2% liked)
Fuck AI
1514 readers
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
founded 9 months ago
you are viewing a single comment's thread
view the rest of the comments
It's not really like a calculator, because it gives different answers each time. Newer models can give attribution (e.g. Bing Copilot).
My opinion is that LLMs are not going to go away. Testing needs to adapt to focus on the human element, just as marks are no longer lost for bad handwriting.
Just like when I was a kid using Wikipedia for research when that wasn't acceptable, the expectation should be that you use it to understand the material, then follow it to the source material and read that, or at least find a relevant quote that lets you restate what Wikipedia said in your own words, with attribution.
Copying Wikipedia and copying the output of an LLM are both similarly academically fraudulent. LLMs are just more likely to also be wrong.
Mostly agreed. I think the "in your own words" part will be debated strongly over the next few years. Will proof that you wrote your own prompt be sufficient?