Doubt
Believable because:
However, the system is highly specialized for scientific journal articles. When presented with real articles from university newspapers, it failed to recognize them as being written by humans.
So outside of its purview? Agree.
As I understand it, one of the ways AI models are commonly trained is basically to run them against a detector and train against it until they can reliably defeat it. Even if this was a great detector, all it’ll really serve to do is teach the next model to beat it.
That’s how GANs are trained, and I haven’t seen anything about GPT4 (or DALL-E) being trained this way. It seems like current generative AI research is moving away from GANs.
I know it’s intrinsic to GANs but I think I had read that this was a flaw in the entire “detector” approach to LLMs as well. I can’t remember the source unfortunately.
Also one very important aspect of this is that it must be possible to backpropagate the discriminator. If you just have access to inference on a detector of some kind but not the model weights and architecture itself, you won't be able to perform backpropagation and therefore can't generate gradients to update your generator's weights.
That said, yes, GANs have somewhat fallen out of favor due to their relatively poor sample diversity compared to diffusion models.
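To make the backpropagation point concrete, here's a toy 1-D GAN in plain NumPy (the whole setup is illustrative, not from any real detector): the generator's gradient formulas contain the discriminator's weight `w`, which is exactly what inference-only access to a detector doesn't give you.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setup: real data ~ N(3, 0.5), generator g(z) = a*z + b,
# discriminator d(x) = sigmoid(w*x + c). Gradient ascent on the
# usual GAN objectives.
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.05

for _ in range(2000):
    z = rng.standard_normal(64)
    x_real = 3.0 + 0.5 * rng.standard_normal(64)
    x_fake = a * z + b

    # Discriminator step: maximize log d(real) + log(1 - d(fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: maximize log d(fake). The chain rule runs
    # *through* the discriminator: note that w appears in both
    # gradients below. With inference-only access to the detector
    # (no weights), these gradients simply can't be computed.
    d_fake = sigmoid(w * x_fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, the generator's output mean (roughly b) has been
# pulled toward the real data's mean of 3.
```

If the detector is a black box you can only query, you're reduced to gradient-free or surrogate-model attacks, which is a much weaker training signal.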
No references whatsoever to false positive rates, which I'd assume are quite high. Also, they note that they built this detector specifically to catch chemistry-related AI-generated articles.
If you call heads 100% of the time, you'll be 100% accurate on predicting heads in a coin toss.
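To put hypothetical numbers on that (the split below is made up for illustration): a detector that flags everything as AI-written looks 90% accurate on a test set that is 90% AI-written, while wrongly flagging every single human text.

```python
# Made-up test set: 900 AI-written texts, 100 human-written.
n_ai, n_human = 900, 100

# A useless "detector" that flags every text as AI-written:
true_positives = n_ai       # all AI texts correctly flagged
false_positives = n_human   # all human texts wrongly flagged

accuracy = true_positives / (n_ai + n_human)
false_positive_rate = false_positives / n_human

print(f"accuracy={accuracy:.0%}, false positive rate={false_positive_rate:.0%}")
# accuracy=90%, false positive rate=100%
```

That's why an accuracy headline without a false positive rate tells you almost nothing.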
I really, really doubt this. OpenAI said recently that AI detectors are pretty much impossible. And in the article they literally use the wrong name to refer to a different AI detector.
Especially since you can change ChatGPT's style just by asking it to write in a more casual way, "stylometrics" seems like an improbable method for detecting AI as well.
It's in OpenAI's best interest to say detectors are impossible. Completely regardless of whether that's true, they are the least trustworthy possible source to take into account when forming your understanding of this.
OpenAI had their own AI detector, so I don't really think it's in their best interest to say that an effective detector is impossible.
Willing to bet it also catches non-AI text and calls it AI-generated constantly
The best part is that if AI does a good job of summarizing, then anyone who is good at summarizing will look like AI. If AI news articles look like a human wrote them, then a human-written news article will look like AI.
The original paper does have some figures about misclassified paragraphs of human-written text, which would seem to mean false positives. The numbers are higher than for misclassified paragraphs of AI-written text.
This is kind of silly.
We will 100% be using AI to generate papers now and in the future. If the AI can catch any wrong conclusions or misleading interpretations, that would be helpful.
Not using AI to help you write at this point is wasting valuable time.
I do a lot of writing of various kinds, and I could not disagree more strongly. Writing is a part of thinking. Thoughts are fuzzy, interconnected, nebulous things, impossible to communicate in their entirety. When you write, the real labor is converting that murky thought-stuff into something precise. It's not uncommon in writing to have an idea all at once that takes many hours and thousands of words to communicate. How is an LLM supposed to help you with that? The LLM doesn't know what's in your head; using it is diluting your thought with statistically generated bullshit. If what you're trying to communicate can withstand being diluted like that without losing value, then whatever it is probably isn't meaningfully worth reading. If you use LLMs to help you write stuff, you are wasting everyone else's time.
Yeah, I agree. You can see this in all AI generated stuff - none of it has any purpose, no intention.
People who say it's saving them time, I mean I have to ask what these people are doing that can be replaced by AI and whether they're actually any good at it, and whether the AI has improved their work or just made it happen faster at the expense of quality.
I have turned off all predictive writing of any kind on my devices, it gets in my head and stops me from forming my own thoughts. I want my authentic voice and I can't stand the idea of a machine prompting me with its own idea of what I want to say.
Like... we're prompting the AI, but are they really prompting us?
Amen. In fact, I wrote a whole thing about exactly this -- without an LLM! Like most things I write, it took me many hours and evolved many times, but I take pleasure in communicating something to the reader, in the same way that I take pleasure in learning interesting things reading other people's writing.
Didn't OpenAI themselves state some time ago that it isn't possible to detect it?
I don't understand. Are there places where using chatGPT for papers is illegal?
The state where I live explicitly allows it. Only plagiarism is prohibited. But making chatGPT formulate the result of your scientific work, or correct the grammar or improve the style, etc. doesn't bother anybody.
If you use ChatGPT you should still read over the output, because it can say something wrong about your results, and you should run a plagiarism tool on it, because it could unintentionally plagiarize. So what's the big deal?
It's not a big deal. People are just upset that kids have more tools/resources than they did. They would prefer kids wrote on paper with pencil and did not use calculators or any other tool that they would have available to them in the workforce.
There's a difference between using ChatGPT to help you write a paper and having ChatGPT write the paper for you. One invokes plagiarism which schools/universities are strongly against.
The problem is being able to differentiate between a paper that's been written by a human (which may or may not be written with ChatGPT's assistance) and a paper entirely written by ChatGPT and presented as a student's own work.
I want to strongly stress that the latter situation is plagiarism. The argument doesn't even involve the plagiarism that ChatGPT itself does. The definition of plagiarism is simple: ChatGPT wrote a paper, you the student did not, and you are presenting ChatGPT's paper as your own; ergo, plagiarism.
Teachers when I was little: "You won't always have a calculator with you." And here I am with a device in my pocket 24/7 that's more powerful than what sent astronauts to the moon.
1% battery intensifies
I don’t think people are arguing against minor corrections, just wholesale plagiarism via AI. The big deal is wholesale plagiarism via AI. Your argument is as reasonable as it is adjacent to the issue, which is to say completely.
Why should someone bother to read something if you couldn’t be bothered to write it in the first place? And how can they judge the quality of your writing if it’s not your writing?
Science isn't about writing. It's about finding new data through the scientific process and communicating it to other humans.
If a tool helps you do any of it better, faster or more efficiently, that tool should be used.
But I agree with your sentiment when it comes to for example creative writing.
Science is also creative writing. We do research and write the results, in something that is an original product. Something new is created; it's creative.
An LLM is just reiterative. A researcher might feel like they're producing something, but they're really just reiterating. Even if the product is better than what they would have produced themselves, it is still less valuable, as it is not original and will not make a contribution that hasn't been made already.
And for a lot of researchers, the writing and the thinking blend into each other. Outsource the writing, and you're crippling the thinking.
If you use ChatGPT you should still read over the output, because it can say something wrong about your results, and you should run a plagiarism tool on it, because it could unintentionally plagiarize. So what's the big deal?
There isn't one. Not that I can see.
At least within a higher-level education environment, the problem is who does the critical thinking. If you just offload a complex question to ChatGPT and submit the result, you don't learn anything. One of the purposes of paper-based exercises is to get students thinking about topics and understanding concepts to apply them to other areas.
You are considering it from a student's perspective. I'm considering it from a writing and communication/publishing perspective. I'm a scientist, I think a decent one, but I'm only a proficient writer and I don't want to be a good one. It's just not where I want to put my professional focus. However, you cannot advance as a scientist without being a 'good' writer (and I don't just mean proficient). I get to offload all kinds of shit to ChatGPT. I'm even working on some stuff where I can dump in a folder of papers and have it go through and statistically review all of them to give me a good idea of what the landscape I'm working in looks like.
Things are changing ridiculously fast. But if you are still relying on writing as your pedagogy, you're leaving a generation of students behind. They will not be able to keep up with people who directly incorporate AI into their workflows.
I'm gonna need something more than that to believe it.
The article is reporting on a published journal article. Surely that’s a good start?
I haven't read the article myself, but it's worth noting that in CS as a whole and especially ML/CV/NLP, selective conferences are generally seen as the gold standard for publications compared to journals. The top conferences include NeurIPS, ICLR, ICML, CVPR for CV and EMNLP for NLP.
It looks like the journal in question is a physical sciences journal as well, though I haven't looked much into it.
I say we develop a Voight-Kampff test as soon as possible for detecting if we're speaking to an AI or an actual human being when chatting or calling a customer representative of a company.
Edit: I made a mistake.
if we're speaking to a real person or an actual human being
Ummm ...
Hahahaha OMG. I fixed it. Thanks!
Isn't this like a constant fight between the people who develop anti-AI-content tools and the internet pirates who develop anti-anti-AI-content tools? Pretty sure the pirates always win.
You sully the good name of Internet Pirates, sir or madam. I'll have you know that online pirates have a code of conduct and there is no value in promulgating an anti-ai or anti-anti-ai stance within the community which merely wishes information to be free (as in beer) and readily accessible in all forms and all places.
You are correct that the pirates will always win, but they(we) have no beef with ai as a content generation source. ;-)
They still can't catch AI-written content on websites like 'https://themixnews.com/' https://themixnews.com/cj-amos-height-age-brother/