this post was submitted on 14 Mar 2024
278 points (98.6% liked)

Science Memes


Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



top 34 comments
[–] [email protected] 81 points 8 months ago* (last edited 8 months ago) (3 children)

Took me a second.

But man, I don't write academic papers anymore, but I do have to write a lot of reports and such for my work, and I've tried using different LLMs to help. Almost always the biggest help is just making me go, "Man, this sucks, it should be more like this," and then I proceed to just write the whole thing myself, with the slight advantage of knowing what a badly written version looks like.

[–] [email protected] 15 points 8 months ago

That's basically classifier-free guidance for LLMs! It takes an additional prompt that says "not this. Don't do this. In fact, never come near this shit in general. Ew." and pushes the output closer to the original prompt by using the "not this" as a reference to avoid.
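To make that concrete, here's a toy sketch of the idea (function names and logit values are made up for illustration, not any particular library's API). A real implementation runs the same model twice per generated token, once with the prompt you want and once with the "not this" prompt, then blends the two sets of next-token logits:

```python
# Toy sketch of classifier-free guidance / negative prompting for text generation.
# Assumes you already have next-token logits from two forward passes of the same model:
# one conditioned on the prompt you want, one on the "never do this" prompt.
import numpy as np

def guided_logits(cond_logits, neg_logits, scale=1.5):
    # scale = 1.0 ignores the negative prompt; scale > 1.0 pushes the
    # distribution toward the wanted prompt and away from the negative one.
    return neg_logits + scale * (cond_logits - neg_logits)

def sample_next_token(cond_logits, neg_logits, scale=1.5, temperature=1.0, rng=None):
    rng = rng or np.random.default_rng()
    logits = guided_logits(np.asarray(cond_logits), np.asarray(neg_logits), scale) / temperature
    probs = np.exp(logits - logits.max())   # softmax, shifted for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Toy 5-token vocabulary: token 1 is what the negative prompt would steer toward.
cond = [2.0, 0.5, 0.1, -1.0, 0.0]
neg = [0.1, 2.5, 0.1, -1.0, 0.0]
print(sample_next_token(cond, neg, scale=2.0))
```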

[–] [email protected] 5 points 8 months ago
[–] [email protected] 3 points 8 months ago (1 children)

My favorite was when it kept summarizing too much. I then told it to include all of my points, which it mostly just ignored. I finally figured out it was keeping itself under its own cap of 5000 words per response.

[–] [email protected] 4 points 8 months ago

I've had the reverse issue, where I wanted to input a large amount of text for ChatGPT to work with. I tried a workaround where part of my prompt was that I was going to give it more information in parts, but no matter how I phrased things it would always start working with whatever I gave it in the first prompt, so I just gave up and did it myself.
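For what it's worth, that kind of "feed it in parts" workaround is easier through the API than the chat box, since you control the whole message list and can hold back the actual request until everything is loaded. A rough sketch, assuming the official `openai` Python package; the model name, the acknowledgment trick, and the function itself are just my illustration, not what was actually tried:

```python
# Hypothetical sketch: load a long document in chunks, only ask for work at the end.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_in_parts(chunks, model="gpt-4o-mini"):
    messages = [{
        "role": "system",
        "content": "You will receive a document in several parts. Do not start "
                   "working until explicitly asked; just acknowledge each part.",
    }]
    # Put every chunk into the conversation before asking for anything.
    for i, chunk in enumerate(chunks, start=1):
        messages.append({"role": "user", "content": f"Part {i}:\n{chunk}"})
        messages.append({"role": "assistant", "content": f"Received part {i}."})
    messages.append({"role": "user",
                     "content": "That's everything. Now summarize the full document."})
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content
```

Whether the model actually waits is another matter, which is presumably why the chat-box attempt kept jumping the gun.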

[–] [email protected] 61 points 8 months ago (2 children)

Of course, we all know that Elsevier gets paid so much for their really thorough quality-assurance process.

[–] [email protected] 30 points 8 months ago

The more astonishing part is that the journal's impact factor is above 6. I'm going to assume it's a publishing ring.

[–] [email protected] 5 points 8 months ago

They didn't even flinch at the mention of the bloody cum of ANFs. It's obviously a joke material, like naughtius maxisilicium or biggus dicksprosium.

[–] [email protected] 52 points 8 months ago

$2360 Article publishing charge for open access

4 days Time to first decision

79 days Review time

91 days Submission to acceptance

https://www.sciencedirect.com/journal/surfaces-and-interfaces

[–] [email protected] 45 points 8 months ago (3 children)

Sooo... Did the reviewers read this paper?

[–] [email protected] 14 points 8 months ago
[–] [email protected] 9 points 8 months ago (1 children)

They had ChatGPT review it

[–] [email protected] 1 points 8 months ago
[–] [email protected] 4 points 8 months ago

Certainly, here is a list of potential reviewers

[–] onlinepersona 44 points 8 months ago

Wow, this is actually published. I thought it was a joke. Journals like these should not be respected. What a joke

CC BY-NC-SA 4.0

[–] [email protected] 40 points 8 months ago* (last edited 8 months ago) (1 children)

On the authors' side: please, PLEASE do not use any AI tools when writing your articles.

It's actually very easy to get into Q3-Q4 journals with absolute crap, so let's just respect each other - not to mention keep your reputation :)

I know it's tedious and I don't like sitting at 4am writing articles, either, but yeah - it's important :D

That's not to say journals shouldn't do a better job.

[–] [email protected] 28 points 8 months ago (2 children)

If you use it just to get started, but actually read it and have the expertise to fix mistakes and make it relevant, it's probably fine. Not necessarily because it's faster, but because some people just suck at getting started, and nonsense to correct is an easier starting point than turning blank whitespace into something.

[–] [email protected] 15 points 8 months ago (1 children)

You don't. AI will lead you astray.

Reading it and paraphrasing is OK if you get stuck. But if you use it before thinking, you won't get to the thinking, and you'll write a piece of shit.

[–] [email protected] 1 points 8 months ago (1 children)

It's just an intro. Who cares if it's shit? You just need words there that sound like the author wrote them, because it's expected, in case someone accidentally reads it instead of skipping it as usual.

[–] [email protected] 7 points 8 months ago (1 children)

There'll always be someone new to the field who actually does have to read the intro. I read stuff outside of my field all the time, and I rely on the intro so I don't have to go find a review just to broadly understand a given paper.

[–] [email protected] -1 points 8 months ago* (last edited 8 months ago) (1 children)

Can't say the intro has ever been particularly useful, even when new to the field. If the methods section isn't detailed enough to understand the methods, you're going to have to look elsewhere, and the intro isn't going to have that information. If you want a general summary of the field, a dedicated review is far, far better than most scientists trying to fill space before getting to the science.

[–] [email protected] 2 points 8 months ago

When a paper is far enough outside my field, I'm not going to be knowledgeable enough to critique the methods. I'm not "new to the field" in the sense that I'm starting research in that area; I just thought the title was interesting/cool and want to know a little bit more about the specifics. I don't care about the field enough to study it (if I did, I'd look for a review). So I'm not trying to understand the field, just the paper (broadly). Why is the thing they study important? How did they (supposedly) come to their hypothesis? Just how badly is a news report overreaching what the source states? Etc.

[–] [email protected] 2 points 8 months ago

Yeah, fair enough. Someone here in the thread already said they use LLMs just to outline what to write and how, and then start something along those lines from scratch.

[–] [email protected] 34 points 8 months ago

"Why yes, I proof read very well. why do you ask?

[–] [email protected] 15 points 8 months ago

How is this published oml

[–] [email protected] 14 points 8 months ago (1 children)

Insert name here: John E. Doe

I recall hearing of at least two bills that were passed like this... and weren't even filled in yet, yeesh :-(.

Someone should really try to poison the well here and put in a line that says: insert social security number and a valid credit card number here... Except, like the above, people probably wouldn't even read that much, yeesh :-(.

Security through ~~obfuscation~~ stupidity! :-) - it can be adaptive under just the right circumstances!:-)

[–] [email protected] 5 points 8 months ago* (last edited 8 months ago) (1 children)

If we're talking about bills, something like "the assholes that don't want to feed kids agree to fund kids" and stuff.

(pretty sure they call them riders.)

[–] [email protected] 5 points 8 months ago (1 children)

This goes beyond riders. "Bought" politicians are SO bought that when lobbyists ask politicians to do stuff, they do it unquestioningly. And I mean: THE WHOLE BILL - not just one sentence within it.

But, you may ask, aren't they also incredibly lazy too? And the answer is yes! So the lobbyists have to do all the work to write out the bills... and then the congressperson simply signs it, easy peasy. "I, insert name here, from state, insert state name here, do solemnly swear that..." - AND I AM NOT EVEN KIDDING, the bill was passed while STILL saying both "insert name here" and also "insert state name here"!!!!!!

So while I am shocked and sickened afresh to hear of plagiarism within academic circles, which I had hoped would be one of the last hold-outs, literal beacons and bastions of Freedom and Truth and all that rizz, politics was the opposite of that and has allowed plagiarism for a LONG time.

[–] [email protected] 0 points 8 months ago (1 children)

For the record, LLMs cannot hold a copyright, and material produced by them has no copyright.

Using them to generate summaries or introductions isn't plagiarism, though the lack of copyright is probably significant to the organization.

[–] [email protected] 1 points 8 months ago (1 children)

I am not seeing where copyrights came into this discussion, but fwiw the bills I mentioned were passed many years ago, before any LLMs existed.

I don't think congressional bills even need to be copyrighted.

Academic papers don't need to be either; plagiarism still exists, but it has nothing to do with copyright.

Summaries are fine for, like, a Google search, but in a scientific paper, using others' words without proper attribution is enough not only to lose a job but to have one's degree revoked, even decades after it was awarded.

[–] [email protected] 1 points 8 months ago (1 children)

Simply using an LLM to condense an introduction from whatever data you feed it isn't plagiarism. Now, using unsourced material definitely is.

As for academic work or whatever else: all works are automatically copyright-protected when they're created. This even includes that horrible crayon drawing you made in kindergarten of your family and dog.

Material generated by LLMs is an exception and is automatically in the public domain.

[–] [email protected] 1 points 8 months ago

> Simply using an LLM to condense an introduction from whatever data you feed it isn't plagiarism.

Agreed. Though, as you're saying, it's what you DO with it afterward that may make it plagiarism. A student using an LLM to personally learn? Not plagiarism. A student turning in that summary as evidence that they "understand" the subject matter, especially without bothering to read it first? Now that is plagiarism! :-P

LLMs are tools like any other. Using a gun to kill someone? Well... is it self-defense? Then not murder. Are you a court-appointed executioner, in a state that offers the death penalty? Then not murder. Was it an accident? Then not... exactly murder. B/c you are Russia/Israel and you want the land next to you? Somehow also not "murder", depending on who you ask, but c'mon... really?!

Tools, by lowering the barrier to performing a task beyond what can be done naturally and unaided, mostly just enact the will of the user, though they also somewhat "tempt" the user to do things they might not otherwise have been able to do - e.g., to murder, or to plagiarize.

But congresspeople did not need LLMs to pass bills written by lobbyists - the only thing changing there is how easy the latter process is, though to the congressperson it's the same level of ease as before: zero effort required :-P.

[–] [email protected] 5 points 8 months ago (1 children)

I wrote my thesis with bullshit and hope before AI was a thing. Can't modern kids do the same?

[–] [email protected] 4 points 8 months ago* (last edited 8 months ago)

My master's was fueled by Starbucks, and my Ph.D. is fueled by spite. Don't get me wrong; I'm not against using LLMs for help, especially for ESL writers. They're a fantastic tool for developing your thoughts when used ethically. I've put GPT placeholders in my drafts for framing, and they eventually become something completely different in the end product. This is an issue with peer review and publishing monopolies, aka late-stage capitalism. This draft was clearly not peer-reviewed and is a likely consequence of publish or perish.