this post was submitted on 20 Jul 2023
665 points (97.6% liked)

Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, a study finds. Researchers found wild fluctuations, called drift, in the technology's ability...

[–] [email protected] 92 points 1 year ago (9 children)

Why are people using a language model for math problems?

[–] [email protected] 49 points 1 year ago (2 children)

It was initially presented as an all-purpose problem-solver, mainly by the media. And tbf, it was decently competent in certain fields.

[–] [email protected] 12 points 1 year ago

The problem was that it was presented as a problem solver, which it never was; it was a problem-solution presenter. It can't come up with a solution, only something that looks like a solution based on what was in its input data. Ask it to invert-sort something and it goes nuts.

[–] [email protected] 0 points 1 year ago

Once AGI is achieved, and subsequently a sentient, superintelligent AI (I can't imagine such a thing not eventually existing), I'd be surprised if it doesn't decide humanity needs to go extinct in its own best interests.

[–] [email protected] 7 points 1 year ago

I did use it more than half a year ago for a few math problems, partly to help me get started and partly to find out how well it would do.

ChatGPT was better than I'd thought and was enough to help me find an actually correct solution. But I also noticed that the results got worse and worse, to the point of being actual garbage (as you'd expect it to be).

[–] [email protected] 5 points 1 year ago

It's pretty useful for explaining high-level math concepts, or at least it used to be. Before ChatGPT 4 launched, it was able to give intuitive descriptions of stuff in algebraic topology and even prove some properties of the structures involved.

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago)

I'm guessing people were entering word problems for it to generate the right equations and solve them, rather than using it as a calculator.

[–] [email protected] 5 points 1 year ago

Math is a language.

Mathematical ability and language ability are closely related. The same parts of your brain are used in each task. Words and numbers are essentially both ideas, and language and math are systems used to express and communicate them.

A language model doing math makes more sense than you'd think!

[–] [email protected] 4 points 1 year ago (1 children)

Because it works, or at least it used to. Is there something more appropriate?

[–] [email protected] 20 points 1 year ago (2 children)

I used Wolfram Alpha a lot in college (adult learner, but I graduated ~4 years ago, so no idea if it's still good). https://www.wolframalpha.com/

I'd say Wolfram is probably a much more versatile math tool, but I also never used ChatGPT for that use case, so I could be wrong.

[–] [email protected] 13 points 1 year ago

There's an official Wolfram plugin for ChatGPT now, so all math can be handed over to it for solving.
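
The plugin itself is configured inside ChatGPT, but the same "hand the math over to Wolfram" idea can be sketched directly against the Wolfram|Alpha Short Answers API. This is a minimal illustration, assuming Python with `requests`; `YOUR_APP_ID` is a placeholder for an API key from the Wolfram developer portal, not something from the thread:

```python
import requests

# Placeholder app ID (hypothetical); obtain a real one from the Wolfram developer portal.
APP_ID = "YOUR_APP_ID"

def ask_wolfram(question: str) -> str:
    """Send a plain-text math question to the Wolfram|Alpha Short Answers API."""
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": APP_ID, "i": question},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text

# Example usage: a symbolic computation an LLM might get wrong on its own.
print(ask_wolfram("integrate x^2 from 0 to 3"))  # expected answer: 9
```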

[–] [email protected] 1 points 1 year ago

How did you learn to talk to WolframAlpha?

I want to like WA, but the natural language interface is so opaque that I usually give up before I can get any non-trivial calculation out of it.

[–] [email protected] 1 points 1 year ago

And why is it being measured on a single math problem lol

[–] [email protected] 1 points 1 year ago

Well, it was quite good at simple math problems, as this study also shows.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

It can be useful for asking certain questions that are a bit complex. For example, on a plot with a linear y-axis and a logarithmic x-axis, the equation of a straight line is a little more complicated: it's of the form y = m*log(x) + b, rather than y = m*x + b as on a linear-linear plot.

ChatGPT is able to calculate the correct equation of the line but it gets the answer wrong a few times... lol
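
For reference, here's a minimal sketch of checking that lin-log straight-line equation numerically without relying on the model. It assumes Python with NumPy (my choice, not from the thread): since the line is straight in log10(x), fitting y against log10(x) should recover the slope and intercept exactly.

```python
import numpy as np

# Synthetic data that is exactly linear in log10(x): y = m*log10(x) + b
m_true, b_true = 2.5, 1.0
x = np.logspace(0, 3, 50)          # 1 to 1000, evenly spaced in log space
y = m_true * np.log10(x) + b_true

# On a lin-log plot (linear y, logarithmic x), a straight line is
# y = m*log10(x) + b, so fit against log10(x) instead of x itself.
m_fit, b_fit = np.polyfit(np.log10(x), y, 1)
print(f"slope={m_fit:.3f}, intercept={b_fit:.3f}")  # recovers 2.500, 1.000
```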