this post was submitted on 21 Jul 2023
36 points (100.0% liked)

Technology

top 34 comments
[–] [email protected] 29 points 1 year ago (7 children)

I'm getting tired of repeating this, but language models are incapable of doing math. They generate text that has the appearance of a mathematical explanation, but they have no incentive or reason to be accurate.

[–] [email protected] 9 points 1 year ago (1 children)

Language models are incapable of doing math.

Just like how image models are incapable of doing language, which is why text in generated images ends up looking like nonsense words.

[–] [email protected] 1 points 1 year ago
[–] [email protected] 6 points 1 year ago

Yeah, but it wasn’t a math question. Bard “decided” to make it a math question anyway.

[–] [email protected] 6 points 1 year ago (2 children)

Bing Chat is doing quite well:

That’s an interesting question. Proxima Centauri b is a planet that orbits Proxima Centauri, the closest star to Earth. The planet orbits its parent star at a distance of roughly 4.6 million miles (7.5 million km) with an orbital period of approximately 11.2 Earth days. Proxima Centauri b is only four light years away from Earth.

A solar sail is a spacecraft that uses the pressure of photons from the sun or other light sources to propel itself. The maximum theoretical speed a solar sail could reach is around 420 kilometers per second, or about 1/4 of the speed of light. However, this would require a very large and thin sail, and a close approach to the sun. A more realistic speed for solar sails is 10% of the speed of light, or 67,100,000 mph.

Assuming a solar sail could reach 10% of the speed of light, it would take about 40 years to travel to Proxima Centauri b. However, this does not account for the time it would take to accelerate and decelerate the spacecraft, or the effects of gravity from other bodies in the solar system. A more accurate estimate would require a detailed trajectory analysis and simulation.

I even asked Bing to calculate the time dilation for the person on earth. It answered correctly with the formula and steps shown clearly.
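
(For anyone who wants to sanity-check an answer like that themselves, here is a minimal Python sketch of the same arithmetic: the constant-speed travel time and the special-relativity time dilation factor. The ~4.24 light-year distance and the example speeds are assumptions for illustration, not anything taken from the chatbot, and the sketch ignores acceleration, deceleration, and gravity.)

```python
import math

DISTANCE_LY = 4.24  # assumed Earth-to-Proxima Centauri distance in light years

def travel_years(fraction_of_c: float) -> float:
    """Constant-speed travel time in Earth years at the given fraction of c."""
    return DISTANCE_LY / fraction_of_c

def lorentz_factor(fraction_of_c: float) -> float:
    """Time dilation factor gamma = 1 / sqrt(1 - (v/c)^2)."""
    return 1.0 / math.sqrt(1.0 - fraction_of_c ** 2)

for beta in (0.10, 0.50):  # 10% of c (as in the quoted answer) and 50% of c
    earth_years = travel_years(beta)
    gamma = lorentz_factor(beta)
    ship_years = earth_years / gamma
    print(f"{beta:.0%} of c: ~{earth_years:.1f} years (Earth frame), "
          f"~{ship_years:.1f} years on board (gamma = {gamma:.3f})")
```

At 10% of c this works out to roughly 42 years one way, in the same ballpark as the "about 40 years" in the quoted answer, and the dilation factor at that speed is only about 1.005, so the onboard clock barely differs from Earth's.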

[–] [email protected] 15 points 1 year ago (2 children)

Then it was pulling its calculations directly from a web source, not generating them with a large language model. I'm not saying a chatbot can't do this; I'm saying language models can't do this.

[–] [email protected] 3 points 1 year ago (1 children)

Apparently Bing Chat is able to do some maths. I asked Bing multiple variations of this question based on different speeds of the solar sail (e.g. what if it travels at 50% the speed of light). It was able to calculate both the travel time and the time dilation.

If it is only pulling the answer from web sources, how did it handle the variable speeds?

[–] [email protected] 13 points 1 year ago

It's also possible that Bing's chatbot is using a math-specific plugin in addition to its websearching plugin.

[–] [email protected] 3 points 1 year ago

Your failure in reasoning here is assuming that all of them are purely and only language models -- that they have no source of learning beyond language modeling and, for example, aren't fed any kind of pop-science math.

It's clear that this is true of models like ChatGPT, but isn't the Bing thing powered by GPT-4 with a number of other enhancements? Fixing this "can't do math" problem is low-hanging fruit for development.

[–] [email protected] 3 points 1 year ago

They announced plans to build in plugins, like one for WolframAlpha, so maybe that's where it's pulling this data from.

[–] [email protected] 6 points 1 year ago (2 children)

Hikaru Nakamura tried to play ChatGPT in a game of chess, and it started making illegal moves after 10 moves. When he tried to correct it, it apologized, gave the wrong reason for why the move was illegal, and then followed up with another illegal move. That's when I knew that LLMs were just fragile toys.

[–] [email protected] 4 points 1 year ago (2 children)

It is after all a Large LANGUAGE Model. There's no real reason to expect it to play chess.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

There's no real reason to expect it to play chess.

There is. All the general media is calling these LLMs AI, and AIs have been playing chess and winning for decades.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Yeah for that we'd need a Gigantic LANGUAGE Model.

[–] [email protected] 1 points 1 year ago

You can get ChatGPT to play a much better game by telling it to display a representation of the current state of the board after each move, that way it doesn't lose track of where the pieces are as easily. But still, given that these LLMs were never specifically trained to play chess in the first place it's amazing how well they do. It'd be like if I set up a chessboard for my dog and 90% of the moves she made were actually valid, rather than simply knocking over all the pieces and then wagging her tail in expectation of the treat she thinks she's earned by doing so.

[–] [email protected] 4 points 1 year ago (1 children)

The issue here isn't the math, it's the claim that the solar sail will be traveling at 100 times the speed of light.

[–] [email protected] 1 points 1 year ago

But 100 times IS math?

[–] [email protected] 2 points 1 year ago (1 children)

I call them "word calculators." They take input and generate output, but, like a calculator, if you don't know how to check the output you're gonna have a bad time.

[–] [email protected] 2 points 1 year ago (1 children)

You fact check your calculator? Like with another calculator, or do you do it by hand?

[–] [email protected] 2 points 1 year ago

I at least have an idea by estimating the result. If I do 10 x 20 and end up with 13.397897 I know something’s wrong.

[–] [email protected] 1 points 1 year ago

I’m getting tired of repeating

Maybe you should have a language model repeat it for you. :)

[–] [email protected] 12 points 1 year ago (2 children)

Bard is by far one of the worst AI language models, especially if you're trying to use it as a replacement for Google Search. It will just make shit up over and over and over again.

[–] [email protected] 8 points 1 year ago (1 children)

I went to Bard hoping it might help me do some actual link-finding.

It pretty much always hallucinates articles. Ask it to find local news stories about some particular kind of thing or research papers -- it will find 4-8 of them and they will all be made the fuck up.

Bard's the worst of the lot.

[–] [email protected] 3 points 1 year ago

I recently remembered a song I enjoyed in high school but couldn't remember some lyrics. It was an old punk song - not super popular but also had a video on MTV and was a big hit amongst fans of the genre.

I asked it for the lyrics of the song and it said "sure here are the lyrics for the song by the band" and it literally made the whole thing up. I asked 5 more times for accurate lyrics and it just kept apologizing for making them up and promising the next one would be right.

My wife was also watching old episodes of Shark Tank the other night and asked me to find out if a product was successful after no deals were made. I asked Bard, and it told me about how two investors fought over the product, a successful deal was made, and the company did 2 mil in profits in 2022. Knowing that they did not make a deal, I just did a regular Bing search and learned the company actually went bankrupt before that episode even aired in 2012.

Bard is literal horse shit and if people do not check their facts after engaging with it they will be fucked. It is 100% confident in the information it fabricates.

[–] [email protected] 3 points 1 year ago

It also does not link to where it found the information, and if you ask it for sources it tells you nope.

[–] [email protected] 7 points 1 year ago (2 children)

Whenever I see this I have to chuckle. Humans do this all the time: whip stuff up as they go, lie, pretend. The AI can only do what it has learned from us people. So it lies, makes stuff up, and pretends. Why is this so surprising?

[–] [email protected] 5 points 1 year ago (1 children)

I prefer the term "confabulation" to "lying", both because it's more accurate and because it's more fun to say. Confabulation is when you don't know that you're lying; it's just your dumb brain coming up with stuff that turns out not to be real. Like if you're asked "are there any red cars parked on the street in the neighborhood where you live?", your brain hears "I want a memory of a red car parked on the street" and helpfully delivers exactly that.

[–] [email protected] 0 points 1 year ago* (last edited 1 year ago) (1 children)

I confabulate way too much. Hear something, remember thing; but thing I remember isn't from thing I remember it being from. Now I am "spreading misinformation." No... I just suffer from the dumb. 😩

[–] [email protected] 1 points 1 year ago

It turns out that it's super easy to provoke the human brain to generate false memories about stuff. I've read about some of the research that Elizabeth Loftus has done and it's eerie.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

I think it's just humorous. An AI chat model has no capacity to understand the subject matter; its job is simply to regurgitate its findings on request. Naturally, it's a bad liar.

[–] [email protected] 3 points 1 year ago (1 children)

https://en.wikipedia.org/wiki/Breakthrough_Starshot

A flyby mission has been proposed to Proxima Centauri b, an Earth-sized exoplanet in the habitable zone of its host star, Proxima Centauri, in the Alpha Centauri system. At a speed between 15% and 20% of the speed of light, it would take between 20 and 30 years to complete the journey, and approximately 4 years for a return message from the starship to Earth.

[–] [email protected] 3 points 1 year ago

Breakthrough Starshot ... was founded in 2016 by Yuri Milner, Stephen Hawking, and Mark Zuckerberg.

I should have expected this, but I didn't. He needs to step up his game compared to the other two I guess.

[–] [email protected] 1 points 1 year ago

How do you use light to go faster than light? 🤨

[–] [email protected] 1 points 1 year ago

There was probably some science fiction in the training data.
