Move options include: move closer, move away, fireball, megapunch, hurricane, and megafireball.
Megafireball, Megafireball, Megafireball, Megafireball, Megafireball, Megafireball...
News flash: fast-twitch games go to the players with the fastest twitch.
Yeah, it'd be more interesting to see this done with, for instance, an RTS; something where smarter decisions can beat out faster gameplay some percentage of the time. Obviously high APM is important in an RTS, but in this Street Fighter example, I'm pretty sure a 5-year-old who only knows how to spam Hadoukens would beat any of these LLMs. From what we're seeing here, it's not so much about how good their decision-making is, but just about which one executes the most moves that have a chance to connect.
LLMs don't make decisions or understand things at all; they just regurgitate text in a human-like manner.
I say this as someone who sees a lot of potential in the technology, though, just not used like this, or in most of the ways people are claiming we can use them.
What does a Large Language Model have to do with Street Fighter anyway? Random button presses might even score better.
As long as you can reduce something to a pattern, it will work with an LLM. That's what they're great at: matching and recognizing patterns.
You might still do better with random moves. Depends on a couple of things.
First, an LLM is only as good as its training data; it depends on whether that data contained enough good moves that would work against a random button pusher.
There's also the question of whether the random pusher is human or not. Humans are not great at generating random data; we tend to think in patterns, and there's also muscle memory. So I think the moves of a human random masher could easily fall into defendable patterns.
If the random masher is a computer, I think it comes down to how well the game is designed: whether it rewards combos, and whether longer patterns that build on each other have a large advantage over a series of completely random individual moves.
As someone who sucks at fighting games, no, not really. :D
Last but not least, the question arises whether this is a useful benchmark for LLMs, or just an interesting distraction. More complex games could provide more rewarding insights, but results would probably be more difficult to interpret.
I'd love to see LLMs rated by the time it takes them to beat the Ender Dragon.
Could be a fun category extension. LLM Dragon% RSG: using a fixed system such as an AWS g5.xlarge (for fairness of frame rate), players are allowed to use an LLM of their choice, with a consistent screen parser generating a string that describes the screen state and gets inserted into the LLM prompt, and the model then navigates the game from start to finish.
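Loosely, the run loop could look like the sketch below. Everything here is hypothetical: parse_screen, query_llm, and press_keys are placeholders for whatever screen parser, model client, and input injector a runner actually wires up; nothing about the real rules or tooling is implied.

```python
# Minimal sketch of the proposed loop: describe the screen as text,
# ask the LLM what to do, turn its answer into key presses, repeat.
import time

SYSTEM_PROMPT = (
    "You are playing Minecraft. Given a text description of the screen, "
    "reply with a short comma-separated list of key presses."
)

def parse_screen() -> str:
    """Hypothetical: grab a frame and return a text description of the game state."""
    raise NotImplementedError

def query_llm(system: str, user: str) -> str:
    """Hypothetical: send the prompt to the player's chosen LLM and return its reply."""
    raise NotImplementedError

def press_keys(actions: str) -> None:
    """Hypothetical: translate the LLM's reply into actual keyboard/mouse input."""
    raise NotImplementedError

def run(poll_seconds: float = 1.0) -> None:
    # Keep looping until the run ends (or the dragon does).
    while True:
        state = parse_screen()
        actions = query_llm(SYSTEM_PROMPT, state)
        press_keys(actions)
        time.sleep(poll_seconds)  # fixed polling rate so every entrant gets comparable pacing
```

The fixed instance plus a fixed polling rate is what would keep the "fastest twitch wins" problem from above out of the comparison; the only variable left would be the model's choices.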