[–] [email protected] 1 points 11 months ago (1 children)

It's all about the models and training, though. People thinking ChatGPT 3.5/4 can write their legal papers get tripped up because it confabulates ('hallucinates') when it isn't thoroughly trained on a subject. If you fed every legal case for the past 150 years into a model, it would be very effective.
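For a sense of what "feeding cases into a model" would actually involve, here is a minimal, hypothetical sketch of domain fine-tuning with the Hugging Face transformers library (the corpus file legal_cases.txt is made up, and GPT-2 just stands in for a serious base model; a real legal model would need far more data, compute, and evaluation):

```python
# Hypothetical sketch: fine-tune a small causal LM on a legal-case corpus.
# "legal_cases.txt" is a placeholder; GPT-2 is only an illustrative base model.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One plain-text record per line: case summaries, opinions, etc.
dataset = load_dataset("text", data_files={"train": "legal_cases.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="legal-lm", num_train_epochs=1),
    train_dataset=tokenized["train"],
    # mlm=False -> ordinary next-token (causal) language modeling
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```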

[–] [email protected] 3 points 11 months ago (1 children)

We don't know it would be effective.

It would write legalese well and recall important cases, but we don't know that more data equates to competence at the task.

As an example, ChatGPT 4 can't alphabetize an arbitrary string of text:

Prompt: Alphabetize the word antidisestablishmentarianism

Response: The word "antidisestablishmentarianism" alphabetized is: "aaaaabdeehiiilmnnsstt"

It doesn't understand the task, and it structurally can't: the model operates on multi-character tokens, not individual letters, so no amount of training can make it perform this task reliably with the current LLM architecture.
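For contrast, the task is trivial for ordinary code; a couple of lines of Python (a deterministic sketch, no model involved) produce the correct answer:

```python
# Deterministic alphabetization -- character-level, so it just works.
word = "antidisestablishmentarianism"
print("".join(sorted(word)))  # aaaabdeehiiiiilmmnnnrssssttt
```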

We can't assume it has real intelligence, that every task can be performed or internally represented, or that more data necessarily means better results.

[–] [email protected] 1 points 11 months ago (1 children)

That’s a matter of working on the prompt interpreter.
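One concrete version of that idea is tool use: instead of asking the model to sort letters token by token, the "interpreter" lets it delegate character-level work to real code. A hypothetical sketch with the OpenAI Python SDK's function-calling interface (the alphabetize tool schema is made up for illustration):

```python
# Hypothetical sketch: route character-level work to a tool, not the model.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "alphabetize",  # made-up tool for this example
        "description": "Sort the letters of a word alphabetically.",
        "parameters": {
            "type": "object",
            "properties": {"word": {"type": "string"}},
            "required": ["word"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Alphabetize the word antidisestablishmentarianism"}],
    tools=tools,
)

# Assumes the model chose to emit a tool call; plain Python does the sorting.
call = resp.choices[0].message.tool_calls[0]
word = json.loads(call.function.arguments)["word"]
print("".join(sorted(word)))
```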

As for what I was saying, there’s no assumption: models trained on more data, and more specific data, can definitely do the usual information-summary tasks more accurately. This is already being used to create specialized models for legal, programming, and accounting work.

[–] [email protected] 1 points 11 months ago (1 children)

You're right about information summary, and the models are getting better at that.

I guess my point is just to be careful. We assume a lot about AI's abilities, and it's objectively very impressive, but some fundamental things will stay hard or impossible for it until we discover new architectures.

[–] [email protected] 3 points 11 months ago

I agree that while it’s powerful and the capabilities are novel, it’s more limited than many think. Some people believe current “AI” systems/models can do just about anything, like write legal briefs or entire working programs in any language. The flaws in truth and accuracy call for some serious rethinking. As your example above shows, there are major failures even on something as simple as arithmetic, since the system is not really thinking about it.