this post was submitted on 04 Jul 2023
86 points (97.8% liked)

Actually Useful AI


Researchers have unearthed hundreds of thousands of cuneiform tablets, but many remain untranslated. Translating an ancient language is a time-intensive process, and only a few hundred experts are qualified to perform it. A recent study describes a new AI that produces high-quality translations of ancient texts.
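As a very rough illustration of what neural translation looks like in code (this is not the study's pipeline; the multilingual stand-in model and the made-up transliterated input are assumptions for illustration only, and the stand-in will not actually translate cuneiform):

```python
# Generic Hugging Face translation pipeline, shown only to illustrate the shape
# of neural machine translation; the study trained its own model on expert-made
# transliteration/translation pairs, which this stand-in does not reproduce.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-mul-en")  # stand-in multilingual model

# Hypothetical transliterated input; a real cuneiform system would be trained
# on transliterations produced by Assyriologists.
result = translator("szarrum dannum szar mat Asszur")
print(result[0]["translation_text"])
```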

[–] [email protected] 12 points 1 year ago (1 children)

It just randomly generated some believable bullshit, as usual.

[–] [email protected] 14 points 1 year ago* (last edited 1 year ago) (4 children)

It’s pretty freaking great at stuff like that, though. We use a custom programming language at work; it has similarities with Haskell and other languages, but also many differences.

We had a little game where a colleague had put together some team exercises. He had encoded a message in base64; inside were instructions for code in our custom language that, when run, gave you an output.

ChatGPT managed to print out the correct output without trouble: 100% non-random output, and 100% stuff that has never appeared anywhere on the internet.
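
A minimal sketch of that kind of puzzle, assuming plain-text instructions wrapped in base64 (the real message and our custom language aren't public, so the instruction string here is made up):

```python
import base64

# Hypothetical stand-in for the colleague's message; the real exercise used
# instructions for our internal language, which isn't public.
secret = base64.b64encode(b"Sum the squares of 1 through 10 and print the result.")

print(secret.decode())                    # what the colleague handed out
print(base64.b64decode(secret).decode())  # step 1: recover the instructions
print(sum(i * i for i in range(1, 11)))   # step 2: follow them (prints 385)
```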

[–] [email protected] 11 points 1 year ago (1 children)

Google’s DeepMind was able to teach itself Indonesian without being directly trained on how to do so. Ancient Sumerian doesn’t seem too far-fetched, all things considered!

[–] grinde 8 points 1 year ago (1 children)

There was a funny bit on WAN Show a few months back where they demonstrated tricking ChatGPT into speaking Dutch (I think; it might have been another language). It vehemently insisted that it didn't know Dutch and could only talk to them in English. The messages saying this were written in Dutch.

[–] RubberDucky 1 points 1 year ago* (last edited 1 year ago)

As a Dutch speaker, I can confirm ChatGPT was always able to speak Dutch, though. I tested it very early on.

[–] lightsecond 3 points 1 year ago (1 children)

I can’t even wrap my head around how a large language model can do this.

[–] coloredgrayscale 3 points 1 year ago

I can't even wrap my head around how humans do this.

[–] [email protected] 1 points 1 year ago (1 children)

That's the problem, you see... it is great at simple things. Then you start believing in it and giving it more complicated tasks. It will fail, and you will never know until it is too late. We are doomed...

[–] sisyphean 2 points 1 year ago

I’ve found that after using it for a while, I developed a feel for the complexity of the tasks it can handle. If I aim below this level, its output is very good most of the time. But I have to decompose the problem and make it solve the subproblems one by one.

(The complexity ceiling is much higher for GPT-4, so I use it almost exclusively.)
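
A minimal sketch of that decompose-and-solve loop, using the openai Python package as it looked around the time of this thread (the task and the particular split into subproblems are made up for illustration, and OPENAI_API_KEY is assumed to be set in the environment):

```python
import openai  # openai 0.x style client; reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send one subproblem to GPT-4 and return the text of its reply."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

# Hypothetical decomposition of a larger task into pieces the model handles well.
subproblems = [
    "Write a Python function that parses an ISO 8601 date string into a datetime.",
    "Write a Python function that returns the number of days between two datetimes.",
    "Combine the two functions into a small CLI that reads two dates from argv.",
]

context = ""
for step in subproblems:
    # Feed each subproblem together with the answers accumulated so far,
    # instead of asking for the whole program in a single prompt.
    answer = ask((context + "\n\n" + step) if context else step)
    context += "\n\n" + answer
    print(answer)
```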