Phanatik

joined 1 year ago
[–] [email protected] 6 points 5 months ago* (last edited 5 months ago) (2 children)

The payment model is largely irrelevant. The feature is, by design, a privacy nightmare, so even having it available as an option to users is dangerous. How they thought they'd get this past the EU is beyond me.

[–] [email protected] 3 points 5 months ago

Are you really asking why someone would buy a game on Steam that they never play?

[–] [email protected] 37 points 5 months ago

Even the Wayback Machine has limits to what is available.

[–] [email protected] 6 points 5 months ago (4 children)

What the Conservatives hope to do is call anyone who opposes this a paedophile.

[–] [email protected] 21 points 6 months ago

Looks great, I'll give it a bash

[–] [email protected] 4 points 6 months ago (1 children)

What you're alluding to is the Turing test, and it hasn't been proven that any LLM would pass it. As it stands, there are people who have failed the inverse Turing test, being unable to ascertain whether what they're speaking to is a machine or a human. Fooling a human can be done, and has been done, by things far less complex than LLMs, so it isn't proof of an LLM's capabilities over more rudimentary chatbots.

You're also suggesting that it minimises the complexity of its outputs. My determination is that what we're getting is the limit of what it can achieve. You'd have to prove that any allusion to higher intelligence can't be attributed to coercion by the user, or to it simply hallucinating an imitation of artificial intelligence from media.

There are elements of the model that are very fascinating, like how it organises language into these contextual buckets, but it's still a predictive model. Understanding that certain words appear near each other in certain contexts is hardly intelligence; it's a sophisticated machine learning algorithm.
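To make the "words appear near each other" point concrete, here's a toy sketch in Python. It's not how any real LLM is implemented; it just counts which word follows which in a made-up corpus and "predicts" from those counts. Every word and sentence in it is invented purely for illustration.

```python
# Toy bigram predictor: learn which words follow which, then return the
# most frequent follower. A crude stand-in for "prediction from context".
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count next-word occurrences for each word in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, with no notion of
    what a cat, dog, or mat actually is."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' (first of the equally frequent followers)
print(predict_next("sat"))  # 'on'
```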

[–] [email protected] 3 points 6 months ago (3 children)

I mainly disagree with the final statement, on the basis that LLMs are just more advanced predictive text algorithms. The way they've been set up, with a chatbox where you're interacting directly with something that attempts human-like responses, gives the misconception that the thing you're talking to is more intelligent than it actually is. It gives off a strong appearance of intelligence, but at the end of the day it's predicting the next word in a sentence based on what was said previously, and it doesn't do that good a job of comprehending what exactly it's telling you. It's very confident when it gives responses, which also means that when it's wrong, it delivers the incorrect response very confidently.
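As a rough sketch of the "confidently wrong" point (the prompt, words, and probabilities below are hypothetical, not taken from any real model): greedy decoding always hands back the highest-probability continuation, with no check on whether it happens to be true.

```python
# Invented next-word probabilities after a hypothetical prompt
# "The capital of Australia is". The numbers are made up for illustration.
next_word_probs = {
    "Sydney": 0.48,     # wrong, but imagined here as the most common continuation
    "Canberra": 0.40,   # correct
    "Melbourne": 0.12,  # also wrong
}

def complete(probs):
    # Greedy decoding: pick the most probable word, delivered with the same
    # confidence whether it's right or not.
    return max(probs, key=probs.get)

print("The capital of Australia is", complete(next_word_probs))  # -> Sydney
```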

[–] [email protected] 1 points 6 months ago

Tbf it's a compounding issue. It breaks Linux support because Vanguard demands kernel-level access, which Linux will never give it.

[–] [email protected] 8 points 6 months ago (3 children)

And why I stopped playing

[–] [email protected] 6 points 6 months ago (1 children)

Ah gotcha so he just never leaves Israel or no one ever acts on the arrest warrant.

[–] [email protected] 4 points 6 months ago (3 children)

So... who is going to be marching into Israel to make the arrest?
