morrowind

joined 3 years ago
[–] [email protected] 11 points 9 hours ago (7 children)

What do you mean by that? RISC-V is open source, but it doesn't have any "superpowers" that I know of.

[–] [email protected] 39 points 9 hours ago

It's political momentum, the same thing Bernie and AOC are doing. None of them have changed anything yet; it's about building attention and support for future acts.

[–] [email protected] 2 points 14 hours ago

It's because of money. Want to make a bunch of things? Buy a bunch of sets. No more being satisfied with one $10 set.

[–] [email protected] 5 points 14 hours ago (1 children)

Instance admins can add whatever CSS they want. I've seen some cool ones.

[–] [email protected] 2 points 1 day ago (1 children)

I've found 7am also works

[–] [email protected] 2 points 1 day ago

real (doing nothing is what got me into the mess)

[–] [email protected] 5 points 1 day ago (6 children)

Look at this guy, rolling around in cash

[–] [email protected] 8 points 1 day ago (7 children)

The Republican members of Congress are still voted in. They can impeach if it gets bad enough.

[–] [email protected] 2 points 1 day ago (8 children)

I mean you gotta get food or something

[–] [email protected] 21 points 1 day ago (2 children)

Smells like en(shit)tification.

What are the parentheses here for? Without them it would be "smells like entification"?

[–] [email protected] 5 points 1 day ago (4 children)

And ones without internet can have secret antennas

[–] [email protected] 9 points 1 day ago (10 children)

Some of them have hardware switches


Other platforms too, but I'm on lemmy. I'm mainly talking about LLMs in this post

First, let me acknowledge that AI is not perfect; it has limitations, e.g.:

  • tendency to hallucinate responses instead of refusing or saying it doesn't know
  • different models/model sizes with varying capabilities
  • lack of knowledge of recent topics without explicitly searching for them
  • tendency to be patternistic/repetitive
  • inability to hold on to too much context at a time, etc.

The following are also true:

  • People often overhype LLMs without understanding their limitations
  • Many of those people are those with money
  • The term "AI" has been used to label everything under the sun that contains an algorithm of some sort
  • Banana poopy banana (just to make sure ppl are reading this)
  • There have been a number of companies that overpromised on AI, and often were using humans as a "temporary" solution until they figured out the AI, which they never did (hence the gag that "AI" stands for "An Indian")

But I really don't think they're nearly as bad as most Lemmy users make them out to be. I was going to respond to all the takes, but there are so many that I'll just make some general points:

  • SOTA (state-of-the-art) models match or beat most humans besides experts in most fields that are measurable
  • I personally find AI is better than me in most fields except the ones I know well. So maybe it's only 80-90% there, but it's there in like every single field, whereas I am in like 1-2
  • LLMs can also do all this in like 100 languages. You and I can do it in like... 1, with limited performance in a couple of others
  • Companies often use smaller/cheaper models in various products (e.g. Google Search), which are understandably much worse. People then use these to conclude that all AI sucks
  • LLMs aren't just memorizing their training data. They can reason, as recent reasoning models show more clearly. Also, we now have near-frontier models that are around 32B parameters, or ~21 GB in size. You cannot fit the entire internet in 21 GB. There is clearly higher-level synthesizing going on
  • People often tend to seize on superficial questions like the strawberry question (which is essentially an LLM blind spot) to claim LLMs are dumb
  • In the past few years, researchers have had to come up with countless newer, harder benchmarks because LLMs kept blowing through previous ones (partial list here: https://r0bk.github.io/killedbyllm/)
  • People and AI are often not compared fairly. With code, for instance, people usually compare a human with feedback from a compiler, working iteratively and debugging for hours, to an LLM doing it in one go with no feedback beyond maybe a couple of back-and-forths in a chat

Also I did say willfully ignorant. This is because you can go and try most models for yourself right now. There are also endless benchmarks constantly being published showing how well they are doing. Benchmarks aren't perfect and are increasingly being gamed, but they are still decent.
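The model-size point is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, where the corpus size and bytes-per-weight figures are illustrative assumptions rather than numbers from any specific model card:

```python
# Rough check of the "you can't fit the internet in 21 GB" argument.
# Assumptions (mine, not from the post): 8-bit quantization (1 byte per
# weight) and a training corpus on the order of 50 TB of text.

def model_size_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate on-disk size of a model's weights, in gigabytes."""
    # params_billions * 1e9 params * bytes / 1e9 bytes-per-GB simplifies to:
    return params_billions * bytes_per_param

weights_gb = model_size_gb(21, 1.0)   # a 21B-param model at 1 byte/weight

corpus_gb = 50_000                    # assumed ~50 TB of training text
compression_ratio = corpus_gb / weights_gb
# The corpus is thousands of times larger than the weights, so the model
# can't be a verbatim copy of its training data.
```

Whatever exact corpus size you assume, the gap is several orders of magnitude, which is the point the bullet above is making.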

42
Real chilling effects (donmoynihan.substack.com)

cross-posted from: https://lemmy.ml/post/26350717

277
submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]

Data scraped from Aviation Safety Network

(I haven't submitted an official RFC yet; I want to see what people think)

This is inspired by Ruqqus, a now defunct Reddit alternative.

The idea is simple:

  1. There is a "global" or "default" community with no topic or extra rules, ~~moderated only by admins~~
  2. Community moderators, when they feel a post is inappropriate for their community can "kick" a post to the global community

The reasoning is as follows: a good amount, probably the majority, of the posts that mods remove are not removed because they are inappropriate for the site as a whole, but because they are inappropriate for that specific community (off-topic, banned site, low effort, etc.). But currently the only option mods have to deal with this is a full-blown removal, which is quite frustrating for the poster.

This proposal would allow mods to keep curated communities without needing to do unnecessary removals.
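To make the mechanics concrete, here is a minimal sketch of what the mod action could look like. All names here (`Post`, `kick_to_global`, the `GLOBAL` community id) are hypothetical illustrations, not part of any actual Lemmy API:

```python
from dataclasses import dataclass

# Hypothetical id for the admin-run "global"/"default" community.
GLOBAL = "global"

@dataclass
class Post:
    title: str
    community: str

def kick_to_global(post: Post, reason: str) -> str:
    """Move a post out of a curated community instead of removing it.

    The post stays visible site-wide; only its community changes.
    Returns a log line a mod tool might record.
    """
    old = post.community
    post.community = GLOBAL
    return f"kicked from {old}: {reason}"

# Example: an off-topic post is kicked rather than deleted.
p = Post(title="My cat", community="linux")
log = kick_to_global(p, "off-topic")
```

The key design point is that "kick" is a move, not a removal: the poster keeps their post and its comments, and the origin community stays curated.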


As a bonus, this would create a default community where people can post when they're not sure where something belongs. Posts can later be crossposted into more specific communities.
