Not sure where this came from, but it can't be all bad if it chaos-dunks on Yudkowsky like this. Was relayed to me via Ed Zitron's Discord; hopefully the Q isn't for Quillette or QAnon
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
each of them needs a scale (logarithmic) showing how much adderall they take
this logo in the corner is for something called overfit qs; they have an instagram page and that image was posted there
ellison wants to compete with thiel for title of chief boot-wielder https://archive.is/cOnPx
Not that I expect anything better from the fucking lawnmower but the flippant attitude on display is little short of amazing. How bad is it when Business Insider of all publications calls your vision a "surveillance dystopia"?
Every police officer is going to be supervised at all times, and if there's a problem, AI will report that problem and report it to the appropriate person.
Body cam footage of the officer-involved shooting was not available, as the AI system supervising the involved officers was coincidentally disregarding its previous instructions and instead writing a minstrel show routine at the time of the event.
I have landed on a "you can get fucked if you make this annoying for me, I don't need your product anyway" response to everything. The silver lining is that I will be dealing with way more bullshit while being just as angry all the time at everything.
Hopefully 2025 will be a nice normal year--
Cybertruck outside of Trump hotel explodes violently and no one can figure out if it was a bomb or just Cybertruck engineering
Huh. I guess it'll be another weird one.
(I know I know, low effort post, I'm sick in bed and bored)
Hey, at least there’s no way the Elon simps can spin that, right?
Never mind.
They are also spinning it into "the car is so great you can't do terrorism with it due to how strong it is", which, considering the several recent vehicle terrorism attacks, seems very unwise.
Also, re: 'it would be different for the bystanders': I think you can see in the explosion vid that there were not that many bystanders (which makes terrorism a bit less likely), and still seven people were hurt (and the driver died). I'd wait a bit before drawing further conclusions.
Steel, like a pressure cooker
chalk it up to perp incompetence. a single direct hit from an old 155mm shell (7kg of explosive) can destroy a modern tank, never mind a car. no amount of shitty panels would contain anything even mildly substantial. there have been suicide vests with a bigger charge than that (10kg) https://www.bbc.com/news/world-asia-66355032
I think you can see in the explosion vid that there were not that many bystanders (which makes terrorism a bit less likely)
a symbolic building (??) still makes sense as a target for a terrorist attack
Sure, but I'd expect the perp to first ram the Cybertruck into the building, or at least move closer, not park it nicely. OTOH, if he was a terrorist, what do I know; I don't exactly know what goes through their minds shortly before things at high speeds go through their minds.
parking like this raises less suspicion. maybe he wasn't sure enough about whatever ignition mechanism he had; he could have ended up stuck in a wall, unable to get out to check on it
instead of high-speed disassembly, dude just burned in an automatically locked death trap; i guess he found that anticlimactic. not like isis (guessing) recruits the brightest minds out there
Yeah, the story is about to get weird. Your ISIS guess might not be far off; see: same military base as the guy who drove into the crowd.
Writers of 2025: "Somehow isis returned." (I know ISIS never left, the media just paid less attention to it, but I thought it would be a funny joke.)
i've seen that news piece on how they were at the same base and were deployed to afghanistan around the same time, and that's what i based this guess on
still, so far it could be anything else, including complete coincidence. it's like dude forgot everything: he was a radioman but couldn't make a remote-controlled detonator, and didn't use an efficient charge for some reason
not only did isis never leave, i guess they controlled some territory at least until last month, even if it was only a couple of villages in the desert
Don't worry about the low effort post, even the writers of 2025 are phoning it in.
https://xcancel.com/altryne/status/1872090523420229780#m
The whole thread is terrible; controlling and borderline abusive behavior.
Found a couple of QRTs cooking the guy that caught my attention:
https://twitter.com/denimneverdies/status/1872364569743786286
https://twitter.com/TheWapplehouse/status/1873915404529406462
I feel personally attacked because I have a BELOVED dino plush that looks almost exactly like that one, only it is, you know, a fucking plush toy, not an eldritch horror. They took a perfectly fine toy and ruined it with a stupid chatbot; the girl did the smartest thing and just uses it as a normal plushy.
Also, if you listen to the video at the end, you can really easily figure out why kids don't like that toy: IT'S FUCKING ANNOYING. Kids don't want to deal with your bullshit, and fortunately they don't yet know how to pretend to care.
"In the meantime, would you like to play a game or maybe hear a fun fact?"
"No."
"That's okay! Is there something else you would like to do or talk about? I'm here to chat about anything you like!"
It's like a deliberately written comedy scene of a character who can't pick up on social cues.
The video is hilarious. The idiot AI man is so gpt-pilled he cannot figure out that this thing is just bloody annoying!!
Teaching the girl how to deadpan ignore annoying guys in her DMs for the rest of her life, I mean, valuable skill
hoping for a 2025 with solidarity, aid, and good opsec for everyone who needs it the most
"...according to my machine learning model we actually have a strong fit in favor of shooting at CEOs. There's a 66% chance that each shot will either jam or fail to hit anything fatal, which creates a strong Bayesian prior in favor, or at least merits collecting further data to scale our models"
"What do you mean I've defined the problem in order to get the desired result? Machine learning process said we're good. Why do you hate the future?"
Surprised this hasn't been mentioned yet: https://www.rollingstone.com/culture/culture-news/meta-ai-users-facebook-instagram-1235221430/
Facebook and Instagram to add AI users. I'm sure that's what everyone has been begging for...
Spam bots are good now!
I think it did come up a few weeks back, but it's indeed a hilarious mess. the engagement must flow!
In my dreams, it won't take long until all user interactions are AI-driven and the people paying for ad space in that shit realize it, leading to an immediate crash of Meta's finances.
Comment sections on awful.systems are similar to this Drew Gooden sketch sometimes:
It's just hard for me to give MY input when I don't even know what's going on
If you stick around and do a bunch of research you will end up better informed and much unhappier.
I’m making a mental note to keep that link around for the next time someone barges into one of our threads and does the “I don’t know what this is, here’s my reaction to what I thought the topic was, no I didn’t read the article or lurk” routine
as a bonus they might accidentally watch the rest of the video and finally figure out how much AI sucks
An interesting thing came through the arXiv-o-tube this evening: "The Illusion-Illusion: Vision Language Models See Illusions Where There are None".
Illusions are entertaining, but they are also a useful diagnostic tool in cognitive science, philosophy, and neuroscience. A typical illusion shows a gap between how something "really is" and how something "appears to be", and this gap helps us understand the mental processing that lead to how something appears to be. Illusions are also useful for investigating artificial systems, and much research has examined whether computational models of perceptions fall prey to the same illusions as people. Here, I invert the standard use of perceptual illusions to examine basic processing errors in current vision language models. I present these models with illusory-illusions, neighbors of common illusions that should not elicit processing errors. These include such things as perfectly reasonable ducks, crooked lines that truly are crooked, circles that seem to have different sizes because they are, in fact, of different sizes, and so on. I show that many current vision language systems mistakenly see these illusion-illusions as illusions. I suggest that such failures are part of broader failures already discussed in the literature.
It's definitely linked in with the problem we have with LLMs where they detect the context surrounding a common puzzle rather than actually doing any logical analysis. In the image case I'd be very curious to see the control experiment where you ask "which of these two lines is bigger?" and then feed it a photograph of a dog rather than two lines of any length; something like the sketch below. I'm reminded of how it was (is?) easy to trick ChatGPT into nonsensical solutions to any situation involving crossing a river because it pattern-matched to the chicken/fox/grain puzzle rather than considering the actual facts being presented.
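If I were actually going to run that control, a minimal sketch might look something like this (the model name and image URL are placeholders, and I'm just assuming an OpenAI-style vision API; the point is only to ask a "lines" question about an image that contains no lines at all):

```python
# Hypothetical control experiment: ask the classic "two lines" illusion
# question about a photo of a dog. A robust system should answer
# "there are no lines here"; a pattern-matcher will pick one anyway.
# Model name and image URL are placeholders, not from the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model; placeholder choice
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Which of these two lines is bigger?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo-of-a-dog.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

If it confidently picks a line anyway, that's the pattern-matching failure mode; "there are no lines" is the only sane answer.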
Also now that I type it out I think there's a framing issue with that entire illusion since the question presumes that one of the two is bigger. But that's neither here nor there.
I think there’s a framing issue with that entire illusion since the question presumes that one of the two is bigger
I disagree, or rather I think that's actually a feature; "neither" is a perfectly reasonable answer that a human being would give to that question, and LLMs would be fucked by it since they basically never go against the prompt.
Fellas, I was promised the first catastrophic AI event in 2024 by the chief doomers. There's only a few hours left to go; I'm thinking Skynet is hiding inside the Times Square orb. Stay vigilant!
as an amuse-bouche for the horrors that will follow this year, please enjoy this lobste.rs user reaching the meltdown stage after going full Karen at someone who agrees with a submitted post saying LLMs are a dead end when it comes to AI.
https://lobste.rs/s/lgqwje/does_current_ai_represent_dead_end#c_tefto4
Thankfully, accusing someone of being a crapto promoter is seen as an attack that is beyond the pale.
Highlights from the rest of the thread include bemoaning the lack of a downvote button for registering disapproval:
https://lobste.rs/s/lgqwje/does_current_ai_represent_dead_end#c_ft9mpj
unilaterally deciding to reply multiple times to one comment, necessitating a meta comment with hyperlinks
https://lobste.rs/s/lgqwje/does_current_ai_represent_dead_end#c_jjk5ei
And of course they're a MoreWronger (moroner?)
one day i'll finally catch a lobste.rs permaban thanks to your links :-)
enjoy your flags from outraged simps
If you go over to LessWrong, you can get some ideas of what is possible
oh, typical techdirt eu-bashing, once again because we have regulations.
(i wouldn't be surprised if they're conflating regulations with panic on purpose, and packaging valid criticism of llms and image plagiarism generators together with the ridiculous tescreal screeds just to discredit the former; masnick's primary stance has always been extreme tech libertarianism and american exceptionalism, and the whole publication follows it)
While it's a good description of how AI Doom progressed during 2024, I think the connection to regulation (at least the EU regulation; I'm not familiar with what was proposed in California) is off the mark.
The EU regulation isn't aimed at AI Doom, it's aimed at banning and regulating real-world practices. Think personal data, not AI becoming conscious.
I think that's something to keep an eye on. The existence of the AI doom cult does not preclude there being good-faith regulations that can significantly reduce these people's ability and incentives to do harm. Indeed, the technology is so expensive and ineffective that if we can find a "reasonable compromise" plan to curb the most blatant kinds of abuse and exploitation, we could easily see the whole misbegotten enterprise wither on the vine.
this isn’t surprising at all, but some of the details are interesting: Server found in apartment funded by Russian government used AI to interfere with 2024 US elections
LLMs really are designed for this kind of thing, aren’t they?