TechTakes

1490 readers
30 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to make a primal scream without gathering footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh facts of Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can’t escape them, I would love to sneer at them.

329

Bumping this up from the comments.

330
56
submitted 5 months ago* (last edited 5 months ago) by [email protected] to c/[email protected]

despite the title, the post is completely SFW

331

Do they think the hands-off treatment given to giant corporations that basically print money is going to somehow "trickle down" to them, too?

Because last I checked, the guys who ran Jetflicks are facing jail time. Like, potentially longer jail time than most murder sentences.

...but letting OpenAI essentially do the same without consequences will mean Open Source AI people will somehow get the same hands-off treatment? That just reeks of bullshit to me.

I just don't fucking buy it. Letting massive corporations skirt IP laws while everyone else gets fucked hard by those same laws just doesn't seem like the best hill to die on, yet plenty of people who are anti-copyright/anti-IP are dying on this fucking hill.

What gives?


I am personally of the opinion that current IP/copyright laws are draconian, but that IP/copyright isn't inherently a bad thing. I just know, based on previous history in the US, that letting the Big Guys skirt laws almost never leads to Little Guys getting similar treatment.


Also, I hope this is an okay place for this rant. Thanks for keeping this space awesome, and please remove this if it's inappropriate for this forum.

332
55
A Rant about Front-end Development (blog.frankmtaylor.com)
submitted 6 months ago* (last edited 6 months ago) by [email protected] to c/[email protected]

A masterful rant about the shit state of the web from a front-end dev perspective

There’s a disconcerting number of front-end developers out there who act like it wasn’t possible to generate HTML on a server prior to 2010. They talk about SSR only in the context of Node.js and seem to have no clue that people started working on this problem when season 5 of Seinfeld was on air.

Server-side rendering was not invented with Node. What Node brought to the table was the convenience of writing your shitty div soup in the very same language that was invented in 10 days for the sole purpose of pissing off Java devs everywhere.

Server-side rendering means it’s rendered on the fucking server. You can do that with PHP, ASP, JSP, Ruby, Python, Perl, CGI, and hell, R. You can server-side render a page in Lua if you want.
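To underline the point (this sketch is mine, not the blog's): server-side rendering needs nothing more exotic than a language that can concatenate strings and a process that answers HTTP. Here it is in plain Python, standard library only, no Node in sight:

```python
# A minimal SSR sketch: the HTML document is assembled entirely on the
# server and shipped to the client fully formed. Nothing here requires
# Node.js, a bundler, or anything invented after Seinfeld went off air.
from http.server import BaseHTTPRequestHandler, HTTPServer


def render_page(name: str) -> str:
    """Build the complete HTML document on the server."""
    return f"<!DOCTYPE html><html><body><h1>Hello, {name}!</h1></body></html>"


class SSRHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_page("world").encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


# To actually serve it:
# HTTPServer(("127.0.0.1", 8000), SSRHandler).serve_forever()
```

Swap the `render_page` body for a PHP template, a Perl CGI script, or a Lua string and the idea is identical: the div soup is cooked before the browser ever sees it.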

334

AI Work Assistants Need a Lot of Handholding

Getting full value out of AI workplace assistants is turning out to require a heavy lift from enterprises. ‘It has been more work than anticipated,’ says one CIO.

aka we are currently in the process of realizing we are paying for the privilege of being the first to test an incomplete product.

Mandell said if she asks a question related to 2024 data, the AI tool might deliver an answer based on 2023 data. At Cargill, an AI tool failed to correctly answer a straightforward question about who is on the company’s executive team, the agricultural giant said. At Eli Lilly, a tool gave incorrect answers to questions about expense policies, said Diogo Rau, the pharmaceutical firm’s chief information and digital officer.

I mean, imagine all the non-obvious stuff it must be getting wrong at the same time.

He said the company is regularly updating and refining its data to ensure accurate results from AI tools accessing it. That process includes the organization’s data engineers validating and cleaning up incoming data, and curating it into a “golden record,” with no contradictory or duplicate information.

Please stop feeding the thing too much information, you're making it confused.
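For what it's worth, the "golden record" step the article gestures at is ordinary data hygiene that predates LLMs entirely. A toy sketch (the records and field names are hypothetical, not from the article) of collapsing duplicates and flagging contradictions before anything reaches a retrieval index:

```python
# Toy "golden record" curation: exact duplicates are collapsed,
# contradictory records are flagged for human review rather than
# silently fed to the model.
records = [
    {"name": "Jane Doe", "title": "CFO"},
    {"name": "Jane Doe", "title": "CFO"},  # exact duplicate -> collapsed
    {"name": "Jane Doe", "title": "CEO"},  # contradiction -> flagged
    {"name": "John Roe", "title": "CIO"},
]


def curate(records):
    golden, conflicts = {}, []
    for rec in records:
        key = rec["name"]
        if key in golden and golden[key] != rec:
            conflicts.append(rec)   # contradicts the record already kept
        else:
            golden[key] = rec       # first occurrence, or an exact duplicate
    return list(golden.values()), conflicts


clean, flagged = curate(records)
```

None of which, of course, stops the model from confidently misreading the clean data anyway.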

Some of the challenges with Copilot are related to the complicated art of prompting, Spataro said. Users might not understand how much context they actually need to give Copilot to get the right answer, he said, but he added that Copilot itself could also get better at asking for more context when it needs it.

Yeah, exactly like all the tech demos showed -- wait a minute!

[Google Cloud Chief Evangelist Richard Seroter said] “If you don’t have your data house in order, AI is going to be less valuable than it would be if it was,” he said. “You can’t just buy six units of AI and then magically change your business.”

Never mind that that's exactly how we've been marketing it.

Oh well, I guess you'll just have to wait for chatgpt-6.66 that will surely fix everything, while voiced by charlize theron's non-union equivalent.

338

"It can't be that stupid, you must be prompting it wrong"


340

Our path to better working conditions lies through organizing and striking, not through helping our bosses sue other giant multinational corporations for the right to bleed us out.

344
39
submitted 6 months ago* (last edited 6 months ago) by [email protected] to c/[email protected]

I stopped writing seriously about “AI” a few months ago because I felt that it was more important to promote the critical voices of those doing substantive research in the field.

But also because anybody who hadn’t become a sceptic about LLMs and diffusion models by the end of 2023 was just flat out wilfully ignoring the facts.

The public has for a while now switched to using “AI” as a negative – using the term “artificial” much as you do with “artificial flavouring” or “that smile’s artificial”.

But it seems that the sentiment might be shifting, even among those predisposed to believe in “AI”, at least in part.

Between this, and the rise of "AI-free" as a marketing strategy, the bursting of the AI bubble seems quite close.

Another solid piece from Bjarnason.

345

another obviously correct opinion from Lucidity

346

I followed these steps, but just so happened to check on my mason jar 3-4 days in and saw tiny carbonation bubbles rapidly rising throughout.

I thought that may just be part of the process but double checked with a Google search on day 7 (when there were no bubbles in the container at all).

Turns out I had just grown a botulism culture; garlic in olive oil specifically is a fairly common way to grow this biotoxin.

Had I not checked on it 3-4 days in I'd have been none the wiser and would have Darwinned my entire family.

Prompt with care and never trust AI, dear people...


349

This isn't a sneer, more of a meta take. Written because I'm sitting in a waiting room, a bit bored, so I'm writing from memory; no exact quotes will be had.

A recent thread mentioning "No Logo", in combination with a comment in one of the mega-threads that pleaded for us to be more positive about AI, got me thinking. I think that under late-stage capitalism it's the consumer's duty to be relentlessly negative, until proven otherwise.

"No Logo" contained a history of capitalism and how we got from a goods-based industrial capitalism to a brand-based one. I would argue that "No Logo" was written at the end of a longer period that contained both of these: the period of profit-driven capital allocation. Profit, as everyone remembers from basic Marxism, is the surplus value the capitalist acquires by paying less for labour and resources than the goods (or services, but Marx focused on goods) are sold for. Profits build capital, allowing the capitalist to accrue more and more capital and power.

Even in Marx's time, it was not only profits that built capital: new capital could be had from banks, jump-starting a business in exchange for future profits. Capital was thus still being allocated this way in the 1990s when "No Logo" was written, even if the profits had shifted from the goods to the brand. In this model one could argue about ethical consumption, but that is no longer the world we live in, so I am just gonna leave it there.

In the 1990s there was also a tech bubble where capital allocation followed a different logic. The bubble logic is that capital formation is founded on hype: capital is allocated to increase hype in hopes of selling to a bigger fool before it all collapses. The bigger the bubble grows, the more institutions are dragged in (by the greed and FOMO of their managers), like banks and pension funds. And the bigger the bubble, the more it distorts the surrounding businesses and legislation. Notice how, now that the crypto bubble has burst, the obvious crimes of the perpetrators can be prosecuted.

In short, the bigger the bubble, the bigger the damage.

If, under profit-driven capital allocation, the consumer can deny corporations profit, then under hype-driven capital allocation the consumer can deny corporations hype. To point and laugh is damage minimisation.
