this post was submitted on 27 Sep 2023
63 points (100.0% liked)

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

These experts on AI are here to help us understand important things about AI.

Who are these generous, helpful experts that the CBC found, you ask?

"Dr. Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto", per LinkedIn a PharmD, who also serves in various AI-associated centres and institutes.

"(Jeff) Macpherson is a director and co-founder at Xagency.AI", a tech startup which does, uh, lots of stuff with AI (see their wild services page) that appears to have been announced on LinkedIn two months ago. The founders section lists other details apart from J.M.'s "over 7 years in the tech sector" which are interesting to read in light of J.M.'s own LinkedIn page.

Other people making points in this article:

C. L. Polk, award-winning author (of Witchmark).

"Illustrator Martin Deschatelets" whose employment prospects are dimming this year (and who knows a bunch of people in this situation), who per LinkedIn has worked on some nifty things.

"Ottawa economist Armine Yalnizyan", per LinkedIn a fellow at the Atkinson Foundation who used to work at the Canadian Centre for Policy Alternatives.

Could the CBC actually seriously not find anybody willing to discuss the actual technology and how it gets its results? This is archetypal hood-welded-shut sort of stuff.

Things I picked out, from article and round table (before the video stopped playing):

Does that Unity Health doctor go back later and check these emergency room intake predictions against actual cases appearing there?

Who is the "we" who have to adapt here?

AI is apparently "something that can tell you how many cows are in the world" (J.M.). Detecting a lack of results validation here again.

"At the end of the day that's what it's all for. The efficiency, the productivity, to put profit in all of our pockets", from J.M.

"You now have the opportunity to become a Prompt Engineer", from J.M. to the author and illustrator. (It's worth watching the video to listen to this person.)

Me about the article:

I'm feeling that same underwhelming "is this it" bewilderment again.

Me about the video:

Critical thinking and ethics and "how software products work in practice" classes for everybody in this industry please.

top 50 comments
[–] [email protected] 47 points 1 year ago (1 children)

Well, you know, you don't want to miss out! You don't want to miss out, do you? Trust me, everyone else is doing this hot new thing, we promise. So you'd better start using it too, or else you might get left behind. What is it useful for? Well... it could make you more productive. So you better get on board now and, uh, figure out how it's useful. I won't tell you how, but trust me, it's really good. You really should be afraid that you might miss out! Quick, don't think about it so much! This is too urgent!

[–] [email protected] 9 points 1 year ago (1 children)

Pretty much this. I work in support services in an industry that can’t really use AI to resolve issues due to the myriad of different deployment types and end user configurations.

No way in hell will I be out of a job due to AI replacing me.

[–] [email protected] 16 points 1 year ago (41 children)

your industry isn’t alone in that — just like blockchains, LLMs and generative AI are a solution in search of a problem. and like with cryptocurrencies, there’s a ton of grifters with a lot of money riding on you not noticing that the tech isn’t actually good for anything

[–] [email protected] 34 points 1 year ago* (last edited 1 year ago) (18 children)

"learn AI now" is interesting in how much it is like the crypto "build it on chain" and how they are both different from something like "learn how to make a website".

Learning AI and Building on chain start with deciding which product you're going to base your learning/building on and which products you're going to learn to achieve that. Something that has no stability and never will. It's like saying "learn how to paint" because in the future everyone will be painting. It doesn't matter if you choose painting pictures on a canvas or painting walls in houses or painting cars, that's a choice left up to you.

"Learn how to make a website" can only be done on the web and, in the olden days, only with HTML.

"Learn AI now", just like "build it on chain" is nothing but PR to make products seem like legitimised technologies.

Fuckaduck, ai is the ultimate repulseware

[–] [email protected] 10 points 1 year ago (1 children)

What's worse is that these people who shill AI and are genuinely convinced ChatGPT and the like are going to take over the world will not feel an ounce of shame once AI dies just like the last fad.

If I was wrong about AI being completely useless and about it not taking over the world, I'd feel ashamed of my own ignorance.

Good thing I'm right.

[–] [email protected] 11 points 1 year ago (1 children)

Something I try to remember is that a technology being useless, broken, bad, stupid, or whatever is more reason to fear it being used, not a reason it won't be used.

[–] [email protected] 10 points 1 year ago (30 children)

I wanna expand on this a bit because it was a rush job.

This part...

Learning AI and Building on chain start with deciding which product you’re going to base your learning/building on and which products you’re going to learn to achieve that. Something that has no stability and never will.

...is a bit wrong. The AI environment has no stability now because it's a mess of products fighting for sensationalist attention. But if it ever gains stability, as in there being a single starting point for learning AI, it will be because a product, or a brand, won. You'll be learning a product just like people learned Flash.

Seeing people in here talk about Copilot or ChatGPT and give examples of how they have found them useful is exactly why we're going to find ourselves in a situation where software products discourage any kind of unconventional or experimental way of doing things. Coding isn't cleanly separated into mundane, repetitive, pattern-based, automatable tasks on one side and R&D-style hacking and inventiveness on the other. Leaning on these tools is a recipe for applying the "WordPress theme" problem to everything: the stuff you like to do, where your creativity drives you, becomes a living hell, like trying to customise a WordPress theme to do something it wasn't designed to do.

The stories of ChatGPT helping you out of a bind are exactly the stories that companies like OpenAI will let you tell to advertise for them, but they'll never go all in on making their product really good at those things, because then you'd be able to point at them and say "ahah! it can't do this stuff!"

[–] [email protected] 24 points 1 year ago (1 children)

You now have the opportunity to become a Prompt Engineer

No way man I heard the AIs were coming for those jobs. Instead I'm gonna become a prompt writing prompt writer who writes prompts to gently encourage AIs to themselves write prompts to put J.M. out of a job. Checkmate.

[–] [email protected] 20 points 1 year ago

i'd be fine with losing my job. i hate working, let a computer do it.

only problem is my salary, which i cannot live without

[–] [email protected] 13 points 1 year ago

I want to see "AI" bend pipe and pull wire.

[–] [email protected] 11 points 1 year ago* (last edited 1 year ago) (1 children)

"Experts were quick to clarify that this only applies to the very few people who still have jobs - namely those who followed experts' previous warnings and learned programming, started a social media account, adapted to the new virtual reality corporate world, and invested in crytpo before the dollar crashed."

Edit: And invested in a smart home and created a personal website.

[–] [email protected] 8 points 1 year ago* (last edited 1 year ago) (2 children)

The great* Jakob Nielsen is all in on AI too btw. https://www.uxtigers.com/post/ux-angst

I expect the AI-driven UX boom to happen in 2025, but I could be wrong on the specific year, as per Saffo’s Law. If AI-UX does happen in 2025, we’ll suffer a stifling lack of UX professionals with two years of experience designing and researching AI-driven user interfaces. (The only way to have two years of experience in 2025 is to start in 2023, but there is almost no professional user research and UX design done with current AI systems.) Two years is the bare minimum to develop an understanding of the new design patterns and user behaviors that we see in the few publicly available usability studies that have been done. (A few more have probably been done at places like Microsoft and Google, but they aren’t talking, preferring to keep the competitive edge to themselves.)

*sarcasm

[–] [email protected] 9 points 1 year ago (1 children)

"Did you mean: sappho's law" no but I wish I did

[–] [email protected] 8 points 1 year ago

We are being astroturfed big time.

[–] [email protected] 8 points 1 year ago* (last edited 1 year ago) (1 children)

Ugh, fuck this punditry. Luckily, many of the views in this article are quickly dispatched through media literacy. I hate that, for the foreseeable future, AI will be the boogeyman whispered about in all media circles. But knowing that it is a boogeyman makes it very easy to tell when it's general sensationalist hype/drivel for selling papers vs. legitimate concerns about threats to human livelihoods. In this case, it's more the former.

[–] [email protected] 11 points 1 year ago

Isn't it great how they aren't saying how to "learn" or "accept" AI? They aren't saying: "learn what a neural network is" or anything close to that. It's not even: "Understand what AI does and its output and what that could be good or bad for". They're just saying, "Learn how to write AI prompts. No, I don't care if it's not relevant or useful, and it's your fault if you can't leverage that into job security."

They're also saying: "be prepared to uproot your entire career in case your CEO tries to replace you, and be prepared to change careers completely. When the AI companies we run replace you, it's not our fault because we warned you."

It's so fucking sad that these people are allowed to have opinions. Also this:

For people like Deschatelets, it doesn't feel that straightforward.

"There's nothing to adapt to. To me, writing in three to four prompts to make an image is nothing. There's nothing to learn. It's too easy," he said.

His argument is the current technology can't help him — he only sees it being used to replace him. He finds AI programs that can prompt engineered images, for example, useful when looking for inspiration, but aside from that, it's not much use.

"It's almost treating art as if it's a problem. The only problem that we're having is because of greedy CEOs [of Hollywood studios or publishing houses] who make millions and millions of dollars, but they want to make more money, so they'll cut the artists completely. That's the problem," he said.

A king. This should be the whole article.

[–] [email protected] 8 points 1 year ago* (last edited 1 year ago)

see their wild services page

C-suit Reporting sounds cool.
