this post was submitted on 16 May 2024
189 points (100.0% liked)

Technology

[–] [email protected] 34 points 4 months ago* (last edited 4 months ago) (6 children)

Awesome. Truly spectacular.

Generative AI is so energy intensive ($$$) that Google is requiring users to subscribe to Gemini.

Google is entirely dependent on advertising sales. Ad revenue subsidizes literally everything else, from Android development to whichever 8-12 products and services they launch and subsequently cancel each year.

Now, Google wants to remove web results and use generative AI, instead of search, as its default user interface.

So, like I said: Awesome.

[–] [email protected] 12 points 4 months ago* (last edited 4 months ago) (5 children)

While I agree in principle, one thing I’d like to clarify is that TRAINING is super energy intensive; once the network is trained, it’s more or less static. Actually using the network doesn’t take dramatically more energy than any other indexed database lookup.

[–] [email protected] 22 points 4 months ago (2 children)

It's static, yes, but that static price is orders of magnitude higher. It still involves loading the whole model into VRAM and performing matrix multiplication on trillions of numbers.

[–] [email protected] 5 points 4 months ago

To be fair, I wouldn't include "loading the whole model into VRAM" as part of the cost, given they can just keep it there between requests, and the parameter count might be down to hundreds of billions, or even tens of billions, rather than trillions... but even after all improvements it should still be orders of magnitude more expensive than a normal search, which just makes their decision even crazier.
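For a sense of scale, here's a rough back-of-envelope sketch of that "orders of magnitude" claim. All figures are illustrative assumptions (model size, answer length, and the cost of a classic index lookup are guesses, not Google's actual numbers), using the common rule of thumb that a transformer forward pass costs roughly 2 FLOPs per parameter per generated token:

```python
# Back-of-envelope: compute cost of answering one query with an LLM
# versus a classic indexed search. All numbers are illustrative
# assumptions, not measured figures.

def flops_per_token(params: float) -> float:
    """Approximate forward-pass FLOPs to generate a single token
    (rule of thumb: ~2 FLOPs per model parameter)."""
    return 2.0 * params

PARAMS = 70e9         # assumed model size: 70 billion parameters
ANSWER_TOKENS = 500   # assumed length of one generated answer
SEARCH_OPS = 1e9      # assumed ops for an indexed lookup (rough guess)

llm_cost = flops_per_token(PARAMS) * ANSWER_TOKENS
print(f"LLM answer:   ~{llm_cost:.1e} FLOPs")   # ~7.0e+13
print(f"Index lookup: ~{SEARCH_OPS:.1e} ops")
print(f"Ratio:        ~{llm_cost / SEARCH_OPS:.0e}x")
```

Even with these generous assumptions (a mid-sized model kept resident in VRAM), the gap works out to several orders of magnitude per query, which is the point being made above.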
