this post was submitted on 24 Jun 2024
783 points (97.7% liked)
Science Memes
you are viewing a single comment's thread
By usurp I mean fill up all the available capacity for their own use (along with the other tech giants running the same moon race), assuming that by then they will be among the largest tech giants and will have the financial means to do so.
Don't get me wrong, the things that ChatGPT can do are amazing. Even if it hallucinates or can't really reason logically, it is still beyond what I would have expected. But when the time I mentioned above comes, people won't be given a choice between AI and cheaper energy or better health care. All those technological advancements will be bought up to full capacity by AI companies, and AI will be shoved down people's throats.
And yes, ChatGPT is free, but that is only a business decision, not a "for the good of humanity" act. Free ChatGPT helps with testing and generates popularity, which in turn brings investment. I'm not saying anything negative (or positive) about their business plan, but don't think for a second that they will have any ethical concerns about leeching off upcoming technological innovations for the sake of generating profit. And this is just one company; there will be others too: Amazon, Google, Microsoft, etc. They will all aggressively try to own as much of these technologies as possible, leaving only scraps for other uses (and therefore making them very expensive to utilise).
Not sure I'm fully understanding your point. Are you saying that the large AI companies will create AIs that produce technologies beyond what everyone else is capable of, thus outcompeting everyone, effectively monopolizing every market, and from there basically becoming the Umbrella Corporation?
I would be very impressed if anyone managed to make an AI capable of innovation to that degree, but sure, in that case we would have to fall back on something like government oversight and regulation to keep the companies in check, I suppose.
No, other people will develop technologies like quantum computing and fusion energy. Big AI companies will try to own as much of these as possible (by buying them out), because the current model of AI they are using requires those technologies to deliver anything significantly better than what they have now. So these advancements will effectively be owned by AI companies, leaving very little room for other uses.
For these AI companies, going toward general AI is risky; as you said above, it is not even well defined. On the other hand, scaling up their models massively is a well-defined goal, which however requires major compute and energy innovations like those mentioned above. If those ever happen within, say, the next ten years, big tech involved in AI will jump on them and buy as much as possible for themselves. The rest will mostly be bought by governments for military and security applications, leaving very little for other public-benefit uses.
What if I say big fusion companies will take over the AI market, since they have the energy to train better models? Seems exactly as likely.
Remember when GPUs stopped being available because OpenAI bought Nvidia and AMD and took all the GPUs for themselves?
No? Weird, since GPUs are needed for them to deliver anything significantly better than what we have now 🤔
I guess the end result would be the same. But at large, the economic system and human nature would be to blame, which is actually what I'm trying to blame here too: not AI, but the people in power who abuse AI and steer it toward hype and profit.
General AI is a good goal for them because it's poorly defined, not in spite of it.
Grifts usually do have vague and shifting goalposts. See Star Citizen, and the parallels between its supporters/detractors and the same groups around AI: essentially, if you personally enjoy or benefit from the system, you will overlook the negatives.
People are a selfish bunch, and once they get a fancy new tool and receive praise for it, they will resist anyone telling them otherwise so they can keep their new tool and the status they think it gives them (i.e. expert programmer, person who writes elegant emails, person who can create moving art, etc.).
AI is a magic trick to me: everyone thinks they see one thing, but really, if you showed them how it works, they would say, "well, that's not real magic after all, is it?"
Idiots thinking a new thing is magic that will solve all the world's problems doesn't mean the thing has no merit. Someone calling themselves an expert carpenter because they have a nail gun doesn't make nail guns any less useful.
If you see a person doing a data-entry job, do you walk over, crack their head open, and say "aww man, it's not magic, it's just a blob of fat capable of reading and understanding a document and then plotting the values into a spreadsheet"?
It's not magic, and it's not a superintelligence that will solve everything. It's the first tool we've made that can be told what to do in human language and then perform tasks with a degree of intelligence and critical thinking, something that would normally require either a human or potentially years of development to automate programmatically. That alone makes it very useful.
Why isn't it being sold as just a new coding language then?
Because it isn't
What other practical use for it is there?
It's AI and cheaper healthcare, or no AI and spiraling healthcare costs, especially with falling birthrates putting a burden on the system.
AI healthcare tools are already making it easier to provide healthcare. I'm in the UK, so the math of who benefits is different, but tools for early detection of tumors not only cut costs but increase survivability too, and that's only one of many similar technologies already in use.
Akinator-style triage could save huge sums and many lives, especially in underserved areas, as could rapid first-aid advice. We have a service for non-emergency medical advice; they basically tell you whether you need to go to A&E, see the doctor, or wait it out. It has helped allocate resources and save lives in cases where people would otherwise have waited out something that needed urgent care. Having your phone respond to "my arm feels funny" by asking a series of questions that determine the medically correct response could be a real lifesaver. "Alexa, I've fallen and can't get up" has already saved people's elderly parents' lives; "Clippy, why is there blood in my poop?" or "Hey Google, does this mole look weird?" will save even more.
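The triage flow described above is, at its core, a question-and-answer decision tree. A minimal toy sketch of the idea follows; every question and disposition here is invented purely for illustration, not clinical guidance:

```python
# Toy sketch of a rule-based triage flow (illustration only --
# all symptoms and thresholds are made up, not medical advice).

def triage(answers):
    """Map yes/no symptom answers to one of three dispositions:
    emergency care, a routine doctor visit, or self-care."""
    red_flags = ("chest_pain", "severe_bleeding", "cant_get_up")
    amber_flags = ("fever_over_3_days", "blood_in_stool", "weird_mole")

    if any(answers.get(flag) for flag in red_flags):
        return "go to A&E / call emergency services"
    if any(answers.get(flag) for flag in amber_flags):
        return "see a doctor"
    return "wait it out / self-care"

print(triage({"chest_pain": True}))         # emergency path
print(triage({"weird_mole": True}))         # doctor path
print(triage({}))                           # self-care path
```

A real system would chain many such questions adaptively (which is what makes the "Akinator-style" comparison apt) and would need clinical validation, but the resource-allocation logic is exactly this kind of branching.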
Medical admin is a huge overhead; having infinite instances of medically trained clerical staff running 24/7 would be a game changer. Being able to call and say "this is the new situation" and get appointments changed or processes started would be huge.
Further down the line, we're looking at being able to get basic tests done without needing a trained doctor or nurse. Decreasing their workload will allow them to provide better care where it's needed. A machine able to take blood, run tests on it, and then update the GP with the results as soon as they're done would cut costs and wasted time, especially if the system is trained with various sensors to perform health checks on the patient while taking blood. Spotting things out of the ordinary for a given patient is a complex problem, but one AI could be much better at than humans, especially overworked humans.
As for them owning everything: that can only happen if the anti-AI people continue to support stronger copyright protections against training. If we agreed that training AI is a common good and that information should be fair use over copyright, then any government, NGO, charity, or open-source crazy could train their own. It's like electricity: Edison made huge progress and cornered the market, but once the principles were understood, anyone could use them. As the technology matured, it became increasingly easy for anyone to fabricate a thermopile or turbine, so there is no monopoly on electricity. There are companies with local monopolies from cornering markets, but anyone can build an off-grid system with cheap bits from eBay.
That's all well and good, but here in America that's just a long list of stuff I can't afford, and it won't be used to drive down costs. If it will for you, then I'm happy you live in a place that gives a shit about its population's health.
I know there will be people who essentially do the reverse of profiteering and take advantage of AI for genuinely beneficial reasons, although even in those cases profit is often the motive. Unfortunately, the American for-profit system has drawn in some awful people with bad motives.
If, right now, the two largest AI companies were healthcare nonprofits, I don't think people would be nearly as upset at the massive waste of energy, money, and life that current AI represents.
I feel like all the useful stuff you've listed here is more like advanced machine learning, different from the AI that is being advertised to the public and attracting most of the investment. This is mostly stuff we can already do relatively easily with the available AI (i.e. highly sophisticated regression) for relatively low compute power and low energy requirements (not counting outlier cases like AlphaFold, which still requires quite a lot of compute). It is not related to why AI companies would need to own most of the major computational and energy innovations in the future.
It is the image/text generative part of AI that looks sexier to the public and is therefore the part mostly being hyped, advertised, and invested in by big AI companies. It is also this generative part that will require much bigger computation and energy innovations to keep delivering significantly more than it can now. The situation is very akin to the moon race. It feels like developing AI on this "brute force" course will deliver relatively low benefits for the cost it will incur.
For instance, I would be completely fine with it if they said, "We will train it on a very large database of articles, and finding relevant scientific information will be easier than before." But no, they have to hype it up with nonsense expectations so they can generate short-term profits for their fucking shareholders. This will come at the cost of either the next AI winter or a senseless allocation of major resources to a model of AI that is not sustainable in the long run.
Well, get your news about it from scientific papers and experts instead of tabloids and advertisements.
I mean, the person who said this is the CTO of OpenAI and an engineer working on this project. I would imagine she could be considered an expert.