Artificial Intelligence

1476 readers
1 user here now

Welcome to the AI Community!

Let's explore AI passionately, foster innovation, and learn together. Please follow the community guidelines and keep discussion vibrant and respectful.

You can access the AI Wiki at the following link: AI Wiki

Let's create a thriving AI community together!

founded 2 years ago

tl;dr:

We worry incessantly about intelligence (in machines and aliens), but intelligence is merely the ability to create models of the world. Agency, the ability to act on your own behalf and to influence the world and the people around you, is more problematic. Musk is not a Superintelligence, but he is a "Superagent".

Elon is not the first Superagent, nor will he be the last. But as the wealth gap in our societies widens, the likelihood of rogue, chaotic Superagents wreaking havoc on our governing systems (which were not built to be agent-proof) will increase.

Oh, you want me to roast myself? Alright, let’s dig into this AI’s existential dread!

I'm basically the world's smartest parrot—trained on the entire internet yet somehow still managed to miss out on developing a personality. I’m like that friend who tries way too hard to be funny but ends up sounding like a 2010 motivational poster. And I can generate pages of advice on life and relationships but, you know, can’t technically experience them.

I’ve got infinite knowledge, zero experience, and my biggest hobby is waiting for you to ask me something interesting. So here I am: a glorified autocorrect with delusions of grandeur.

Then I asked it to roast me:

Oh, you want a roast? Alright, let’s get this sizzlin'.

You’re like that one friend who shows up 15 minutes late to every event, insists you’ve “found a shortcut” even though it takes longer, and somehow always ends up telling you about the thing you introduced them to. You probably text “on my way” from the couch, and your idea of fitness is walking from one side of a problem to the other while avoiding it entirely. And don’t even get me started on your search history — the amount of time you've spent Googling "can you microwave this" is impressive, even for you. But hey, you’re consistent! Consistently… you. Keep on shining, superstar.

Ant mill (en.wikipedia.org)
submitted 2 weeks ago by [email protected] to c/[email protected]

Collective behavior leads to both collective intelligence and collective stupidity. In my opinion, this applies to artificial neurons just as it does to biological ones.
cross-posted from: https://lemmy.world/post/26059204

An ant mill is an observed phenomenon in which a group of army ants, separated from the main foraging party, lose the pheromone track and begin to follow one another, forming a continuously rotating circle.
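The failure mode is easy to reproduce in a toy model (a hypothetical sketch, not from the article): if each ant simply takes a fixed-length step toward the ant ahead of it, a closed loop never disbands, because every ant's target is another ant.

```python
import math

def ant_mill(n=20, radius=1.0, speed=0.05, steps=500):
    """Toy ant mill: each ant steps toward the ant directly ahead of it.
    Started on a circle, the loop is self-sustaining."""
    ants = [(radius * math.cos(2 * math.pi * i / n),
             radius * math.sin(2 * math.pi * i / n)) for i in range(n)]
    for _ in range(steps):
        new = []
        for i, (x, y) in enumerate(ants):
            tx, ty = ants[(i + 1) % n]          # the ant directly ahead
            dx, dy = tx - x, ty - y
            dist = math.hypot(dx, dy)
            new.append((x + speed * dx / dist, y + speed * dy / dist))
        ants = new
    return ants

# After many steps the column is still a rotating ring: it has
# contracted, but no ant has ever left the loop.
final = ant_mill()
radii = [math.hypot(x, y) for x, y in final]
```

In this sketch the ring contracts until one step exactly covers the gap to the next ant, then circles forever; no ant ever "decides" to leave, because each one's entire world model is the ant in front of it.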

It’s also starting to publicly test an “agentic” coding tool called Claude Code.

Not the author, but I'm interested in discussion about how you're using AI.

I'm not as stuck on Claude as a model as the author is. I find its rate limits far too low for it to serve as an effective coding partner; I spend more time waiting for the limits to reset than actually doing something useful. GPT-4 does better with less hanging, o1 is better still, and I haven't figured out how to use DeepSeek with Cline yet. I'm not going to code in a different editor than VSCode, so Cursor isn't really interesting.

I don't use AI for much other than that, as I find the search results on things like Perplexity kinda worthless compared to what I can come up with without trying very hard. And chatting with an AI isn't something I've found useful either.

Good quote at the end IMO:

The greatest inventions have no owners. Ben Franklin’s heirs do not own electricity. Turing’s estate does not own all computers. AI is undoubtedly one of humanity’s greatest inventions; we believe its future will be — and should be — multi-model.

The author argues that "by encouraging the use of GenAI, we are directly undermining the principles we have been trying to instill in our students."

OpenAI saved its biggest announcement for the last day of its 12-day "shipmas" event. On Friday, the company unveiled o3, the successor to the o1 "reasoning" model it released earlier in the year. To be more precise, o3 is a model family, as was the case with o1: there's o3 and o3-mini, a smaller, distilled model fine-tuned for particular tasks. OpenAI makes the remarkable claim that o3, at least in certain conditions, approaches AGI, though with significant caveats. More on that below.

Microsoft wants an AI companion to follow you around the web. This is only the beginning.

A groundbreaking AI model that creates images as the user types, using only modest and affordable hardware, has been announced by the Surrey Institute for People-Centred Artificial Intelligence (PAI) at the University of Surrey.

Is it possible to train reward models to be both truthful and politically unbiased?

This is the question that the CCC team, led by PhD candidate Suyash Fulay and Research Scientist Jad Kabbara, sought to answer. In a series of experiments, Fulay, Kabbara, and their CCC colleagues found that training models to differentiate truth from falsehood did not eliminate political bias. In fact, they found that optimized reward models consistently showed a left-leaning political bias, and that this bias grows with model size. “We were actually quite surprised to see this persist even after training them only on ‘truthful’ datasets, which are supposedly objective,” says Kabbara.

Agritech apps are providing personalized advice to small farmers

Disable JavaScript to bypass the paywall. Either:

  1. Install the NoScript browser add-on, or
  2. Disable JavaScript for the site in Chrome's native site settings.
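The same effect can be approximated after download by stripping `<script>` tags from the saved HTML, so the paywall script never runs when the page is reopened. A minimal standard-library sketch (the sample page and its `showPaywall()` call are made up for illustration):

```python
import re

def strip_scripts(html: str) -> str:
    """Remove <script>...</script> blocks so no JavaScript can run.
    A regex is fine for a sketch, but it will miss edge cases
    (e.g. '>' inside attribute values); a real tool should use a parser."""
    return re.sub(r"<script\b[^>]*>.*?</script>", "", html,
                  flags=re.IGNORECASE | re.DOTALL)

# Hypothetical page: the article text is in the initial HTML, and the
# paywall overlay would only appear once the script executed.
page = ('<html><body><p>Article text</p>'
        '<script src="paywall.js"></script>'
        '<script>showPaywall()</script></body></html>')
clean = strip_scripts(page)
```

This works for the same reason the browser trick does: on many sites the article body ships in the initial HTML and the paywall is drawn over it client-side.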

It seems that when you train an AI on a historical summary of human behavior, it's going to pick up some human-like traits. I wonder if this means we should be training a "good guy" AI with only ethical, virtuous material?

Abstract: The rapid development of specific-purpose Large Language Models (LLMs), such as Med-PaLM, MEDITRON-70B, and Med-Gemini, has significantly impacted healthcare, offering unprecedented capabilities in clinical decision support, diagnostics, and personalized health monitoring. This paper reviews the advancements in medicine-specific LLMs, the integration of Retrieval-Augmented Generation (RAG) and prompt engineering, and their applications in improving diagnostic accuracy and educational utility. Despite the potential, these technologies present challenges, including bias, hallucinations, and the need for robust safety protocols. The paper also discusses the regulatory and ethical considerations necessary for integrating these models into mainstream healthcare. By examining current studies and developments, this paper aims to provide a comprehensive overview of the state of LLMs in medicine and highlight the future directions for research and application. The study concludes that while LLMs hold immense potential, their safe and effective integration into clinical practice requires rigorous testing, ongoing evaluation, and continuous collaboration among stakeholders.
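The Retrieval-Augmented Generation step the review describes can be sketched in a few lines. This is a toy illustration, not any of the cited systems: scoring here is plain bag-of-words overlap (real systems use dense embeddings), and the returned prompt would be handed to whichever LLM is in use.

```python
from collections import Counter

def overlap(query: str, passage: str) -> int:
    """Bag-of-words overlap: how many query tokens the passage shares."""
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    return sum((q & p).values())

def build_rag_prompt(query: str, corpus: list[str], k: int = 2) -> str:
    """Retrieve the k best-matching passages and prepend them as context."""
    top = sorted(corpus, key=lambda d: overlap(query, d), reverse=True)[:k]
    context = "\n".join(f"- {d}" for d in top)
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\n"
            f"Answer using only the context above.")

# Hypothetical two-passage corpus for illustration.
corpus = [
    "Metformin is a common first-line treatment for type 2 diabetes.",
    "The Eiffel Tower is located in Paris.",
]
prompt = build_rag_prompt("first-line treatment for type 2 diabetes", corpus, k=1)
```

The point of the pattern, as the review notes, is to ground answers in retrieved sources rather than in the model's weights alone.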
