[email protected] 15 points 9 months ago

There's quite a difference between rapid prototyping on software/hardware and rapid prototyping on the human body.
Musk's approach to engineering development has worked well in the software, aerospace, and automotive industries. Development on inorganic systems is much more predictable: variables can be isolated, and cause and effect are easier to understand. If you screw up some software on an inorganic system, your program might crash, your rocket might explode, or your car won't start. Those risks can be anticipated and costed fairly well, so rapid prototyping has an acceptable risk/reward ratio in that environment.

The human body, on the other hand, is an extremely complex system that we still don't fully understand. Each person is a unique variation on the model, and that variation changes over time with upbringing, diet, exercise, and life experience. Applying the same engineering approach from inorganic industries carries a much higher risk once you cross into the medical realm. Errors in a medical setting risk sickening, injuring, or even killing a person. The risk/reward ratio is skewed towards ensuring that human life is protected at all costs.

Using SpaceX as an example: the first three launches failed spectacularly, and a fourth failure would have ended the business, but fortunately the fourth test was a success. If you're suggesting we apply the same risk-taking to Neuralink, are you saying it's acceptable for the first three patients to die, as long as the fourth survives?