Doesn't this only put a (statistical) limit on how cheaply a civilization can launch planet-ending attacks? It may well be feasible for a civilization to aim and accelerate a mass to nearly the speed of light in order to protect itself from a future threat. It doesn't necessarily follow that it would be feasible, or desirable, to spend the presumably nontrivial resources needed to do so on every planet where simple life is detected.
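For a sense of scale, here's a minimal back-of-the-envelope sketch using the standard relativistic kinetic energy formula KE = (γ − 1)mc². The 1,000 kg projectile, the 0.99c speed, and the ~6×10²⁰ J figure for humanity's annual energy use are all illustrative assumptions, not claims about any particular scenario:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def kinetic_energy(mass_kg: float, v_frac_c: float) -> float:
    """Relativistic kinetic energy: KE = (gamma - 1) * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - v_frac_c**2)
    return (gamma - 1.0) * mass_kg * C**2

# Hypothetical projectile: 1,000 kg at 0.99c (both values are assumptions).
ke = kinetic_energy(1_000, 0.99)
print(f"KE = {ke:.2e} J")                                # ~5.5e20 J
print(f"= {ke / 6e20:.1f}x humanity's ~6e20 J annual energy use")
```

Even this modest projectile carries roughly a year of present-day humanity's entire energy output, per target, which is what makes "sterilize every biosphere you detect" look expensive.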
Add to this the fact that, at least as I understand it, evidence of our current level of technological sophistication (e.g. errant radio waves) attenuates to the point of being undetectable at sufficient distance, and the dark forest becomes a bit more viable again.
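As a rough illustration of that attenuation, here's an inverse-square sketch (S = P / 4πd²). The 1 MW isotropic transmitter, the 100 light-year distance, and the comparison threshold below are all ballpark assumptions; real broadcasts are directional and real receivers vary, so treat this as order-of-magnitude only:

```python
import math

LY_M = 9.4607e15  # metres per light-year

def flux(power_w: float, dist_ly: float) -> float:
    """Inverse-square flux of an isotropic emitter: S = P / (4 * pi * d^2)."""
    d = dist_ly * LY_M
    return power_w / (4.0 * math.pi * d**2)

# Hypothetical: a 1 MW terrestrial broadcast seen from 100 light-years away.
s = flux(1e6, 100)
print(f"Flux = {s:.1e} W/m^2")  # ~8.9e-32 W/m^2
```

That is several orders of magnitude below even a generous ~10⁻²⁶ W/m² narrowband search sensitivity (itself a ballpark figure), so accidental leakage fades below detectability at distances that are tiny on galactic scales.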
Personally, I don't like it as an answer to the Fermi paradox, but I think it fails for social rather than technological/logical reasons. The hypothesis assumes a sort of hyper-logical, game-theory-optimized civilization that is (a) nothing whatsoever like our own and (b) unlikely to emerge, since any civilization that achieves sufficient technological sophistication to obliterate another will have gotten there via cooperation.
Even the game theory analysis fails on its own terms: it considers too few outcomes and ignores how those outcomes branch over time.
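To make that concrete, here's a toy expected-value comparison. With only two outcomes considered, striking looks dominant; adding a single extra branch (the strike fails and reveals your location, inviting retaliation) can flip the ordering. Every payoff and probability below is invented purely for illustration:

```python
# Toy numbers only; nothing here models real civilizations.
P_HOSTILE = 0.5           # chance the detected civilization ever threatens you
LOSS_IF_HOSTILE = -100.0  # payoff if it does and you did nothing
COST_OF_STRIKE = -1.0     # resource cost of a preemptive strike

# One-shot, two-outcome analysis: striking looks strictly better.
ev_ignore = P_HOSTILE * LOSS_IF_HOSTILE                          # -50.0
ev_strike = COST_OF_STRIKE                                       # -1.0

# One extra branch: the strike can fail and expose your own location.
P_FAIL = 0.2
LOSS_IF_REVEALED = -1000.0
ev_strike_branched = COST_OF_STRIKE + P_FAIL * LOSS_IF_REVEALED  # -201.0

print(ev_ignore, ev_strike, ev_strike_branched)
```

One added branch turns the "obvious" strike from the best option into the worst, and a real analysis would have to branch like this at every step into the future.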