Genesis is a project, proposed by Claudius Gros, to seed bacterial life elsewhere in the universe. The aim is to build spacecraft that would deliberately seed nearby exoplanets with bacteria, kick-starting life on geological timescales and giving those planets a head start on eventually evolving something like intelligence. This would seem not only to multiply the number of beings capable of having lives worth living, but to dramatically increase the chances of an intelligent species (one with a life most worth living) escaping its home system, avoiding black swan apocalypses, singularities, etc., and spreading indefinitely, filling the universe with happiness.
A Parfitian could well respond that it's anything but certain that such a project would produce a universe-filling life form capable of happiness, and she would be right. But choosing an action under limited information is the problem with all action selection. On the spectrum of uncertainty: seeding the universe with life > friendly singularity > avoiding biological disaster > avoiding nuclear war > electing the right people in your country > cleaning your room. The problem is that the consequences for total happiness fall in the same order.
So what's the argument for pursuing a friendly singularity over seeding exoplanets? Or for singularity issues over cleaning your room?
I submit that we don't really know. That's not to say that the singularity or a bio-apocalypse can't happen. Unfortunately I'm worried that, as we become more powerful but no better at estimating uncertainty, something like this will eventually be fatal for us, and Drake's omega variable will become a little clearer.
See also GENU, the Gambit of Extreme Negative Utility.
Happiness is not summable; more people who are all happy is not in itself better than fewer. The conclusions are repugnant because the argument is flawed.
Exactly my argument as well, and many people's. Experienced (therefore subjective, individual) happiness is the only thing that counts - unless you and the other people are mostly pro-social, in which case being around people with lives just barely worth living would quickly drive overall summed happiness down further. Parfit was partly looking for a reductio ad absurdum. More here: https://thelateenlightenment.blogspot.com/2016/08/problems-of-utilitarianism-2-parfit-and.html?m=1