Martine Rothblatt is very impressive

I didn't realize the question needed answering.

Why would you think so?
The word, out of context, sounds like something New Age to me, but I guess I understand it a bit more now.

Transhumanism is the understanding that there are many human weaknesses that can be reduced by augmenting people with technology, and that repeated iterations of this will result in us being increasingly augmented.

What do you mean weaknesses? Weaknesses in what sort of way?

Many of us are Immortalists, hoping that death can be defeated. Many of us think that post-humanism is the next goal,

How come?

where we're so transformed that our capacity is incredibly greater than what we have now.

Similar to the weakness question: what does capacity mean?

The big thing is that lives are important. Defeating polio is incredibly important. Getting affordable medicines is important. Beating Alzheimer's is important.

That sounds fair

Human sentience is morally significant. So is animal. So is machine.

What do you mean by "morally significant"?

There are many things that improve the human condition that don't benefit from technology. There are others that do. We think this second category will eventually transform the human race.

That sounds scary
 
I won't deny that it sounds kind of new agey. It's certainly outside of societal norms. One of our positions is that certain types of technologies are undergoing exponential changes, and so things can change very rapidly once they get going.
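
To make the 'exponential' point concrete, here's a toy calculation (the numbers are invented; it's only meant to show the shape of the curve):

```python
# Toy illustration of exponential improvement: a capability that doubles
# every 2 years looks almost flat at first, then takes off dramatically.
def capability(years, start=1.0, doubling_time=2.0):
    return start * 2 ** (years / doubling_time)

for y in range(0, 21, 4):
    print(f"year {y:2d}: {capability(y):7.1f}x the starting capability")
# year  0:     1.0x ... year 20:  1024.0x
```

Nothing special happens along the way; steady doubling alone is enough to make year 20 look nothing like year 0.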

This creates opportunities to turn healing technologies into enhancing technologies. Research that could be used to treat/cure certain types of blindness could then be used to augment the sight of healthy people, if they wanted to. Same with all types of things: hearing, bone strength, cardiac capacity, disease resistance, etc.

There's a bit of a pseudo-mystical element, in that you'll find that many transhumanists think that machines could become sentient, if we learned how to make them so (i.e., it's believed to be possible). On top of that, many of us implicitly assume that human consciousness could be first augmented by cybernetics (allowing us to be vastly, vastly, vastly more intelligent) and then maybe even transferred onto a computer substrate, thereby both greatly expanding our intelligence and allowing our bodies to be terrifically modifiable.

"Transhuman" refers to people who are in an intermediate stage, where we increasingly use technology to augment normal biological functions. Better immune systems. Better eyes. Etc. But the general direction is towards 'posthuman', where the augmenting of our minds allows us to become so terrifically intelligent compared to our old forms that it's not sure if we're even the same species any longer, even though we are still certainly 'persons'.

But, in terms of baby steps, basically any person who's part cyborg is participating in transhumanism (pacemaker, new lenses in their eyes, etc.). But transhumanists would like to see those technologies used to make us better than 'healthy normal'. Why should someone's artificial limbs be only as strong as normal people's limbs? Why should a cochlear implant be slightly worse than normal ears? Can't they be better?
 
I didn't realize the question needed answering. Kyr is badly misrepresenting my position.

Transhumanism is the understanding that there are many human weaknesses that can be reduced by augmenting people with technology, and that repeated iterations of this will result in us being increasingly augmented.

Many of us are Immortalists, hoping that death can be defeated. Many of us think that post-humanism is the next goal, where we're so transformed that our capacity is incredibly greater than what we have now.

The big thing is that lives are important. Defeating polio is incredibly important. Getting affordable medicines is important. Beating Alzheimer's is important. Human sentience is morally significant. So is animal. So is machine.

There are many things that improve the human condition that don't benefit from technology. There are others that do. We think this second category will eventually transform the human race.
Why are lives important? As individuals we tend to only care about those we know or who are close to us. 250,000 die in a tsunami and we send some money and then move on. 7 billion folks are living now, with more to come. It seems that from the perspective of greater humanity none of those living are particularly important. At the other end, each individual has its own small circle of valued lives. Through what thought process do you attribute value to any or all of the living?
 
It's true. Very many philosophies value those who are far away, but then actually don't do anything to help them. We focus our spending on our loved ones. We give most charity close to home. Etc. It's instinct.

Transhumanism tends to look for synergy. Many inventions can go down in price after they're invented. That's a good thing. Drugs go generic. Patents can be copycatted, etc. Sure, not everything. But many things do.

Now, I'm selfish. I worry about my parents getting Alzheimer's. Synergy means that if you're worried about it too, then we're both more likely to arrive at a cure in time. I don't have to care about your parents, but my striving for a cure certainly benefits them. Mrs. Rothblatt has prevented thousands of deaths in trying to save her daughter. If her work with pigs is successful, she'll not only save thousands more but also create new tools by which more inventions can be created.

We create tools that are then used to create better tools. Campfires were awesome in-and-of-themselves, but kilning pottery led to all new types of awesome.

Every kid who doesn't die of malaria is going to want a cure for cancer. I gain allies in this creation process by preventing malaria deaths.

The other advantage of transhumanism is that it forces us to expand our conceptions of sentience. If we might be sentient beings living in computer hardware, then we can think about 'lower' sentiences and create rules regarding their moral worth. The more we learn, the more we can fine-tune these rules.
 
Synergy means that if you're worried about it too, then we're both more likely to arrive at a cure in time. I don't have to care about your parents, but my striving for a cure certainly benefits them. Mrs. Rothblatt has prevented thousands of deaths in trying to save her daughter. If her work with pigs is successful, she'll not only save thousands more but also create new tools by which more inventions can be created.

Well, that just sounds to me like what the government does anyway. Because it knows humans are dumb, panicky, selfish animals, it uses external levers and incentives to provide for the less advantaged or unlucky.

You're also counting on people who will have the same values as you do, but given the nature of our universe you're just as likely to create warlords and people who war against their circumstances simply to have more power, to make their own rules, to be the system, to be in control. New technologies are going to help and transform them as much as your group, if not more. They might just opt to wipe your group out because of competing goals and reclaim your biomass.
 
Before antibiotics, I had both microbes AND Genghis Khan trying to kill me and take my stuff. After antibiotics, it was just old age and Genghis Khan.

I won't deny there's a threat from power migrating upwards. But that's usually been a political problem; we've always had tyrants trying to steal our stuff. It exists whether or not we invent super-human eyes, and so it's a problem worth working on regardless.
 
Of course anyone can personally strive for (or fund) anything they like or are interested in, for whatever reason. That is one thing, but it is quite another if they mean to argue that 'it is all for the greater human good'.
I mean, you may want to have better eyes (you had mentioned a color-blindness issue), another may want a better stomach, a third new legs, a fourth a taller body, another more hair, others a better metabolism, and so on and so on. All of those are personal wishes, and obviously entirely human.
But it is another issue altogether to call for altering humans from biological beings into biological-mechanical hybrids, just because 'we now have limited abilities (it is not as if we even know what abilities we have, even mentally) and getting mecha will make us better'. That is not evidently correct, let alone for a greater good, any more than any other highly personal view twisted so as to be presented as pan-human or majority-oriented. So it is inherently about a small population, and potentially a clique distinguished either by money or by chance events having little to nothing to do with 'worth' in any way defined (not that 'worth' or even worth would allow for killing off most people and leaving a few to be immortal).
 
Transhumanism is very rational. Who in their right mind wants to die? Not me, for shizzle! I want to be in an afterlife of some kind more than I want to believe in anything, but there is zero evidence for it. I think I'm pretty awesome & nonexistence is terrifying. Therefore Cyborg Narz banging hotties with his forever-young dong of the future!

Note: Ray Kurzweil, the founder (I think) of this movement, is a bit of a weird guy, super overoptimistic IMO, & honestly it's a bit creepy to hear him talk about trying to make a robot version of his dad based on old recordings & such (I watched a documentary about him), but his vision of eliminating the indignity of feebleness & ultimately death is noble, albeit unrealistic in his lifetime (which makes it all the more noble).
 
(not that 'worth' or even worth would allow for killing off most people and leaving a few to be immortal).
This is the second time you've mentioned killing people off. Non sequitur aside, it seems to misrepresent us.

There is a concentration of these technologies. While there are many opportunities for synergy, the technologies are first going to be used to fix deficits. And then some people will push for new versions of the technology to transcend limits.

Some of us explicitly want those limits pushed. In the meantime, cures for many ailments will be invented and made available to people.

Transhumanism is very rational. Who in their right mind wants to die? Not me, for shizzle! I want to be in an afterlife of some kind more than I want to believe in anything, but there is zero evidence for it. I think I'm pretty awesome & nonexistence is terrifying. Therefore Cyborg Narz banging hotties with his forever-young dong of the future!

Note: Ray Kurzweil, the founder (I think) of this movement, is a bit of a weird guy, super overoptimistic IMO, & honestly it's a bit creepy to hear him talk about trying to make a robot version of his dad based on old recordings & such (I watched a documentary about him), but his vision of eliminating the indignity of feebleness & ultimately death is noble, albeit unrealistic in his lifetime (which makes it all the more noble).

He's not a founder, though he is a participant. The movement is actually quite old. He's a futurist, which means he's very likely wrong in many silly ways. He does a reasonable job explaining exponential growth, so I'll give him that. He's not a prophet, though he tends to have a bit of a following.

And yeah, we're all going to be weird in our own little ways. We all envision the future turning out some specific way, and then try to figure out how we can fit nicely into that future. I happen to think that lie-detection technology will transform society, and have factored that into my long-term thinking. I'm also hoping that our cognition can be cybernetically enhanced, and look for opportunities to push technologies that way. In the meantime, I think that iteratively tackling things that cause death & suffering is a worthwhile cause, made even more so when each tool allows us to create new tools.
 
It's true. Very many philosophies value those who are far away, but then actually don't do anything to help them. We focus our spending on our loved ones. We give most charity close to home. Etc. It's instinct.

Transhumanism tends to look for synergy. Many inventions can go down in price after they're invented. That's a good thing. Drugs go generic. Patents can be copycatted, etc. Sure, not everything. But many things do.

Now, I'm selfish. I worry about my parents getting Alzheimer's. Synergy means that if you're worried about it too, then we're both more likely to arrive at a cure in time. I don't have to care about your parents, but my striving for a cure certainly benefits them. Mrs. Rothblatt has prevented thousands of deaths in trying to save her daughter. If her work with pigs is successful, she'll not only save thousands more but also create new tools by which more inventions can be created.

We create tools that are then used to create better tools. Campfires were awesome in-and-of-themselves, but kilning pottery led to all new types of awesome.

Every kid who doesn't die of malaria is going to want a cure for cancer. I gain allies in this creation process by preventing malaria deaths.

The other advantage of transhumanism is that it forces us to expand our conceptions of sentience. If we might be sentient beings living in computer hardware, then we can think about 'lower' sentiences and create rules regarding their moral worth. The more we learn, the more we can fine-tune these rules.
OK, but again, why are lives important? Are they merely the playing field for further advancement? Are 7 billion people needed, or would 2 billion suffice for technological advancement? What are the transhumanism rules of moral worth regarding humans?
 
They're not merely the playing field for more advancement; that just happens to be a desirable coincidence.

I'm not sure why human lives are important. Perusing various writings, it just seems to be axiomatic. i.e., a transhumanist is someone who already values the improvement of the human condition (though this needn't be a universal trait, since some people might desire the end goals without caring about who is hurt in the process). I find that common empathy is one of my major drivers, and transhumanist thinking has caused me to expand the number of beings to whom I have empathy.

Unnecessary human suffering and human limitations are perceived to be 'bad things', and as such should be tackled. The belief appears to be pre-existing. It's like the more common humanism that way, I guess.
 
^But given that we are all different despite being of the same species, it is obvious that there will always be people better than others in any given trait. After all, being human is not inherently a contest (or, as noted in a rather bad English text, 'a war of all against all'), but more of an open game where co-operation can also be far more potent in the future.

Either way, though, we won't all be at 'the same level', be it more or less objective or subjective. That much is impossible. I mean, we can argue that 'all mice' are 'the same', but that is an artifact of the scope from which we observe something foreign, not of some natural progression leading even those (clearly less intelligent) creatures to reach the same level for every member of their species.
 
Who said we'd be at the same level?

Reducing deficits increases equality, but augmentation has the potential to decrease it. It depends on how fast the progress is upwards vs. outwards, i.e., whether the technologies come down in price faster than new augments are invented.
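
A back-of-the-envelope sketch of that race (every number here is an assumption, purely for illustration):

```python
# Toy model: does one augment's price fall to 'broadly affordable' faster
# than new top-end augments appear? All figures are made-up assumptions.
price = 100_000           # hypothetical launch price of an augment ($)
affordable = 1_000        # hypothetical price at which access is broad ($)
annual_price_drop = 0.25  # assume it gets 25% cheaper each year
new_augment_every = 3     # assume a new frontier augment ships every 3 years

years_to_affordable = 0
while price > affordable:
    price *= 1 - annual_price_drop
    years_to_affordable += 1

print(f"~{years_to_affordable} years for the augment to become affordable,")
print(f"while a new frontier augment appears every {new_augment_every} years.")
# With these assumptions the frontier outruns affordability, so the gap grows.
```

Flip the assumed rates and the gap shrinks instead; that's the whole point of the upwards-vs-outwards question.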
 
Before antibiotics, I had both microbes AND Genghis Khan trying to kill me and take my stuff. After antibiotics, it was just old age and Genghis Khan.

I won't deny there's a threat from power migrating upwards. But that's usually been a political problem; we've always had tyrants trying to steal our stuff. It exists whether or not we invent super-human eyes, and so it's a problem worth working on regardless.

The fact is that microbes are still killing people; it's just that we have been able to limit which ones can do damage. But we won't be able to eliminate death overall. To do that you would have to be able to eliminate genetic mutations from our genes, and that is impossible.
 
Well, more correctly, you need to fix the damage done to metabolism by any specific mutation. But there's no doubt it will be hard. I'm not so sure 'impossible' is the right word.
 
I feel somewhat ambivalent about like augmenting physical strength and whatnot, but overall there isn't really any good argument against it (except like beauty maybe, I really don't know though).

However, I'm really sceptical about this "enhancing intelligence" deal.
What does enhancing intelligence mean? It's not like intelligence is a quantity that can be measured. And it seems like this could end up "taking the humanity away from humans".
 
I agree that 'enhancing intelligence' is a tough-to-define target. Maybe 'cognitive enhancement'? The thing is, you know it when you see it. Faster thinking, multi-factorial thinking, greater alertness, greater concentration, increased lateral thinking, etc. All of these things we can recognize (along some dimensions) as 'good things'. If you drink coffee to perk up or if you endeavour to get a good night's sleep before an important event, or if you use memory aids or mnemonics in order to learn something faster (or better), you at least get the gist of what's being sought.

These are clearly further down the road. I mean, right now I'm happy to target intelligence degeneration. Childhood sickness. Dementia. Stroke. Concussion injury. Etc. All of these things are worth fighting. It's not like we can very easily target 'enhancing cognition', but it's still worth keeping an eye out for opportunities.

These tools are iterative, as well. Any specific efforts to increase intelligence (or reduce a decrease) increases the amount of brainpower we have available to develop the next tool.
 
^We aren't a hive, though :D

Personally I am not of the view that more people = more scientific/other breakthrough ideas.

Nor that people living longer = improvement in that scope. Of course for localised and personal points of view those can be obvious and valid goals. I'm just not seeing it in the scope of humanity as a whole, nor do I find it viable without huge population changes/losses.
 
Well, keep in mind I didn't say "more people = more breakthroughs" (though it's a question worth mulling). It was more "increasing average intelligence = more breakthroughs". Now, I'll agree it's controversial, but it's not unbelievable.

And I'm not sure you can really disagree that increasing people's healthspan is a good thing. Given how many people struggle to do so, and then how many people regret their failures, it seems to be true. How many people look at their emphysemic lung scans and say "well, I sure wish I'd smoked more"?

I won't deny that there are social concerns. I'm not sure "have people degenerate mentally and physically over a couple decades, and then die" is really the best solution to those problems. The median age will creep upwards over time, somewhat slowly. So, we'll have time to adapt to the horror of people suffering less.
 