Futurists

by rrusczyk, Nov 27, 2009, 5:50 PM

When I read futurist musings such as those Robin Hanson posts at Overcoming Bias, I almost always go through the following thought process:

Step 1: Nothing I'm doing at AoPS (or that anyone other than the people working on these technologies is doing) really matters. Robots, or enhanced humans, will take over from us mere regular humans in the not-too-distant future, and the idea of "teaching" humans will be rather passé. If this event were far in the future, it really wouldn't matter much to my evaluation of the value of my efforts. But if this event is nearer, then, well, what's the point of doing what we're doing?

Step 2: Most futurists wildly overreach in their timelines, if not in their expectations of what is possible.

Step 3: So, I guess I'm working on the back-up plan in the meantime: helping the smart people who will improve the future become even smarter, since the robots aren't yet here to do everything. Working on the back-up plan isn't exactly sexy, but, well, it pays the bills (finally).

Step 4: But making the smart students smarter means that we're improving the very people who are working on these futurist dreams. Maybe we're not working on the back-up plan at all. (Of course, some might argue that this thought means that we should stop working altogether.)

Step 5: Time to stop thinking about this silly stuff and get back to work. [Edit: Maybe "silly" is the wrong word. My future robot overlords will be very disappointed with it, at the very least.]


3 Comments

I for one welcome our robot / cyborg overlords. :-)

But I don't think this future culture, whenever it emerges, will develop ex nihilo. It will inherit the good and bad values and philosophies of its ancestors (just as we have done).

Educating the next (last?) human generation in problem solving, work ethic, and scientific skepticism is both a critical and a noble pursuit in my book.

by djcordeiro, Nov 28, 2009, 4:24 PM

I'm not as confident that our robot overlords will share human values. (Plus, I'm a little unclear on what "human values" are -- there's considerable variation across our species. At least in what the stated values are ;) )

by rrusczyk, Nov 28, 2009, 5:01 PM

That's just it; as an Existentialist, I'm not inclined to think of a category of platonic values (human or otherwise). There are just values as expressed in our choices and actions.

In some ways I hope that the values of these trans-humans will not be "human." I particularly like the Jaynes book you recommended because it suggests the ability for intelligence to change dramatically in a matter of just a few generations.

I am also a big fan of Nietzsche for the idea that current humanity is not the end point of evolution. His frequently misunderstood concept of the Übermensch might be a good proxy for the next (potential) stage of human development.

I guess I'm not satisfied with the current state of human nature, but I reject the popular idea that this nature can be improved by the social engineering of other humans. I prefer to think that any profound change will emerge unpredictably out of the chaos of our culture, economy and technology.

by djcordeiro, Nov 28, 2009, 8:26 PM
