My reading over the last few months has been so novel as to make me change a few of my fundamental views on things.
The books were The Singularity Is Near by Ray Kurzweil, Accelerando by Charles Stross (who'd obviously either read Kurzweil or the same sources), and Consciousness by Susan Blackmore. The first two very much majored on GNR (Genes-Nanotech-Robotics) and the singularity, the last on the consciousness aspects of humans, animals and AI. In fact I prefer to think of GNR as GNA – Genes-Nano-Artificiality, encompassing both AI and VR.
The changes of viewpoint are largely around a) SF and the future of space exploration, and b) AIs.
SF appears to have been through at least three stages. There was the Golden Age, when physics meant nothing and chrome lines meant everything. I arrived at SF somewhat at the end of this, with the stories of E.E. “Doc” Smith. Then there was “reality” SF, typified by Traveller, Star Wars and even Star Trek. The universe and its equipment looked lived in. Physics was consistent (at least within each world). Humans like us roamed the galaxy, aliens were everywhere, and we travelled through hyperspace or at “warp speed” to circumvent speed-of-light issues.
Then in the 80s and 90s came Cyberpunk: Neuromancer and Snow Crash, Gibson, Stephenson and others. The horizon was shorter now (10, 20 years), the adventures more head-, VR- and Earth-bound, and everyone wore mirrorshades with neural implants.
My own take on SF, and by extension the future, was some cross between Traveller and Cyberpunk: Jump-2 ships, but with a wetware hookup to something more exciting than a Computer/2bis.
The message from Kurzweil, though, is that we will never get to such a “quaint” future. In the next 50–100 years GNA technologies will really take hold – well before we actually reach out to the stars. So by the time we do reach the stars it will be as post-humans; wetware will be nowhere near it. Indeed, as Stross shows, our first interstellar spacecraft may be no bigger than a Coke bottle: a dense supercomputer running tens or hundreds of human- or machine-derived AIs.
Goodbye Free Trader Beowulf, farewell Han Solo. Hello Halo. And that, of course, is even assuming that the singularity doesn't bite.
The other aspect of this is the impact on the Fermi Paradox. In the past I, and many others, have thought of this in human-chauvinist terms. But if Kurzweil is right then there are a number of implications.
First, an alien culture may have very little time at TL7/8 before it becomes “post-alien” or the singularity hits. As a result, the window for “conventional” detection could be very short.
Second, a post-alien culture (PAC) with GNA running full throttle could expand very rapidly. Even assuming no hyperspace (so a maximum speed of, say, c/10 or c/5), such a culture could cover the galaxy in a million years (100,000 ly diameter × 10 years per light year), even assuming a starting point diametrically opposite us. That makes the “why aren't they here yet?” question even more salient – we either need a good answer or we are indeed alone.
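The back-of-envelope sum above can be sketched in a few lines (purely illustrative – the function name and the chosen speeds are mine, and the model ignores colonisation pauses, acceleration and so on):

```python
# Crude worst-case estimate: a culture expanding at a constant fraction of c,
# starting diametrically opposite us, must cross the full galactic diameter.

GALAXY_DIAMETER_LY = 100_000  # approximate diameter of the Milky Way in light years

def expansion_time_years(speed_fraction_of_c: float) -> float:
    """Years to cross the galaxy at a constant fraction of light speed."""
    return GALAXY_DIAMETER_LY / speed_fraction_of_c

print(round(expansion_time_years(0.1)))  # c/10 -> 1,000,000 years
print(round(expansion_time_years(0.2)))  # c/5  ->   500,000 years
```

Even at the leisurely c/10 the whole galaxy is reachable in a million years – an eye-blink on galactic timescales, which is what gives the paradox its bite.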
One answer is given by Stross: bandwidth and computing demands mean that such cultures are locked up in Matryoshka brains around their stars, or have seriously transcended. Another is the “star-fleet directive”: the trigger point for contact may well be us becoming post-human or encountering our own singularity. A third is that contact has indeed been made and that we are the result – the 2001 option.
Perhaps, though, the answer lies in that million years. Even assuming a closer starting point, keeping any system going for 500,000 years is a challenge. It may not be a case of things breaking but of things evolving or transcending.
The bandwidth argument could also help explain why SETI fails. If we are talking about PACs then bandwidth will always be the key issue, so the broadcast-type signals that SETI hopes to find would be of no use. Communications would only be over fibre or laser light, EHF or higher, or more esoteric methods unknown to us (quantum entanglement, anyone?).
A key issue for me with GNA is just how it reaches the whole human race. Will it be the preserve of the rich and powerful, or will it be available to anyone? Each technology has the potential for low/no-cost routes to plentiful supply (self-replication can do wonders for economics), but what political, social, cultural and commercial forces might militate against that? What does GNA mean for global poverty, or global terrorism?
With AI inheriting the universe, the matter of what AI (and even I) actually is becomes ever more important. Firstly, I've found it increasingly useful to distinguish between Artificial Intelligence (which means little, as we can't even agree on what “Intelligence” is) and Artificial Sentience (which is what most of us are really interested in: the ghost in the machine).
At the very end of Susan Blackmore's book, after surveying several different theories of consciousness, she comes up with the concept of inward-looking conversations. If an entity can have an internal dialogue with itself then it is truly conscious – and by her definition I take that to mean sentient. So, ironically, the real challenge with chatbots shouldn't be to build one that can have wonderful Turing-level conversations with other people (something that I'm sure is a brute-effort, non-sentience issue), but to build one that can have convincing dialogues with itself!
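The shape of such an internal dialogue can be sketched as two voices of the same agent taking turns to respond to each other. This is only a toy scaffold: the `respond` function is a hypothetical stand-in for whatever generation engine the bot actually uses, and the “id”/“ego” labels are illustrative.

```python
# Toy sketch of an inward-looking conversation: one agent, two internal
# voices, each responding to the running history. respond() is a placeholder
# for a real generation engine, not an actual implementation of one.

def respond(speaker: str, history: list) -> str:
    """Placeholder generator: produces a canned reflection on the last utterance."""
    last = history[-1] if history else "nothing yet"
    return f"{speaker} reflects on: {last}"

def internal_dialogue(seed: str, turns: int = 4) -> list:
    """Alternate two voices of the same agent over a shared history."""
    history = [seed]
    for i in range(turns):
        voice = "id" if i % 2 == 0 else "ego"
        history.append(respond(voice, history))
    return history

for line in internal_dialogue("Why did I say that?"):
    print(line)
```

The interesting (and hard) part, of course, is not the loop but making each voice's contribution genuinely responsive to the other – the loop just shows where a real generator would slot in.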
The model also has an element of “critical awareness” to it. The brain ticks over quite happily doing most things “subconsciously”, and only becomes “conscious” when the internal dialogue starts up for some reason. When we're “in the zone”, doing things as though they were second nature, we just do, we don't question; we have actually lost our consciousness for a while. Blackmore's model also gives an elegant solution to things like schizophrenia: it becomes just a case of multiple internal dialogues rather than a single “consciousness”.
Of course there is still the issue of the continuity of self. Is the me that wakes up the me that went to bed? Is the me that boots up as a bunch of code the same as the me that was just scanned? Susan Blackmore takes a Zen view that there is no such thing as a self: all of our feelings of it are transient, those critical-awareness moments which we then just let go.
So what actions fall out of this?
First, I need to revise my future timeline around the impact of GNA. It should look good on a SIMILE Ajax timeline.
Second, any SF I now write, role-play etc. needs to reflect this new “reality”. Bye-bye Type S Scout, quaint relic of the twentieth century.
Third, my chatbot and AI development needs to look more at the internal dialogue. I already had plans to do more on personality, memory and dreaming. These aspects now look more important than ever, but need to be joined by an “id” that can discuss them with the “ego” of the chatbot.
And two to think on....
Forget “make poverty history”, just how do we make the future evenly spread?
Finally, this whole continuity thing needs a lot of thinking (and reading) about. If we're to live forever, or reach the stars, it could be the most vital link.