Recently in Essays Category

One of the things that has struck me over the last few weeks in discussions about where virtual worlds are going is that we are in danger of making the same mistake made by application development and the web, by concentrating on the wrong thing. With virtual worlds, and the moves towards interoperability and standards, there is a real opportunity to get things right first time.

The "mistake" is that we tend to concentrate on the visual. It's only natural, it's probably our most powerful sense and the one that most of us would least wish be without - and hardly a surprising one for virtual worlds!

Since the first computer read-out and green-screen VDU we have developed computer applications and their user interface as a single entity. Even with the move to client-server this did not change - one application, one user interface. However, with the arrival of the web, and of mobile phones, things began to change a bit. Users wanted access to the same application (be it a CRM system or Facebook) regardless of the device they were using. In fact almost 10 years ago I had a slide in my Aseriti slide pack (selling utility billing systems) that called for "Different Users, Different Needs", showing a user-interface-less application engine surrounded by optimised user interfaces for mobile, consumer, web and call-centre users.

The development of the mash-up culture has pushed this even further. Once we replace applications with user-interface-less application engines, and then create user interfaces (and even other application engines) which talk to the application through an agreed API (typically a web service), we can unleash a whole new set of creativity in how we create applications.

The web unfortunately made a similar mistake - hardly surprising since it was based around HTML, but disappointing given Sir Tim Berners-Lee's own original vision, and that of Alan Kay and the Dynabook. HTML is mostly about marking up the display, not the content. David Burden means nothing more than the characters D-a-v-i-d-%20-B-u-r-d-e-n displayed in bold type. If you search for "David Burden" on Google you'll find lots of the same characters in the same order, but you'll have to guess that they actually refer to different people.

The "solution" of course is the Semantic Web - championed by Sir Tim Berners-Lee. But trying to retrofit it to the Petabytes of text strings that make up most of the web is an enormous challenge. Formats like RSS, and even Twitter hashtags, begin to give us some sort access to a semantic web, but the true semantic web languages of RDF and OWL (which at Daden we are using to give our chatbots semantic understanding) are woefully under-used. Even less used are things like Info URIs - agreed semantic codes, like an ISBN number, that say that info:people/davidburden/515121 is me, and not the CIO of the Post Office. If every mention of me on the web was marked up semantically then finding all references to me on the web becomes trivial. It's good to see that Google is beginning to introduce aspects of semantics into its search results, but without the original content being semantically marked up its only a small step - the mistake has already been made.

So what's all this got to do with virtual worlds? Almost any initial assessment of a virtual world starts with how good it looks - how good are the textures, the avatars, the sky, water and shadows. After that it's about functionality - how naturally does the avatar move, how can you interact with objects, can you view the web or MS Office - and about deployment issues (does it run on a low spec PC, can it run behind the firewall, can we protect children). There is active debate at the moment about standards in virtual worlds - Collada, X3D etc - and whether virtual worlds should be downloads or browser based (and this itself offers a spectrum of solutions as pointed out by Babbage Linden at the recent Apply Serious Games conference).

But to me all this is missing the point. Virtual worlds are NOT about what they look like, but about what's in them.

Let's not repeat the mistake of application development and the web. Let's start thinking about virtual worlds in terms of how we semantically mark up their content, and then treat the display issue as a second order problem. The virtual world is not HOW you see it, it's WHAT you see (or more precisely what you sense and interact with).

Some examples. These are all based around Second Life, since with libsecondlife/libomv we can actually get access to the underlying object models (which is as close to a semantic virtual world model as you can get).

  • With Open Sim you not only have a different application sharing the same object model as SL, but also different clients using different graphics engines to render "SL" in subtly different ways.

  • We have been working with the University of Birmingham to use their expertise in robotics to help create autonomous avatars in SL. The University uses a standard robot simulation application to visualise and model physical world spaces and test robot software, before downloading the code to the physical-world robots. To work in SL they've taken the SL object/scene description and dynamically fed it to the bot modelling tool - so SL "appears" as a wireframe model in the simulation application just as their physical world spaces do.

  • On my iPhone I have Sparkle, a great little app which lets me log my avatar into SL. No graphics yet, just text (and not even a list of nearby avatars) but adding a radar scan of nearby people - and objects - would be almost trivial, and adding a 2D birds-eye view of the locale only a little harder. Even a 2.5D "Habbo" rendering of SL would not be impossible.

  • We've already played around with using LSL sensor data and libomv to generate live radar maps in web browsers - why not push this a bit further and use Unity, X3D or similar to "re-create" Second Life in the browser - it won't look "identical", but in reality it's all just bits anyway.

Four situations, four different ways of rendering the Second Life "semantics".
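The "one object model, many renderings" idea behind those four examples can be sketched in code. The scene data below is a hypothetical stand-in for the kind of object/position information libsecondlife/libomv exposes; the two renderers mimic a Sparkle-style text client and a simple radar scan:

```python
import math

# Hypothetical object model: name, semantic type and 2D position for each
# entity in the scene. Real libomv data is richer, but the shape is similar.
scene = [
    {"name": "Alice", "type": "avatar", "pos": (10.0, 12.0)},
    {"name": "Bob",   "type": "avatar", "pos": (40.0, 45.0)},
    {"name": "Kiosk", "type": "object", "pos": (11.0, 14.0)},
]

def render_text(scene):
    """A text-only client: just list what is there."""
    return [f"{e['name']} ({e['type']})" for e in scene]

def render_radar(scene, me=(10.0, 10.0), rng=20.0):
    """A radar view: names and distances of entities within range of 'me'."""
    hits = []
    for e in scene:
        d = math.dist(me, e["pos"])
        if d <= rng:
            hits.append((e["name"], round(d, 1)))
    return sorted(hits, key=lambda h: h[1])

print(render_text(scene))
print(render_radar(scene))   # Bob is out of range, so he doesn't appear
```

Neither renderer knows or cares how the other presents the world; both consume the same semantic scene description, which is the whole point.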

Our own work on PIVOTE shows another approach to this problem. By creating the structure and content of a training exercise away from the visualisation tool we are free to then deploy the exercise onto the web, or iPhone or virtual world of our choice without having to change the semantic information or the learning pedagogy. If that semantic model could be extended to include the virtual world itself, then we would have a true write once - play anywhere training system.

One final issue, and one our bots particularly suffer from, is that having access to objects is no real guarantee of having access to true semantics. I might create a plywood cube in Second Life and call it a chair, a snake, or anything. The bot cannot guarantee that reading an object's name will tell it what the object is. To be truly semantic any future editing system should ideally force us to put accurate semantics on the objects we create - and in particular their place in the ontology of the world. Then even if we can't recreate a "chair" as the specified collection of prims or polygons we can substitute our own.
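A sketch of what that substitution might look like, with an invented ontology and invented asset names. A client that cannot reproduce the original prims looks up the object's semantic class, not its unreliable free-text name, and swaps in its own asset:

```python
# Invented mapping from ontology terms to this client's own renderable assets.
ontology_assets = {
    "furniture/chair": "my_chair_mesh",
    "furniture/table": "my_table_mesh",
}

def renderable(obj):
    """Pick an asset from the object's semantic class, ignoring its name."""
    # The free-text name proves nothing (a plywood cube called "snake");
    # the semantic class, if enforced at creation time, does.
    asset = ontology_assets.get(obj.get("semantic_class"))
    if asset:
        return asset
    return "generic_prim"   # fall back when the concept is unknown

chair = {"name": "snake", "semantic_class": "furniture/chair"}
mystery = {"name": "chair"}   # a name alone tells us nothing
print(renderable(chair))      # my_chair_mesh
print(renderable(mystery))    # generic_prim
```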

So this is my challenge to the virtual world community. Stop thinking of virtual worlds (and especially the future of virtual worlds) in terms of how they are rendered, and concentrate instead on their object models and the underlying semantics. I have every confidence that the progressive increase in PC power and bandwidth - and the existing capabilities of computer games - will mean that the look and feel of virtual worlds will come on just fine. And those of us deploying virtual worlds into enterprises will find that with wider adoption and real business need/demand will come the solution to all our problems of firewalls and user account controls (just as it did when the web first arrived in enterprises). These are (almost) trivial problems. If we want to create truly usable and powerful virtual spaces (and I hesitate even to use the word virtual) then we should be focussing on the semantics of the spaces and the objects within them. That way we will avoid the problems of applications and the web. We will know what the objects in our world are - we only have to decide how to render them.

Humaniti 2100


I must try and get back into the habit of a Friday afternoon blog post. So here's something I've been putting together for a while on my Palm - it is unashamedly direct and does not particularly hedge its bets - where's the fun in doing that?

I still reckon that Greg Egan (in Diaspora) has probably got the most accurate view of how humanity may 'evolve' over the next few centuries. Inspired by him here's my take on where we could be by 2100.


Naturals are those who have refused any sort of body mod or digital existence. In 2100 there will be a frighteningly large number of people living (even in poverty) for whom this may be the only option. Increasingly though this is a moral choice. However general health advances mean that life expectancy is 100+, with good quality of life to 90+ for those in the developed world.


The Augmented are those who have taken advantage of the transformative technologies of genetic and nano engineering and digital/cyber mods, but who see their 'self' as purely their organic mind. If they use 'scapes and virtual worlds through avatars it is for specific, non-persistent, purposes. Augmentation itself may range from slight (e.g. just regular use of an avatar or life-logging systems) to extreme (bio-electronic cyber-systems).


Multiples are those of organic descent who have created digital copies of themselves which exist persistently (and probably in multiple instances) in 'scapes across the globe, planets and (by 2200) the stars. I still think that 'uploading' a mind from the brain could be in the near-impossible category. Instead the first 'copies' will come from explicit teaching/programming, quickly followed by automated learning from email & social media, and ultimately by eavesdropping on everything we do from birth - our lifelog becomes us.

The real challenge for multiples is the re-integration of learnt experiences. Copies can easily just copy data, but what about the organic prime? Again I think that uploading memories is probably a no-no, but in-silico memory accessed through some sort of personal agent or augmented reality (or even brain-jack) would seem achievable.

Freed of a corporeal existence the copies can explore the stars in starship-borne 'scapes, and even be beamed from star to star at the speed of light (and bear in mind that since the Multiples' sense of time is dependent only on processor clock speed the journey to Alpha Centauri could pass in seconds or millennia - again see Egan, this time in Permutation City).

But perhaps the most telling feature of Multiples is that they can be immortal - so whilst your organic atom based self may die your digital Multiples can live forever - perhaps we might even call them ghosts.

And if we can create simple Multiples now (and I think we can) then it means that we can create simple (multiple) immortality right now - and just think of the moral and ethical issues that raises.

(and if you doubt this whole section take a look at this recent DoD requirement)


Digitals (who also normally exist as Multiples) are personality constructs that are not derived from an organic, living source. At their most basic, and in current tech terms, these could be virtual receptionists or game NPCs, but very shortly we'll be able to create autonomous, self-motivated avatars - the fore-runners of true digitals. We might also create Digitals from historic personalities, and we could even use software DNA to allow Digitals to breed and evolve (for what a baby Digital might experience read the opening of Egan's Diaspora). The key point is that within the virtual space Digitals are not differentiable from Multiples or the avatars of the Augmented.


But Multiples and Digitals need not be confined to virtual spaces. Once we have a digital self controlling an avatar body there is no reason why we can't have the same self controlling a robot body. Indeed in building our own AI engine we created a Sensor and Action mark-up language to isolate the AI 'brain' from the embodiment technology. So the same AI that controls a Second Life avatar could also control, and live through, an AIBO or ASIMO. Fast forward 50-100 years and our 'human-level' Multiples and Digitals can walk the atom-based physical worlds in human (or non-human) bodies (what Egan calls Gleisner robots). And whilst embodied Digitals may sound weird just think what it would be like, as the human root of a Multiple, to shake your own hand.
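A toy sketch of that brain/embodiment isolation, with invented action names and adapter classes (Daden's actual Sensor and Action mark-up language is not shown here): the same 'brain' emits abstract actions, and per-embodiment adapters translate them into whatever the body understands.

```python
def brain(percepts):
    """Decide abstract actions from abstract percepts, knowing no body."""
    if "person_nearby" in percepts:
        return [("say", "Hello!"), ("move", "towards_person")]
    return [("idle", None)]

class AvatarBody:
    """Adapter for a virtual-world avatar embodiment."""
    def execute(self, verb, arg):
        return f"SL avatar: {verb}({arg})"

class RobotBody:
    """Adapter for a physical robot embodiment."""
    def execute(self, verb, arg):
        return f"robot servos: {verb}({arg})"

# The same brain drives both embodiments unchanged.
for body in (AvatarBody(), RobotBody()):
    for verb, arg in brain({"person_nearby"}):
        print(body.execute(verb, arg))
```

Swapping an AIBO adapter for an ASIMO adapter then touches nothing in the brain, which is the design point being made above.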

When I first wrote this I entitled it Humaniti 2200 - but I've now brought it forward to 2100. For either date though it's a matter of degree - particularly for the 'organic' element. There is also though the potential of a 'singularity' effect; once we have one fully fledged Digital we could rapidly clone or evolve it (or it could clone or evolve itself) so that the Digital population could go from single figures to thousands to millions in years or decades.

So how might percentages or numbers go? This is probably not even a guess, but it's always useful to have some strawman figures.

Type / Date    2009    2015    2020    2050    2100    2150    2200
Naturals        99%     95%     90%     80%     50%     30%     10%

Figures for Naturals and Enhanced are % of the biological human population. + means +/- 1 order of magnitude, ++ means +/- 2 orders of magnitude.

And if you doubt the 2015 figures I fully expect Daden to be running at least 1 Multiple (probably me!), 1 Digital (an enhanced Halo) and even a Gleisner (Halo in some 2015 equivalent of a Femsapien).

But the real message is that we can already create Multiples and Digitals, and even Gleisners - they just aren't very good yet! But it means there has been a shift. This is no longer a question of when or if or can, but just one of how good, and how fast will we improve.

A Seismic Shift


My reading of the last few months has been so novel as to make me change a few of my fundamental views on things.

The books were The Singularity is Near by Ray Kurzweil, Accelerando by Charles Stross (who'd obviously either read Kurzweil or the same sources), and Consciousness by Susan Blackmore. The first two very much majored on GNR (Genes-Nanotech-Robotics) and the singularity, the last on the consciousness aspects of humans, animals and AI. In fact I prefer to think of GNR as GNA – Genes-Nano-Artificiality, encompassing both AI and VR.

The changes of viewpoint are largely around a) SF and the future of space exploration, and b) AIs.

Nabaztag! No, it's not my hay-fever coming back but the cute bunny now sitting on our breakfast bar. Like Harvey he's white, but unlike the famous film rabbit he's only 20cm tall, conical, glows with multi-coloured lights, has rotating ears, and is French.

Birmingham Post - Keep It Simple


Why re-invent the wheel? Last weekend I finally hawked my CD and vinyl collection round the record shops of Birmingham. Having moved everything to MP3, and stored a DVD backup in deepest Surrey, I vowed never to buy another mainstream CD.

This month the British Standards Institution finally published the “Publicly Available Specification 78 (PAS 78): Guide to Good Practice in Commissioning Accessible Web Sites”. PAS 78 has been under development by the BSI and the Disability Rights Commission (DRC) for many months. Its intention is to provide those commissioning and designing web sites with clear guidance as to how to implement accessibility effectively.

Here's a call to action. I've created a Digital Birmingham group at Frappr! Just go to, put yourself on the map, take a look at your digital neighbours, and demonstrate your commitment to a truly digital Birmingham. And don't forget to tell your friends.....

Digital Birmingham launches today in Chamberlain Square. Kids and half-term willing I might head down there to find out what all the fuss is about. My Digital Birmingham guide tells me that Birmingham is leading the digital revolution and that the city will be “embracing the latest technology .. to ensure our city and its workforce are ahead of the game”.

Needless to say that got me thinking. What would be on my shopping list for a Digital Birmingham?

When Reality Blurs - BPost 051004


Ever wanted to be a roadie, a film star, a CEO? Now you can be. With 20Lives, the Nokia Game, whose loss I lamented here a year ago, is back.

The premise is simple. Every day you play one of 20 different lives, living out a day in the life of one of the characters. The nicest touch is that all of the game is in first-person perspective: you see people and places only through your character's eyes – and there are a lot of people talking right in your face. Each of the lives is interlinked, inhabiting the same city. You'll play one character one day, only to find yourself encountering that same character the next day whilst you're in somebody else's skin. Useful information comes fast and without warning, and you soon learn to keep a pen and paper handy.

A major trend in technoculture at the moment is the remix. We've all grown used to DJ remixes dominating the dance floors – but the remix, or mash-up in US geek speak, is now appearing almost everywhere. The reason is that the cost of the technology needed to re-edit existing material has plummeted. Star Wars fans have used desktop PCs to create a Jar-Jar Binks-less The Phantom Menace, anime fans have cut video-game footage to fit their favourite pop songs, and machinima enthusiasts have used commercial game engines like Quake and Halo to create their own video soaps. Now, though, it's happening to the web.

Every so often an application arrives on the Internet that takes your breath away. Google Earth is such an application. Using satellite photographs taken over the last three years Google has stitched together a complete image of the Earth - a globe which lets you zoom in with ever-increasing detail until you can see individual buildings, gardens and even cars. And when you get down to ground level you can tilt the entire image into a relief view so that mountains rise high above you, and seas spread out before you.


Powered by Movable Type 4.1

About this Archive

This page is an archive of recent entries in the Essays category.

CyberTech is the previous category.

Family is the next category.
