Your Personal AI (PAI): Pt 3 — The Agent Environment

The Brave New World of Personalized Smart Agents & their Data

A Multi-Part Series

Part 1 — Your Attention Please
Part 2 — Why Agents Matter
Part 3 — The Agent Environment (this post)
Part 4 — Deep Agents
Part 5 — Deep Training

This is an excerpt from my book, The Foresight Guide, free online. The Guide intros the field of professional foresight and offers a Big Picture view of our accelerating 21st-century future.

Our Great Race to Inner Space (Physical and Virtual)

This is Part 3 of a series on the five- to twenty-five-year future of smart agents and their knowledge bases. Each post explores a different aspect of the agent-enhanced future. The series focus is personal AIs: personalized agents that will increasingly aid and advise us, and act as our proxies in the world. Our children and friends will interact with our PAIs when we are gone. We will come to see them as intelligent extensions of ourselves. Because of this, I think PAIs, agents, and their knowledge bases are the most important IT development we'll see in the next generation. Agree? Disagree? Let me know.

Let’s begin by zooming out to a universal look at the amazingly rapid change we are seeing all around us, and the nano- and info-acceleration that drives this change. That will help us see where we are going, to both physical and virtual “inner space.” Then we’ll look at mediated reality, one of many labels for our increasingly virtual environment. These two topics should help us understand, in broad strokes, where most of us will increasingly “live” in the 21st century. Hint: It may not be where you think.

Our Great Race to Inner Space: Seeing Nano- and Info-Acceleration

“World in a Buckyball.” Image by nanoscientist Chris Ewels (2003).

We live in an incredibly special place and time. Our computers are learning to map, simulate, and “see” the world around them, and now even to think, in increasingly human-competitive ways. They are also accelerating in these abilities, because of the special nature of our universe. Scholars and future-thinkers are coming to recognize that just two of our technologies, nanotech and infotech, continually accelerate. Because of these two techs, humanity is presently engaged in a Great Race to “Inner Space”. Our next frontier isn’t “Outer Space” — doing things at human scale, or venturing to the stars — but rather “Inner Space”, porting all the processes and objects we care about into both nanotech (physical inner space) and infotech (virtual inner space). We do this because physical inner space (nanotech) continually delivers results that are “Faster, Smaller, and Cheaper”, and virtual inner space (infotech) continually delivers results that are “Smarter, Stabler, and Better”.

Surprisingly, when any other process on Earth accelerates, it is always a secondary acceleration, a direct consequence of the primary accelerations of these two very special technologies. All biotech accelerations, for example, occur only via accelerating nano and infotech advances. All social accelerations, when anything “goes viral,” are driven by exponential replication of information, in brains or machines. Because of this fact, understanding these two techs is now critical to strategic foresight.

Let’s look first at nanotech. Ever since human civilization emerged, we have been drilling down to ever smaller scales of observation and activity, and we’ve done so faster every year. In just under 90 years, between 1895 and 1983, the smallest distances accessible to science shrank a hundred million fold, and with quantum computing, we’re journeying further inward still. See Abraham Pais’s majestic Inward Bound (1988) for that amazing story.

What’s more curious, every time our engineers and scientists have figured out how to do things at smaller and denser scales (make energy, sense, produce, filter, store, compute, communicate, etc.), their processes become stunningly more capable and more efficient. Not just a bit more capable and efficient, but orders of magnitude more, often in a single innovation step.

Advances on the human scale typically give us 10%, 30%, 50%, sometimes even 500% (5X) improvements in a single step. Think of Ingvar Kamprad’s famous invention of flat-packed furniture. That one idea gave IKEA roughly a 5X increase in shipping efficiency. But innovations at the nanoscale often start at 10X (1,000%) capacity or efficiency improvement, and routinely go as high as ten millionfold (10,000,000X), in a single step.


Consider a few examples: We split the atom in the 1940s, and we got 1,000X more energy yield per mass in response. Now we’re trying to fuse atoms (fusion). When we do this, by mid-century perhaps, we will get 1,000X more energy yield per mass yet again. When a chemist adds a functional group into the active site of an enzyme, she often gets a 1,000X increase in speed, yield, stability, or efficiency. A microlaser called a photonic crystal, discovered by one researcher in 2005, is 1,000,000X more energy efficient than previous microlasers. And so on. See any good book on nanotech, like Boysen and Muir’s excellent Nanotechnology for Dummies, 2nd Ed. (2011), or futurist Eric Drexler’s latest, Radical Abundance (2013), for many more such examples.

Why does this matter to our series? Because the hardware on which tomorrow’s agents will be built is seeing vast nanotech improvements every year, due to the current global explosion of smartphones and interactive entertainment (video games). Nvidia’s Pascal GPU, being released next month, has 5X faster connections to memory than previous GPUs, and uses 2X faster floating point math. Together these will improve the performance of neural networks and deep learning (our next post) by 10X. In a single product step. In general, hardware-based neural nets are more than 100X faster than software neural nets. They are also up to 1,000,000X more energy efficient. Optical neural nets, which we will eventually build, are over 100,000X faster than electron-based neural networks. And so on. The further we venture into nanotech, the more amazing performance and efficiency gains we get. That’s how the universe works, whether we want it to or not.

The second special technology is infotech. Whenever humans or machines increase their computational ability, they can use that new ability, in a virtuous cycle, to make smarter, stabler, and better things. Part of why they are better is that the more things become information-enabled, the fewer physical resources we need. We substitute information — and intelligence — for physical activities and things, a process called dematerialization. Think of all the physical actions and devices that an iPhone and its apps replace, and make unnecessary. Think also of the ever-growing fraction of our economy based solely on bits, not atoms. As infotech advances, the world itself becomes not just physical, but increasingly virtual.

Systems that use nano- and info-acceleration are special, because they can increasingly get around physical resource limits in doing their jobs. They run into resource blocks, but the smarter they get, the faster they find a new way to use nano and infotech to get around them. I’ve studied these exponential technologies for over a decade now, and founded a small nonprofit to study them, the Acceleration Studies Foundation, in 2003. These accelerations aren’t going to end anytime soon. Because of the physics of our universe, the smallest scale of spacetime that we know of, the Planck scale, is a long way from where nano and infotech reside today. If you’re wondering where the acceleration ends, I speculate on that in a 2011 paper, the Transcension Hypothesis. Here’s a lovely two-minute visual intro to the hypothesis by futurist Jason Silva.

Whether this hypothesis turns out to be true or not, so far, complexity’s history has been a Great Race to Inner Space, not Outer Space. Every leading system, starting with large scale universal structure, then galaxies, then life-supporting planets, then bacterial life, then multicellular life, then social vertebrates, then humans, and now our self-learning computers, has emerged via an accelerating journey inward, to ever greater physical and virtual inner space.

Each Human Brain Contains at Least 80 Trillion Unique Synaptic Connections

Which systems on Earth have gone the furthest into inner space so far? That would be human thinking and consciousness. They are the most advanced computational nanotech (physical inner space) and infotech (virtual inner space) we can presently point to. Consider that there are 80 trillion informationally unique synaptic connections inside every three-pound human brain — an incredible feat of nanotech. All our moral, empathic, and self-, social-, and universe-reflective thinking and feeling are virtual realities (infotech) that have arisen directly from that nanotech. Yet tomorrow’s machines will venture even further and faster into nano and infospace.

Consider how the knowledge web, just twenty-seven years after its invention in 1989, is already a vast, low-level simulation of physical reality, even before high-level VR, AR, and simulations have arrived. We’ve also seen accelerating virtualization (simulation) of hardware, operating systems, infrastructure, and business processes since the 1960s, and ever more of what we do goes into the cloud every year. As entrepreneur Marc Andreessen said in 2011, “software is eating the world.” Most curiously, as we’ll see in our next post, the most advanced simulations being built today are moving from being “artificial” intelligences, specified top-down by human engineers, to natural intelligences, emerging bottom-up, with us as trainers and “gardeners”. These machine intelligences seem destined to be the next natural form of mind and self-awareness on Earth.

To recap, when we think carefully about it, we must admit that thinking and consciousness, whether in humans or machines, are virtual worlds. They are as real in the universe as the physical world. That’s why we should stop using the phrase “‘real’ world” as an opposite to “virtual world.” Information is as real as physics in the universe we live in. All life’s virtual processes emerge out of physical reality. So do the realities our computers are creating for us. So rather than “real” and virtual, physical and virtual — or nanotech and infotech — are the right pair to think about as we run this Great Race.

Hopefully, this helps us appreciate why our emerging agent hardware and software are so special. Tomorrow’s agents will use both nanotech, and biologically-similar infotech, like deep learning, to accelerate their emergence. As their capabilities grow, we’ll use them to drive many secondary accelerations, including human creativity, wealth production, and mass-desired social changes like basic income. They’ll also drive the elimination of disease and involuntary death. These are big claims, and we’ll explore them further in future posts. Meanwhile, please let me know if you disagree.

Mediated Reality: Our Increasingly Virtual, Digital, & Intelligent Planet

We have talked previously about the semantic aspects of the knowledge web, but that’s just one of its virtual aspects. Let’s look at this rising virtualization more broadly now.

As with agents, several terms are presently bandied about to describe our emerging shared virtual space. The metaverse is one. Mixed reality and simulated reality are others. Artificial reality is another. My favorite term to refer to our increasingly virtual, digital, and intelligent infotech is mediated reality, short for computer-mediated reality. This phrase draws attention not just to the simulation abilities of our infotech, but to its growing intelligence, and to what is happening to us as digital mediation grows. Our infotech is progressively mediating, or acting as an intelligent interface to, almost every aspect of our physical environment.

Four Axes of The Metaverse Roadmap (Smart, Cascio & Paffendorf, 2007)

In 2007, I co-authored an industry study on our virtual future, The Metaverse Roadmap. We divided virtual space into four domains: augmented reality (AR and VR), virtual worlds (games and social worlds), simulations (“mirror worlds”) and archives of human activity (“lifelogs”). Let’s look at each of these now, and some of their implications for agent emergence.

Augmented Reality and Virtual Reality

AR will increasingly mediate our lives in coming years. Visual AR gets the most attention, as it is so flashy, and we’ve seen impressive recent demos of AR games from Magic Leap and Microsoft’s HoloLens, as in the image below.

Minecraft in HoloLens, Microsoft’s Augmented Reality Platform (Developer Edition). Source: Microsoft HoloLens.

Tech visionaries like Peter Jackson predict visual AR platforms like Magic Leap will be as popular as smartphones in ten years. That assumes AR glasses will be high-res, miniaturized and inexpensive enough, batteries good enough, and the interface compelling enough, for many of us to use them, in addition to or in place of our 2D screens. I’m hopeful it happens that fast, but I also think we’ll be more selective with AR usage than many folks expect.

We’re seeing steady growth in AR-at-work for niche uses. Google’s Glass is finding use in a few maintenance, engineering, and medical environments, after Google realized it didn’t yet have a killer app, or device, for consumers. Innovative companies like JoinPad, DAQRI, Atheer, Vuzix and Augmedix offer or are building AR solutions for various verticals. HoloLens is used on the International Space Station. Work-focused headsets can be pricey, going as high as $10,000, and are not always desired. Many users prefer tablets over headsets. Once we can display AR info on a bracelet phone (a 4.8 oz Galaxy Nexus 6 weighs less than a 5 oz. Breitling watch; it’s just in a nonwearable form factor), that may also compete with headsets.

I think persistent visual AR (PVAR, pejoratively, “AR-in-our-face”) when we are mobile, will have a very high adoption bar for the majority of us, as it is so easily distracting, and as safety and privacy issues will have to be worked out. See Eran May-Raz and Daniel Lazo’s award-winning 8-min film Sight (2012) for one example of just how dystopic persistent visual AR might be. I think we’ll be a lot more selective about what we put up on those glasses and when. We’ll need a lot better AI to manage that selectivity too.

May-Raz and Lazo’s Sight (2012). A great short AR dystopia.

A key insight about persistent visual AR is that its info can almost always be delivered in a less distracting auditory form. 3D VAR (visual AR) is mentally taxing to watch. Its typical use cases will not be persistent but rather time-limited, for shared 3D experiences and entertainment, immersive 3D education, and training our 3D abilities. Used persistently, it typically would add no value over 2D VAR. 2D VAR, in turn, is often taxing and distracting to have to watch, relative to listening to audio AR. So no matter how fast AR glasses and tech improve, I’d bet we use AR in mostly audio mode, and in carefully restricted 2D visual modes, when we’re moving around in the world.

AR contact lenses for consumers won’t arrive for the next decade at least. See my 0% prediction for that at the sci-tech prediction market, Metaculus. Disagree? Give your percentage prediction. If they arrive before 2026 (or as I bet, don’t) one of us will earn reputation points for our prediction. Even once AR glasses are no longer geeky looking, AR text, symbols and images will stay distracting, even when they sit semitransparently in a corner of our field of vision. Some of us will be more receptive to VAR when we’re sitting down, especially if we can interact with it with hand and finger movements as well as voice, but most of us won’t want it when we’re moving about, or interacting with people. But when our PAIs can see what we see and speak to us quietly into our ears, with us giving conversational or private gestural feedback, that is the most continuously useful and least intrusive interface I can imagine.

Harman’s Audio AR Headphones (in development)

So except for when we want immersive presence — as for entertainment, education, and synchronous shared online activities — most of us may prefer dimensional reduction, reducing the complexity of our interfaces from 3D to 2D, or to just one dimension. For mobile AR, I think we will usually prefer to use a different part of our brain for input and output (audio and linguistic cortex) than the part we use to navigate the world (visual and motor cortex). That’s why audio AR tech, like Harman’s Audio AR, and Jabra’s intelligent headphones, is so potentially interesting. If it continually gets smarter on the back end, via a deep learning-backed personal AI, an audio interface can be concise, relevant, and subliminal, freeing up our attention for other things. If someone (Apple? Samsung? Google?) were to offer audio AR in magnetically dockable wireless earpieces for our phones, as we discussed in Part 1, with location- and interest-related news, podcasts, music, etc. accessible by conversation, that’s something a hundred million of us would buy right now, with our next phone. It would be socially empowering. So get on it, designers!

At some point, computer vision will be powerful enough that smartphone visual AR apps, like Wikitude’s World Browser, Yelp’s Monocle, Google’s Goggles, and others to come, like face recognition, will have large user bases. None do today, to my knowledge. But once computer vision gets good enough for these apps to work, I think they will be used mainly by our PAIs, not by us. I’d much rather have my PAI tell me the details I need most often about the person or thing I’m looking at, than hold up my smartphone to get that info, or read the visual info on my AR visor. I’d rather offload that cognitive challenge to my PAI, and use visual AR only when audio fails. In other words, PAIs are much more needed than AR, as only they can determine the kind of AR that will be most useful to us throughout our day.

Both AR-at-home and VR will be exciting in the coming decade. Interactive entertainment will drive these technologies, and great and therapeutic things will be done with them, long before our hardware can support multi-player social VR. See Cosmo Scharf and Jonnie Ross’s VRLA for an excellent and visionary community pushing the boundaries of this tech.

But I am unconvinced VR headsets, like Oculus Rift and the HTC Vive, will ever be used by the majority of us. Early adopters will rush to them. They will be quite fun. But will more than, say, a few tens of millions of us ever use these regularly? I presently doubt it. If instead we can have a UHD large screen, spatial audio, and minimalist controllers in our hands (possible now), or AR visors (hopefully arriving soon), and AI and sensors in our console that can “body read” our expressions, gestures, and finger, head and eye movements, I think few of us would want to stay with the limitations of a VR headset.

Samsung Curved UHD TV (2016)

With this alternative, “partial VR”, we could eat, move around, multitask, and interact socially far more naturally, with others in the room and remotely. Most importantly, we could verbally dim or turn off part or all of the PAI, without taking off the visor, in a very natural transition. So I hope VR headsets start losing out to these nearly-as-immersive but less isolating partial VR setups relatively soon. Some gamers, those who can’t afford AR setups or home theatres, and other groups will surely continue to prefer VR headsets. But if AR tech and body reading improves as fast as it should, VR headsets may stay niche. We shall see.

Virtual Worlds (Video Games and Social Games)

Smart blending of human and machine intelligence in virtual worlds can already be found at Riot Games, makers of the massively multiplayer online game League of Legends. Since 2013, Riot has pioneered the use of both AI and humans in behavioral science (“nudging”, “social engineering”) experiments to reduce abusive behavior among their 30 million players. They also publish some of their findings, encouraging others to replicate them and move the science forward. At the outset, Riot’s machine learning goals were modest, such as automatically banning offensive new players “within 15 minutes of their signing up.” After setting up this commendable “zero tolerance” environment, they learned that only 1% of their newest signups were regularly toxic (“trolls”), and that 95% of all toxic posts came from on-average good players, who lashed out very occasionally. They then began a series of machine-learning-guided nudging experiments, in which any of 24 in-game tips is displayed just after a toxic communication, such as “Players perform better if you give them constructive feedback after a mistake.” Error-flagging tips appear in red, and commendation tips in green.

They next learned that these automatic tips work for a while, but lasting change comes only when the machines flag questionable posts to tribunals of community members, who then discuss and vote on whether the comments deserve punishment, and if so what type. Clearly exemplary behavior, if machine-flagged to a group of peers for possible commendation, will also be far more motivating than the machine’s kudos alone (I don’t know if they’ve done that positive experiment yet). As with Google’s RankBrain in our first post, Riot’s approach shows us the behavioral-science-driven, socially adept future of all online community activity, not just games. As agents exponentially gain new intelligence, the best ones will be our coaches, advisors, and social lubricators, the grease on the machine that helps us all craft a Good Society, with both astonishing individual diversity and deep agreement on universal rights and codes of behavior. There will be a role for lots of humans in this future, because it will always be a blend of both humans and agents that humans respond to best, in behavior change.
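The flag-then-nudge loop described above can be sketched in a few lines. This is a toy illustration only: the keyword list stands in for Riot's actual trained classifier, and the tip wording (beyond the one tip quoted above) is hypothetical.

```python
# Toy sketch of machine-flagged nudging, loosely inspired by Riot's
# published approach. The marker list and second tip are hypothetical;
# a real system would use a trained classifier, not keywords.

TOXIC_MARKERS = {"noob", "uninstall", "trash"}

TIPS = {
    "error": "Players perform better if you give them "
             "constructive feedback after a mistake.",   # shown in red
    "commend": "Nice work supporting your team!",        # shown in green
}

def flag_message(message: str) -> bool:
    """Crude bag-of-words toxicity check standing in for a classifier."""
    text = message.lower()
    return any(marker in text for marker in TOXIC_MARKERS)

def respond(message: str):
    """Display an error-flagging tip right after a toxic message."""
    if flag_message(message):
        return TIPS["error"]
    return None  # no nudge needed

print(respond("uninstall the game, noob"))  # the constructive-feedback tip
print(respond("great play, well done"))     # None
```

As the post notes, the lasting gains came not from the tip itself but from routing flagged posts to human tribunals; the machine's role is triage, not judgment.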

Hello Games’s No Man’s Sky (launching June 2016)

The acceleration of mediated reality’s capabilities will continue to surprise us. It was a surprise when Hello Games, the small group of developers behind No Man’s Sky, an adventure survival game launching this June, said they’d used procedural generation to create 18 quintillion planets, many of them with game-unique flora and fauna. Procedural algorithms have been used in video games for years to create textures and shapes. But few people expected they would be used so soon at this scale. Thus even today’s weakly-biological computer hardware greatly exceeds our human ability to explore the virtual realities being created. But not, of course, the ability of humans in partnership with their agents.
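The trick behind those 18 quintillion planets (roughly 2^64, the size of a 64-bit seed space) is that nothing is stored per planet: each world is regenerated deterministically from its seed on demand. A minimal sketch of the idea, with every attribute name and value range being a hypothetical illustration rather than Hello Games's actual generator:

```python
import random

# Seed-driven procedural generation: derive a whole planet from one
# integer. The biome list and attribute ranges are invented for
# illustration; only the seeding technique matches real engines.

BIOMES = ["desert", "ocean", "jungle", "ice", "volcanic", "barren"]

def generate_planet(seed: int) -> dict:
    """Deterministically derive a planet from a seed; nothing is stored."""
    rng = random.Random(seed)  # private RNG so results are reproducible
    return {
        "biome": rng.choice(BIOMES),
        "gravity_g": round(rng.uniform(0.3, 2.5), 2),
        "n_species": rng.randint(0, 40),
    }

# The same seed always regenerates the same planet, so a 64-bit seed
# space yields ~1.8e19 distinct worlds with near-zero storage cost.
assert generate_planet(42) == generate_planet(42)
print(generate_planet(42))
```

Because regeneration is pure and cheap, the "world" exists only as an algorithm plus a seed, which is why a tiny studio could ship a galaxy no human could ever finish exploring.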

Thus it is my hope that human-agent teams will dominate the leaderboards in No Man’s Sky in coming years, just as human-AI teams, not humans and not AIs, presently lead in global chess rankings in Advanced (“Freestyle”) Chess. Whether this partnership happens soon or not depends on whether the agent-building tools and interfaces for players are sufficiently intelligent, easy to use, and empowering. It’s up to us, as designers and consumers, to ensure our agents “race with the machines” as fast as our computers themselves are improving. When we don’t, I think we miss an opportunity to make the kind of world we most want to live in, and we give away too much personal and social power to our machines.

Second Life (launched 2003)

Let’s think now about social (vs. pure entertainment) virtual worlds like Second Life, which attracted millions of us in the mid-2000’s. Notice that we didn’t need immersive 3D for those worlds. What compelled us was the shared creative social space. Minecraft became even more of a success, with even worse and less immersive 3D, as it gave an even more visceral and responsive sense of shared creativity, to an even more receptive set of users — kids. One key lesson of such worlds, one that tells us just how powerful and important smart agents will be in the next set of worlds, is this: if the interfaces for our virtual worlds don’t keep getting more responsive, natural (conversational), and useful, giving us ever more power, and if they don’t solve their social problems (reputation and rules to deal with obnoxious actors), we keep moving on to worlds that offer more social equity, more empowerment, more capacity for empathy, and, when we want it, more evidence-based thinking and acting. Second Life fell short on such counts, but most critically on usability of — and lack of improvement to — its interface. So after a surge of initial interest, and lots of inspiring creative exploration, most of us moved on. We’ll keep moving on until we find our “virtual homes.”

Mirror Worlds (Simulations and Serious Games)

This brings us to evidence-based (“reality-based”) virtual worlds, which include simulations (“mirror worlds”) and lifelogs (big data and “archived worlds”), the last two aspects of mediated reality we will address in this post. Entertainment AR and VR will be huge industries in the coming decade. But so will mirror worlds and all their big data, though they don’t get as much visibility. We get vast social value from simulation-based services like Google Maps, Street View, Earth, and all our GPS-linked GIS systems. New layers of these worlds continue to emerge, like Waze (realtime traffic, harvested bottom up, using cellphones-as-sensors).

Google’s Project Tango is a neat example of the mirror worlds and lifelogs frontier. Tango is hardware and software that lets our mobile devices use computer vision to determine their position relative to the world around them, without GPS. In other words, it is a visually-driven LPS (local positioning system). As these new mirror worlds and their archives emerge, they’ll offer stunning new services on top of them.

Google’s Project Tango (launched June 2014)

Consider Tango’s implications: Once you’ve got it up and running, in any environment, things will never get lost again in that environment. The location of each item in a store will be remembered in its worldlog (archive), easily accessible to the employee and customer, including misfiled items. The last place you left anything you care about will be presented to you. The last time you spoke in anger to someone, or in love, and what you were doing or saying at the time, will be easily located, in time and space, a conversation away, deliverable to you by a smart agent, or your PAI.
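The "worldlog" described above is, at bottom, a store of last-observed positions keyed by item, written whenever the device's vision system recognizes something. A toy sketch of such a store; the class, method names, and coordinate format are all hypothetical illustrations, not Tango's API:

```python
from datetime import datetime, timezone

# A toy "worldlog": the last observed position of every tracked item
# in an environment, as a Tango-style visual positioning system might
# record it. All names here are hypothetical illustrations.

class WorldLog:
    def __init__(self):
        self._last_seen = {}  # item name -> (x, y, z, UTC timestamp)

    def observe(self, item: str, x: float, y: float, z: float) -> None:
        """Record where the device's camera last recognized this item."""
        self._last_seen[item] = (x, y, z, datetime.now(timezone.utc))

    def locate(self, item: str):
        """Return the last known (x, y, z, time), or None if never seen."""
        return self._last_seen.get(item)

log = WorldLog()
log.observe("car keys", 2.4, 0.9, 1.1)  # e.g. the hallway shelf
print(log.locate("car keys"))           # position plus timestamp
print(log.locate("umbrella"))           # None: never observed
```

A PAI sitting on top of such a log only needs a conversational front end: "where are my keys?" becomes a `locate` call against the archive.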

The more we all use mediated reality, the more everything we do becomes tracked, quantified, and transparent to both the knowledge web and the humans and agents operating in it. Our bodies and behavior, what we say, our eye movements, and our emotional states will all be virtualized, in both public and private knowledge bases. Many of us will opt in to this virtualization for all the new abilities, convenience, and intimacy that emerge. We’ll need new laws and social conventions, and especially smarter agents, to manage all these new mediated reality superpowers. Fortunately, our agents will keep learning exponentially, for the rest of our lives.

Think also of the predictive value we get from big data and archives of those worlds. These will get a lot more accurate and granular in coming years. That’s very important, because when our simulations reach a certain physical reality threshold, we increasingly use them to what-if reality. As simulations improve, virtual exploration, learning, and collaboration becomes even faster, more resource efficient, and more useful than the physical world.

NASA/JPL Solar System Simulator

Today, some astrophysicists get PhDs doing experiments inside the NASA/JPL Solar System Simulator, rather than observing the physical world. If the transcension hypothesis is true, the fraction of time that all of Earth’s scientists spend in simulations must inexorably grow, due to the unique advantages of virtual space. Digital previsualization of movie storyboards grew out of the gaming industry in the mid-1990s. Our defense, security, medical, disaster relief, aviation, space, media, engineering, and many other communities all use simulators. Virtual Heroes started as a combat training platform ported from a video game, America’s Army, and has expanded to many other training uses. Classified, high-accuracy urban simulators are in the planning phase now, for simulating bad actor strategies in homeland security.

simSchool’s Teacher Training and Assessment Platform (2016)

simSchool is a leading classroom simulation environment, built for teacher training and improvement. It is a 2D and cartoony sim at present, but even with such dimensional limitations it can help teachers get better at classroom management, and diagnose weaknesses, including cognitive and social biases. A 3D version with facial microexpressions would be even more effective, but just as important would be increasing the semantic richness of the existing 2D experience, better visualizing and quantifying teacher performance, and demonstrating alternative strategies to teachers.

Mursion’s Teacher Training and Assessment Platform (2016)

Mursion is another impressive teacher training simulation platform. What makes it particularly effective, as the education futurist Maria Anderson explains, is that humans act as “puppeteers”, driving the 3D characters (students, parents, and fellow teachers) in the classroom. That makes it more unpredictable than today’s game AI. Mursion is new and expensive, but it is rapidly gaining high-end clients. Eventually, we’ll see free, open source versions of platforms like Mursion emerge. Good platforms will always blend both agent and human intelligence. The fastest human learners will use them while being advised by, and while verbally training, their personal AIs.

These are all examples of a growing industry of serious games. We could also call this “serious AR/VR”, as it isn’t AR and VR for games, but a blend of simulation, exploration, collaboration, education, and entertainment. Serious games are a good deal harder to market than entertainment games, their value needs to be higher before they’re used, and their early user bases are more niche. But they will grow as AR, VR, simulations, lifelogs, and agents continue to improve, and we will benefit immensely from them.

Now is a good time to talk about gamification and “nudging”, the infusion of incentives, play, and games into the physical world. We will see a lot of these and other behavior change strategies in an AR- and agent-rich world. They can be done well, openly and with evidence-based approaches, as we’re seeing at Riot Games, and with a high degree of user consent and control, via our conversational agents, or they can be done in secret, without user control, as with Facebook’s emotion-altering experiments on their users in 2014. A gamified and agentless world could easily become a very disempowering, manipulative, addictive, and distracting dystopia. See game designer Jesse Schell’s hilarious 10-minute “Gamepocalypse” scenario for one such nightmare we really need to avoid. Our gut tells us we’ll see some of this in the short term, in our current consumption-driven and plutocratic society. But if we keep our focus on the Five E’s, and on agents to mediate them, we’ll use these tools as a vast force for personal empowerment and social good.

Finally, it’s worth remembering that we don’t need high tech to do good simulation. Many communities have done valuable simulations mentally, as “wargames” and “simulation games”, for generations. See Dave Gray et al’s Gamestorming (2010), and Herman and Frost’s Wargaming for Leaders (2008) for two good overviews of such methods. See Wikistrat for a company that offers these kinds of human-driven simulations as a service, via the web. So we can make great use of simulation’s benefits right now, without a big IT budget, while we wait for a more mediated reality and its agents. Of course, as these technologies grow up, they will make all our serious games far more useful, continuous and pervasive. In fact, all human life can be thought of as a never-ending positive sum serious game, as James Carse reminds us in his playful Finite and Infinite Games (1986/2013), and Robert Wright reminds us in his brilliant Nonzero (2001). Both books change the way you see the world.

The most amazing example of human-machine partnership I’ve yet seen in blending simulations with the physical world is Tesla’s solution for shared autonomy in autonomous vehicle development. Watch this brilliant TEDx video by Sterling Anderson (@sterling_a), head of the Model X Program at Tesla Motors, describing how they developed software that allows the driver to share control of the car with the software, while still protecting the driver from danger, something that hadn’t yet been done in autonomous vehicle solutions, like Google’s. They realized that to deliver this solution they needed to predict what the driver wants, while also predicting what is safe, and to continually give the driver as much control of the car as is safe. Anderson also gives great advice to innovators on starting with the vision (the “what”) and only then thinking up techniques to get there (the “how”), as a key way to improve your innovation process.
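The core idea of shared autonomy as described in the talk, giving the driver as much control as is currently safe, can be sketched as clamping the driver's command into a safety envelope computed by the software. The envelope logic, thresholds, and function names below are invented for illustration, not Tesla's implementation:

```python
# Minimal sketch of shared autonomy: the software computes a safety
# envelope from sensor data, and the driver's steering command passes
# through unchanged unless it would leave that envelope.
# All numbers and names here are hypothetical illustrations.

def safe_envelope(obstacle_distance_m: float) -> tuple:
    """Return (min, max) allowed steering command, where +1.0 is full
    right. The closer an obstacle on the right, the less rightward
    steering is permitted."""
    if obstacle_distance_m < 1.0:
        return (-1.0, 0.0)   # obstacle close on the right: no right steer
    return (-1.0, 1.0)       # clear road: full authority to the driver

def shared_control(driver_cmd: float, obstacle_distance_m: float) -> float:
    """Blend driver intent with safety: clamp the command to the
    envelope, intervening only at its boundary."""
    lo, hi = safe_envelope(obstacle_distance_m)
    return max(lo, min(hi, driver_cmd))

print(shared_control(0.5, 10.0))  # clear road: driver's 0.5 passes through
print(shared_control(0.5, 0.5))   # obstacle near: clamped to 0.0
```

The design choice worth noticing is that the software never replaces the driver's intent with its own trajectory; it only narrows the set of commands it will execute, which is what makes the control feel shared rather than taken over.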

“What.” Then “How”, Realizing the Power of Right-to-Left Innovation, Sterling Anderson, TEDx BYU (2015)

So where will we “live” in the 21st century? In every facet of our brave new mediated reality. Video games will enhance our imaginations. Serious games will improve our exploration, understanding, and collaboration. AR, mirror worlds, lifelogs, and smart agents of all types, like this driverless car software, will be our interfaces. But of all these, agents and PAIs are special. They’re going to grow rapidly more intelligent and autonomous. They will be the new leaders of both physical and virtual space.

Our next post, Deep Agents, will look at the learning algorithms tomorrow’s agents will use, including deep learning and other forms of natural (biologically-inspired) intelligence and computing. We’ll also briefly survey some of the progress that has already been made with these approaches in just the last few years. This will help us understand what agent abilities will be achievable soon, and why anyone who wants to be an agent builder, trainer, and user can play a role in the very exciting years ahead.

Calls to Action

  • If you post about smart agents, PAIs, or mediated reality on Medium, Quora, Reddit, etc., consider using these tags.
  • If there’s an idea or resource you’d like to see in this series, please tell me and I may share it in future posts.

John Smart is CEO of Foresight University and author of The Foresight Guide. You can find him on Twitter, LinkedIn, or YouTube.

Feedback? Leave it here or reach me at
Want first access to my events? Enter your email address at
Need a speaker? See my speakers page,

CC 4.0. Anyone may share or adapt, but please with link and attribution.

Part 4 —> Deep Agents

