Contemplating Mortality: Personal AIs, Mind Melds, and other Paths to Our Postbiological Future

Note: This is one of my more speculative posts, about events I expect to occur in later decades of this amazing century. Nevertheless, it has implications today, so perhaps you’ll find it worth a skim. As always, let me know what you think, here or privately at john@foresightu.com. Thanks.

When we die today, most of our unique self, including, for most of us, the majority of our unique experiences, ideas, values, goals, and personality, dies with us. That can be a traumatic realization.

But consider that once we have personal AIs (aka PAIs, see my Medium series on them) in wide use, even in their early and limited forms in the 2030s, that will no longer be the case. Interacting with our PAIs will be a very natural and unavoidable way for us to increasingly upload ourselves into a new substrate, whether we want to be uploaded or not. In fact, I’d predict that the incremental migration of individual minds from biology to technology, via our Personal AIs, must occur on all Earthlike planets everywhere in our universe, and I’d bet that such PAI uploading will happen much earlier than uploading by any other, more biologically invasive method (eg, brain-machine interfaces).

Consider that such uploading is already sneaking up on us, in tons of little ways. But even though PAI uploading is subtle and incremental, it is also powerfully exponential. Subtle uploading is a planet-scale process that our philosophers, academics, and pundits will increasingly appreciate and debate in coming years. Yet most of us, I expect, will just joke about it and get on with our lives.

Imagine what it will feel like when your 2030s or 2040s PAI, knowing your past statements and later, even watching your facial expressions, can complete your sentences for you when you are having a senior moment. A computational linguist once told me that if you have two years of data on what a person has said, you can correctly guess the word on the tip of their tongue, from past context, 80% of the time. I don’t know if those numbers are accurate, but I deeply trust the general concept. Growing up, or even as an adult, you may have played the “mirror game”, completing the sentences of someone you know well, because it’s fun to guess what they’ll say, and because sometimes you want to help them say it faster.
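To make the idea concrete, here is a deliberately tiny sketch of guessing a person’s next word from their own past utterances, using simple bigram counts. This is purely illustrative: the function names and the toy “corpus” are invented for this example, and a real PAI would use far richer models than bigrams.

```python
from collections import Counter, defaultdict

def train_bigrams(utterances):
    """Count which word tends to follow which in a person's past speech."""
    following = defaultdict(Counter)
    for sentence in utterances:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            following[prev][nxt] += 1
    return following

def complete(following, last_word, k=3):
    """Guess the k most likely next words, given the last word spoken."""
    return [word for word, _ in following[last_word.lower()].most_common(k)]

# A tiny stand-in for "two years of past data"
past_utterances = [
    "I left my keys on the kitchen counter",
    "my keys are probably in the car",
    "I think the kitchen counter needs cleaning",
]
model = train_bigrams(past_utterances)
print(complete(model, "keys"))     # e.g. ['on', 'are']
print(complete(model, "kitchen"))  # e.g. ['counter']
```

Even this crude approach improves the more of a person’s speech it sees, which is the basic reason a couple of years of data goes such a long way.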

I expect our most advanced PAIs will feel like mental twins within just a generation of their commercial use. My original name for them in my 2003 article was “digital twins”, and they will have high-quality records of the large majority of our past experiences, ideas, values, and goals. Those unique aspects of self will no longer die with us, if we don’t want them to. They will be able to live on in our PAIs for our family and friends to interact with in whatever way they like.

There are of course lots of mental health implications of this. There will be empowering and disempowering ways to interact with the PAIs of past loved ones, as you can imagine. Those who die, or, if they didn’t think it through in advance, those who survive them, will have to decide whether to keep these PAIs running, and if so, whether to let Google or whoever continue to make them smarter on the back end. Many of us would feel more empowered and empathic having our parents’ PAIs continue to survive, and be improved, after they passed away. Continually improving ancestor PAIs may “interview” survivors to get their recollections of the deceased, and improve the usefulness and value of the ancestor simulation.

We’ll also have to decide how much we want to alter our ancestor PAI’s traits going forward. For example, if your biological mother always favored your sibling over you in certain contexts, or was an alcoholic, you would likely want the ability to reduce or eliminate some of those traits in the version of her PAI that you interact with after her death. In many Western cultures, we’d find those freedoms to be valuable. But in some African and Asian cultures, such modification would today be considered disrespectful to our ancestors. So we’re going to see a lot of interesting discussions ahead.

In general, once ancestor PAIs arrive in our world, I imagine that a growing fraction of us will feel a lot less traumatized by the high level of informational destruction that presently accompanies biological death.

Of all our current social problems, arguably the greatest tragedy presently afflicting humanity, happening 55 million times a year at present, a number that will rise annually until roughly 2050, is the inevitable death of each of us, due to the “disposable” (not a very nice word, but an accurate one) nature of human biology.

But personal AIs seem likely to exponentially reduce, and then finally eliminate, the death problem. We certainly won’t see human-guided medicine or molecular biology solve this problem any time this century. See my article, Limits to Biology, 2001, if you still think biological immortality is possible, or could ever be achieved by biological humans, using science. It’s far too hard a problem for us (vs our coming AIs) to solve in any reasonable time frame.

Unfortunately, we biologicals begin falling apart, from the inside out, in scores of convergent ways as soon as we are born, due to imperfect error correction at the molecular level. Humans and today’s AIs aren’t anywhere near smart enough to solve that problem. It’s going to take quantum computing to simulate our cellular systems at high fidelity in order to find ways to stop and reverse ever-increasing cellular entropy. By contrast, building self-improving machine intelligence, capable of replicating, evolving, developing, and undergoing natural selection, seems to me to be well within our capability. I could easily see late 21st century AIs growing powerful enough to understand and improve us at a molecular level.

But we should recognize that by the time they can do that, we’ll consider our PAIs as the place “we” typically want to go. This seems especially likely if our AIs can learn how to create postbiological consciousness, and fully simulate even this highest and most prized feature of our minds within our machines.

As a presently mortal species, we biologicals don’t like to think too much about the scale or impact of death. Ernest Becker’s Pulitzer prize-winning book, The Denial of Death, 1974/1997, discusses the many ways modern cultures lie to themselves about death, telling ancient fables of the afterlife, fables that delay and dissuade us from solving the problem, and that also cause us to avoid the challenge of fully living, and of improving our own imperfect science and morality, right here and now.

It is my hope that in coming years, as more and more of us see the unique advantages of PAIs for dealing with our biological mortality, we’ll see a growing momentum behind their development, and more efforts to remove the many short-term roadblocks standing in their way.

Once a substantial fraction of us realize that these solutions actually work, we can expect that most of our cultures will finally stop pretending that personal mental death is a good thing (in the vast majority of cases it isn’t, if that person had a say in things), and we’ll upgrade our religious faiths to be consistent with a new world of perpetual growth and renewal, for all souls who might personally desire that future.

The first solution to our mortality problem that we will now discuss in more detail, PAI uploading, seems likely to have the biggest impact on the nature of mortality. The second, mind melds, will be the “nail in the coffin”, the final development likely to drive the vast majority of us into postbiological form, though it will likely take a lot longer to emerge. Let me know if you agree.

Spock mind melding with a computer, Nomad, in The Changeling, Star Trek: TOS, S02E03 (1967)

PAI Uploading and Mind Melds:
Two Major Paths to Our Postbiological Transition

Per pound, biological brains are the most complex things on Earth today. But they are no longer exponential. Several studies have argued that they are nearly as optimized as they can get, given what they are made of: three pounds of electrochemical meat. As one of their limitations, our biological minds think at roughly 100 miles an hour, the rough speed limit of chemical action potentials, even after being sped up by jumping between the unmyelinated “nodes” of myelinated axons. An electronic neural network signals at roughly the speed of light, about seven million times faster. Your electronic “You” could thus do in a few seconds the same amount of thinking that your biological “You” can do in a year. And that’s before it starts to improve itself with even more exponential capacities, and presumably, new kinds of consciousness.
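For the curious, here is the back-of-envelope arithmetic behind that comparison, using the same rough figures quoted above (about 100 mph for chemical signaling, roughly the speed of light for electronic signaling):

```python
# Rough figures only, matching the comparison in the text above.
SPEED_OF_LIGHT_MPH = 670_000_000   # approximate speed of electronic signaling
ACTION_POTENTIAL_MPH = 100         # rough speed of chemical signaling

speedup = SPEED_OF_LIGHT_MPH / ACTION_POTENTIAL_MPH   # ~6.7 million
seconds_per_year = 365 * 24 * 3600                    # ~31.5 million

print(f"speedup: ~{speedup / 1e6:.1f} million times faster")
print(f"a year of biological thought: ~{seconds_per_year / speedup:.0f} electronic seconds")
```

Both figures are order-of-magnitude estimates, of course; the point is the scale of the gap, not the exact number.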

This astounding acceleration, lying in wait for all complex minds on Earth, seems baked into the physics of our universe. Due to their glacial slowness, their lack of exponentiality, and their inability to renew themselves, it seems that if we can, we’ll outgrow our biological brains, moving our minds into something just as natural, but better — postbiological life.

This Great Transition, involving the “uploading” of biological minds and bodies into much faster and hardier machine substrates, is the most obvious way we will solve the problem of death. It is clarifying for us to admit that all of humanity is already being uploaded right now into the digital world, bit by bit. We’ve been doing this with all our easier data since the birth of digital computing. As deep learning advances, we’ll move on to our harder data and algorithms.

Don’t believe me? Read this amazing review, by John Lisman, The Challenge of Understanding the Brain (PDF) 2015, of how much we already understand about the brain as a computational system. As Lisman says, the first half of the 21st century is likely to be forever remembered as the period during which the brain, and all its algorithms, finally came to be understood (and by implication, replicated in machines).

That’s just how the development of intelligence apparently works in universes like ours. Everywhere complexity exists, the transition of leading intelligence from physics to chemistry to biology to biominds to cyberminds may be the only exponential path available. Thus our next great transition, when biology begets postbiological life, may be a universal developmental process that arises on every Earth-like planet in our universe.

Physicists call definable shifts to a different set of environmental dynamics “phase transitions.” Think of the transitions from gas to liquid to solid in high school chemistry. A popular term for phase transitions is “singularities.” Many natural phase transitions, such as the creation of a black hole from a dying star, involve exponential processes prior to the transition. When mathematician and sci-fi author Vernor Vinge wrote The Coming Technological Singularity in 1993, about the exponential 21st century emergence of greater-than-human machine intelligence, he was thinking of a phase transition. A whole new kind of world will exist after that point, one with a different set of global dynamics, whether we want that outcome or not.

I like to call the exponential Great Transition to postbiological life that we are presently engaged in a “slippery singularity”, as I don’t expect machine life and intelligence to emerge via any single definable process or event. Even deep learning is just one piece, albeit an important one, of this process. Instead, we can expect many gentle changes, most just big enough to be noticeable, each pushing us toward this future, and a steady build up of those changes until they flow all around us, like a river carrying us downstream.

Some folks will be scared of and reactive to these changes, but on average, most of the changes will be so incremental and beneficial that most of us won’t be interested in resisting them. It is easy to argue that the smarter they get, the more we will consider our PAIs to be a part of us. Many people already feel that way about their smartphones. Consider now that as their AI continually improves in the cloud, and our wearable sensors get ever more granular, we may eventually consider our PAIs to be “our better selves”. Why so? Because they will be able, if we set them to do so, to cleverly nudge and coach us in the directions, and toward the values, that our higher minds want for us. They will keep careful track of the directions, priorities, and values that we presently choose for ourselves, in our moments of conscious resolve.

As biological beings, we are continually subject to the many urges of our unconscious processes, over which our conscious minds are only weakly and intermittently in control. See Leonard Mlodinow’s excellent Subliminal: How Your Unconscious Mind Rules Your Behavior, 2013, if you don’t believe this point. But our PAIs will have no such constraints, and we’ll be able, if we choose, to increasingly rely on them to become better people, in every sense of the word.

Thus within just a decade or two from now, I expect billions of folks will consider their PAIs, which will have been increasingly useful to them throughout their lives, as a “good enough” version of themselves to leave behind for their children, friends, and the world. We already leave behind our Facebook pages and, in some families, our Gmail accounts for our children, and with PAIs managing all that data, it will become far more intimate and useful. In early decades, we might view our PAIs as an upload of, let’s say, only 20% of us. But with good AI doing the filtering on data and modeling, it might soon become the Pareto 20%. In other words, the 20% that represents 80% of what we cared about most in life.

As it offers the simplest value proposition and requires the least behavior change, I think leaving behind a PAI upload will be the dominant form of cultural “immortality” (technically, superlongevity) we can expect. While at first only a minority may be interested in leaving PAI uploads, this behavior might grow to a majority in a very short timeframe (consider how fast gay marriage was accepted, after contentious debate, for a similar example).

In the end, most folks may be deeply comforted by the recognition that their best stories, ideas, and records of their lives can live on, for as long as they might be useful, available to loved ones and, for some, to the world, after they have died. Of course, the AI and data for those PAIs will keep getting exponentially better as well, if you enable that feature (and some won’t), so the PAI of any deceased person can keep improving, hoovering up more data about that person, and even conversing with loved ones about them after their death. Done sensitively, this post-death improvement of the PAI will, I’m sure, become a popular choice for some families.

Again, some of us will recoil at first from such treatment of our ancestors, but others will embrace it. As with other digital technologies like social networks and video games, which can be evidence-based and empowering, or perpetuate disempowering fantasies and filter bubbles, ancestor PAIs can be used for good or ill. But for those who use them, having PAIs of late friends and family who cared about you still present in your life will be a powerfully humanizing advance. Their existence and popularity will change our attitude toward, and anxiety regarding, biological death.

Let’s look now at a second major path to postbiology, mind melds, a path that hundreds of millions of us might take prior to the close of this century, if scientific developments continue at the pace they have in recent decades.

Minsky’s famous (and not very organized) book (1988)

Understanding our mind meld future begins by realizing that every one of us is already what Marvin Minsky called a “Society of Mind.” This means we have many independent mindsets inside our own brain, each a distinct yet overlapping set of neural networks that stores its own redundant data (as well as accessing common data). This diversity allows us to see the world from many simultaneous viewpoints, and to argue with ourselves over every important decision we must make.

Our minds, it turns out, are very much like beehives, with each mindset being like an individual bee, doing a constant waggle dance with its thoughts and internal dialog, trying to convince the whole hive to do something in our best interest. We maintain this redundancy and diversity of viewpoints in our brains because it makes us more adaptive, but sometimes it malfunctions, as we see when our minds split into multiple personalities during trauma. Most of the time, though, the quality and quantity of information shared between mindsets is so high that it is most useful for us to think of ourselves as one person, even though we are, in actuality, also a society of mindsets.

Our minds are like beehives, with each unique mindset (semi-redundant neural net) acting like a bee in swarm cognition.

You can probably see where this is going. At some point later this century, let’s guess circa 2080 or so, your PAI will ask you, or your children if you are no longer around, if you would like a direct “mind link”, or BCI (brain-computer interface), to its own neural networks, via the use of removable nanobots (transducers) in your brain that allow you to wirelessly and continuously connect your two natural intelligences. This would require that sufficiently powerful forms of General AI (GAI) have emerged, something I’ve argued might happen in our 2060s, present trends extended. It is only the GAIs, in my view, that could develop such deeply powerful nanotech.

All the efforts of today’s brain-machine interface companies, like Kernel, I expect will be very underwhelming and incremental, as biological humans will make very little progress on molecular nanotechnology in coming decades. Like biological immortality, it’s just too hard a problem for us. But not for GAIs, in my view. I can easily imagine that GAIs, with human supervision and much more advanced quantum computers (which we’ll need to do all that molecular simulation), might invent molecular nanotechnology within a generation (just a guess) after they are on the scene. That puts this scenario in 2080 or later, in my view.

Consider that this 2080s (or 2100?) mind link, and the nanotech behind it, will allow you not only to talk to and argue with the mindsets within your own biological brain, but also to use the same high-bandwidth neural language to talk to and argue with the mindsets in your PAI.

I think you can see where this scenario is headed. Once your mind link is sharing sufficiently high-quality, high-quantity, and high-speed neural information, it will be more useful for you not to think of yourself as two minds, but one. You’ve now “melded” with your PAI.

Direct mind links without the use of neural technology were first popularized by parapsychology researchers, as telepathy (often claimed, never found). Real BCI research began in the early 1970s at UCLA, and it happens in hundreds of labs today. Mind links using nanotech have been done well in sci-fi since the 1960s. An on-screen depiction of a mind meld with a computer happened for the first time ever (to my knowledge) in Star Trek: The Original Series, when Spock did a Vulcan mind meld with a computer called Nomad, whence I get the name for this scenario. See Nexus (2012), by futurist Ramez Naam for a neat recent sci-fi story featuring mind meld nanotech.

Eventually, the nanobots your PAI offers you may let you do things like neural synchronization, which is my favorite hypothesis, among all the competing ones currently on offer, for how consciousness arises. Neural synchronization, facilitated by specialized long-range connective structures like the claustrum, seems likely to me to be at the center of the self-awareness we find in all complex animals, like mammals. That synchronization, combined with humanity’s particularly prosocial (planning, self-modeling, other-modeling) brains, creates the special kind of self-consciousness that gives us an identity narrative to go with this synchronization, whenever it occurs.

See the lovely 12-minute video, What is consciousness?, The Economist (2015), for a current take on research in this area. Neural synchronization and integration help explain why we lose consciousness in dreamless sleep or under anesthesia, and why our consciousness rises and falls so much in intensity throughout the day.
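As a purely illustrative aside, not drawn from the sources above, the flavor of “synchronization” at issue can be seen in the Kuramoto model, a standard toy model of coupled oscillators. It is emphatically not a model of consciousness, but it shows how separate units (here, crude stand-ins for interacting neural populations) fall into a shared rhythm once their coupling is strong enough:

```python
import numpy as np

# Toy Kuramoto model: N coupled oscillators, each nudged toward the others' phases.
rng = np.random.default_rng(0)
N, K, dt, steps = 100, 2.0, 0.01, 2000        # oscillators, coupling strength, time step, steps
omega = rng.normal(0.0, 1.0, N)               # each oscillator's natural frequency
theta = rng.uniform(0.0, 2 * np.pi, N)        # random initial phases

for _ in range(steps):
    # each phase drifts at its own frequency, plus a pull toward the others' phases
    pull = (K / N) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta += (omega + pull) * dt

# order parameter r: 0 = incoherent phases, 1 = perfect synchrony
r = np.abs(np.mean(np.exp(1j * theta)))
print(f"synchronization r = {r:.2f}")         # noticeably above 0 at this coupling strength
```

Turn the coupling K down toward zero and r collapses; turn it up and the population locks together, a crude analog of the rise and fall of large-scale neural synchrony described above.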

As we learn more about how consciousness works in biological brains, we’ll incorporate those methods of neural network synchronization and integration into our artificial neural networks. At some point, our leading GAIs will start to develop self-awareness, and humans and our PAIs will develop one shared consciousness. We’ll slowly realize we’ve grown into a new, larger “us”. That “us” will still have raging arguments and disagreements, as we do when we argue in our own minds (such diversity of mind is always helpful for any finite intelligence), but now we’ll recognize that we can easily move our point of view between our biological and our electronic self. We’ll see them as a single integrated identity, not two.

Great technical work on consciousness. Buzsaki (2011)

If you have an interest in graduate biology and want more on neural synchronization and the mechanisms that do the synchronizing, see Buzsaki’s Rhythms of the Brain (2011) and read up on ephaptic coupling. As with memory encoding, neuroscientists don’t yet have all the details, but consciousness is no longer a mystery we expect we’ll never understand. It is a fully physical process, a puzzle that we’ve already partially solved. Tomorrow’s computational neuroscientists will surely crack its remaining details, and we’ll duplicate it fully in our neural network-based machines.

We’ll also solve making neural nets that not only think and remember, but feel. Each of our different feelings (anger, happiness, sadness, envy, etc.) is, at its core, a different string of positive and negative sentiments that we have associated with past actions, like notes in a song we hear whenever we think about various subjects or possible future actions. We build these “feeling songs” using neural networks specialized for sentiment — the amygdala, limbic system, and parts of our prefrontal cortex — the latter being what doctors cut through in their surgical lobotomies during the 1930s-1960s, creating much more passive and emotionless people. These sentiment networks give us “gut feelings” about what to do next. We need those feelings when rationality fails us, as it often does. Patients with lesions in these emotional networks can’t feel, but can still access sentiment memories in their prefrontal cortex. As neuroscientist Antonio Damasio describes in Descartes’ Error (2005), some of these patients can rationally argue forever the merits and drawbacks of various actions, but they can’t make decisions, and are as unmotivated as a lobotomy patient.

This is a clue that if you want to stay motivated in life, you should let yourself consciously feel both the highs and lows of your day, and observe closely how those feelings relate to your thoughts, and vice versa. You may need to think more at times, as with unconscious bias or anger. You may also need to feel more at times, as with procrastination due to unconscious fears. When you consciously feel and acknowledge your emotion, whatever it is, and think about how your thoughts triggered it and whether it was useful, you can get on with making changes to both your thoughts and feelings that will give you real progress. Both are sets of neural networks, trainable by your mind. For students, The Oxford Handbook of Affective Computing (2014) is a nice survey of work to bring sentiment to computers.

This work makes a good case that naturally intelligent machines will require sentiment networks (gut feelings) as they get smarter. As no finite physical mind can ever be “Godlike” in its intelligence, and no being ever has perfect information, rationality and logic will regularly fail our PAIs, just as they do us. In that environment, gut feelings and moral sentiments will always inform and motivate us to make better decisions.

In the early stages of a mind meld, we might notice many differences in the way our biological and electronic minds think and feel. At first, we’ll surely still feel most at home in our biology. But our PAI, our electronic self, will be learning, thinking, and feeling millions of times faster than our biology does. It is that differential rate of learning, long documented by scholars of accelerating change, that leads me to expect our PAIs will eventually feel just as much like “us” as our biological selves do.

Consider also that one of the first things your new “hybrid” you will want to do is scan and back up all your biological brain’s memories into your electronic brain, as your biology will continue to age and die.

As you perform this memory backup, you might be amazed to realize that you can recall your life’s memories, and think with them, millions of times faster in your electronic mind than your biological mind. Increasingly, as your electronic mind improves, you may even feel your center of consciousness moving out of your biology and into your PAI.

Again, your electronic mind may feel in several ways more primitive than your biological mind in the early years of this technology. You’ll know you haven’t captured everything yet from your biology, and you’ll continue to upgrade your PAI every year. But as your PAI encodes exponentially more bio-inspired algorithms, and invents new ones biology doesn’t have, your personal AI seems likely to become the place where “you” increasingly live your mental life.

Consider now that when our biological body dies after years of such a mind meld, it may not feel like death, from our perspective. It might feel instead like metamorphosis, a natural change of form. The transition could be like going from childhood into puberty, or from a caterpillar into a butterfly.

That’s the slippery singularity. Using the mind meld, a modern version of the Moravec Transfer, you slide right into your postbiological form, and you do it in the most natural way imaginable, with no interruption of your personal consciousness or feelings — in fact, with an enlargement of them. As a postbiological being, you will feel like a perpetual child, constantly able to grow and learn. You will also have a potentially perpetual lifespan, assuming you want to stick around and keep improving, as many folks surely would.

One of my favorite works of foresight from the 20th century foretold a lot of this. Grant Fjermedal’s The Tomorrow Makers: The Brave New World of Living-Brain Machines (1986), written nearly a quarter-century before the deep learning revolution circa 2010, paints an incredibly prescient look at the future of AI as it increasingly simulates the thinking and feeling processes used by biological brains and bodies. Fjermedal interviews three of the four then-50ish founding fathers of AI: Allen Newell, Marvin Minsky, and John McCarthy (Herbert Simon is the fourth). He also interviews a young Hans Moravec, Danny Hillis, Rodney Brooks, AI skeptics including Hubert Dreyfus, and many students and hackers then passionately pursuing the dream of building software and robotic simulations of human beings, and “downloading” the contents of our human minds into robotically-embodied computers, what we now more commonly call mind uploading. He explores the personal, social, economic, political, military, and spiritual implications of this fantastical idea, touring CMU, MIT, Stanford, Berkeley, DC, New York, Minneapolis, and Japan, and he comes to believe this Great Transition of Mind from biology to technology is inevitable, eventually. The important question, as he recognized even then, is how we will choose to do it. Will we empower individuals, or continue to reduce their freedoms, intelligence, and wealth relative to corporations, the rich, and the state? Will we do this work recklessly, driven only by commercial, ownership, and power interests, while minimizing values of safety, privacy, and user control, or will we prioritize those values at every step, even as this slows down the process and increases its expense? The choice is ours.

Brain Preservation — A Less Popular Third Path to Postbiology

Let’s look now at the last path to postbiology we should discuss — brain preservation. Since the 1960s, the practice of cryonics has made it possible to freeze your brain at death, but very few folks (just a few hundred so far) have done this in all that time. There are many reasons for the reluctance. One big reason is that we have no solid scientific evidence yet that it might work, only guesses. Another is that it has been very expensive. Few would take a large sum of money away from their children ($80,000 is a typical cost at present), either via insurance or in a lump sum, for such an uncertain future return. There’s even a derogatory term I’ve heard applied to those brave (and wealthy) folks who do, the “cryoselfish.”

Fortunately, advances in brain science are now closing in on the mechanisms of memory. Within a decade or two, I think we’ll see simple memories recovered from “uploaded” animal brains. We also have new and much less expensive methods like plastination, which doesn’t require refrigeration, being used to preserve and upload simple animal brains (including worm, zebrafish, and fly brains) into computers today. We just don’t know how to read the memories encoded in those uploads yet. But the neural code will be cracked, and when that happens, it will be obvious to everyone that a plastinated or cryonically preserved brain stores useful, retrievable information. Eventually, we’ll realize that the entire person’s personality is preserved, and that they can come back later, as an upload.

Brain preservation today splits into three camps of folks who are interested in it. See my Medium article, The Transporter Test and the Three Camps of Brain Preservation, 2016, for more. In coming decades, I expect all three brain preservation camps will grow, but it is the Uploaders (folks who are fine with the idea of coming back inside a computer), rather than the Reanimators (folks who only want to come back as biology), that will grow the most. As the cost of brain preservation drops, and its validation grows, more and more folks will know someone who has done it, and will themselves become interested in doing it.

Later in this century, validated brain preservation may be available around the world, the lowest cost versions may be under $10,000, neuroscientists will largely agree that memories and even personalities are preserved, computer scientists will largely agree that future computers will be able to cheaply scan and upload those memories, and the procedure may even be covered by health insurance and available in all major cities in special facilities, hospices, and hospitals in our most progressive (and wealthy) nations.

As a co-founder of the Brain Preservation Foundation in 2010, I’d love to see all of this happen by mid-century. I also think that in any society where, say, 100,000 people have used this technology, we will see that society’s values move toward something we can call a Preservation Value Set, advancing science, progress, future, sustainability, truth-and-justice, preservation, diversity, and community-oriented values in those societies. So I consider brain preservation a very worthy goal.

All this said, I think the brain preservation path to the problem of death will always be a very small minority, relative to the use of PAIs as a way for our ideas and experiences to live on, and (far later in this century) mind melds, for three reasons:

  1. First, brain preservation requires the largest leap of faith about the nature of the future, and the greatest behavior change. The other two solutions are far simpler to understand and offer more obvious benefits, in the here and now.
  2. Second, brain preservation is by far the least exponential of these three processes. Brain preservation technologies will grow slowly relative to the other two solutions, due to cultural, medical, and professional opposition. As a species, we don’t like thinking about death.
  3. Third, we always underestimate the long-run power of exponential processes. Few of us realize how satisfying it will be to leave behind a personal AI for our loved ones in 2040, how much less grief we’ll have about our own death as a result, and just how smart those PAIs will be. For most people in the 21st century, I think PAIs will be immortality enough.

To sum up this contemplation of the future of personal mortality, I think it is our responsibility to tell the best stories we can about our Great Transition to Postbiology, as speculative as those stories may be today. We also need to carefully consider their potential downsides and abuses, and ask how we can guard against them with proper foresight and action. The choices we make with our emerging PAI technologies today will determine how well, how soon, and how many of us will benefit from them tomorrow. Our postbiological destination may be inevitable in a hundred-year view, but the quality of the path we walk toward this amazing future is entirely in our hands.

John Smart is CEO of Foresight University and author of The Foresight Guide. You can find him on Twitter, LinkedIn, or YouTube.

Feedback? Leave it here or reach me at john@foresightU.com.
Want our newsletter? Enter your email address at ForesightU.com.
Need a foresight speaker? See my speakers page, JohnMSmart.com.

CC 4.0. Anyone may share or adapt, but please with link and attribution.
