Your Personal AI (PAI): Pt 1 — Your Attention Please

The Brave New World of Personalized Smart Agents & their Data

A Multi-Part Series

Part 1 — Your Attention Please (this post)
Part 2 — Why Agents Matter
Part 3 — The Agent Environment
Part 4 — Deep Agents
Part 5 — Deep Training

John Smart is a global futurist, helping people thrive in a world of accelerating change. These posts are adapted from his book, The Foresight Guide, free online. The Guide aims to be the best single intro to the field of professional foresight, and a great Big Picture guide to our 21st-century future. Check it out!

Part 1 — Your Attention Please: A New World Is Almost Upon Us

Summary (tl;dr)

  • This series will explore the five- to twenty-year future of smart agents and the knowledge bases they use and build, with a focus on the personalizing versions of these tools, which we can call Personal AIs (PAIs). In my view as a futurist, PAIs may be the single most socially, economically, and politically important IT development we will see in the next two decades.
  • As we’ve seen in the headlines about deep learning since 2012, the AIs are starting to get impressively smart and useful, whether we want them to or not. To paraphrase futurist Stewart Brand, “We are gaining superpowers, so we better get good at using them.”
  • A central question of the future of AI is whether we use it to further empower corporations, the rich, and the state, or whether we’ll use it to reempower citizens, and reduce the growing power imbalance ordinary voters have felt relative to these three actors since the mid-20th century.
  • A second question is whether we will build our AIs recklessly and selfishly, serving only commercial, ownership, or power ends, or whether we will prioritize concerns like safety, privacy, and user control along the way, even if doing so slows down the inevitable transition.
  • A new kind of software agent called a personal AI (PAI) is the most empowering and intimate form of AI on the horizon. In addition to today’s crowd-trained, data-anonymizing, impersonal agent AIs, we will soon see a few visionary entrepreneurs offer us AIs able to model our personal interests, goals, and values in their knowledge bases, and which act as our assistants and digital interfaces to the world.
  • These PAIs will train their models off our personal data and our ongoing private conversations with them. They’ll build these models beginning with our emails, photos, social network posts, and browsing habits, and some users will let their PAIs run lifelogs, recording everything they say.
  • For most users, all that personal data will be in an encrypted private cloud. All that private data, and user training, will ultimately allow our PAIs to build a better model of us and our values, interests and current goals than the models based on the more limited data that all the powerful actors (corps, marketers, government, etc.) will continue to collect and resell about us. I think PAIs will eventually become the primary and most trusted personal interfaces that most of us use to navigate the web and the world.
  • Both proprietary and open source versions of PAIs should be in wide use by the late 2020s. The open source ones will likely be less polished and less adopted, but they’ll be significantly more trusted by some users. Open source PAIs will be a check against both the manipulativeness and the cost of proprietary versions, which will not always serve our values best.
  • In their early years, we’ll think of our PAIs as bright but slightly autistic helpers, much better at many tasks than we are, but still quite unschooled and unwise in most ways. The knowledge bases our PAIs use will be full of errors and biases at first. Nevertheless, many of us will increasingly rely on them as our interface to what we watch on the web, what we read, who we connect with, what we buy, and what we do in the world.
  • We’ll see titanic struggles among current AI leaders to become the primary PAI we use in various categories. Partially open strategies, like Google’s with Android, will likely be the most successful for PAIs that become category leaders.
  • Due to the intimate nature of PAIs and the privacy concerns around their data, companies with a reputation for openness and crowd-benefiting business models (open source, digital coins, blockchain) presently have a strategic opportunity, particularly with millennials and centennials, to create a PAI that outcompetes those offered by industry leaders. For example, a PAI-compliant, crowd-benefiting social network could conceivably dethrone Facebook as the most trusted social network for a significant fraction of younger and early adopter users in the coming PAI-enabled web.
  • Companies based in nondemocratic countries, like China’s Baidu, even as they match or exceed US-company investments in deep learning, will be at a significant disadvantage with PAI products they launch in the US and Europe, as trust and privacy issues will be central to PAI adoption. Nevertheless, their PAIs might still become category leaders in various less-democratic countries around the world.
  • The takeaway from this series will be that we will need to build and raise our PAIs and their knowledge bases well, with love and care, and think hard about their implications for global security and strategy, as they will become particularly useful to a minority of us in the 2020s, and central to how billions of us live our lives in the 2030s and beyond.

Series Questions

As a futurist who enjoys the Big Picture perspective, I’ll ask you to consider some “Big Questions” in this series. I hope they spur good discussions, and new insights, for each of us:

  1. How big a force for social change do you think AI (smart software and hardware) will be in the next generation? Could anything be bigger?
  2. What are the benefits and risks of personalizing smart agents (intelligent software we can talk to, with an always-improving model of the world, itself and us)?
  3. Is the convergence of neuroscience & computer science (deep learning) and continued artificial and natural selection on AI hardware and software the fastest and most effective way to continue to improve AI? If not, what could be better, and why?
  4. If we increasingly “grow & garden” our deep learning-based AIs rather than “engineer & program” them, will we be able to use bio-inspired methods (selection, immunity) to make them trustable and secure?
  5. What is social progress? How do we make a “Good” Society? Will we need social reforms like basic income to deal with the social impacts (tech unemployment, growing rich-poor divides, etc.) of accelerating AI, automation, and globalization?
  6. If people can preserve much of their life’s memories and identity while they live, simply via interacting with their personal AIs, how many of us will leave our PAIs behind for loved ones to interact with when we die? Will we let these “ancestor PAIs” continue to get smarter over time? How long will they live, before they grow into something else?
  7. How many of us will actually get “uploaded” into computers in the coming century, a bit at a time over our lives, as our PAIs grow every year in intelligence, without even stopping to think about it? Is that kind of biological-to-digital transition not just happening here on Earth, but a process we might expect on all planets with intelligent biological life?


The biggest advance in computing technology that humanity has yet seen is sneaking up all around us, right now. In just the last five years, our leading IT companies have built the first truly useful intelligent assistive personal software agents, or “smart agents” or “bots” for short. The exponential convergence of speech recognition, natural language understanding, deep machine learning, increasingly deep and valuable knowledge graphs and knowledge bases of big data, and context from our digital devices, including our online habits, email, social networks, purchases, and location data, is allowing smart agents to anticipate our daily actions, wants, and needs.

Conversational agents include Google Now, Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, Facebook’s M, the Siri founders’ startup Viv, Baidu’s Deep Speech (technically, the speech recognition front end to an agent), IBM’s Watson Analytics (technically, an analytics front end to an agent), and a large number of offerings from smaller companies, such as Soundhound’s Hound, call center and customer service automation agents and agent platforms like IPSoft’s Amelia, Next IT’s Alme (I am an advisor to Next IT), Creative Virtual’s V-Person, and Go Moment’s Ivy (for hotels), platform tech support bots like Slack’s Slackbot, scheduling bots like Amy/Andrew and Clara Labs’ Clara, the email assistant Crystal, chatbots like Microsoft’s Tay, and a host of others around the world, either less well-known or presently in development. Smart agents are increasingly good at understanding the semantic meanings in our text, voice, GPS and other data, inferring our likely next actions, and predicting our needs, based on our current context. When they don’t understand, they are also learning how to ask questions to clarify their models and our intent.

At the same time, Google, Baidu, Microsoft, Amazon, Facebook, IBM, Nvidia, Factual, and other IT platform leaders are building both proprietary and open knowledge graphs and knowledge bases, vast databases that allow automated semantic structuring and semantic search of public and private information, and which use machine learning to do reasoning and inference with this data. In early 2015, Google began ranking websites based on their factual accuracy, not just PageRank. Any site with more than a few factual inaccuracies according to the knowledge base, such as the antivaxxer sites, now gets a lower ranking in the new algorithm, RankBrain, which is centered on artificial neural networks.

Today’s clickbait and unhelpful comments will eventually be filtered the same way, a topic called personalized search. Imagine YouTube’s comment wasteland autosorted by relevance, truthfulness, and truenames. We’d all start reading the comments on YouTube videos again. Just as spam was tamed via a knowledge base of known bad actors and spam-reporting buttons in our cloud-based email, all data on the web will be semantically rated, ranked, and filtered by a combination of AI, agents, and people. Roll RankBrain forward a few years, and we can see how powerfully and irreversibly tomorrow’s open knowledge bases and AI will change the web, making it much more valuable and relevant to each of us. Because our brains are highly evolved to use language, emotional tone, and visuals, our coming conversational agents, and the dashboards, infographics, and choices they continually offer us, will be the primary means by which the web is customized to each of us.

All this has some very big implications. As we’ll discuss in a future post, each of us will be able to use agents and their knowledge bases to increasingly see only what we want to see. That power can lead us into an ignorant, biased, filter bubble hell, as Eli Pariser warns in The Filter Bubble (2011), or into an evidence-based, cognitively diverse, and empowering set of digital living and working spaces, bringing us to new heights of insight, empathy, and productivity. Which way we use our personal agents will be ours to choose.

As these knowledge bases and their brain-like networks grow, we can expect not only knowledge, but also truth, opinion, reputation, probability, goals, values, and other information graphs to become available on the public and private web, in both open and proprietary forms. In 2005, to honor futurist George Gilder’s seminal 20th century thinking on technology, I called that near-future world a valuecosm: a time when increasingly granular maps of the values, interests, and goals of participating users become part of the open public web. The valuecosm is a predictable outgrowth of today’s datacosm (cheap and abundant big data), the telecosm (cheap and abundant telecommunications) of the 1990s, and the microcosm (cheap and abundant microprocessors) of the 1980s. It is almost upon us.

Microsoft CEO Satya Nadella says “smart agents will supplant the web browser,” and “bots are the new apps.” Such statements are aspirational at present, but will be increasingly true in coming years. Looking back from the view twenty years hence, we will come to see today’s web of social big data (Web 2.0) as the precursor to a “Knowledge-Mapped” and “Agent Web,” a near future when agents and their knowledge bases become the main way we choose to interface with the world. That seems worthy of the title Web 3.0. It is a world where semantic knowledge bases, brain-like machine learning, and personalized smart agents all emerge in one big transition, at the same time.

Each of the coming posts will look at a different piece of this Web 3.0 future.

We’ll begin with an introduction (first and second posts), then look at how agents will likely be built (our third to fifth posts), then consider eight societal domains where we can expect big impacts from agents as their smartness accelerates over the next two decades. The eight domains are:

1. Personal Agents — News & Entertainment, Education, & Personal Growth

2. Social Agents — Teams, Relationships, Poverty and Social Justice

3. Political Agents — Lobbying, Representation & Taxation, Basic Income & Tech Unemployment

4. Economic Agents — Shopping, Financial Mgmt, Funding and Startups

5. Builder Agents — Built Environment, Innovation & Productivity, Science

6. Environmental Agents — Population, Pollution, Biodiversity & Sustainability

7. Health Agents — Health, Wellness, Dying and Grieving

8. Security Agents — Security, Privacy & Transparency, War, Crime & Corrections

In each area, we’ll look at two sets of scenarios for how smart agent adoption might play out. The first will typically be a dystopia, a sample of social outcomes with PAIs that we’d like to avoid, and the second a protopia, a measurably better world that a large majority of us would like to reach.

In the long run, I’m optimistic that PAIs will help us make major progress on these social variables. But in the short run, lots of bad things could easily happen. So let’s look at both sides of the future of personal agents, and please let me know what I’ve missed or am getting wrong.

Together we can create much better foresight than any of us can alone.

At present, there are no general-interest books on the present and future of conversational smart agents and their knowledge bases, to my knowledge.

But here are the three best background books I’d recommend for this series:

  • For an easy intro, read Scoble and Israel’s The Age of Context: Mobile, Sensors, Data, and the Future of Privacy (2013). You’ll learn about current and coming context-aware platforms and technologies feeding the knowledge graph, and some of their social implications.
  • For the next level up, read Eric Siegel’s excellent Predictive Analytics (revised for 2016), a great overview of all the industries and ways people are presently building knowledge and anticipation from big data, with a gentle introduction to AI.
  • For another level up, read Pedro Domingos’ The Master Algorithm (2015), a well-written and layman-accessible intro to the “Five Tribes” of machine learning. Two of these tribes, neural networks and evolutionary computing, are particularly important to the future of AI, in my view. Neural networks may be dominant for the next decade at least, with evolutionary developmental (evo devo) computing, a much more biologically-inspired approach, emerging sometime later, as we’ll discuss.

For bonus reading, Chris Steiner’s Automate This: How Algorithms Took Over Our Markets, Our Jobs, and the World (2013) considers some of the markets and jobs that our presently mild forms of automation have been disrupting. Steiner’s book says little about machine learning, however, and those are the algorithms that will increasingly change things going forward.

For a second bonus read, see Smolan and Erwitt’s gorgeous coffee-table book, The Human Face of Big Data (2012). Their lovely book offers a high-level tour of the knowledge web, and the ways our lives are changing as big data on all our public and private activities becomes accessible to more and more of us. At one point, Smolan offers the metaphor that humans and our digital devices are now acting as agents in an emerging global nervous system. This metaphor, that we and our technologies are understandable, from both structural and functional perspectives, as a kind of emerging global superorganism, is quite old. If we are very careful with how we caveat it, it also seems increasingly apt.

Every good idea is usually a lot older than we realize. For those who like history, a great early overview of agents was Caglayan and Harrison’s Agent Sourcebook (1997), written to help businesses implement agents on their computers and the web. An early and very breezy look at agents was Andrew Leonard’s Bots: The Origin of a New Species (1997), focusing mainly on chatbots and spambots on the early web.

Most recently, Chris Brauer of UCL led a study on smart agents in 2015 that is a great resource for a commercial look at their near-future prospects.

Do you know of any other great background books or studies I should list here? Let me know by email or in the comments, and I’ll add them, thanks.

Bots are now in a second renaissance of sorts, with Microsoft, Facebook, Slack, Kik, and others launching bot-building frameworks or stores. Bots can help today with basic and common online problems, like password recovery, booking flights, and many other highly structured activities. They need good language understanding in their domains before we will use them, and the best are well integrated with human agents for the moments when the bots aren’t being useful. Companies in agent-assisted call center automation have been doing narrow integrations for a few years. Facebook is attempting to massively grow this market with its bot-building framework inside Messenger (announced Apr 12, the week after this post). That should help train up thousands of new bot builders, and move the ball forward. They’re also building a general-purpose messenger bot, M, now in private beta. Given the small number of developers, as far as I can tell, and the technical challenges, I’d guess we are still three or more years away from M reaching mass use. But they have users in a learning cycle, and they’re aggressively using deep learning. So kudos to them.

The name used most often to describe smart agents today is virtual assistants (“VAs”). But the term “virtual assistants” is clunky, and it gets confused with living virtual assistants, people who work online for others. Computer scientists have been calling these intelligent agents for years, so like Nadella, “smart agents” is the term I’d recommend specifically for software with a statistical understanding of human conversation, emotion, and behavior and with some agency capacity (able to perform tasks for you), whether that software talks to you or not (and most of the time, it won’t). Bots is a great term for any automated agent, smart or not. PAI is a particularly great term for a personalized smart agent, as it is a single syllable, and it is relatively self-explanatory to anyone who learns what PAI stands for. Let’s find good terms soon, as we’ll be talking about these for the rest of the century, in my view.

In a recent Slate article, Will Oremus predicts smart agents will increasingly be “the prisms through which we interact with the online world.” Consider next what happens when we add wearable audio and video augmented reality to our agents, and our knowledge bases get a bit deeper and smarter, aided greatly by the internet of things. In that future, it’s obvious our agents will become the main software interfaces we use to interact with the world, period. Oremus’s article is titled “Terrifyingly Convenient,” a phrase that is a great way to highlight both the disturbing and the enticing aspects of agent technology.

As humans, our minds naturally go to the dystopian aspects of agents first, for deep evolutionary reasons. Only secondly, and warily, do we contemplate their progress-related, or protopian aspects. But both negative and positive outcomes are likely, and we’ll do our best to cover both futures in this series. I discuss the importance of keeping a good balance between these two key ways of thinking in Keeping Intelligent Optimism in The Foresight Guide.

Our most intimate agents will be highly personalized to us, building accurate internal models of our current context, preferences and values. That makes them different enough from unpersonalized agents that I think they deserve their own unique name as well. In 2014 I began calling highly personalized agents personal sims, or simply, “sims”. Think of a simulation, or The Sims, a game played with graphical representations or “avatars” of people, one or more of which might look like the player. That is a good name, but I now find PAI, which I began using in late 2017, to be an even better term.

Our PAIs won’t have to look like us in order to have an accurate internal model of who we are. In academic labs, an agent that acts like a great butler, secretary, humorist, guide, or coach, and is not a visual copy of us, is usually more popular than an agent that looks like the user, which is often seen as narcissistic or creepy, at least today. But any highly personalized smart agent, though it may have its own appearance and personality, perhaps like Carson in Downton Abbey, has a good portion of its mental architecture dedicated to being a software simulation of us. Thus both PAI and “sim” are good terms for such software helpers. In the not-too-distant future, I can imagine us saying “my PAI said this” or “my PAI did that” when we talk about our lives in this brave new world.

As their smartness grows, we’ll increasingly use our trusted PAIs to advise us, and act on our behalf. PAIs will be continually conversationally trained by us, and each will have a growing collection of encrypted private data (emails, social network posts, photos, browsing histories, conversations) about us, much of which is not shared with the outside world. Our PAIs will be able to use that data, and their growing neural network intelligence, to help us make choices that better reflect and protect our interests, goals, and values. As learning agents, they’ll also increasingly acquire interests, goals, and values of their own.

I’ve been thinking about PAIs and their knowledge bases for about fifteen years, since the start of my career in strategic foresight. I gave my first tentative talks on them at a Foresight Institute gathering in 2001. In 2003, I published an extended interview and a popular web article on them, and the conversational interface they would need to build good semantic models of us. In the early 2000s I called personalized agents “digital twins” and “cybertwins” to signify that they would become like software twins as they acted for us in the world. I used the simpler “sim” from 2014 to 2017, and now I find PAI the simplest and most useful term, and recommend that term to you as we discuss this emerging product and service in coming years.

Science fiction authors, futurists, and visionaries usually get to the future first. But at the same time they also make us slog through a majority of false futures, and only careful critiques allow us to distinguish the two in advance. See Wikipedia’s AI in Fiction page for examples of both in Sci-Fi. In the commercial arena, Apple was the first big company to bring the PAI vision to the general public, in their Knowledge Navigator concept video in 1987. In that video, which was set in 2011, a user talks to a bow-tie wearing personal AI on an iPad-like device. The real iPad debuted in 2010, and Siri was launched on the iPhone in 2011. Pretty good foresight, in my view!

At this point in our intro, a host of PAI-related questions may spring to mind:

  • When you act in the world in coming years, how will you know when to trust your PAI’s recommendations for who to date, what to read, buy, invest in, or how to vote?
  • How will you judge when its intelligence exceeds its wisdom (common sense), and when it is serving your interests, rather than the company that created it?
  • How early should children be allowed to use PAIs? How early should educational PAIs, via smartphones, be given to emerging nations youth?
  • How many “virtual immigrants,” working online in tomorrow’s startups, can we expect when global youth learn English, other leading languages, and technical skills, from birth from their wearable PAIs, via what futurist Thomas Frey calls teacherless education?
  • How intimate will you let your PAIs get with you? How will we best respond when some people start to fall in love with their PAIs? See Her, 2013, one take on that scenario.
  • What will be the impact of therapy PAIs? Correctional PAIs? Shopping PAIs? Financial management PAIs? Voting PAIs? Activism PAIs?
  • If your mother dies in 2030, will you find it helpful to talk to the PAI she talked to for the last ten years of her life? Will you let Google, Facebook, Microsoft, or whoever continue to improve her AI after her passing, and interact with surviving friends and family, so her PAI can become an ever-better interface to all the data of her life? Strange as this sounds, a few startups are already working on that idea today; see also the short film Eternity Hill. How will this coming PAI “immortality” (or more accurately, a much longer PAI lifespan than biological lifespan) change our culture?

These are just a few of the big social questions raised by PAIs, and we’ll try to take a good early look at many of them in this series.

Surprisingly, if accelerating computer hardware and software trends continue, sometime between now and the latter half of this century our PAIs will begin to seem generally intelligent to their users, both intelligent in the human sense and in a number of senses wholly new. At the same time, our most advanced PAIs will come to be seen, by their users, as digital versions, and indistinguishable extensions, of us.

In fact, I think that’s what the long-discussed technological singularity will primarily look like, to the typical person, some time in the second half of this century. Each of us will experience our own “personal singularities” as our increasingly intelligent PAIs, and the data and machines they control, start to reach and then exceed us in their understanding and mastery of the world. PAIs are thus the “human face of the coming singularity,” to riff on the title of Smolan and Erwitt’s The Human Face of Big Data (2012).

In this view, we are heading for a primarily bottom-up, diverse, and massively parallel world of distributed PAI intelligence, with a small amount of ideally well-intentioned but ultimately secondary top-down efforts at control of the gathering intelligence storm by various authorities. In my opinion, a very open, distributed, and highly bottom-up approach to machine intelligence is also the only way we’ll actually create all the experiments, data, and training necessary for human-surpassing machine intelligence (also called “general AI”) to emerge, both quickly and (for the most part) safely in coming years.

At the same time, to balance all this new personal empowerment and collaboration capacity, individuals, teams, and nations will need ever better security, privacy, and adaptive political systems. I think those better rules and systems will also emerge by primarily bottom-up means, again with a small fraction of top-down strategies as well.

Understanding all complex adaptive systems, whether they are organisms, organizations, societies, technologies, or even universes, as primarily bottom-up, experimental, and selective, and only very secondarily top-down, rational, and planned, is a way of systems thinking that has a name. It is called evolutionary development, or “evo devo,” and it comes from the field of evo-devo biology, which I believe is the best current framework to understand adaptation and change in living systems. In 2008, philosopher Clement Vidal and I formed a small research community, Evo Devo Universe, to study this particular approach to complexity and change.

A great early book on evo devo thinking, applied to societies and technologies, is futurist Kevin Kelly’s Out of Control (1994). If we live in an evo devo universe, then most processes and events will always be evolutionary, unpredictable, and “out of control,” while a special few things will be developmental, top-down, and predictable. Both unpredictable and predictable futures lie in front of us, each waiting to be seen.

Unfortunately, every popular book I’ve read on the future of artificial intelligence either ignores or discounts the likelihood of a mostly bottom-up, divergent, creative, unpredictable, and “evolutionary” agent-driven future of AI, in combination with a much smaller amount of top-down, convergent, conservative, predictable, and “developmental” set of architectures, priorities, and controls. Yet as I will argue in this series, given the impressive advances we’ve seen in deep learning since 2012, that primarily bottom-up approach now looks to be the most probable future for AI.

A world of exponentially more intelligent agents and PAIs acting as proxies for us, in deep harmony with our robots and machines, will be a tremendously empowering but also a disruptive and potentially dangerous future. Deciding who controls their construction and training, and the sensors and data they have access to, will be among the most important social, commercial, and political choices of the coming generation.

This is a future I don’t think we can avoid. It seems a developmental inevitability, so we better get better at thinking and talking about it. Let’s end this post with a version of a prescription from one of my favorite futurists, Stewart Brand, editor of the Whole Earth Catalog and co-founder of the Long Now Foundation: “We gain new superpowers every month now, whether we want them or not. So let’s get good at using them, to help each other thrive, as best we can.”

As you think about agents and PAIs in coming weeks, I’d like to suggest three more questions for conversation:

  • Who will build the most trusted and popular smart agents? Big corps? Open source? Govt? Who will build our most trusted and popular PAIs?
  • What does our future economy look like, in a world of ever-smarter personal AIs?
  • What is the future of politics, as our agents and PAIs increasingly understand, assist, and advise us?

Lastly, if you are near San Jose this week and have the means, consider taking a day at Nvidia’s 2016 GPU Technology Conference. With 5,000 academics, technologists, and entrepreneurs in attendance, GTC is presently the builder’s deep learning event of the year. It’s got the excitement of Macworld in the 1980s. A whole new frontier of human-machine partnership is emerging, right now.

A highly recommended skim, for all the tech curious, is Monday’s keynote from CEO Jen-Hsun Huang. If that doesn’t blow your mind and give you a severe case of future shock, I don’t know what will.

John Smart is CEO of Foresight University and author of The Foresight Guide. You can find him on Twitter, LinkedIn, or YouTube.

Feedback? Leave it here or reach me at
Want first access to my events? Enter your email address at
Need a speaker? See my speakers page,

CC 4.0. Anyone may share or adapt, but please with link and attribution.

Part 2 → Why Agents Matter

