Sturgeon’s Law and the Future of Reading

John Smart
3 min read · Oct 30, 2024


As youths, we often find value in reading many things word for word. But once we have enough life experience, and specific views about the nature of life, people, and reality, some of us find it helpful to skim and scan ever more of our nonfiction, and after a while, our fiction as well. It gets ever easier to spot implausibilities and impossibilities, implicit or explicit assumptions, plot elements, characters, behaviors, and other aspects of any book, especially fiction, that we can judge not worth our time. If you’ve ever walked out of a theatre in the middle of a movie, or stopped reading books without finishing them, you know the ever-growing value of discrimination.

As the great sci-fi author Theodore Sturgeon famously observed, ninety percent of everything is crud. This is “Sturgeon’s Law,” and it is well worth knowing. As our maturity grows, ever more of that 90% becomes not worth reading, at least word for word. It is better to skip or skim the poor and anachronistic stuff, so we can spend more time reading what we think matters most, both today and for the future. As we grow, our increasing inability to suspend disbelief in many stories, both in fiction and in what is alleged to be nonfiction, is a helpful outcome that we should admit to more often. It is a consequence of both accumulated life experience and a love of evidence-based thinking.

But even though all of this seems true, I think we can also recognize that much of that 90% is still worth reading in bits. Imagine reading difficult and flawed fiction works like Star Maker (see my post The Necessary Noosphere for more on that book) in an AI-guided digital form, with a record of which sentences and sections have been read and highlighted by others whose values and world views we admire, and with a host of skimming, scanning, and summarizing tools at our disposal. That would be an excellent way to keep the best parts of such books informing us, and it is a platform we can expect in the future of reading.

Kindle’s “Popular Highlights” is a nice start at showing us the best bits, but it does not let us subscribe to just the highlights of the colleagues and opinion leaders whose thinking and world views we particularly value. In the future, I would love to see channels like Bookpilled (see previous post) offer a link to a database of their public highlights and margin notes, allowing me to selectively read or watch what they liked, in any media, and understand why. That could also provide a bonus income stream for reviewers. I’m convinced that treating creators better from a monetary perspective is going to be key to the highest-reputation web platforms in coming decades. I also expect our Personal AIs will steer us toward those platforms, the way we buy local today, whenever we can.

If Amazon truly prioritized the welfare of its creator and reader communities, rather than just profits, I believe it would already offer a basic version of such community and creator micropayment features on its Kindles. In the meantime, we can envision what should come, and take steps to make it so.

I’ve also written a onesheet (single page), Sprint Reading for Busy People, with twelve tips on how to skim read. It is popular with audiences in my foresight training work, and it will help most people read a lot more every week. If you download and use it, let me know. I always appreciate feedback and suggestions for improvement. Thanks for all you do!

John Smart is a futurist and scholar of foresight, leadership, technology, life sciences, and complex adaptive systems. His book, Introduction to Foresight (2022), covers models and methods of personal, team, and organizational adaptiveness. As CEO of Foresight University, he talks and consults with industry, government, academic, and nonprofit clients.
