How many near-collisions on the sidewalk will it take for me to kick my head-buried-in-phone, zombie-walking habit that I so loathe in my fellow citizens? How many more vacant, listening-but-not-listening interactions must I suffer, and in turn hypocritically inflict on those I love and respect? I acknowledge the problem of my own technological maladaptation. Many of us do, and we achieve varying degrees of success in curbing our use. But troublingly, we are seeing very little of the same recognition (that we and our technologies are not in perfect harmony) from the tech juggernauts whose products and services literally power our modern lives and shape our behaviors. As the tech hype engine continues to herald the transformative power of every new feature, it’s increasingly critical that we ask the tough question: What truly constitutes “progress” in our innovations?

Advances in what humanity can do technologically have always precipitated questions of what it should do. Technological optimists are quick to observe that many major innovations in history, from typewriters to televisions, have been met with criticism and fear-mongering when first introduced, but once adopted, largely failed to deliver the doomsday scenarios some predicted. But in 2018, this same level of optimism carries huge risk. While the public discussion of the relationship between ethics and technology is gaining some healthy momentum, there is real cause to ring our collective alarm bells more loudly. 

Our lives are riddled with ethical technological quandaries, from election meddling to AI, but I believe one of the most troubling and addressable issues is the zero-sum battle for eyeballs in the “attention economy”, where increasing time spent and engagement on a platform is the fundamental product design principle. Our knowledge of how to hack the brain’s reward centers (as Tristan Harris has observed) allows our digital products and services, from Instagram to email, to propel us into frequent and sustained engagement. Digital addiction is rampant, yet its effects are neither fully understood nor viewed as particularly risky by consumers, whose social and professional fabric often demands opting in to always-on norms.

While consumers may not be changing behavior en masse, they’re starting to recognize the scope of the challenge for themselves and for society. The World Health Organization has officially recognized video game addiction, and Apple investors have begun raising the prospect of a long-term risk to Apple’s commercial outlook resulting from the potential damage of mobile device use in early childhood development.

The root of the problem is that our current ability to advance our technological capabilities far outpaces our collective desire and capacity to understand the implications of these “advancements”. When it comes to product and service design today, knowledge is far more easily acquired than wisdom. Determining whether the attributes we usually associate with “better” (faster, cheaper, more powerful) are truly advantageous to our lives is a complex, values-based calculation, and consensus can be elusive. It’s imperative that we close the growing gap between our ability to innovate and our ability to devise solutions that are in our best collective interests, but that won’t happen without concerted effort and bold moves from the entire technology ecosystem.

Steering this ship in a new direction will not be easy. The five most valuable U.S. tech companies have used data to propel themselves to a position cumulatively more powerful than most governments on Earth, by learning what makes us tick, click, view, buy, discuss, and share, and by engineering services to tap into and manipulate these proclivities. They have built powerful castles of knowledge, bolstered by network effects of scale in data and machine learning, which position them for both short-term and long-term advantage: privileged insight into what people want, increased ability to sell their own and others’ products, hyper-targeted advertising solutions, platform lock-in, and virtuous cycles of data generation, to name a few. This centralization of power is concerning for individual consumers and new startups alike, but it’s also deeply concerning for humanity at large. At what point will companies put their immediate commercial gains at risk when those gains conflict with what they believe users truly need, what users decide they want, or what we as a society collectively determine to be healthy?

Let’s envision a few scenarios. If a company knew a big-ticket purchase in my cart would plunge me into debt, would it alert me to that fact? If a company knew I was having trouble sleeping at night and had a habit of watching media into the wee hours of the morning, would it ever consider a suggestion, say, to read a printed book? If a platform only served me recommended content similar in topic or viewpoint to my viewing history, would it ever see risk in that narrowness and one-sidedness? Many would argue that part of living in a free society means that we make our own choices, and we do; but shouldn’t our technologies help us make more considered and informed decisions?

History is rife with examples of how companies have generated consumer appetite for dangerous products, obscuring or downplaying the risks of adopting them. The tobacco industry in the 20th century is surprisingly analogous to mobile-enabled social networks today. Both have manipulated our tendencies for trendiness, distraction, relaxation, escapism, and social engagement through a highly addictive product. Generational shifts in attitudes toward tobacco have now prompted meaningful investment in “healthier” tobacco product innovation. Committing to smoke-free futures has become an industry rallying cry, and a way to advocate for more “responsible use”. Will the equivalent of a surgeon general’s warning be needed in tech?

“Responsible use” is not something that tech companies are actively discussing or advocating. Installing product ethicists is a start, but the reality remains that it is too costly and risky to lose attention, engagement, and wallet share by bucking the norms of product design that have made these companies so successful to date. The status quo marches on with more inertia than ever, posing risks to our ability to make progress on the plethora of hard problems we face, individually and societally. Hard problems require deep thought and powerful creativity to tackle: mental breathing room of which our technologies often deprive us.

There is a powerful case to be made for the role of technology in advancing existing fields and birthing new disciplines, from fostering better information-sharing to applying artificial intelligence to surface new patterns. But we must take a harder look at how our daily interactions with technology risk interfering with our own mental performance, limiting our ability to advance our disciplines and to examine ourselves.

So how could tech companies remain commercially viable while also becoming a force for good?

  1. Goal-based personalization. Take “Settings” to the next level: allow consumers to enter specific ambitions they hold, and deliver services in a personalized way that furthers them. Can we hack the brain not for immediate action, but for patterned activity over time? For example, if I told Netflix I was interested in gaining a richer understanding of World War II, couldn’t the platform serve me an intelligent sequence of documentaries and movies to do so, and wouldn’t I be a more satisfied customer for having a partner in my pursuit of betterment? Couldn’t Microsoft build smarter infrastructure for managing email notifications for employees on PTO, in a way that managed their stress and allowed them to unplug? Or, during the workday, align notifications with meetings to allow more presence in the moment?
  2. Establish industry standards that go beyond empty buzzword commitments. Assemble an industry board to get ahead of the collective issue and create a universal grading system (or set of minimum standards) for digital product design: one that factors in greater transparency about how products function, conveys simpler terms of service, and allows for more consumer empowerment and control. Get ahead of the issue before the heavy hand of regulation puts you on your heels. Barring that level of initiative (and perhaps the more likely scenario), government bodies could develop and enforce standards, much as emissions standards and LEED certifications arose to support sustainability.
  3. Longitudinal studies of technology use and psychological health. Invest in serious studies with academic institutions on the cognitive and emotional well-being of a range of modern technology users. Acknowledge the value of this sort of data for the common good, and reap the reputational benefits while designing products that are actually better for us (or at least have the potential to be).
  4. New forms of experimenting and publishing data. Experimentation on users is part and parcel of being a smart tech company today. The problem is, most experiments are about how to get people to buy faster, spend more time, or share more. How about being more proactive in designing experiments that promote different sorts of engagement, and enabling user feedback at scale on how people perceive shifts in product and service functioning? Did they relieve anxiety? Improve productivity? What should be the mental-health standard for deciding to release a new feature or product into the wild?
  5. New business models. Since a big generator of the zero-sum attention-economy dynamic is advertising-based business models, how can companies go beyond ad-supported versions of social media and search? Is there a premium, opt-in paid version of these services that is not only ad-free but offers other benefits? How willing would consumers be to pay for a product truly designed for them, rather than one designed to make them the product?
  6. Appoint more “corporate philosophers” and help train employees and students alike in design ethics. The college major everyone’s parents told them would never lead to a lucrative career should in fact be one that is highly compensated and sought after. Shouldn’t consumers demand the same standards of excellence from ethicists weighing in on digital products as they would from medical experts evaluating a new pharmaceutical drug?

What these suggestions have in common is an embrace of some short-term commercial sacrifice for the potential of long-term stability, customer loyalty, and industry thought leadership. They may also help avert more near-term risks: say, the unforeseen crisis lurking in next year’s big product or feature release, or the loss of the top talent that tech firms rely on to be competitive, many of whom are growing more disillusioned by the day. It’s a paradigm shift in thinking for the industry, but one that will only continue to rise in importance.

These tensions hit close to home for me as someone who studied philosophy, neuroscience, and psychology and now works at a firm focused on product and service innovation. We have a guiding credo at Fahrenheit 212 that would be worth taking to heart as the technology industry ponders its future: Make things better, make better things. Innovation needs to be purposeful: solving real consumer needs and generating lasting commercial value. But it should not only be about usability and appeal. It also needs a higher purpose, where getting to “better” means taking the long view, and going the extra mile to unpack what “value” really is.

Interested in chatting or learning more? Get in touch.