Since the mid-nineties, we’ve been told that the “new information economy” would usher in vast gains in productivity. If we simply implemented more enterprise resource planning (ERP) and customer relationship management (CRM) software – along with a slew of other systems – our companies, public services, cities, and infrastructure would be smarter and more efficient. Whether the buzz is around IoT and big data or deep learning and AI, part of the marketing model for information technology and consumer technology has always been to spread the belief that the next big advancement is just around the corner – and that whatever it is, we humans will be supercharged by it, becoming vastly more productive as a result. Aside from the objections of Luddites and conspiracy theorists, we seem to have an unwavering faith in relentless technological advancement. If we want to make our businesses better, our economies stronger, and our youth more competitive, we need sleeker, smarter tech – and more of it. We expect all companies – tech, automotive, toy, and otherwise – to constantly innovate and experiment. But amid this constant demand for progress, companies pursuing advancement are neglecting to ask two fundamental questions: “What for?” and “At whose expense?”
The notion that technological advancement is always a net positive is one that we as a society rarely question, regardless of the stakes. Take, for example, one of our most essential and human institutions: our schools. According to the Organization for Economic Co-operation and Development (OECD), which tracked the relationship between math performance and access to information and communication technology in schools across several countries from 2000 to 2012, there is actually an inverse relationship between how well our kids learn math and how many computers we put in our classrooms. The study found that in every single country, the more computers schools introduced, the worse children performed. In fact, children who used pen and paper to solve math problems had higher test scores than those who used computers. In spite of this, the prevalence of computers in schools continues to grow rapidly – in the US, computer use in schools is growing faster than in any other sector, including healthcare. The Department of Education continues to tout technology as integral to student productivity, implementing new standards each year that pressure teachers to incorporate technology into the curriculum across all subjects – often without providing the tools and training to properly integrate it. The myth of tech supremacy is so pernicious, so saturated into the soil of our current reality, that we have accepted it as dogma.
Not only have we seemingly absolved tech of any wrongdoing by trusting in its innate productivity, constructiveness, and benefit, but we also do not take it very seriously. When Google released its wearable computer headset, Google Glass, in 2013, it did so without bothering to answer a key question: Why? That it was a stunning technological accomplishment seemed to be enough for the company, which argued that the device would find its purpose eventually (by being strapped to people’s heads). When the backlash to the product went into full effect – apparently, people find interacting with someone with a computer fixed to their eyes unsettling – the tech giant seemed genuinely shocked. Google had either not considered or had greatly underestimated how intrusive constant information capture would be, and the significant discomfort it would produce in human interactions. They had had so much fun concocting the thing that they developed and released it without considering the sacred behavioral norms it would violate; indeed, they designed it with no regard for the important issues of human consent or privacy. The idea of an omnipresent computer seemed really cool, and the notion of strapping it to a pair of eyeglasses was almost playful. The fact that it had no discernible use case was irrelevant. This was a sophisticated new toy whose value would eventually reveal itself – or so the company thought. If the Google designers ever did stop to consider the very human violations of the product, they must have assumed its ultimate benefits would justify them.
In today’s innovation economy, it is easy to treat technology as a game. We often play around with it for the fun of advancement, without giving proper weight to its human consequences or to the needs it was meant to fulfill in the first place. We forget that on its own, technology has no inherent worth; its value comes from the impact it has on the lives of its users. Humanity should therefore be the essence of every product we produce. What if, instead of simply spending billions of dollars to put computers in schools, we also invested in the materials and resources – including the teachers who work with the computers – to make this technology of actual use to students? What if, rather than mindlessly incorporating the most advanced technologies into our consumer tech, we thought critically about the very human tendencies and biases embedded in their development? What if we built technologies that were meaningful extensions of us, rather than novelties for their own sake?
Critical thinking feels almost revolutionary in the context of technological advancement, but it’s just what we need. When companies innovate, they should be considering context. When their technologies miss the mark or fail, it should not be at the expense of human consumers. And when they develop new products, they should start by asking that most fundamental question: What human problem are we trying to solve?