Does AI Threaten the Human Future? – Jason Thacker

Everyone’s been trying to get up to speed on artificial intelligence (AI) and its role in our society since OpenAI’s public release of ChatGPT in November 2022. ChatGPT became the fastest-growing consumer application in history, amassing over 1 million users by December 2022, 100 million by January 2023, and over 200 million as of August 2023.

Scholars and practitioners have been thinking for years about how these technologies are shaping us as human beings and altering our perception of the world around us. The Age of AI and Our Human Future is a brief yet helpful nontechnical introduction to AI, its role in our communities today, and where we might be headed as a society in the future.

This volume has three coauthors: Henry Kissinger, former secretary of state; Eric Schmidt, former CEO and chairman of Google; and Daniel Huttenlocher, the founding dean of MIT’s Schwarzman College of Computing.

The goal of this book is to empower readers with “a template with which they can decide for themselves what that future should be.” Ultimately, they argue, “Humans still control [AI]. We must shape it with our values” (6).

AI Is Already Implemented

AI refers to “machines that can perform tasks that require human-level intelligence” (14). It’s a class of technology that “augurs a revolution in human affairs” (14).

The book was written in 2021, before the current cultural fascination with AI, so it avoids much of the trendy discourse. Even then, AI wasn’t a future phenomenon but a present reality that was altering human perception of the world. The authors highlight the rise of Generative Pre-trained Transformers (GPTs) and how these technologies may radically alter our society. It’s clear the questions surrounding AI didn’t arise in a vacuum over the last year.

According to the authors, advancements in AI are inevitable, but the final destination isn’t. They argue that “attempts to halt its development will merely cede the future to the element of humanity courageous enough to face the implications of its own inventiveness” (15).

This outlook on the future of AI seems ominous and fatalistic. However, the book reminds readers that it’s important to think through these tools now in order to plan for the real questions of their development, deployment, and use.

Philosophical Foundations

Debating the ethics of AI is inherently philosophical. The fundamental questions aren’t about physics or chemistry but about meaning and purpose. Thus, the authors include philosophical as well as technological discussions. Here, however, they seem to be on shaky ground.

Beginning from a naturalistic worldview, the book traces a philosophical history with little consideration of transcendence or purpose. The very concepts that seem to inspire scientific and technological development are left out, as if human invention and exploration of the natural world were simply givens.

The authors sidestep the discussion of religion proper to begin their philosophical exploration with ancient philosophers such as Plato, Pythagoras, Aristotle, and Lucretius. When monotheistic religions are discussed, it’s as if Judaism and Christianity were disruptions of the classical quest to know the world through autonomous human reason. The authors take comfort that this religious blip on the radar was, in their telling, corrected by the Protestant Reformation, which they connect to a return to normalcy in the Age of Reason.

Such an emphasis on autonomous human reason seems to read modern concepts back into the ancient texts. It also cuts the authors off from sources of moral thinking that might move their arguments beyond utilitarian grounds. They’re too busy asking what we can do with AI to consider what AI is good for.

What Does It Mean to Be Human?

The rise of AI forces us to consider what it means to be human. The Age of AI addresses the significant question of “whether there is a form of logic that humans have not achieved or cannot achieve, exploring aspects of reality we have never known and may never directly know” (16). For the authors, the advancement of AI could mark a positive step change in human civilization.

The Enlightenment disrupted “the established monopoly on information,” ushering in a new era of human civilization (19). Likewise, they argue, the age of AI can bring about an even greater transformation as these tools perceive aspects of reality and perform tasks beyond human abilities and expertise.

Here again, the authors’ naturalistic, human-centered worldview is on display. As they discuss humanity, it’s clear they regard human logic as (nearly) ultimate. They also seem to treat the material aspects of human existence as supreme.

These presuppositions may limit how far readers can accept their analysis. However, the authors are correct to note these tools aren’t neutral instruments or machines but “will change humans and the environment in which they live” (26).

What Remains to Be Decided?

Dealing with AI is inevitable. There’s no question that social media, web search, shopping, and navigation apps already use AI. As a society, we have “without significant fanfare . . . or even visibility . . . integrat[ed] nonhuman intelligence into the basic fabric of human activity” (94).

This has come at a cost. In many ways, new technologies make our societies more brittle. As the authors note, “A central paradox of our digital age is that the greater a society’s digital capacity, the more vulnerable it becomes” (153). Smaller interferences now produce much greater effects. This will only become more pronounced in the coming years as the price of these technologies decreases and their usability increases.

As a result, some of the most pressing debates surround determining the values these technologies are designed with and deciding “who operates and defines limits on these processes” (109). To use Shoshana Zuboff’s framing from The Age of Surveillance Capitalism, we need to be asking who knows, who decides, and who decides who decides.

Christian Engagement

Any answer to these vital questions necessarily reflects value judgments and philosophical ideals. That’s why Christians must do more than passively observe developments, critiquing from the sidelines.

Christians need to be actively engaged in discussions about the limitations and applications of AI. Though Scripture and tradition don’t address computers directly, both offer rich resources for explaining human nature and for arguing for strategies more likely to encourage something beyond a mere “ethic of human preservation” (176).

Christianity reminds us of our rightly ordered, fixed nature as God’s image-bearers and of our desperate need for him. The good news is that God designed us and calls us to recognize true reality grounded in his love—not simply in our own understanding—and to love him and our neighbor above all else (Matt. 22:37–39).

The pursuit of complete human autonomy is a failed and shortsighted project. The powerful nature of these AI technologies is a constant reminder of our limits in light of the infinite God who created and knows all things.

The Age of AI offers a helpful window into some of the prominent questions about AI and the human future. However, readers will need to engage thoughtfully, given the book’s deficient underlying worldview and its assumption that technological acceptance is inevitable.
