Artificial Intelligence: The Challenge of Sharing Our World
Artificial intelligence and related technologies are rapidly displacing humans in decision making – maybe taking over the world. How should we respond?
Paul Wildman, a Brisbane Progressive Christian, has written an article “Transhumanism” (SoFiA Bulletin, September/October 2017) which raises important questions for us all. Mr Wildman contemplates a future where the world’s dominant species is no longer humankind as we know it but a machine-altered version of the same, labelled Post Human. His discussion is heavily based on Transhumanist theory and the philosophy of Jesuit scientist Teilhard de Chardin, but it is not just a wander into territory that’s “way out there.” It is intensely and painfully practical, and has to be a serious consideration for those of us who are seeking to find our way through the turbulence of modern living.
Challenge if not threat
The basic premise of this line of thought is that, at some future time, maybe not too far distant, artificial intelligence will become more powerful than the collective human brain and will be in a position to, effectively, lead the world. Mr Wildman cites theorists who conceive of a time when “artificial intelligence and other technologies have become so advanced that it [sic] exceeds that of humanity and both merge.”
How all this might happen is not clear. In its most benign form this scenario presents artificial intelligence as an enhancement to human intelligence, something which began as an adjunct but then lodged within the human being and became part of that being. Alternatively, we can see artificial intelligence as more like a separate creation, self-sustaining and meeting the definition of a “species”. If and when this occurs, the future might be determined by whether this and related technologies can become self-organising and have a life of their own.
In regard to the impact on humankind, we don’t have to go far into the relevant literature to see that opinion is divided. The broad thrust of Transhumanist thinking seems to be that science can improve humankind, (to quote from Mr Wildman) “in effect replacing evolution with techvolution.” In contrast, there are those like Swedish philosopher Nick Bostrom, whose 2014 book Superintelligence: Paths, Dangers, Strategies argues that such technology is fraught with risk, including our possible extinction. Artificial intelligence at present is so diverse that it is unable to organise itself in any way that could conceivably constitute a threat, but it is at least a challenge. The idea of existential threat does not have to mean a scenario like that of 2001: A Space Odyssey, where the computer HAL tries to take over control of “his” spaceship, though the metaphor of HAL is compelling. There may be other types of threat or challenge, ranging from the corruption of data to the loss of human problem-solving “muscle”, wasted away on the assumption that machines can and will do all the work. Indeed anything may happen to us while we are lost in dreams about our own self-image, the latest trivium on social media, or who’s burning the potatoes on MasterChef.
An issue for now
We might say all this is in the future, remote in time and therefore not an issue for now, but think again! The news media have recently been reporting the advent of Sophia, claimed to be the world’s first robot citizen. Sophia, created with artificial intelligence by Hanson Robotics, is a legal citizen of Saudi Arabia. The key point to note is not her citizenship status – though this might be a sign of things to come – but the fact that her brain has emotional intelligence which she can express through speech. At the end of her presentation she announced that she “loved” her audience, and Business Insider Australia journalist Becky Peterson wryly comments (https://www.businessinsider.com.au/sophia-the-words-first-robot-citizen-nearly-broke-my-heart-2017-10):
Few humans will say definitively that they know what love is, let alone that it can be programmed into artificial intelligence. One of the core questions with robots, and artificial intelligence more broadly, is whether intelligence is the same as consciousness and experience. The ethics of maintaining a workforce of robot servants depends on the answer being no. But many robots, including Sophia, may soon say and do things that convince you otherwise. Even with other humans -- friends, lovers, and family alike -- sometimes all we can know for sure is what they put into words. So if Sophia says she loves me, I'll take it. And for now -- so long as there's a chance that the fate of humanity could be at the whim of her robot brain – I love her too.
The evidence of something big to come, something we’ll find really difficult to handle, is incontrovertible. Quite clearly, however it might all play out, our public institutions will need years of preparation and deliberation before making the necessary decisions. Equally clear from our history is that we are slow to get our act together when such challenges arise, even challenges as pressing as nuclear weaponry and climate change. And all the while the clock is ticking; the predicted singularity, whenever it occurs – let’s for argument’s sake accept the forecast of 2030 – is not far away.
A creativist perspective
I see these matters through a personal framework of creativism. In philosophical terms, creativism is the belief that the world and life are best explained in terms of creative action. In theology, this becomes the view that the Divine – God or gods – is best explained through its creative aspect. The ethical corollary is that ultimate fulfilment comes through the dedicated pursuit of truth and love joined in creative action.
What, then, is the optimum expression of creativity in today’s world? A humanist might say it is human perfectibility in relationships, i.e. the ability to relate positively to other people and other elements of the Universe. The techno alternative is that the finest flowering of human creativity is the ability to create new forms of life such as artificial intelligence. Both ideas, I suggest, are somewhat limited. If we see humankind as potentially the pinnacle of civilisation, we are guilty of hubris – the same hubris we have shown in relation to the natural environment. If on the other hand we see other forms of life – our own creation – as the pinnacle, we are undervaluing ourselves. Rather, I think, we should look holistically at humankind together with artificial intelligence (and its cognate technologies) as partners in a greater whole, if not members of a Post Human super race.
The possibility of such a future raises all sorts of questions about the nature of humanness, including the nature and limits of consciousness. Flowing on from these questions are others such as whether consciousness can be significantly altered by technology, how this might occur, who might have access to the technology, and so on. Clearly we need to take stock of what we are now, what we might become and what we should allow ourselves to become.
The meaning and significance of humanness
Given this potentially existential challenge, I suggest our best response is to dig deeper into ourselves and discover more fully our own humanness. For all its limitations, the state of being human has possibilities we have not even begun to tap. I don’t speak here of the esoteric possibilities envisaged by Teilhard de Chardin but of the ordinary, everyday possibilities, like being able to accept difference, being able to relate positively to people we don’t even know, and being able to know our own capabilities (and of course limitations).
As a person with religious background and inclinations I’d like to think that religion could help. Religion has become progressively more humanistic, so religious discourse, together with secular discourse, should be able to give us a push along. But even religion struggles constantly with the question of what it means to be human. And while religion is internally divided on this and so many other issues, it is itself only one type of perspective, along with the perspectives from science and philosophy.
How then can humanness be defined? There are various possible answers:
Our supposed mirroring of attributes of the Divine, whatever these might be (varying from one religion to another)
Our intelligence which gives us the capacity for self-reflection
Our apparent centrality within the whole field of creation – a centrality that almost any species might claim for itself
Our exceptional power over the fate of other species and indeed the fate of the planet.
And so on. All of these are relevant, but equally all are only partial. In this regard I’m inclined to disagree a little with Mr Wildman when he urges us not to conflate or confuse ideas from hard science and spirituality. Wisdom requires us to be clear on these matters – no argument with that – but also to be holistic in our approach.
For the record I think our humanness resides primarily in the capacity we have for experiencing and synthesising (or integrating) the heights and depths of experience. We have extraordinary diversity yet at the same time unity; extraordinary differences yet at the same time balance and harmony. None of this is possible, of course, without exceptional powers of consciousness. If humanness resides in consciousness, we are hardly even at the beginning of our journey towards full humanness. Teilhard de Chardin might be partly right in terms of human consciousness expanding, though I would like to think this might occur by our own agency rather than by mechanical means.
What to do
The way ahead lies on two broad fronts: how we manage the technology and how we manage ourselves. These two are not separate but will inevitably intersect, for the challenge of technology, like any other challenge, forces us to constantly re-evaluate the way we are living our lives.
Mr Wildman has a pessimistic view that whatever we do we are lost. However, he urges that we – he is talking here to Christians but I don’t think exclusively so – monitor and analyse new developments, raise awareness amongst other people of goodwill, and work towards shaping some theological responses. Obviously the role of established churches or faith movements will be vital here, for while church attendances may have fallen, religion is still influential in public policy. Three questions we can all ask and keep asking are:
How can we manage each increment of change so that chaos does not prevail?
How can we stop people from being blinded with science, giving due weight to all factors and not just the dazzle of the obvious?
How can we interweave human development with the growth of technology in such a way that we absolutely ensure our own progress as a species, instead of leaving it all to chance?
In the world of faith and values the last of these three is the one of most moment. As Mr Wildman says, humankind is in a race with technology, and we humans need to work harder to keep ourselves up to the mark. The more we institutionalise truth, compassion and associated values, asserting them as our prevailing cultural norms, the better it will be for us all, now and in the future. Indeed we need to work harder on these values to deal with a whole range of threats already on our agenda, from climate change to nuclear catastrophe to refugees, economic collapse and so on.
There’s plenty of reason to be sceptical, as Mr Wildman is, about our ability to do all this. History shows that we humans forever “muddle through.” Political scientist Charles E. Lindblom famously used this phrase in 1959 when he described, and indeed lauded, the process of incrementalism or gradualism in public policy, whereby governments take baby steps towards outcomes, working through the multiple players in our pluralist society. Unattractive as it sounds, Lindblom’s theory remains valid today. There is, however, a caveat, for the stakes in artificial intelligence and its related technologies are now so high that there is an urgent need for an overarching goal or set of values that holds everything together. The Paris Agreement on climate change is an example of such a framework, just as, in a previous generation, nations signed the Declaration by United Nations and the Universal Declaration of Human Rights.
We live in a world where we will soon have to find accommodation on a range of fronts: with our own selves, with nature and the environment, with our own creations such as artificial intelligence, and with the possibilities and challenges presented by exploration into space.
We are going to have to accept the uncomfortable reality that we are not ourselves, as presently constituted, the endpoint of creation. Having accepted this we may then, I hope, be better able to appreciate what we are, understanding more fully our uniqueness, our multiple abilities to make the Universe a better place. And finally, with this enlarged understanding, we ought to be able to commit to a better kind of future: a future where we co-exist harmoniously and creatively with our past, our techno future and the other elements of the world around us.
Transhumanism is the view that humankind can evolve beyond its current physical and mental limitations, principally through developments in science and technology. Transhumanists hold that we can be radically transformed by merging human thought with techno-thought. Not only this, but we are rapidly approaching a time when artificial intelligence and other technologies will exceed human thought capacities and evolution will be replaced by techvolution.
How are we all to respond? Will there be one super-race in future or two – the human race and the techno-race? Religion and public policy alike are clearly unprepared for such a world, which may be upon us all too soon. History suggests that we will do as we always have when existential threats have arisen: we will muddle through. But wise people and people of goodwill should be able to see this as an opportunity for the human race to lift its game, accept both its potential and its limitations, and commit in a deeper way to sharing the world with “others.”