
“We have to decide what we want the world to be and we have to build it”

Professor Genevieve Bell on ‘Managing the Machines’

3 August 2018 - Professor Genevieve Bell is frequently asked whether she takes an optimistic or pessimistic view of the future. Her answer is always the same: she thinks of the future only in terms of how to build it. This is effectively her job description as leader of the 3A Institute at the Australian National University. In her July 2018 talk at the Trinity Long Room Hub on ‘Managing the Machines’, Professor Bell spoke of the urgency of the task she and her team have undertaken: building a new applied science around the management of artificial intelligence, data, technology and their impact on humanity. This is the applied science we need not just to manage but to shape the incoming wave of technological change: “the Internet of Things turbo-charged by Artificial Intelligence”.


‘The notion of what it means to be human in the 21st century is one of the critical questions we have to answer, not just in the humanities but across all the disciplines; I think it’s a question that should echo through the humanities, the social sciences, the sciences and engineering too, since it is in some ways what we have to think about in universities and beyond.’

To explain her particular, multidisciplinary approach to that task, Professor Bell first spoke about her own background: from childhood years on the field sites of her anthropologist mother in the Australian outback, to a PhD and ultimately a Professorship in anthropology at Stanford, followed by almost 20 years with the technology company Intel, which decided in 1998 that it needed anthropologists and social scientists to help explain the world and offered Professor Bell “an opportunity to be a social scientist in a company that was building technology for the future, an opportunity to make an impact at scale.”

In 2016 Professor Bell was invited by the Australian National University to create an institution in which she could build new knowledge, experiment with new relationships between universities and others, and explore new ways of transmitting knowledge. Recognising another opportunity to make an impact at scale by influencing future technology, Professor Bell agreed to lead the new 3A Institute after defining its brief: nothing less than the development of a new applied science.

To explain why this had to be the brief, she turned to the well-known chart of technological progress published by the World Economic Forum in 2016. The chart highlights that we are now at a moment in time comparable to the three previous seismic changes in technological infrastructure over the last 250 years.

However, analysing the chart from a social science and humanities perspective, Professor Bell pointed out that it communicates nothing about the profound social and cultural shifts under way at each of those key moments, or about what the world’s major institutions did to harness and control the new technologies. She asked: “What can we learn from what happened with those first three waves to help us inform our response to the fourth?”

The first wave of industrialisation brought mechanisation, water power and steam power, and generated the new applied science of engineering, as taught in the first school of engineering, the École Polytechnique in Paris, established in 1794 and rapidly followed by others.

The second wave of industrialisation brought mass production, the assembly line, electrification and the need to manage money and capital. This resulted in the emergence of capitalism and of new institutions, spearheaded by the Wharton School of Business (est. 1881): set up by the University of Pennsylvania with funding from industry, its faculty rapidly created tools for managing money and economies that are still in use today – for instance, GDP as a measure of a nation’s economic health.

The third wave of technology, post-WWII, was the advent of computers and automation. By the mid-1960s the US government had a problem with over-reliance on proprietary software, and it asked George Forsythe, a Stanford mathematician, to attempt the creation of an overarching, abstract language for all computers. Forsythe assembled a team of “mostly reformed linguists, mathematicians, philosophers and logicians” and within two years they had developed a ten-page curriculum for computer science, an updated version of which is still the standard today.

So, engineering, business and computer science: three completely different applied sciences, emerging from three completely different technical regimes, with different impulses… completely different roles played by universities in each case… Each starts out incredibly broad in terms of the ideas it draws on, rapidly narrows to a very clear set of theoretical tools and an idea about practice, then is scaled very quickly.

We are now, said Professor Bell, at another critical moment, the “precipice of the next wave of technology”, and we need the new applied science that will enable us to shape and manage this technology in the way that we want. “This Internet of Things super-charged by artificial intelligence… a digital that is no longer as it has been, a computer that doesn’t have prewritten rules, a computer that is somehow moving without reference to traditional software… how do we think about building them, scaling them and doing it securely?... this requires a new applied science.”

Professor Bell assembled a multidisciplinary team at the 3A Institute and they began “multi-sited qualitative fieldwork looking at places that build these things”. The aim, though, was not to come up with solutions but rather to identify the questions that we need to ask: “Framing the questions is more important… we have to move from problem-solving to question-asking”.

Professor Bell and her team have now identified the first five sets of questions that will form the guiding principles of the new applied science. The first three are contained within the name of the 3A Institute: Autonomy, Agency and Assurance.

‘Autonomous’ is what we call these new systems, but Professor Bell questioned the use of this “deeply freighted” term, which has a history in philosophy and, in Western culture, is associated with a very particular imaginative chain “that begins with the Golem, goes live with Frankenstein, on to the Terminator – and it usually doesn’t end well, the robots come alive and then they kill us.” Even the meaning of ‘autonomous’ as applied to technology has not been clearly defined: for instance, Volvo’s autonomous vehicle technology is based on a flat structure that operates in a completely different way to Tesla’s more hierarchical structure.

If multiple people are using the word ‘autonomous’ and building different things underneath it, how do we regulate it?... Do we need to get to a shared understanding and standardised technical buildout... and how do those objects move across national boundaries? And how and to whom do we signal that a technology is autonomous?

The social science term ‘agency’ covers limits, controls and boundaries.

If an object can act without reference to a pre-written rule set, some form of autonomous action, what are the limits on that action and how are they imposed…Who determines the rules and how are they litigated, in both the legal and cultural sense? Do those rules sit on the object – or somewhere external to the object?

Professor Bell highlighted just one example: in Australia, one of the great concerns with autonomous vehicles is that they should give way to emergency vehicles speeding towards bush fires. An override system is possible, but it needs to be imagined, along with the parameters for who uses it, how and when. Again, Professor Bell emphasised that we need to think now about the completely new and different rule sets that we will require, especially since: “we know that… some of those rule sets are built in commercial companies and not government enterprises, some are built for consumers not citizens, many are built for certain kinds of citizens and not others – how do we imagine all of that and what would it mean to give objects and systems limited forms of agency?”

‘Assurance’ deals with questions of safety, security, risk, trust, explicability and liability, all especially important in the tightly controlled and regulated spaces where AI has obvious applications: medicine, the law, finance and the military.

Professor Bell pointed to two recent official publications that begin to address some of these questions. A 2017 White Paper by the German government on autonomous vehicles envisages, among other things, that the federal government is responsible for creating a “shared and easily promulgated rule set that everyone can agree to”. While Professor Bell described some of the paper’s suggestions as “interesting first stakes in the ground”, she pointed out that any shared rule set needs to be absolutely explicit and without ambiguity, and that we also need to consider situations where autonomous vehicles are obeying the rules while humans disregard them (just as humans may ignore a red light at a pedestrian crossing if there is no traffic).

The recent GDPR legislation from the European Union includes a provision on the ‘right to explanation’, which Professor Bell described as a huge challenge in the AI space, because “Most algorithms, particularly anything that relies on deep learning or machine learning in the unsupervised space, have a very hard time explaining how they got to the answer… so [we need to think about] how these systems will explain what they are doing in a manner that is transparent to citizens, regulatory bodies and anybody else that is a piece in the puzzle.”

The fourth set of questions, said Professor Bell, relates to metrics: are productivity and efficiency, the metrics that were established in the early years of the Industrial Revolution and have dominated ever since, the right metrics for cyber-physical technology?

Two important issues we need to consider are environmental sustainability and cultural differences.

Technological objects are highly energy-intensive: already, upwards of 10% of the world’s energy is spent on server farms, and even if individual technologies become more energy-efficient over time, their proliferation means the cumulative energy requirement is likely to remain high. “What if one of the metrics that we set in place was that actually sometimes it’s better to use humans rather than a piece of technology because the energy burden is less? Or what if we said these objects actually need to be energy-efficient… How would we think about cyber-physical systems where the key metric was sustainability and we built that in from the beginning?”

Differing cultural views on data are “an issue we’re already seeing play out in New Zealand and Canada where indigenous ideas about data ownership might mean these systems proceed completely differently”.

The fifth set of questions relates to the nature of human/computer interaction: although HCI has been discussed since the dawn of the computer age, it is, said Professor Bell, “something completely different when you imagine these objects may live in you or around you – and may not care about you at all.” But once again, as with all the questions the 3A Institute is addressing, Professor Bell proposed that rather than automatically applying answers, standards or metrics based on our experiences so far, or reacting in a defensive way, we should instead “get beyond the classic… paradigm and start thinking about something else: what would it mean to rewrite this?”

What would it mean to think about systems that were nurturing, caring – the robots that don’t want to kill us? …also begin to think about how those systems function and what their relationship is to each other, rather than imagining it is all…a contestation.

The 3A Institute is developing a curriculum around these key questions, which it will begin testing in 2019 with its first cohort of graduate students. In her conclusion Professor Bell emphasised the urgency of the 3A Institute’s work:

AI at the moment is a form of magical thinking – we imagine it’s going to fix everything… But if humans can’t fix it, the machines that we build won’t get it done either… [We] actually have to put the time into working out how to work through and with these other systems. We have to decide what we want the world to be and we have to build it. And we build it in universities by educating people, but we also build it by taking it outside the universities and making it count for something.

Professor Genevieve Bell is Director of the 3A Institute, Florence Violet McKenzie Chair and a Distinguished Professor at the Australian National University.

A cultural anthropologist, technologist and futurist, she is best known for her work at the intersection of cultural practice and technology development.

‘Managing the Machines’ was the inaugural talk of a new public lecture series on ‘What does it mean to be Human?’, jointly organised by the Trinity Long Room Hub and Trinity Research and Innovation. In this high-profile lecture series, experts from enterprise and education will reflect on what it means to be human at the turn of the 21st century in light of the growing influence of technology on the way we live, work and communicate.

Contact: Niamh O'Flynn, Communications Officer | Trinity Long Room Hub | noflynn@tcd.ie | 01 896 3895
