Complexity, Information & AI


Thank you very much, it's a great pleasure to be here. Good morning, and thanks for showing up. I think we have a very exciting topic, and I'm actually going to argue for something that is somewhat unusual, I think, for computer science circles: how do we create more complexity? We would ultimately like to create intelligent machines, and the paradigm we use is machine learning, but where will all the complex interactions come from that enable us to train such machines to become intelligent? I'm going to argue that it is multi-agent learning that we need: we need to put many of these machines into a box and let them interact, just as we learn by interacting together.

So before we think about how to solve intelligence, if you like: what is intelligence? I like to refer to the definition by my colleague Shane Legg, who has argued that intelligence measures an agent's ability to achieve goals in a wide range of environments, to solve many different problems. He also has a formula that quantifies this, but I'm not going to go into the details here. If you think about this definition, and if you think about the learning-based approach to intelligence, then clearly the question comes up: where do the learning environments in which these agents act come from? But first, let's look at the
agent and the world. On the left here you see a learning agent, a neural network, and on the right you see the world. Typically we work in the reinforcement learning framework: the agent observes the world and takes actions, and then also receives a reward depending on the outcome of that interaction over time. That's the reinforcement learning paradigm, and the agent needs to discover policies, ways of behaving, in order to maximize long-term rewards.

Here you see some examples of environments that have been created by DeepMind. This is called DeepMind Lab, where designers have created specific environments in which these agents, which observe the world, can interact and learn how to do things: how to navigate mazes, how to find treasure, how to solve tasks, and so on. But you see, this is just not scalable, because we can't just continue developing smarter algorithms and at the same time create more and more complex worlds, every single one of these scenarios basically a minigame. We'd have to hire all the games developers in the world to do this. So the proposition is: let's put many, many agents into one environment, so that they can pose problems to each other while they're learning. That's basically an ecosystem of multi-agent learning, and I'll give
you two examples of where this has been done. One of them is AlphaZero, which is of course the successor of AlphaGo and AlphaGo Zero, the systems that beat the strongest players, first at the game of Go and then, with AlphaZero, at the game of chess. This is in some sense the simplest version of a multi-agent problem: it's a two-agent problem. Going from one to two can make all the difference, though. What you see here is the basic architecture: there's a learning agent at the top, and there are two versions of it, one playing black and one playing white, and they play against each other. As they play, initially there's nothing great happening, they're playing randomly, but at some point the agent stumbles upon a victory; by chance, it wins. That's when the magic starts happening, that's when these algorithms learn: they learn to make the better moves, the ones that lead to victories, more likely in their play, and the less good moves less likely. So they become better and better over time. We tested the system, AlphaZero, against Stockfish, which is a very, very strong chess engine that has been engineered based mostly on search and a hand-designed evaluation function. It turned out that AlphaZero, the system that has learned just by playing against itself, millions of games mind you, can beat Stockfish many more times than Stockfish can beat it. Of course chess is a drawish game, so there are a lot of draws at that high level of play.
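As an illustration only (this is not AlphaZero, which uses deep networks and Monte Carlo tree search), the self-play idea can be sketched on a toy game: one shared policy plays both sides, and moves that led to a win are made more likely.

```python
import random

# A toy sketch of self-play (illustrative only; AlphaZero itself uses deep
# networks and tree search). The game is a tiny Nim: players alternate
# taking 1 or 2 stones from a pile, and whoever takes the last stone wins.
# One shared tabular policy plays both sides; the eventual winner's moves
# are reinforced, the loser's discouraged.

START_STONES = 4
LEARNING_RATE = 0.1

# policy[state] -> weights over the legal actions (take 1 or take 2)
policy = {s: {a: 1.0 for a in (1, 2) if a <= s}
          for s in range(1, START_STONES + 1)}

def choose(state, rng):
    actions, weights = zip(*policy[state].items())
    return rng.choices(actions, weights=weights)[0]

def play_episode(rng):
    """Self-play one game; return the moves made and which player won."""
    state, player, history = START_STONES, 0, []
    while True:
        action = choose(state, rng)
        history.append((player, state, action))
        state -= action
        if state == 0:
            return history, player  # taking the last stone wins
        player = 1 - player

rng = random.Random(0)
for _ in range(5000):
    history, winner = play_episode(rng)
    for player, state, action in history:
        # make the winner's moves more likely, the loser's less likely
        factor = 1 + LEARNING_RATE if player == winner else 1 - LEARNING_RATE
        policy[state][action] *= factor
        total = sum(policy[state].values())
        for a in policy[state]:  # renormalise to keep weights bounded
            policy[state][a] /= total

best_move = {s: max(policy[s], key=policy[s].get) for s in policy}
print(best_move)
```

Nobody tells the policy which moves are good; as in the talk, the only teacher is the opponent created by self-play, and the learned table ends up taking two stones from a pile of two (an immediate win) and one stone from the opening pile of four, which is the optimal move.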
And so even Kasparov was very impressed; I think maybe the greatest compliment he could make to the system was that it plays a bit like himself. Just to summarize this: in eight hours, AlphaZero learns more about the game of chess than humanity has over the past 1500 years, and that's the result of self-play, of two agents playing against each other many, many times, to explore this world and to pose problems to one another. A
second example: Capture the Flag. This is a completely different game, in which two teams compete and need to capture the other team's flag. Here's a little tutorial: on the left you see the agent's perspective, and on the right you see a top-down view. You see these blue guys need to run to the opponent's base, pick up the red flag, and then return to their own base. They score if they can take it to their own base, but only if their own flag is still there and the opponents haven't captured it in the meantime. This is a very complex task, because the agents perceive the world just from their first-person perspective; they need to cooperate with their teammate; they need to find ways to compete against the other team; they need to find those flags and bring them back. To make things more difficult, we did this not just on a fixed map: we did it on a large collection of randomly generated maps, so that every game for them is different and they cannot just learn one map by heart; they need to learn how to explore maps. We also trained them with different teammates and different opponents, to the point that they can also play with humans. In fact, it turned out that humans preferred playing with the trained agents, because they were so skilful and reliable in their behaviors. When we did a test on individual behaviors, we found that these agents that were trained in this way, by playing in these various combinations with and against each other, found some really fascinating strategies. They defend their home base, and they really team up to do that. They do opponent-base camping: they wait in the opponent's base until the flag comes back there, and then they steal it; in the meantime the other agent already goes back to the base to steal the next flag. And finally, they follow their teammate in some scenarios, so that the two agents can work together because they're in the same area of the map. Though I should say, those are my interpretations of that behavior. Finally, these learning agents were really good at what they were trained to do: against a fixed pool of opponents, the learning agents won 74% of their games, whereas strong human players won only 52% of their games. It's still close, so they should be fun matches.
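The training recipe described above (randomly generated maps, varied teammates and opponents drawn from a population) can be sketched in miniature. This is a hypothetical toy, not the actual capture-the-flag system: each agent is a single number, a map is a random target, and losing teams copy a winner's parameters with mutation.

```python
import random
import statistics

# A toy sketch of population-based, multi-agent training (illustrative
# only; the real system trains neural-network agents in a 3D game). Each
# "agent" is one tunable parameter, each "map" is a freshly generated
# target, and a 2-vs-2 match is won by the team whose parameters fit the
# map better. Losers inherit a winner's parameters plus mutation noise,
# so the population improves by playing many different teammates and
# opponents on many different maps.

rng = random.Random(42)
POP_SIZE, ROUNDS, NOISE = 20, 5000, 0.05

population = [rng.uniform(-1.0, 2.0) for _ in range(POP_SIZE)]

def team_score(team, map_target):
    # higher is better: how well this team's parameters match this map
    return -sum(abs(x - map_target) for x in team)

initial_spread = statistics.mean(abs(x - 0.5) for x in population)

for _ in range(ROUNDS):
    map_target = rng.random()             # a freshly "generated" map
    ids = rng.sample(range(POP_SIZE), 4)  # two random teams of two
    team_a, team_b = ids[:2], ids[2:]
    score_a = team_score([population[i] for i in team_a], map_target)
    score_b = team_score([population[i] for i in team_b], map_target)
    winners, losers = (team_a, team_b) if score_a >= score_b else (team_b, team_a)
    for w, l in zip(winners, losers):
        # evolutionary update: losers copy a winner's parameters, mutated
        population[l] = population[w] + rng.gauss(0.0, NOISE)

final_spread = statistics.mean(abs(x - 0.5) for x in population)
print(round(initial_spread, 3), round(final_spread, 3))
```

The point of the sketch is the same as in the talk: no single fixed opponent is ever specified, yet the population as a whole drifts toward parameters that play well against whatever teammates, opponents, and maps it is likely to meet.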
Let me conclude. We believe that reinforcement learning is really a phenomenal paradigm for studying artificial intelligence, for creating artificial intelligence, but we need two things for it. If we just use the standard paradigm, we need to develop stronger and stronger learning agents, neural networks if you like, that can learn how to behave well, and we need to develop more and more complex environments, almost like a school curriculum: we need to build these worlds that guide our agents and enable them to learn more and more things. But that's not scalable, and what I'm arguing here is that, just as evolution created us in a multi-agent process, we need to put these learning agents together, so that as they learn together they pose problems to one another, just like one chess player did to the other in the AlphaZero setting, or one team of Capture the Flag players did for the other: by changing their tactics, they challenged the other team to invent new things, to be innovative, to find new solutions. We've shown this actually works in practice, in such diverse scenarios as chess, Go and so on, but also in video games like Capture the Flag. So in the future, where can we take this? We hope to build more and more complex worlds in which these agents learn to interact, solve more challenging tasks, maybe find niches and specialize, and where we can also understand how initially selfish agents can get together and cooperate. Thank you very much.

Thank you very much. So, Thore
wants more complexity; I'm wondering whether our next speaker, Eric, wants less. Eric is a professor at the Blavatnik School of Government and executive director of the Institute for New Economic Thinking at the University of Oxford. Welcome, Eric, to the stage.

Good morning. As an economist, I have to admit I'm feeling a little bit out of my element at an AI conference, but I'll note that in college I wrote a Lisp program that could play a pretty decent game of Canasta. Things have moved on a bit since the 1980s, when I last delved into this world, but I study the economy as an evolutionary complex system of networks of intelligent agents, and so I'll talk about how that might be relevant to this topic. This is a nice picture of
some weaverbirds doing a pretty intelligent thing: building a complex nest. Now, we know that the natural environment is complex, and creatures need strategies for surviving in that complex environment. Evolution works through a process of niche creation and niche filling: as the niches that allow simple strategies get filled up and more competitive, this creates pressure to create more complex strategies, which in turn create new evolutionary niches. But complex strategies require different forms of intelligence to manage them. So this creates a bootstrapping dynamic where intelligence emerges from complexity, but intelligence also creates its own complexity, particularly when you get to social creatures, like these chimps having a meeting or whatnot. The environment these chimps live in is both a physical environment and a social environment, and they need to navigate both to survive and reproduce. But the complex social environment is self-created by the chimps themselves and their own intelligence, and co-evolved with them. Things get really interesting when survival is not just an individual matter but also a function of your group: then there's really huge evolutionary pressure for complex social structures that promote group cooperation, and then for the intelligence needed to create and navigate them. Researchers think that human intelligence evolved out of an arms race between our brains and our complex social structures. Now we can see
these kinds of dynamics in a very abstract way in a simulated environment. This is a classic study from the 1990s of agents playing the prisoner's dilemma on a lattice. The way it works is: you live or die depending on your score in the prisoner's dilemma, and if you live, you reproduce into the cells of your neighbors who died.
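A much-simplified sketch of such a lattice game might look like this; note this uses fixed cooperate/defect strategies and imitate-the-best reproduction, rather than the evolving, memory-growing strategies of the actual study.

```python
import random

# Spatial prisoner's dilemma on a torus (a simplified sketch, not the
# original study's genetic-algorithm model). Each site cooperates (1) or
# defects (0), scores against its four neighbors, and is then taken over
# by the strategy of its best-scoring neighbor: the "live, die,
# reproduce into your neighbors" step described above.

N, GENERATIONS = 20, 30
T, R, P, S = 1.9, 1.0, 0.0, 0.0   # temptation, reward, punishment, sucker

rng = random.Random(1)
grid = [[rng.randint(0, 1) for _ in range(N)] for _ in range(N)]

def payoff(me, other):
    if me and other:
        return R
    if me and not other:
        return S
    if not me and other:
        return T
    return P

def neighbors(i, j):
    # four nearest neighbors with wrap-around edges
    return [((i - 1) % N, j), ((i + 1) % N, j),
            (i, (j - 1) % N), (i, (j + 1) % N)]

history = [sum(map(sum, grid)) / (N * N)]   # fraction of cooperators
for _ in range(GENERATIONS):
    score = [[sum(payoff(grid[i][j], grid[a][b]) for a, b in neighbors(i, j))
              for j in range(N)] for i in range(N)]
    new_grid = []
    for i in range(N):
        row = []
        for j in range(N):
            # each site is replaced by its best-scoring neighbor, itself
            # included: death and reproduction in one step
            best = max([(i, j)] + neighbors(i, j),
                       key=lambda c: score[c[0]][c[1]])
            row.append(grid[best[0]][best[1]])
        new_grid.append(row)
    grid = new_grid
    history.append(sum(map(sum, grid)) / (N * N))

print(history[0], history[-1])
```

Even this stripped-down version shows clusters of cooperators forming and persisting against defectors; the study discussed here goes much further by letting the strategies themselves grow in complexity.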
You start with random strategies for the prisoner's dilemma, but then there's a genetic algorithm that allows the creatures to search for new strategies, and, critically, also allows their memory to grow over time. In this very simple evolutionary game, you see the emergence of more and more complex strategies over time, and you even see regimes where a strategy comes up and dominates for a while, and then a new strategy develops, the old strategy dies off, and a new one comes in. So you see this process of niche creation and niche filling playing out over time. And you not only get the emergence of complex strategies from this very simple game, but you even get the emergence of cooperation, where agents will band together to play the game cooperatively and then keep away agents that are defecting. So from a relatively simple game, you can see this bootstrapping dynamic of complexity and more and more complex strategies. Now we can think, very much as Thore described in his
talk, about new ways of developing learning in artificial systems. The traditional approach of AI machine learning is to sample a complex space: we take data from the world and we train an algorithm on that data to produce some result. As DeepMind has shown with its amazing AlphaGo Zero and other experiments, we can also have these learning games: you have two agents playing each other in a game, learning the rules of the game, exploring the strategy space, and learning very effective strategies. But we can also imagine another approach, where we start with a population of heterogeneous agents, and we allow them to play in a set of games in the strategy space, but we also allow that strategy space to evolve and create more and more complex games and more and more complex strategies, again in this evolutionary bootstrapping dynamic. Just like the slide with the prisoner's dilemma, that will create complex emergent dynamics, which then feed back into the agents as they learn from their world and their environment. Now, we can be inspired by this natural process and use
technologies like agent-based modeling to model this in social systems. In our work at Oxford, we use the economy as a lab for this kind of approach. One can think of the economy as a set of complex games within complex games that are self-created by all of us: the economy is a set of social structures that emerges out of our ideas, our heads, our strategies, and we create the rules of the game in economic systems. This is a paper where we did some work with the Bank of England, simulating the housing market in the UK. Housing was a huge factor in the financial bubble and crash of 2008, so we built a fairly realistic simulation of the UK housing market, using lots of data on households and banks and real estate and so on, and then we created a set of agents that could explore this model and help the Bank look at new potential policies to reduce the booms and busts in the housing market. It produced some very interesting results. We also used some machine learning to sweep through the model and look for its dynamics, and where it was stable and unstable, but we could also imagine using AIs to play the housing game and look for new strategies and new rules, which could be valuable if you're a real estate investor, or valuable if you're a regulator like the Bank of England. So we can start to create a vision for a
next generation of economic models using these techniques. We again start with a set of agents: they could be adaptive behavioural agents, they could be human players, or they could be AIs. We can then set up in the model the rules of the game: contracts, markets, information flows, and even a physical geography; in the housing example, geography is an important factor. Then, emerging out of these interactions, you get the macro behaviors: economic growth, business cycles, leverage cycles, economic inequality. We're also using these techniques to understand the transition to a zero-carbon economy. Then you can explore these models and explore agent behaviors, both to get a deeper understanding of how these systems work, and also for many public policy applications. Again, the Bank of England model is just a small example of that, but we're looking more generally at how to mitigate financial crises, issues of economic inequality and opportunity, monopoly and market power, innovation and growth, and so on. There may also be a number of private-sector applications: a number of hedge funds are very interested in these ideas, and also retail, e-commerce, manufacturing, and energy. So, just to sum up: we can build realistic worlds, a lot like the video games that Thore showed, but realistic worlds that look like the real world of our social and economic systems, and then use evolving intelligent agents to explore those worlds, develop new strategies, and also develop new rules of the game that can make them better systems for all of us. Thank you.

Thank you very much, Eric. And
so, AI being able to provide new ways of exploring complex systems like economics. Next up we have César Hidalgo. César is the director of the Collective Learning group at MIT, and he is the founder of Datawheel. Welcome, César.

Thank you. So, when Fahrenheit started to
explore the idea of measuring temperature, that was an idea that was not that easy to convey to people. Why? Because at that time the idea of temperature was not a quantitative concept like it is today. People thought that maybe in one room some people were hot and other people were cold, and that temperature, at the end of the day, well, if some people are hot and some people are cold, it means that it's just subjective. But eventually, as they developed this capacity to measure temperature, you know, Fahrenheit invented the mercury thermometer, people started to realize that even though you might feel cold and she might feel hot in the same room, temperature was something that was measurable. My argument is that something similar has happened today to another concept, which is the idea of knowledge. Not knowledge at the individual level, but at the social, collective level. During the last years we've learned a lot about knowledge and about the properties that it has; we have learned to measure it and to understand a little bit about it. The first thing that we know is that it's non-rival. What does that mean? Well, it means that many people can use it without taking the use of it away from other people. A hammer that I'm using to build a birdhouse is a rival good: if I'm using the hammer, you cannot be using that hammer. But the idea of the birdhouse is something that you could be using at the same time as me, without taking it away from me. Also, knowledge is not just simply additive, you know;
it's not just simply extensive. So if we're gonna think about knowledge at the collective level, we have to figure out more interesting ways of aggregating it than just adding it. If I know how to make shirts, I know how to make pants, and then I learn how to make blouses, I didn't increase my knowledge by an entire new unit; there's a lot of redundancy in knowledge, especially between similar, related activities, and hence you cannot simply add it up. Knowledge can be tacit or it can be explicit: there are some things that I can learn through acts of communication, and some things that I cannot learn through acts of communication. We could bring Messi to speak at CogX all day about soccer, and I'm sure that the moment you leave the room you wouldn't be better soccer players, because that type of knowledge is tacit; it's something that gets acquired through experience, in the context of social learning. Knowledge diffusion is also very limited,
you know. But more importantly, knowledge also varies in its degree of complexity. If you have an orchestra in which every musician is very well trained and every instrument is of high quality, but all of the instruments are the same, and every musician only knows how to play that same instrument, well, the complexity is going to be much lower than if I have an orchestra that has sections of brass and winds and percussion and so forth. So one of the things that actually matters is that, when we think about knowledge, we have to figure out the knowledge that allows for a large diversity of things. So the question is: how the heck do we measure this knowledge? We know that it has all of these properties: we cannot simply add it, it is tacit or explicit, it is non-rival. But how do we measure it? And
one way to try to measure knowledge is by looking at the things that places or people know how to do. How do I know that Germany is good at mechanical engineering? Well, because they export a lot of products that are intensive in mechanical engineering, and so forth. So we can grab data on the activities that are present in locations to create very detailed indicators of the knowledge that may be available in them, and we can use this to compare knowledge and to make predictions about the future of these economies. There are a few measures that we developed over the last decade. One of them is the idea of relatedness, and this is the idea that knowledge is not something that you have, or that someone has; it is always something that someone has about something. You don't simply have knowledge; you always have knowledge about something, because knowledge is specific to activities; it's not kind of like this general glue. So if you have, let's say, two cities or two countries that have certain industries present in them, then that is going to imply that some other industries are going to be related, or close by: these are industries that share many of the inputs they need with the industries that are already present. With that idea of relatedness, you can predict what are the activities that a place is going to enter or exit in the future. Here you have an example of the economy of Chile
diversifying between 1979 and 1996, and you see that the products Chile enters during this period are products that are close by, or related to, the products that it was making in the past. That's something that by now we know in the literature to be generally true, and there's a principle at play here; it's kind of like a new law of gravity, but in this case it's a law that rules the activities that economies are going to enter. This is called the principle of relatedness, and it's true at a variety of spatial scales and for a variety of activities. You want to know the areas in which your university is more likely to start publishing in the future? Well, those are areas that are connected to the areas where you publish today. You want to know the industries that your city is more likely to exit in the future? Well, those are the industries that are isolated in this relatedness space. But we also need
measures that are aggregate, that are not just about the knowledge that you have about something, but the knowledge that you have in total, and this is the idea of having measures of complexity. So how do we do this? This relates a little bit to some techniques that are similar to more traditional forms of machine learning, like dimensionality reduction and so forth. What you're gonna say is: well, how do I know if London is more complex as an economy than Barcelona or Tokyo or Manchester or Paris? One way to do that is to say: let's look at the activities that are in London, and let's say that the knowledge complexity of London is proportional to the knowledge complexity of its activities. So I'm gonna say that the knowledge K of a city c is proportional to the knowledge K of the activities p that are present in it. Now, how do I know if, let's say, software development is a complex industry and garbage collection is not? Well, I'm gonna say that complex activities can only survive in the places that have the complexity that they require. So I'm gonna say that the knowledge K, or the knowledge complexity, of an activity p is proportional to that of the places where it is present. If you do that, you get a self-consistent equation, in which the knowledge of a place is a function of the knowledge of the place, and you can actually solve it as an eigenvector problem: you can solve it using eigenvector algebra, or iterations, and you can get a measure of the complexity of a location, and similarly of the complexity of an activity, that doesn't require you to make any a priori assumption about which activities are more sophisticated than others. This is nice because it gives you
like a bunch of bonuses when you've solved that little matrix equation. The first one is that you have something that predicts, at the international level, the level of income of countries and their future economic growth. You now have a measure of the knowledge that a country has, and the countries that have more knowledge per unit of GDP per capita are going to grow faster. So in this framework, the growth of China is something to be expected, because the cost of knowledge in China is very cheap.
is something to be expected because the cost of knowledge in China is very cheap
you know they have a lot of knowledge per unit of GDP per capita it’s not a
matter of labor cost is a matter of knowledge cost also you know you can
explain very well International differences in income inequality it’s
really hard to have a society that have low levels of income inequality you know
when you have the practice structure of Peru you know or Nigeria you know which
is very geared towards extractive resources it’s very hard to have a high
level of income inequality with a practice structure of Sweden or
Switzerland you know there’s a coevolution here at play you know in
which actually you know income inequality is connected to the
complexity of activities but the last thing I’m going to leave you with is a
more recent result, in which we have been trying to understand how the complexity of knowledge relates to the spatial concentration of activities. Why? Because we live in a world in which spatial inequality is one of the most important topics of today. Cities like London are doing great, but other places in the UK are not doing that great. Is this something that only depends on policy? Is this something that is about choices that we made, or are there some sort of natural economic forces that are creating this divide? In this map you see the patenting activity of the United States, and you see that there are some places in the US that patent a lot, like San Jose or New York, Boston and Chicago, but most of the US doesn't patent at all.
So spatial activity, especially innovative activity, is very concentrated. We have known that for a long time; we know it from the work of Maryann Feldman and others. One way to study that spatial concentration is to do a simple scaling analysis, where you ask: what is the population of a city, and how many patents does it produce? You realize that when you do this in a log-log plot, you get a coefficient that is around 5/4, and what that means is that the larger cities do more patents per capita than the smaller cities. But you can grab that and start disaggregating, let's say, those patents or economic activities into different categories, and you realize that the patents in computer hardware and software are much, much more superlinear than the ones on pipes and joints. Patenting in pipes and joints is distributed more or less similarly to population, but patents in computer hardware and software are much more concentrated in large cities. You can do that for patents, you can do that for papers, you can do that for industries, you can do it for occupations, and what you find, across all of them, is that if you take some sort of measure of the knowledge intensity of those activities, or their complexity, you'll find that the more complex an activity is, the more it concentrates in space. So what this is telling us is that, well, if we're moving
faster and faster towards an economy that is very knowledge-intense, our economic activities are going to continue to concentrate in cities, and we're gonna have to start thinking about how we include more people into the cities that we have, instead of trying to develop the places that are left behind, because the cities are where the economic activity that we are doing right now is taking place. So these are just a few lessons of how knowledge is being transformed from a concept that was more ethereal or qualitative into something that is more quantitative, and is helping us understand the future of our economy. Thank you.

César, please stay here,
and Eric and Thore, please join us on stage. So, we've got about half an hour, just under half an hour I think, to try and delve into some of the issues and interests that were highlighted by your presentations, but I hope you're all going to forgive me for a really boring opening question: I feel like we need a local definition of complexity. When Thore was talking, I got a sense that complexity for you means something a little bit akin to cognitive load, so the complexity of the environment and the number and types of interactions within it, whereas complexity seems to mean something maybe a little different in the realm of economics. Is that true?

It's a good starting point. I think in some sense we view complexity as
this useful thing, almost like the kinds of problems that an environment poses to a learning agent, because unless the learning agent gets exposed to interesting problems and learns how to solve them, it can never become more intelligent, if you like. But of course there could also be an overload in complexity. For example, imagine a school curriculum with more difficult topics being layered on top of simpler topics: if you jump in at the wrong level and go to some kind of graduate-level class first, then as a learning agent you will not learn much. So in some sense we would be aiming at a layering of more and more complexity that these agents would be exposed to.

I think there was actually a nice
connection across the three talks, because one way to think about it is that it's a search through some huge combinatorial space. What gives the high cognitive load of chess or Go is that you have this huge space of possibilities. What I was talking about was how evolution searches spaces of possibilities to then create strategies for managing that, which co-evolve. And what César was showing us, with some very nice data, was that the product space of the economy is this huge combinatorial space, and we can think of economic growth as a search through that space of all the possible products and services we could make.

A lot of definitions of complexity hinge on the fact that
systems that are complex can adapt and evolve and have multiple parts, and so forth. The one that I like the most is the one that was proposed by Warren Weaver in a classic paper from the 1940s. He says that science went through three eras. The first era was the science of simplicity, which is when people discovered that you can describe nature using trajectories; so that's basically physics, you know, differential equations and all of that. That science gets into trouble in the 19th century, when all of the craze was not about cannonballs but about heat engines, and they needed to develop thermodynamics and statistics. That's another science that is based on another math, in this case probability and statistics, that allows us to deal with systems that have what he calls disorganized complexity, because these are gases, or things in which, if you change the elements involved, you swap an atom here for an atom there, it's the same: it's a gas, and it's the statistical properties that matter. What he said was happening during the middle of the 20th century was a transition to try to understand systems of organized complexity, and to him these were systems in which the identity of the elements involved and their pattern of interactions cannot be ignored.
So society, biology, are systems of organized complexity; the economy, of course, is a system of organized complexity. If you change one protein type for another protein type, the whole cell might die, or you might get a disease.

So are we agreed it's about the complexity of the search space?

In part, you know...

Yes, I didn't mean to cut you off, but I felt like we might be going down a rabbit hole, and I was trying to increase our common understanding. So thank you for that; now we're all clear, thanks. Let's move on. First off, at least with Thore and Eric, in your talks there seemed to be quite some similarities, in the sense that you're each talking about the co-evolution of intelligence and complexity. Could you speak to where the similarities, and maybe some dissimilarities, in your
approaches are yes so I think I’ve focused mainly on systems with few
agents, where each agent has a higher degree of internal complexity, a higher cognitive capacity if you like. In chess there are two agents, if you like; in these capture-the-flag games there may be four or eight or sixteen agents. For us that is interesting because we're interested in growing the intelligence of each of the agents involved, and so we want them to have the capacity to express more complex strategies, to play chess better; clearly you need some degree of complexity there. I think that's in contrast to what Eric was proposing, where the focus is on looking at more agents that have lower complexity individually, but that can collectively create interesting macro phenomena. And I think there's an interesting interpolation possible between these spaces. We would certainly also be interested in maybe inserting some of our agents with more learning capacity into such simulations, although at some point we might run into some computational limits. But I could imagine, for example, that that could be bridged by creating cohorts of identical agents that represent groups of a population that behave in roughly the same way, but that could then, because of an increased cognitive capacity in the model, maybe come up with more
interesting innovations.

Now, I think that's exactly right; we're sort of working at two ends of the proverbial telescope. We're building models with lots of agents that we're trying to make behaviorally realistic enough to capture key elements of the phenomenon, but that are not nearly as intelligent as the agents that Thore is working with. And these models can be quite large: one of our collaborators, Rob Axtell, built a model of firm evolution that has 120 million agents, one agent for every worker in the US. They're relatively simple agents, but out of that he gets macro dynamics that look very similar to the real-world macro dynamics of firm growth and evolution.
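To make the flavor of such models concrete, here is a toy sketch, emphatically not Axtell's actual model: simple workers either found new firms or join the firm of a randomly chosen worker, which implicitly weights firms by their current size. The worker count, step count, and founding probability are illustrative assumptions; even this crude rule produces a skewed firm-size distribution, a few large firms alongside many small ones.

```python
import random
from collections import Counter

def simulate_firms(n_workers=1000, n_steps=20000, p_found=0.05, seed=0):
    """Toy firm-formation model: each step, one randomly chosen worker
    either founds a new firm or joins the firm of another randomly
    chosen worker (a size-weighted, preferential-attachment-style rule)."""
    rng = random.Random(seed)
    firm_of = list(range(n_workers))  # initially, every worker is a one-person firm
    next_id = n_workers
    for _ in range(n_steps):
        w = rng.randrange(n_workers)
        if rng.random() < p_found:
            firm_of[w] = next_id                            # found a new firm
            next_id += 1
        else:
            firm_of[w] = firm_of[rng.randrange(n_workers)]  # size-weighted join
    return Counter(firm_of)  # firm id -> number of workers

sizes = sorted(simulate_firms().values(), reverse=True)
print(sizes[:5], "...", sizes[-5:])  # a few big firms, a long tail of tiny ones
```

The macro skew emerges purely from the micro copying rule, which is the point of the agent-based approach described above.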
But you could imagine, as we have more and more computational power, you adding more agents and playing other kinds of games with the AI approach, and us adding more intelligent behavior to our large agent systems, and then, as they start to meet in the middle, we get things that actually look like what's in this room and what's out in King's Cross and in the economy. And then we've got this amazing laboratory to be able to explore real-world phenomena with.

Seems like a good time to talk about how much observations of natural intelligence inform AI research, and vice versa. How much are the two fields interacting?
Yeah, at DeepMind we're big fans of that connection, and it constitutes one of the bases of our work. I think roughly the argument is very simple: we have one proof of concept that human-level intelligence is possible; it is the human mind, and in association the human brain. And so if we want to engineer artificial intelligence, it makes every sense to peek over to our friends in neuroscience and work with them to understand what the principles are that make human cognition possible, and which of those principles we can use in creating an artificial agent. That is such a fruitful endeavor, and the art here is to find the right level of abstraction. Does it have to be biological tissue that produces intelligence? Probably not. But if we move a couple of layers up, if we look at the specialization of regions in the brain, then we can find very interesting correspondences, and these have in fact been made, with reinforcement learning and the dopamine system in the brain, or with convolutional networks that were inspired by how the visual cortex works. I think at that level it makes a lot of sense, and a lot can be learned from that, and I'm sure that's true also for the more macro level of systems.

So, Eric of course did talk about agent-based modeling and evolutionary strategies; in yours, are you using machine learning?

Yeah, I do a lot of
applied machine learning I don’t do fundamental research on machine learning
but we do use it a lot and I’m part of the way I think it connects to your
question is that at the end of the day the whole trick is always to have like
you know the computer learn from examples that come from in some way
human intelligence except when you’re having them play against each other of
course you know but if you read how smart machines think you know which is a
nice book that introduces people to AI if they showed like the big breakthrough
in the self-driving cars from going from this idea of trying to engineer
intelligence through mapping and features to basically having people kind
of like drive a car with a lot of sensors so then the car would learn what
parts of the road are acceptable or not acceptable and I do think that that
feedback loop that the fact that you know machines are learning from our
behavior is you know something that is fundamental is gonna stay you know as
part of the loop for at least the next decade I’ve just had one more connection
that the historical approach in AI and machine learning has essentially been to learn from a world that is fixed: we sample some data, which is a kind of snapshot, maybe, of a period of time, and try to learn from that; or we learn from a game like chess, which has a fixed set of rules. I think the next approach, where we can learn a lot from evolutionary systems, is how we learn and innovate new strategies when the world itself is evolving. Because I think you're right, and it's quite fascinating, that we can in some sense do imitation learning, learning from the patterns of humans. But what we found in our work on AlphaGo, AlphaZero, and those systems is that while initially it was really helpful to do imitation learning based on, for example, human Go games or chess games, it also constituted a constraint on the system. And so when we let go of that constraint, once we had learned how to train these systems, and just let the system play against itself given just the rules of the game, it actually became stronger than human players. So it wasn't limited, so to speak, by the existing knowledge; it built its own treasure trove of knowledge by just playing against itself. And then, what we found particularly fascinating: working with those communities of chess and Go players, of course we shared that with those communities, and those humans then started learning from the machine, picked up patterns that the machine had discovered, and then started using those in their own games. So machines by themselves, if you like, can go beyond the knowledge that has been created by humans, but they can also feed it back and enrich it.
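The self-play idea, agents improving given only the rules, can be illustrated on something far simpler than Go. Below is a toy sketch using fictitious play on rock-paper-scissors, a deliberate stand-in and not how AlphaZero actually works: two agents repeatedly best-respond to each other's empirical history, and with no human data at all their move frequencies drift toward the game's mixed equilibrium of one third each.

```python
from collections import Counter

MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # key beats value

def best_response(opponent_history):
    """Play the move that beats the opponent's most frequent move so far."""
    most_common = opponent_history.most_common(1)[0][0]
    return next(m for m in MOVES if BEATS[m] == most_common)

def fictitious_self_play(rounds=3000):
    """Two symmetric agents repeatedly best-respond to each other's
    empirical play; in this zero-sum game the time-averaged frequencies
    approach the mixed Nash equilibrium (1/3 each), learned purely from
    the rules via self-play."""
    hist_a = Counter({m: 1 for m in MOVES})  # smoothed initial counts
    hist_b = Counter({m: 1 for m in MOVES})
    for _ in range(rounds):
        a, b = best_response(hist_b), best_response(hist_a)
        hist_a[a] += 1
        hist_b[b] += 1
    total = sum(hist_a.values())
    return {m: hist_a[m] / total for m in MOVES}

print(fictitious_self_play())  # each frequency close to 1/3
```

Replace the trivial best-response rule with a trained policy and the fixed game with a richer environment, and you have the shape of the self-play loop described above.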
So on the human side, our search space as humans would be a little bit too limited and constrained, which would be an interesting cognitive-science finding: that we basically don't explore the full space and kind of get stuck in these local optima.

Yeah, yeah. That was, I think, what you were
suggesting might help us uncover new economic models.

Yeah. Well, a challenge in understanding economic systems, or any social system, is again that the system is self-created by all of us. You have the interaction of intelligences creating the structures and the rules of the game that the intelligences then have to navigate, and that feeds back to change the structures and the rules of the game.
And being able to simulate those systems, and then have AIs explore them, offers a range of possibilities, both for understanding the systems but also for training AIs in different ways. You can think of Thore creating a video-game world for AIs to navigate and explore; well, you could also create a world like the housing market, or the transition to a low-carbon economy, for those agents to explore, and just as in the AlphaGo example, it's quite likely they would come up with things that we haven't thought of.

So I guess that leads us to the next question. I
opened this session by rather facetiously saying that we're going to talk about planet-scale problems. What can machine learning, reinforcement learning, AI do for questions of the environment or the economy? When will AI actually be in a position to do something useful, to provide us with some insights, to be able to model those really very complex adaptive systems? It's already happening, in some sense, yeah?

Well, I'd say
it's still more in the vision, and slightly hand-wavy, stage than reality at this point, but I think the vision is quite a concrete one: we now know how to build actually quite high-fidelity models of real-world phenomena through these multi-agent simulations. And quite importantly, the kind of micro-granular data that didn't used to be available is now increasingly available, and the compute power is now available to do it. And we have some visions about how you create software libraries and structures that can be like Lego sets to assemble these complex social models or economic models. But then I think the next step is to start bringing in some of the learning and AI techniques, from things like what you guys are doing, into those models, probably baby steps at first, because again there are limits to compute power, but I think we would learn a whole lot from that kind of experimentation.

If
I can add a little bit to that. I think that at the moment, in all of these examples in which we have multiple agents learning, we still have multiple agents learning within kind of one god: there's one person, or programmer, or group of people, that created them. But I think the breakthrough, where we have to go next, is a world in which it's not that Eric makes one agent that represents me in his computer; it's that there is an agent that represents me, that exists in some sort of instance in the cloud, that can collaborate and interact with the agent that represents Thore, and there's some way for these agents to have access to our data in some private form, in a way that we can always pull it back, and so forth. And in that world, in which all of us have those digital twins that are helping us shop in the supermarket, arrange meetings with one another, and do many other things, we might have an emergence of solutions and complexity, and a public sphere that is empowered in a different way.

Our own digital twins?

Yeah, exactly.

Well, and to Thore's point about humans also learning from these algorithms and AIs: in these simulations you can have human players as well, interacting with the AI players, and that creates another set of
interesting opportunities to learn.

There you made this interesting point that the rules of the games can also be changed. I find that very fascinating, and we have some work on this. For example, you can think of two agents, or more agents, that could be caught in some kind of social dilemma, a situation like a prisoner's dilemma, where they could be caught up by the fact that each one of them wants to behave selfishly, but if they both behave selfishly they actually end up in a worse spot than if they had both cooperated. That is actually a fairly common situation when you have groups of selfish agents interacting, for example in the economy, and AI systems could also be used to help us get out of these situations. You can think of some kind of governance mechanism that could be learned, one that provides just the right incentives for agents to start collaborating; and once they're in this equilibrium of collaboration, maybe it takes very little to keep them there. And so we could all get into a more harmonious mode of interaction if we were just nudged that little bit into those equilibria that we would like to be in anyway, but are struggling to find collectively because of our kind of egotistical limitations.
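That nudging idea can be made concrete with a stylized prisoner's dilemma; the payoff numbers and the subsidy mechanism below are illustrative assumptions, not any specific learned governance system. In the base game, mutual defection is the only pure-strategy equilibrium, but a modest subsidy to cooperators flips the equilibrium to mutual cooperation.

```python
import itertools

C, D = 0, 1  # cooperate, defect
# (row payoff, column payoff) for a standard prisoner's dilemma
payoff = {
    (C, C): (3, 3),
    (C, D): (0, 5),
    (D, C): (5, 0),
    (D, D): (1, 1),  # mutual defection: the bad outcome both get stuck in
}

def pure_nash(payoff):
    """Pure-strategy Nash equilibria: profiles where neither player
    gains by unilaterally switching their action."""
    eqs = []
    for a, b in itertools.product((C, D), repeat=2):
        ra, cb = payoff[(a, b)]
        if payoff[(1 - a, b)][0] <= ra and payoff[(a, 1 - b)][1] <= cb:
            eqs.append((a, b))
    return eqs

def with_cooperation_bonus(payoff, bonus):
    """Governance mechanism: each cooperating agent receives a small subsidy."""
    return {(a, b): (ra + bonus * (a == C), cb + bonus * (b == C))
            for (a, b), (ra, cb) in payoff.items()}

print(pure_nash(payoff))                               # [(1, 1)]: defect/defect
print(pure_nash(with_cooperation_bonus(payoff, 2.5)))  # [(0, 0)]: cooperate/cooperate
```

And once the cooperative equilibrium is established, neither agent gains by deviating, so a comparatively small ongoing incentive is enough to keep them there.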
And I'll give a real-world example of how that approach might be useful. We know that to transition to a zero-carbon economy we need to change the rules of the economic game, and economists have a favorite answer to that: a carbon tax, or a carbon price. But we're doing some work trying to understand the system, which has different regimes, tipping points, and dynamics, and we know these systems can be sensitive, where small changes in the rules in different places can tip you into a different regime. So can we, by building high-fidelity models of the system, actually find those what we call sensitive intervention points, where we could think of new rules of the game that would tip us into that regime more quickly, at lower cost, and more effectively than perhaps other policies would?
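A toy dynamical sketch of a sensitive intervention point follows; the functional form and the numbers are illustrative assumptions, not the actual models just described. Adoption of a low-carbon technology self-reinforces only above a critical-mass threshold, so a small rule change that lowers that threshold tips the very same starting state into the opposite regime.

```python
def adoption_share(threshold, x0=0.2, steps=500):
    """Toy tipping-point dynamic: x is the share of agents adopting a
    low-carbon technology. Adoption self-reinforces above `threshold`
    and decays below it (a simple critical-mass / double-well dynamic)."""
    x = x0
    for _ in range(steps):
        x += 0.1 * x * (1 - x) * (x - threshold)
    return x

# Same starting point, slightly different rules of the game:
print(round(adoption_share(threshold=0.30), 2))  # below the tipping point: adoption collapses
print(round(adoption_share(threshold=0.15), 2))  # small rule change: adoption takes off
```

Locating where such thresholds sit, and which interventions move them cheaply, is the kind of question a high-fidelity version of this model would be asked.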
I do think that in some way we might be getting a little bit stuck in the context of the representative agent, because at the end of the day we could do a very advanced simulation that is based on some agents that were created by someone, but that doesn't mean that those agents reflect the preferences, and the diversity of preferences, that people might have. I might be a guy who, let's say in the context of sustainability, likes to eat a lot of meat, so I don't care about that part, but I'd be OK with getting rid of all plastic; well, what's the frequency of guys like me in society, or of people with different preferences? That's why I do think that at some point there has to be a transition in which the agent is connected to high-fidelity, adaptive data about society, so that that diversity of preferences can be incorporated, because otherwise you're going to end up coming up with solutions and equilibria for agents that are not the ones that reflect us. And that link between the agent and the human is something that at the moment is a little bit missing from this in-silico-to-data world. But I do think it's coming: I talk a lot with private companies that are being born into the digital-twins space, and in the next decade I think there's going to be competition there.

Yeah. OK, I'm going to
just try... we've only got three minutes left, I don't know how it went so quickly, and then Azeem whispered in my ear. So I'm going to compose this question, but maybe you can answer it afterwards in the audience, because I wanted to get onto the racial inequality question: Azeem pointed to your slide, the one you had up with the ballot, and he said that right there explains Brexit. I want instead to give you each a chance to just make some closing comments, so perhaps one minute each, and then, as always, guys, there is the meet-the-speaker session afterwards, so I hope you'll continue the conversation. Thore, can we start with you?

Yeah, I think it's a time of immense opportunity.
We're building these more and more complex ecosystems, and we can benefit, I think, from various different angles, and this discussion has shown that while we might be focusing on the intelligence of the individual agent, which can grow because of the rich interactions with other agents in that ecosystem, we can also take the macro view, zoom out of the system, and use it to understand what would happen in terms of the qualitative behaviors that emerge in these complex systems. And finally, we might be ab