Shalev Lifshitz at SXSW – Reforming Civilization with AI

Are we alone? We’ve been asking this question since the beginning of civilization, religion, and mythology. Are we the only intelligent life out there? But this question is changing: it’s not so much about what’s out there. The real question is: what if we artificially created intelligent life?
Because that’s what’s happening right now. Most people think that AI is a technology, but really AI is the outcome of many different technologies: machine learning, which allows us to understand patterns in data; computer vision, which is seeing, perceiving, and understanding the world around us; natural language processing, which covers communication and language; and robotics, which gives machines the ability to move in the physical world. We hope that by mastering each of these technologies we can achieve human-level AI, but what we’re lacking is interconnectivity between these
fields. We’ve made loads of progress in recent years. AI is much better than it was 20 years ago, but there’s no interconnectivity, and right now we’re at a point called augmented intelligence, where AI and humans work together to complement each other. A great example is the field of pathology. Pathologists look at cell images, counting and measuring cells in order to diagnose a patient and then treat them, but usually pathologists sit in front of their computer screens for hours, looking at these images and counting cells. What I’m doing at SickKids Hospital in Toronto, Canada is creating an AI that retrieves that information from those images in seconds and gives it to the pathologist, so they can focus on what matters: actually diagnosing and treating the patient. This is augmented intelligence, where we work together to complement each other. But like I said,
we’re at a point where we’re lacking interconnectivity between these fields; we’re just getting really good at each of them individually. Interconnectivity is a major factor in reaching what we call artificial general intelligence, or AGI. Right now, humans are good at what AI is bad at, and AI is good at what humans are bad at. AI can do quick, repetitive tasks that would usually take us a long time, while humans can make complex decisions using external factors and prior knowledge; we have consciousness and creativity, and this is what AI lacks. AGI is the goal to close that gap: to make AI good at what AI is good at and at what humans are good at. To make AI good at everything. And so why is AGI important? It will enable us to solve some of the
toughest problems in the world in an instant. Humans can only look at a few variables at a time and understand how they relate, but machines can look at billions of data points and understand how each one relates to the others.
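To make that claim concrete, here is a toy sketch (my illustration, not from the talk): a machine can compute how every variable in a dataset relates to every other variable at once, for example via a correlation matrix. The sizes and random data below are purely illustrative.

```python
import numpy as np

# Toy dataset (illustrative only): 1,000 samples of 50 variables.
rng = np.random.default_rng(42)
data = rng.normal(size=(1000, 50))

# One matrix captures how each variable relates to every other one;
# the same idea scales, in principle, to far larger datasets.
corr = np.corrcoef(data, rowvar=False)

print(corr.shape)  # (50, 50): one entry per pair of variables
```

Each of the 50 × 50 entries is a pairwise relationship, which is exactly the kind of exhaustive comparison a human cannot do by eye.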
Imagine if you had the ability to understand how particles interact to form molecules and then larger substances. Imagine having that power.
That’s the power of AGI. Cancer? Gone. Interstellar travel? Easy. Bridging general relativity and quantum mechanics in a theory of everything? We’ve got it with AGI. But we can’t reach this point until we understand ourselves, implement learnings from our brain inside a computer, and reach this interconnectivity, breaking away from single-task networks. We need to reach general intelligence to solve these problems, and the current method for doing this is an algorithm called a neural network, the current imitation of the human brain. It’s important to understand the differences between a neural network and our brain in order to know what we need to fix to get to AGI. A neural network is composed of mathematical units called neurons. Each receives an input and gives an output, and they’re stacked on top of each other in columns called layers, with each layer receiving information from the previous one. Thus it learns; that’s a very basic
representation. Now, what we’ve been doing over the past few years to get to more general, stronger AI is making these networks deeper, with more connections and more neurons. Our aim is to learn more, to be more general, right? But deeper networks don’t necessarily translate into consciousness and creativity. Simply making networks bigger won’t cut it, because consciousness is more complex than that.
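For readers who want to see the "stacked layers" idea in code, here is a minimal sketch (my illustration, not the speaker’s research code) of a feedforward network: each layer receives information only from the previous one, so information flows in a straight line.

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied at each neuron.
    return np.maximum(0.0, x)

def forward(x, layers):
    # Each layer receives input only from the previous layer:
    # this is the strictly linear, feedforward flow of information.
    for W, b in layers:
        x = relu(W @ x + b)
    return x

# Toy network: 4 inputs -> 5 hidden neurons -> 2 outputs.
rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(5, 4)), np.zeros(5)),
    (rng.normal(size=(2, 5)), np.zeros(2)),
]
out = forward(rng.normal(size=4), layers)
print(out.shape)  # (2,)
```

Making such a network "deeper" just appends more (W, b) pairs to the list; nothing in this structure ever loops back on itself the way the brain does.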
In fact, current neuroscience research suggests that consciousness is an inherent part of our brain, resulting from the interactions of its different parts in concert, not from one specific place in the brain. We don’t have a part of the brain where consciousness resides; it’s an emergent property. So what we need to be doing is thinking about how we can restructure these networks to be more like the brain, to focus on the interactions between their parts.
Now, you might notice another important difference: this network is very linear. Information travels in a straight line. Whereas the network is a straight line, the brain is organic; it loops in and on itself many times, and this linear structure is stopping us from getting to more general AI and solving these problems. So if we know that this is how the brain works, why are we making neural networks in such a linear fashion? It’s because of a massive limitation: the only widely used and generally accepted learning method for AI, called gradient descent, relies on this linear structure. In fact, when you apply it to a brain-like structure, it fails; it can’t learn. So we need to fix these problems if we want to get to AGI. My research at the University of Waterloo in Waterloo, Canada has been focused on creating a new neural network that’s more like the brain, and not only a new neural network but a new algorithm, a new way of learning that works on this structure and takes away gradient descent’s limitations. But let’s say we
achieve AGI; we get it. The questions people are asking are: when AI is good at everything, what will humans do? What will happen to us? What are the dangers? With every new innovation come positive and negative possibilities, and it’s our responsibility to work on them, but the real danger in AI lies in what we can’t control. As we code the next generation, the future of AI, we set its goals. What’s the danger? Sub-goals. A sub-goal is a lesser goal that forms as part of a greater goal. So if I tell an AI, “Improve living conditions on Earth,” that’s its main goal. But what if its sub-goal is to eradicate half the population to improve living conditions? Obviously that’s not ideal, right? So what do we need to do to avoid
these sub-goals? What do we need to do to ensure goal alignment between AI and humans? We’re thinking of doing this by encoding values into a machine: human values. Why? Because values can’t be broken. Values can’t be broken. But this brings two problems when we start encoding values: a technical problem and a political problem. The technical problem: how do we encode values, values that can’t be broken even through an iterative learning process? We simply don’t know how to do that. The political problem: as cultures change around the world, values change across the world. So which values will we encode into these machines? We cannot stop progress; we can only prepare for the future. We need to be asking
these questions and trying to find solutions. Another key point is education; more specifically, AI education. We need to teach the next generation about AI so they understand the technology and don’t grow up fearing it, because we fear what we don’t understand, and fear drives irrationality. AGI will enable us to solve some of the most important, toughest, and most exciting problems of the future. But we need to ensure harmony between humanity and an entity like AGI. We need to create a seamless pathway that connects the two entities and ensures goal alignment. Not only will integrating AI and humanity solve this problem of “the danger of AI,” but it will give us computational powers that we have only dreamed of, and with these newfound powers we will solve the problems of the future. Thank you. *applause*

