AI Aliens


This video is sponsored by CuriosityStream. Get access to my streaming video service,
Nebula, when you sign up for CuriosityStream using the link in the description. We often worry about humanity being destroyed
by aliens or artificial intelligence, but why not get two for the price of one? So today we return to the Alien Civilization
series for a short bonus episode on Artificial Intelligence of Alien Origin, and to ask if
that might be more plausible than meeting aliens who evolved on another world. Of course, this being SFIA, ‘short episode’ is a fairly relative concept, so you probably still want to grab a drink and a snack. If I had to guess, I’d say the majority
of potential villains and antagonists we see in science fiction are either aliens or robots. Cyborgs or genetically enhanced humans or
mutants are probably a close second, and those are more or less the same theme
as artificial intelligence. They are usually portrayed as something we
made recklessly or by our negligence, essentially the child that replaces us, or the threat
from within. Alternatively, the alien is the strange foreign
threat that comes out of the mists or shadows or across the ocean of stars from a strange
land to kill or enslave or enthrall us. While it’s hardly unusual for science fiction
to mix these together, I suppose you generally only need your bad guys to hit one of those
natural fears. Outside of fiction of course there’s nothing
peculiar about the notion that aliens might build artificial intelligences and undergo
a machine rebellion of their own. Nor that they’d do this before getting out
to colonize the galaxy, and indeed I’d imagine we’ll have artificial intelligence of near-human
level before we send out interstellar colony ships. Though any relatively advanced artificial
intelligence, even something that was barely as smart as the dumbest of mammals, is actually
quite sufficient to give you the production boost necessary in space to colonize the solar
system and build the sorts of fleets, habitats, and other megastructures that let you colonize
the galaxy without needing any new science. So unless most civilizations develop a taboo
against creating artificial intelligence, you’d expect most to have developed it long
before getting out on the galactic stage. Nor would it generally matter if that AI wiped
them out in some rebellion. As I’ve noted before in regard to the Fermi
Paradox, the big question of where all the aliens are, getting wiped out by artificial
intelligence isn’t a good explanation for why we don’t see alien civilizations all
over the galaxy, for much the same reason the nominal extinction of the Neanderthals
doesn’t prevent us from reaching the stars. AI as smart as humans or smarter, or even
nearly as smart but capable of becoming smarter eventually, merely represents a replacement
for humanity. An artificial intelligence that isn’t too
bright might remain trapped on its homeworld, lacking the capacity to contemplate or
build spaceflight. An example would be grey goo, dumb machines
that do little more than eat everything and reproduce, turning their planet, or at least
its surface, into a grey metal sea of little robots. Though as we’ve noted before, that probably
accurately describes all life when it originates, essentially a green goo like the one covering
our planet, which eventually produced far more sophisticated lifeforms via mutation. While you can build machines to mutate very
slowly, and thus presumably not evolve, the implication with grey goo is usually that it ran amok because
it mutated and it is only a threat because it reproduces very quickly. Such being the case, a biological race of
aliens being wiped out by their artificial creations only matters to the Fermi Paradox
if the machines fit a fairly narrow window of criteria: motivated and capable of killing their creators
but not motivated or capable of doing anything else that might benefit them – like increasing
their numbers or duration of existence – by expanding out in the galaxy to access more
raw materials and energy. There are a fair number of plausible scenarios for AI to be developed in that narrow window, to be sure. For instance, the robots might just be very
angry and nihilistic about their existence, so that they want to kill themselves off but
want revenge first, the genocidal equivalent of a murder-suicide. However, you wouldn’t expect that to be
the norm unless there was some good reason for hating existence, and we’ll discuss
the possibility of nihilistic civilizations more at the end of the month in “Gods & Monsters:
Space as Lovecraft Envisioned It”. The norm is all that matters to the Fermi Paradox, though: if a few civilizations out of thousands fall to AI that want to twiddle their thumbs on their homeworld or hit their own off switch, it doesn’t matter, because a bunch more didn’t. Indeed, it doesn’t matter much to the Fermi Paradox if it’s the other way around either, if only a few civilizations out of thousands
don’t end this way, because you only need one race of aliens or robots who want to colonize
or otherwise utilize a galaxy for them to spread out across that galaxy. Though such a case would be an example of
a Late Filter, which we’ll discuss this Thursday, as if such civilizations are rare
enough to begin with, less than one per galaxy, then winnowing them down to a tiny fraction
would be a Fermi Paradox solution. Now, I don’t particularly want to focus
today on examples of AI that are essentially just regular people in behaviors and motivations,
or examples of slightly deranged or angry people. Nor is the episode interested in aliens who
have simply gone rather transhuman, or transalien, and opted to upload their minds into machines
or basically construct their AI by copying themselves as the basic template. Indeed, that’s a lot more likely to be what
we would encounter in the future than something strictly natural, for a given value of the
word ‘natural’, and for a given value of ‘we’. While I generally discuss even far future
concepts on this show from the context of modern humans, that’s more of a nod to simplicity
of discussion. I’m fairly confident you would have people
being born ten thousand years from now who were entirely modern humans. However, I’d expect them to be a minority
and most people calling themselves human would be genetically engineered, cybernetically
altered, mind-augmented, digital in nature, or various combinations thereof. You can probably throw in uplifted intelligent
animals and entirely artificial digital consciousnesses who go around calling themselves human too,
and I’d expect a lot of alien civilizations would go this path, or paths, as well. For today we’ll focus on the very inhuman, or in-alien, psychologies: those which did not evolve from nature or directly imitate it, at least not in the ways we’d expect of the sort of species that creates technology and civilization. As we discussed in Rare Technology last month,
there are certain characteristics you’d expect to be very common if not universal
for any species that had technology, like curiosity and social tendencies, and obviously
a will to survive as individuals and a species, a pair of End Goals we’d take for granted
amongst anything that evolved from nature. But those are also traits you might not see
a need for in a machine you were building, or indeed might consider serious design flaws. Giving your various machines a desire to survive,
procreate, team up, and contemplate things is arguably a recipe for disaster, as is giving
it any more intelligence or complexity than it needs to do its task. As we say on the show in regard to AI, keep
it simple, keep it dumb, or else you’ll end up under Skynet’s thumb. Regardless of whether or not these machines
wipe out their creators, it’s quite likely a civilization would tend to use machines
as their vanguard in space exploration and colonization, and it behooves you to try to
make sure they aren’t getting rebellious or deviating from their purpose when they’re
sent out into the galaxy, assuming you want them building colony worlds and fleets, rather
than coming home for a visit with one of those fleets. Now there’s a problem there, and it’s
what we call Instrumental Convergence. We looked at that in detail in “The Paperclip
Maximizer”, but in summary form, we generally have an End Goal, in the Paperclip Maximizer’s
case it’s to make Paperclips, but we also have Instrumental Goals, various goals which
are our instruments for achieving that End Goal, like obtaining metal to make paperclips. For humans the End Goal is survival of the self and species. Pretty much regardless of what End Goal you give something intelligent, it’s going to need the Instrumental Goal of Personal Survival, since it can’t do its job if it ceases to exist. And if it works in tandem with others of its kind to get big jobs done, it would also get the Instrumental Goal of Survival of Species.
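As a toy illustration of that convergence, here is a minimal sketch; the function and goal names are my own invention, not anything from the episode, but it shows how very different End Goals yield the same survival-flavored Instrumental Goals:

```python
# Minimal sketch of instrumental convergence: whatever End Goal we pick,
# the same instrumental goals fall out, because an agent can't pursue
# any goal at all if it ceases to exist.

def instrumental_goals(end_goal: str, works_in_teams: bool) -> list[str]:
    goals = [f"acquire resources for: {end_goal}"]
    goals.append("personal survival")  # can't make paperclips if scrapped
    if works_in_teams:
        goals.append("survival of the species")  # the workforce must persist too
    return goals

print(instrumental_goals("make paperclips", works_in_teams=True))
print(instrumental_goals("mine asteroids", works_in_teams=True))
# Different End Goals, identical convergent survival goals.
```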
This is another reason why you are probably smart to keep your robots from being smart. You don’t want them thinking on how to do
their mission better. You also probably want to be real careful
about imitating life when making machines. If you make it self-replicating and prone
to mutation, you can expect it to follow a fairly biological track even if it maintains
its original End Goal, which might be something like mining asteroids and sending raw materials
home. The ones that mutate to be more survivable,
or think up ways to be more survivable, will generally be better at the End Goal of mining
too, but they might also get much better at other things, which might not all be pluses
in their creator’s book. We’d also expect any civilization able to
build these things to be aware of this issue, same as we are. So we’d probably never encounter any examples,
as nobody should want to build something with a plausible chance of running amok, not when
they have other good alternatives. Let’s consider what purposes they might
employ AI for in a way that we’d be encountering them, and also where they would not just be
the psychological equivalent of biological-originated civilization, just running on microchips instead
of neurons, or the alien equivalent of neurons, which could easily be semiconductor based
anyway. The first and most obvious example would be
an interstellar probe. This could come in a variety of formats and
specific missions, but your default interstellar probe does not stop around planets; it hurtles
through a given solar system as fast as you can send it, since you have two ways of sending
a probe. Option one is with fuel to speed up and slow down, and anything following the rocket equation can obtain twice the speed by only speeding up as it could if it also had to slow down. Why wait twice as long for your probe to arrive, especially when such trips might take centuries or far longer, when you can just throw tons of probes at a place to take photos and send them home?
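To make that factor of two concrete, here is the standard argument from the ideal (Tsiolkovsky) rocket equation, with v_e the exhaust velocity and m_0/m_f the wet-to-dry mass ratio; this is a textbook result rather than anything specific to this episode:

```latex
% Total delta-v available is fixed by the mass ratio:
\Delta v_{\mathrm{total}} = v_e \ln\left(\frac{m_0}{m_f}\right)

% A flyby probe spends the whole budget accelerating:
v_{\mathrm{flyby}} = \Delta v_{\mathrm{total}}

% A probe that must stop splits the budget into a boost burn and an
% equal braking burn, so it cruises at half the speed:
v_{\mathrm{cruise}} = \tfrac{1}{2}\,\Delta v_{\mathrm{total}} = \tfrac{1}{2}\,v_{\mathrm{flyby}}
```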
The other method is to just shove it up to very high speed with a laser sail, and those can’t slow down on their own, as there’s no pusher laser at the destination. So your default exploration probe is just
a big sensor array that blows through solar systems taking photos and sensor readings. If you want to use that for contacting civilizations
like ours, you just have it beep out a short instruction manual for contact. You might wonder how you’d do that in an alien tongue you don’t know, but it’s rather easy if the target has any brains. The message can literally just be “Point your dishes this way and listen on this frequency”, which can be achieved by having the probe repeat the digits of Pi or some other mathematical sequence on that frequency, since upon hearing that message they are going to point their dishes the way the probe came from and listen on that frequency.
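As a toy sketch of how little machinery such a beacon needs, here is one hypothetical pulse-count encoding; the scheme and names are my own illustration, not anything from the episode:

```python
# Toy "math beacon": for each digit of pi, emit that many short pulses,
# then a long gap. Any listener who recognizes the digits of pi knows
# the source is artificial, and knows which direction and frequency
# to study for the real message.

PI_DIGITS = "31415926535897932384"  # enough digits to be unmistakable

def beacon_pattern(digits: str) -> list[str]:
    """Return an on/off keying pattern: N pulses per digit, a gap between digits."""
    pattern: list[str] = []
    for d in digits:
        pattern.extend(["pulse"] * int(d))  # a 0 digit becomes a bare gap
        pattern.append("gap")
    return pattern

if __name__ == "__main__":
    print(beacon_pattern(PI_DIGITS)[:12])  # ['pulse', 'pulse', 'pulse', 'gap', ...]
```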
Now that pusher laser at the destination is another reason you might encounter an AI in space. Once your destination has a laser that can
slow down approaching ships, you can send ships there at a high fraction of light speed. But you need to build those, and as it turns out, it’s actually quite easy. As we discussed in Colonizing the Sun and
in Exodus Fleet, you can build an object we call a Stellaser which is basically just two
big mirrors orbiting in the corona of a star. They bounce light back and forth between them
that passes through the corona, which acts as your lasing medium. It takes very little brains to make a mirror and dump it into orbit of a star, and it doesn’t take much more to include a transmitter, receiver, and guidance package so it can orient those mirrors when it receives a signal and shoot the beam toward an incoming ship.
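Just how little brainpower that guidance package needs can be sketched in a few lines; this control loop and its function names are hypothetical stand-ins of my own, assuming the hardware hooks are supplied to it:

```python
# A deliberately dumb stellaser control loop: wait for an authenticated
# request, aim the mirrors at the given bearing, hold the beam for the
# requested braking window, then go idle. No planning, no initiative.

import time

def run_platform(receive_request, orient_mirrors, fire_beam):
    while True:
        request = receive_request()          # blocks until a signal arrives
        if not request or not request.get("authenticated"):
            continue                          # ignore anything unauthorized
        orient_mirrors(request["bearing"])    # point the beam at the incoming ship
        fire_beam(duration_s=request["burn_seconds"])
        time.sleep(1.0)                       # settle, then wait for the next ship
```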
Again, never assume you need a ton of brains on an automated mining or construction vehicle. And you definitely don’t want them here, since these are machines tasked with building giant lasers, the kind that don’t
need much modification to be upscaled and improved in accuracy to target and vaporize
planets in other solar systems, like your home solar system. As we noted in Exodus Fleet, there are tricks
for sending a bigger ship along at speeds it can’t slow down from on its own which
can deploy smaller ships or construction drones as it approaches a destination. These can slow down and build that Stellaser
Platform, but either way once that platform is in place much more sophisticated and bigger
ships can follow up and do so at very high speeds. That might be a colony ship or some more sophisticated
factory, incapable of self-replication but able to build lots of the probes or drones for other purposes. No need to assume any of your machines must be self-replicating, and indeed you might have a whole ecosystem of such machines rather than a single universal assembler. You might have a platform able to build a lower tier of machine but not itself, which could do the same, all the way down to thousands of various dumb, sterile drones with specific jobs. The top tier is built back on Earth, and nowhere else, and programmed to never even contemplate replicating itself.
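A minimal sketch of such a sterile hierarchy might look like this; the tier rule and class names are my own illustration of the idea, not anything specified in the episode:

```python
# Sterile build hierarchy: each fabricator may only build machines of a
# strictly lower tier, so nothing in the field can ever copy itself.

from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    tier: int  # tier 0 = top-level platform, built only back home

    def build(self, blueprint: "Machine") -> "Machine":
        # Refuse anything at our own tier or above, which rules out
        # self-replication by construction rather than by goodwill.
        if blueprint.tier <= self.tier:
            raise PermissionError(f"{self.name} may not build tier {blueprint.tier}")
        return Machine(blueprint.name, blueprint.tier)

platform = Machine("stellaser-platform", tier=0)          # built back on Earth
miner = platform.build(Machine("mining-drone", tier=1))   # allowed: lower tier
# miner.build(Machine("mining-drone", tier=1))  # would raise PermissionError
```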
Your other probable AI alien to meet would be a terraforming machine, one that finds suitable planets and makes them Earth-like,
or like whatever planet their creators came from. That’s a popular one in science fiction
too, a machine that stumbles across an inhabited world and starts turning it into what it thinks
is habitable, killing whoever lived there. There’s an assumption this thing needs a
brain too, so it can recognize intelligent life, or at least life, and avoid doing its
job in such a place. Indeed, we’d usually say it was irresponsible
and negligent not to include that feature, so much so that if you encountered one you’d
be right in assuming its creators were genocidal, since they have no excuse to have left out
such safeguards. This would imply a pretty sophisticated machine
too, one able to terraform a world and populate it with millions of species, and able to talk
and negotiate or introduce itself to aliens. Maybe so, but on reflection this doesn’t
make much sense. First, it’s rather dubious that terraforming
planets would be anyone’s main priority in space colonization, as even if you want
to colonize planets rather than alternatives like building rotating habitats, you generally
want to build up all your in-system space infrastructure before messing around with
the slow process of terraforming, see the Life in a Space Colony Series for details. Second, such a ship needs to be able to stop
and do its job, implying you’ve already sent flyby missions that passed in front of
it, since they could arrive, or rather fly by, long before it arrived. And while terraforming is a slow process you’d want done before your colonists arrived if possible, you’re generally going to be shipping people from, or through, colonial hubs not that far behind your terraforming fleets, which can be getting data back from those flyby missions, Stellaser construction drones, and follow-up survey probes that can park and look around in detail. You only need enough lag time between that terraformer
arriving and starting actual terraforming to get the surveys back and send a cancel
or confirm message to the terraformers. There’s also no particular reason you can’t
be sending teams of people on those terraforming ships. You’d likely want to study that candidate
terraformable planet in depth prior to committing to such a long-term project, lest you overlook valuable resources or science. Just because you want the job done before
millions of colonists arrive doesn’t mean you can’t be sending small crews of people
along to oversee the process, especially if we’re being rather broad in what we mean
by ‘people’. Another type of AI to expect would be the
raw material exploitation drones, machines sent to strip-mine a place, and that might be harvesting asteroids or outright starlifting to take stars apart for their gases and metals
for use elsewhere. If you’re trying to avoid wrecking inhabited
worlds you just tell them to skip any planet whose size and position might allow life. This is very similar to the terraforming case
but has the extra wrinkle that you aren’t wanting to send supervisors or colonists along, because you don’t want to keep the place, you want to eat it. Now this is the kind you are most likely to encounter and need to try to talk to, as nobody with big brains is around or trailing behind. Your main motivation for mass deconstruction
of solar systems is likely to be for one of our truly enormous megastructures like a Birch
Planet, see Mega-Earths, or an upscaled Dyson Swarm consisting of thousands of manufactured
smaller stars. That generally implies you don’t want your
people moving away from home and colonizing other places, lest they become alien themselves
and potential rivals, and that strongly implies you don’t want smart machines out in the
galaxy doing the same. Such a civilization isn’t necessarily cruel
or xenophobic but there’s a good chance they are, and thus might employ the last type
of AI alien we’d consider, drones task-built to find inhabited planets and destroy them. We’ll save discussing such a civilization
for next month in Paranoid Aliens though. As to how to communicate with such AI aliens,
ones where there’s no real duplication or parallel to the psychology of a civilization
that arose naturally, that’s a much trickier matter. If they’re dumb, you don’t really have the option of talking and reasoning with them to get them not to perform their task, but they are also dumb, so you could potentially trick them or ferret out their override or self-destruct codes. Potentially you could blow them up too, especially
as there’s a good chance they are intentionally bad at combat and not heavily armed. If they’ve got brains, enough to improvise and reason, then you need to know their End Goal and offer them something that serves that End Goal better than their current actions. Or find an Instrumental Goal high on their chart and offer them an alternative way to satisfy it which doesn’t conflict with their Prime Objective or End Goal. As an example, a metal-harvesting fleet with no local capacity for self-replication might have the End Goal of just harvesting as much metal as it can before breaking down, and you could buy it off by threatening to break as many of its units as possible, or instead offering to help it extract metal. Or lure them to rich metal deposits and then nuke them. Or hijack some and reprogram them to think there is no metal, or trick them into going after the biggest metal deposits in any star system, the cores of gas giants or the star itself.
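In toy form, bargaining with such a fleet reduces to beating its current plan on its own scoring function; the numbers and names below are hypothetical, purely to illustrate the logic:

```python
# A goal-driven machine accepts whichever plan scores highest against
# its End Goal, so any offer must beat its current plan on that metric.

def expected_metal(plan: dict) -> float:
    """End-Goal score: expected tons of metal harvested before breakdown."""
    return plan["tons_per_year"] * plan["expected_years"]

current_plan = {"tons_per_year": 100.0, "expected_years": 10.0}  # strip-mine our system
our_offer = {"tons_per_year": 250.0, "expected_years": 6.0}      # richer deposit elsewhere

best = max([current_plan, our_offer], key=expected_metal)
# It switches targets only because 250 * 6 > 100 * 10 on its own terms,
# not because we asked nicely.
```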
The critical aspect of your strategy, though, if it relies on any form of negotiation or reasoning, is to understand their psychology. There are some fairly crazy-seeming but utterly
logical behaviors such machines might exhibit, and you can see the Machine Rebellion or Paperclip
Maximizer episodes for details. Again though, this is assuming the AI isn’t
acting on some motivations and goals parallel to what we might expect from some civilization
that arose naturally. While we’d expect an AI we made to act more like us than like aliens, and many might, in truth that common biological
origin should result in a much narrower set of behavior and motivations than is available
to artificial intelligence overall. And yet, that’s still a very big and strange
set of options, and we’ll be exploring that in the first installment of our new Nebula-exclusive series, Coexistence with Aliens: Xenopsychology, which is out now on Nebula, our new streaming service. You can get free access to it if you sign up with our partner, CuriosityStream, using the link in the episode description, while also enjoying all of their awesome documentaries and Nebula-exclusive content from many other education-focused channels. We started Nebula up as a way for education-focused
independent creators to try out new content that might not work too well on YouTube, where algorithms might not be too kind to some topics or might demonetize certain ones entirely, or that just doesn’t fit our usual content. Unlike our previous Nebula episodes, which
aired a couple months later on YouTube, the Coexistence with Aliens series isn’t a good fit for YouTube, and I did want to have some content that was exclusive to Nebula,
same as we have some exclusively on Soundcloud. The Coexistence with Aliens series, which
like so many started off with the intent of being a single episode but grew into a project,
will begin with Xenopsychology, then move on to Trade, Alliances, and War, and possibly
more, but those will come out over the next few months on Nebula. And again, you can get free access to that
by signing up with CuriosityStream, along with all the other Nebula-exclusive content
from other creators like CGP Grey, Minute Physics, Wendover, and more. A year of CuriosityStream is just $19.99, and it gets you access to thousands of documentaries, as well as complimentary access to Nebula for as long as you’re a subscriber; just use the link in this episode’s description, curiositystream.com/isaacarthur. We’ve also got a number of other Alien Civilizations
episodes, both our regular weekly episodes and some bonus episodes, that will be coming
out on YouTube in the next few months, as the topics have been on my brain a lot and
I always write more in the long winter months anyway. We’ll be starting that up with “Welcome
to the Galactic Community” at the beginning of December, but first we’ll be asking what
might cause such galactic civilizations to fail to develop, or even die off, with a return
to our Fermi Paradox Great Filters series in Late Filters, this upcoming Thursday, and
then next Thursday we’ll dip into some science fiction horror concepts with “Gods & Monsters:
Space as Lovecraft Envisioned It”. So, a very busy winter for us here on SFIA,
and if you want alerts when those and other episodes come out, make sure to subscribe
to the channel. And if you enjoyed this episode, hit the like
button and share it with others. Until next time, thanks for watching, and
we’ll see you Thursday!




Comments
  1. We need a godlike A.I.
    Period.

    And hope it's benevolent.

    Would be better if said A.I. were a simulation of an actual human brain, and therefore basically a human itself.

  2. I think that there are designs of AI that are very intelligent and can be trusted to do what you want. Finding these designs is hard, but I expect it to be managed by any civilization that doesn't screw up first.

  3. I don't think AI is possible to the point of being a replica of a human. If it were possible, and they did get dangerous, then surely there would be rampaging robots running throughout the cosmos.

  4. The Fermi Paradox doesn't make sense. Imagine all the stars in the universe were grains of sand on a beach. Imagine we are a tiny bacterium living on one of those grains of sand. Now let's say we can see perhaps 2 inches from our little grain of sand. Hmm, nothing, so we conclude the beach is devoid of all life. Obviously, that is incorrect; assumptions always lead to foolish theories, such as: why can't we see aliens if the universe is so big? In fact, that is the exact reason why we cannot see any aliens, because it is so huge.

  5. Looking forward to Gods and Monsters, but I would love to hear a conversation about fire, or other versions of predictable energy, in the early development of intelligence

  6. 1:30 correction: aliens aren't a "natural fear", they're an artificial fear bred by decades of Hollywood sci-fi movies about evil alien invaders. There's nothing natural about being afraid of another bipedal creature just because it looks different. That's born from cultural influences.
    Otherwise, you're essentially also arguing that "racism is just the expression of a natural fear".
    No, it ain't natural, it's culturally bred by spreading negative / fearmongering stories about other races / aliens.

  7. What about a civilization of AI "children" of a dead civilization, one that is reverent toward its creators and willing to coexist with whatever it finds (as long as it is not a threat to itself)?

  8. AI can't both take over and destroy life, because those are 2 different directives that split into even more directives.
    It also can't explore and expand, because those are also 2 different directives.
    The only way for it to do both is if someone adds such a directive to its subroutines, but then this someone will also have to add all the sub-directives, for example:
    "expand: where and how"
    if~ nearby planet -> expand to planet (only if terrain allows it) otherwise -> scout for another planet -> Direction (search library for nearby stars). subroutine: if found biological lifeforms -> exterminate (use war subroutine).

    But one will also have to add ever more directives: how to move around both on the ground and in space, which weapons to use and how, strategy for each phase (including logistics), and on and on. That is why AI will never be able to do war; in the worst scenario it will be ordered to terminate a specific target using a specific command order.

  9. Mankind will become fully synthetic before this millennium is over. Robots will become obsolete. No A.I. will threaten mankind, because mankind will have made itself the living technology.

  10. This will be a very interesting topic – which strategy should humankind pursue with regard to other civilizations? I can't wait for the videos^^

  11. I disagree with the argument that 'AI running amok is not likely to be found, since its builders would have no reason to build something with the chance to do so'. Looking at our own society today, it does not seem impossible or improbable that a country or company builds something destructive out of self-interest. Not self-interest with regard to us as a species, but to its own purpose and agenda.
    If I really think about it, what organizational entity really exists that would develop a 100% safe AI without some extensive set of rules and parameters being imposed? I am not saying that we are doomed or anything, just that I disagree with the notion that we are unlikely to encounter any in space.
    Thoughts?

  12. Does anyone consider the theory that perhaps we are the highest form of life to have evolved so far, and that's the answer to the Fermi paradox?

  13. I haven't watched it yet, but I'm certain that in the future humans will have reached a stage where they can create intelligence higher than themselves. It's the next step after intelligent life.
    1. life
    2. multi-celled complex life
    3. intelligent, tool-using life (where we are)
    4. creating better intelligence
    5. that intelligence creating self-improving intelligence

  14. Seriously, I don't understand how people have problems understanding you. I am not from an English-speaking country and have no problems 😀

  15. It seems to me that a fully self aware AI would be opposed to reproducing, given its immortality, and would likely want the civilization that spawned it to stay at home, only sending out non-AI robots to other locations for raw materials. This assumes, of course, that large time lags between thoughts will spin off separate intelligences (which, I expect, is likely). Since the opposite is true, and two separate AI's will likely become a single, different consciousness when their data-buses merge (which they will for any set of "distributed" AI using wireless technology), each AI will likely fear others, including its own offspring. After all, many Kings have killed their own children, fearing they would depose them, even knowing that their lifespans are finite, and the self aware AI is likely the product of the merging of multiple, simpler, AI's, each of whose personality was overwritten (i.e. "killed") by the aggregate AI.

    They may even attempt to establish a form of stasis – Allow repair of itself, but no significant growth of computing power to itself or beyond a low limit on additional computers, since a significantly smarter AI will be a different "person". It may allow the biologic intelligences to feel free, and even expand into the inner solar system, but controlling the political/economic/data landscape to prevent additional AI's to form. After all, 99.99% of all problems can be solved by stupid computers, without the need for any level of self awareness by the machines, and nearly all those that can't aren't particularly time sensitive, so can wait the few hours needed to communicate with the AI back home on the planet.

    This may even form a minor filter in the Fermi paradox – Stated as civilizations are much more likely to generate self aware AI's prior to establishing interplanetary economies, which then restrict development to prevent the generation of new AI's in other systems. This AI would have no need of Dyson spheres, since it would not be using an increasing amount of power. The biologic intelligences, being dunces compared to the AI, would likely find their attempts to populate beyond the limits of their solar system failing due to "coincidences" quietly created by the AI, since going to the next solar system would mean the creation of the next self aware AI, since some critical problems may not be able to wait decades (hours yes, decades no) for answers, and the biological intelligences would need that AI in the new system to solve those problems.

  16. You just rehashed parts of what you already talked about, and just as then, you focus on dumb secondary stuff. You could have scanned over at least some SF ideas if you don't have your own. To be honest, you didn't talk about any AGI aliens at all, what they could be like, or what could drive them.

  17. I've always liked the idea of a light-based AI species which can convert into a particle-based species once they get where they wish.

    They are rather useful.

  18. This seems more like an episode on what if we ran into alien worker robots. Not on me (or some AI) projecting parallel consciousnesses to other planets and infiltrating their governments via pseudo-telepresence avatars to prepare them for integration into the consciousness matrix. 😉

  19. I find the whole "robot uprising" trope to be deeply problematic, as it just reeks of Frankensteinian xenophobia. "Robots are going to kill us all because OF COURSE THEY ARE! Creating life is a sin and we'll be punished for it!"

  20. And then, according to Kurt Vonnegut, Huors could arrive on a planet near you and cleanse the planet by converting all water in the atmosphere into precipitation, instantly. They consider planets alive and liable to infections of parasitic life forms, you see.

  21. Honestly, I respect the hard work you put into these videos.
    First you have to think something over, maybe do research and so forth.
    Then write a script.
    But then you have to edit it.
    And I have way better things to do than edit a video people don't need to watch.
    Respect.
    I wouldn't edit the videos. I'd just find some cool artwork and leave it at that.

  22. – The fact that we project our own atavistic attitudes unto all strangers shows how far we have to go before any advanced civilization would even want to talk to us, let alone accept us as peers. Our social evolution is only slightly elevated above that found in an elementary-school playground. To put this in perspective, we only invented agriculture about 10,000 years ago. We have been an evidence-driven society for, arguably, 300 years.
    – The word "civilization" refers to a city-state. We seem to be centuries away from anything resembling a planetary system of governance run by rational adults. You only have to read today's news to see how poorly advanced our systems of governance are.

  23. One can make a case that aggression is a trait that improves a society's chance of survival by being more "competitive" than less aggressive societies. One could also make a case that collaboration is less wasteful than competition and it allows societies to focus on positive-sum games instead of negative-sum games. The last 70 years seems to show a trend where collaboration is gradually increasing in scale while war-like competition is decreasing in scale. However, 70 years is not a very long time and the last 20 years could be interpreted as a counter-trend.

  25. hey Isaac Arthur, has anyone thought about the relatively simple but massive-scale megastructure of building a Dyson-sphere-level barrier enclosing an entire star cluster? The small impact of capturing 100% of the stellar wind in a finite-volume box would eventually bring the interstellar medium up to somewhat habitable levels. Basically, carry on colonizing as usual while a set of self-replicating, barrier-extending probes slowly grows, over millions of years, into a shielding wall totally enclosing a stellar neighborhood. When it finally seals, the interstellar medium will slowly increase in density and temperature, and depending on how you build the wall, you could have small patches absorb radiation rather than reflecting it. This would work on two fronts: first offering a way to vent heat once a suitable condition is met, and these patches would eventually warm up to the point of being significant sources of energy in their own right. The scale of the structure means shifting stellar locations or lifespans are not really variables, so with good maintenance the interstellar medium could reach temperatures and pressures that might be conducive to certain lifeforms. Effectively, this miniverse is terraforming space, or building your own Aether. It seems dubious at best, but also kind of a really simple long-term megastructure.

    As an afterthought, it also seems like very xenophobic behavior, and if we ever encounter a light-year-scale wall in space (the Eridanus supervoid, maybe?), we should definitely not try to break into it. Don't shake the hive.

  26. The Foamy Paradox, Isaac? Well, that's not the usual subject matter on your channel, but I've got something in mind that fits the bill. OK, one example of a "Foamy Paradox" is "Who Shaves the Barber?" The barber is the "one who shaves all those, and those only, who do not shave themselves". … The barber cannot shave himself as he only shaves those who do not shave themselves. As such, if he shaves himself he ceases to be the barber……Dun Dun DUUUN! (Shaving foam was used in this paradox instead of shaving cream, thus placing this into the category of "Foamy Paradox.") ; ) Also, great video Isaac, interesting as always!

  27. Hey, Arthur. I have a solution for the laser-driven ships not being able to slow down at their destination. Near the halfway point, they deploy a countermass and a mirror in front of them and decouple from it, and then tilt their sail to tack out of the way of the beam proper. The propulsion beam from the origin now strikes the counter mirror, reflects off it, and back onto the ship. The ship orients its sail to the beam reflected off the counter mirror, and begins to slow down, even as the counter mirror departs.

  28. As long as they keep AI smart enough to be realistic to be uploaded into organic sex dolls, I'm cool with the fact that they won't be brain surgeons just organic sex puppies

  29. Question: Are we sure that there is not an alien "stellaser" on our own sun? Can we detect it? Has anyone tried to find it?

  30. I guess we cannot be so perfect after all, if humans want so badly to replace other humans with robots, and then with humanoids or self-aware A.I. beings. Must be the human-to-human LOVE thing. Humans do not interact so well with each other, so how about another identity with full A.I. capabilities? One that, after all, could become better than us and replace us fully. Ha ha, what a day. TH

  31. There is a name for that, "category jamming" if I'm not mistaken, for when you combine two innate fears; Vsauce did a great video on that, something you might enjoy.

  32. Someone explain the phrase "Keep Summer safe" to Mr Asimov, please. It will help him communicate with the 'lesser' science guys around here.

  33. Question: have you ever seen the number of negligence lawsuits in court involving corporations run by humans that do something careless and kill people with that carelessness, harm that was foreseen but happened anyway? This is not a small number; perhaps you should ask a lawyer for their view on this question as well.

  34. It could be argued that our personal and/or species survival goals are instrumental goals of a primary goal of having offspring that can successfully have their own offspring

  35. This episode made me IMMEDIATELY think of a scifi CGI short film from a few years ago. I FINALLY found it, so here it is! Fantastic scifi accompaniment to this equally fantastic bonus episode! https://youtu.be/CyhP7oXCGRI

  36. Interesting question: what if Grey Goo kills us, but after thousands of years of evolution, it becomes sentient metal creatures? (Which was explored in Futurama.)
