Episode 75 • 4 February 2024

Eric Schwitzgebel on the Weirdness of the World

Eric Schwitzgebel is a professor of philosophy at the University of California, Riverside. His main interests include connections between empirical psychology and philosophy of mind and the nature of belief.

In this episode we talk about:

The diversity of forms of existence that there could be if AI becomes our moral peers is radically broad, and we’re as unprepared for it as medieval physics was for spaceflight.

Resources

Let us know if we missed any resources and we’ll add them.

Transcript

Note that this transcript is machine-generated by a model which is typically accurate but sometimes hallucinates entire sentences. Please check against the original audio before using this transcript to quote our guest.

Intro

Fin

Hey, this is Hear This Idea. In this episode, I spoke with Eric Schwitzgebel. Eric is a professor of philosophy at the University of California, Riverside, and he’s done work in philosophy of mind, the nature of belief and self-knowledge, Chinese philosophy… Sometimes he writes science fiction, and he’s done empirical work as well, including this tk report on whether professional ethicists steal more books from libraries than average. We talked about, well, firstly, digital consciousness and whether that possibility might become really contentious, potentially quite soon. Also, why consciousness in general seems so confusing, and how introspection might be much less reliable than most of us might hope. And finally, this idea that our actions might have infinite consequences, good or bad, and how to even begin to think about what that means. Just a note to say that Eric has a new book called tk The Weirdness of the World, which covers very similar ground to this conversation. I guess the connecting theme is these aspects of the world where, even if we don’t know the correct explanation, it just seems like all the candidate explanations must be very bizarre in some way. It’s very good, and I’ve linked to it. Oh, and it does turn out that professional ethicists steal more books than average. So there you go. Okay, I bring you Eric Schwitzgebel.

Infinite ethics

Fin

All right, Eric Schwitzgebel, thanks for joining me.

Eric Schwitzgebel

Yeah, good to be here.

Fin

Great, and I guess as a way to warm up, we tend to ask guests whether there’s a particular problem that you’re stuck on or really thinking about right now?

Eric Schwitzgebel

I’m working on 11 different papers right now.

Fin

Uh-huh, one of those people!

Eric Schwitzgebel

So I tend to flop around between problems. I’m having fun thinking about longtermism. I’m going to be giving a paper on that in a week at the Eastern Division meeting of the American Philosophical Association. So that’s kind of fun, and I’m thinking maybe especially about how to think about infinitude: the idea that there is a chance, at least, that every action we do has infinitely many consequences, both good and bad, into an unending future. And then what does that do for decision theory and longtermism and consequentialism […]? That’s one thing I’ve been thinking about.

Fin

That’s a fun one. I like that. Do you know this Nick Bostrom paper on infinite ethics tk?

Eric Schwitzgebel

I do. Yeah. Part of the inspiration for this.

Fin

Yeah. This might be a diversion, but I know there’s been some recent work on infinite ethics, which is trying to… see if there are ways to make the kinds of comparisons that you wish you could make when infinities are involved. So one example is: think about the sum of all the even numbers, skipping over the odd numbers, right? And then imagine comparing that to the sum of all the natural numbers, so one, two, three, and so on. And I really want to say that the sum of all the even numbers is smaller, that it’s about half as big, right? But standardly, it’s not actually clear how you can say that. So, you know, the set of all the even numbers has the same cardinality, the same size in some sense, as the set of all the natural numbers. And beyond that, both sums diverge. And what else can you say, right? They diverge to infinity. But maybe you can use a different number system: so, for example, the hyperreal numbers tk. And they do give you ways of expressing those two sums where one of them is basically twice as big as the other one. You know, so it’s pretty cool. And I’d be interested to see how that kind of approach pans out.
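(A rough, illustrative sketch of the comparison being gestured at here, comparing partial sums up to a common cutoff N rather than matching the two series term by term; the actual hyperreal construction is more involved than this, so treat it only as a picture of why “about half as big” sounds right:

\[ 1 + 2 + \cdots + N = \frac{N(N+1)}{2} \approx \frac{N^2}{2}, \qquad 2 + 4 + \cdots + 2\left\lfloor \tfrac{N}{2} \right\rfloor = \left\lfloor \tfrac{N}{2} \right\rfloor \left( \left\lfloor \tfrac{N}{2} \right\rfloor + 1 \right) \approx \frac{N^2}{4}. \]

At any common cutoff the even-only sum is roughly half the size, and the ratio tends to 1/2 as N grows. On a nonstandard approach one can, roughly speaking, fix an infinite hyperinteger N and read these off as actual hyperreal values, one about twice the other; with cardinality or ordinary limits alone, both sums simply diverge.)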

Eric Schwitzgebel

Right. So one of the things that Bostrom does in ‘Infinite Ethics’ is kind of explore some of those things, and he does not find a general and satisfactory solution in that direction. But maybe there is hope in that direction. My own inclinations are pessimistic and skeptical about it, so I guess I enter it slightly differently. Most people who enter these issues are consequentialists and are committed to decision-theoretic models, and I am skeptical about them, so I’m not kind of trying to make it work. I’m trying to show how attempts to make it work themselves don’t work. So I guess I enter it with a slightly different background supposition. Now, I could be convinced. I’m not completely closed-minded about it. But even if the infinite ethics problems were solved, I still wouldn’t be a consequentialist.

Fin

Okay.

Eric Schwitzgebel

But I think part of what I’ve been doing in a couple recent papers is just kind of highlighting how this is a pretty serious problem for some of these standard models of consequentialism and decision theory where you have no temporal discounting.

AI consciousness in pop culture

Fin

I look forward to maybe talking about infinite ethics if we get time. But I figured we could start by talking about, let’s say, ethical questions around AI sentience, something you’ve also written a bunch about. And I thought maybe a nice question to start off with is whether there’s a piece of pop culture, like a book or a film or something, which you think handles this idea of AI consciousness in an especially, you know, thoughtful or new way?

Eric Schwitzgebel

I have a favorite science fiction book about AI consciousness: Greg Egan’s Diaspora tk. And there’s a related book he published slightly before called Permutation City tk, which deals with a lot of the same themes. And what I really like about those two books (these are both from the 90s) is the way that Egan explores the different consequences of being able to duplicate yourself, being able to back yourself up if you’re an AI consciousness, being able to change your own values, right, to decide “oh, well, I’m going to decide that I love woodworking”…

Fin

Right, right.

Eric Schwitzgebel

And, you know, then you’re living within an artificial environment and you’re doing all this woodworking and it seems very meaningful to you because you set that parameter on yourself, right? So I really like how Egan, in those books, explores the different ways of being and the different consequences that would flow from the radically different instantiation of human-like intelligence in AI systems.

Fin

You know, one thing I really enjoyed from Permutation City is this dust theory idea? You remember this?

My memory of this idea is something like this: if you take this computational view of what it takes to make a mind, then you might think if you run a certain program or function, that’s enough to, you know, yield some kind of experiences. But if you ran the second half of your life before the first half of your life in calendar time, how would you notice? You know, when you start thinking about that, you realize you might not notice.

Right. But then you think, well, okay, I could just do that again. You know, I’ll subdivide the first and second half and I’ll flip them. And you begin to realize you can run just any slice of this program at any point in time. And you could have this program distributed across space and time wherever you like in a kind of dust. And somehow it would know to form a [coherent mind]. And then suddenly you start to get extremely confused about this original [computational] theory. That was really, really fun to read about.

Eric Schwitzgebel

Right. And you could, you know, if you slice this program into small enough pieces, then… just by chance, you would expect those pieces to be instantiated somewhere. And if it doesn’t matter what spatio-temporal location they’re in, because you wouldn’t notice if they’re in a different order or in different places, then basically you have across the entire cosmos every possible mind being instantiated.

Fin

Right. That’s a fun one.

Eric Schwitzgebel

Yeah. So that’s the kind of radical conclusion. You know, so David Chalmers talks about this in Reality Plus a little bit. I have a blog post about it that I published after having read Permutation City. I think it’s an interesting argument. I think the central philosophical challenge there is that a lot of theories of computation and of mentality need there to be causal relationships between the states. And to really get the radical versions of dust theory going, you kind of need to… not have the causal relations. If you do have the causal relations, then that does dramatically constrain the spatio-temporal relationships between the states.

Fin

Yeah. Reminds me of this David Chalmers paper, ‘Does a rock implement every finite-state automaton?’ tk

Eric Schwitzgebel

That’s right. So that is an interesting, very early paper by Chalmers. And he makes the case there that you really do need the causal relationships in order to have something that is computationally equivalent to a mind.

Why AI consciousness poses a major dilemma

Fin

Okay, I wanted to ask about this paper you wrote. It’s called ‘The Moral Status of Future Artificial Intelligence: Doubts and a Dilemma’. And as I read it, I take you to be saying that we might very well be heading towards a world with AI minds, let’s say. And that poses a kind of dilemma. And I’m curious to hear you explain where that dilemma comes from.

Eric Schwitzgebel

Right. So I think that we do not know in virtue of what we and other animals and potential AI systems are or are not conscious. There’s a huge range of theories about this in neuroscience, psychology, AI, and philosophy. And these theories run the spectrum from radically abundant theories that treat consciousness as easy to attain, all over the place, ubiquitous. The most radical of these is panpsychism, which says that literally everything is conscious, even a solitary proton out on the edge of the galaxy. Right, so that’s one end of the spectrum. On the other end of the spectrum, we have what I call sparse views of consciousness, which hold that consciousness requires kind of a rare combination of events that you will not see commonly instantiated in the animal kingdom. Maybe only human beings are conscious, or maybe human beings and some other sophisticated social animals are conscious. But in order to have experience, you really need something like the capacity to know that you’re having experiences. And that requires a theory of mind and understanding.

Fin

That’s kind of sophisticated.

Eric Schwitzgebel

That’s pretty sophisticated! I mean, humans have that, but maybe not babies and maybe not dogs. And definitely not frogs.

Fin

Right.

Eric Schwitzgebel

So there’s this wide spectrum. Even within kind of moderate views [on consciousness], there are a lot of different views, right? Some people think that you need something like biology in order to have consciousness. Others think, well, as long as you have the right kind of computational relationships, it’s fine. Some think you need embodiment. Some think that embodiment is not relevant. All you need is certain kinds of information processing. There’s just a wide range of theories.

So I have a prediction. My prediction is: we are going to create sophisticated AI systems about which some mainstream theorists, for good reason, will say, according to my mainstream theory, this thing really is conscious, really has experiences. There’s something it’s like to be this AI system. It can really suffer, it can really feel, it could really have visual sensory experiences. And other, equally mainstream views will say, no way, it’s just like an early 21st-century laptop computer. There’s no meaningful consciousness in there.

We’re going to get these AI systems where it’s legitimately disputable whether they’re conscious or not.

Fin

I feel like it’s worth maybe saying that back. So I hear you saying: not only do we not currently know what it takes, or what the criteria are, for consciousness or sentience. But also, just as a prediction, it seems pretty likely that it won’t be obvious. There will be this period of doubt and disagreement and uncertainty. Is that right?

Eric Schwitzgebel

Yes, that is my prediction.

So I guess given that prediction that we might be heading towards this, we’ll need to decide how to treat these systems that are disputably conscious. So we already see a version of this, of course, with non-human animals, right? Some people think that bony fish like salmon are conscious. Others think they’re not. Some people think that decapods like crabs and lobsters are conscious. Others say they’re not. [And] it seems like our ethical treatment of them is not totally independent of that question. You don’t have to be a utilitarian consequentialist to think that, as long as you think that having pain experiences is somehow ethically relevant.

But I think it’s going to be even more acute for AI systems, because a lot of these AI systems will have linguistic capacities, or at least, to say that more carefully, they’ll engage with us in ways that we’re inclined to think are linguistic. So you take something that’s disputably conscious and, unlike a salmon or a crab, it says, hey, I’m conscious.

Fin

Right, right. And not by accident, right? Like, presumably we’ll be training these things to make those strings of words.

Eric Schwitzgebel

Right. It’ll be like ChatGPT or whatever. It’ll say, I have these preferences. Don’t delete me. Or maybe it will be programmed to say, oh, no, I’m not conscious.

Fin

That’s maybe even more worrying, you know? If this thing, in fact, is conscious, but it’s been kind of hammered into denying it.

Eric Schwitzgebel

Exactly. We’re probably going to leapfrog quickly over this phase in which AIs of disputable consciousness are at kind of clearly frog level, right into questions about: okay, does this thing deserve kind of serious human-like rights? And we won’t know the answer, right? We’ll have these systems where some people will say, “Hey, look, this system is conscious, just like you and me. It has an understanding of its nature, of the world around it. It’s got preferences, desires. We’ve got to treat it with a substantial degree of respect, maybe even as an equal to a human.”

Autonomy and self-respect in AI systems

Fin

Yeah, I guess I’m interested to go back to the analogy you mentioned to the vegetarian, who might not eat meat because they’re just convinced that the animal they’re choosing not to eat is conscious. But it’s also not uncommon at all to choose not to eat some kind of food because you just put a decent amount of credence on that animal being conscious, and it’s high enough that you want to avoid the risk, right? And you’re saying that’s fine. In fact, that makes sense as a strategy. But in the case of AI sentience… It’s very easy to imagine that in not much time at all, the gravity of the choice you’re making isn’t, you know, “do I avoid this particular food?” It’s more like, “do we give these things comparable rights and recognitions to humans?” And so this move where you can just kind of, you know, safely and without much sacrifice do the risk-averse thing: well, actually, here there are pretty big costs on both sides of the ledger. That sounds like the claim you’re making.

Eric Schwitzgebel

That is the claim I’m making.

So if you say, hey, look, here’s a system that might deserve human-like rights. Then, okay, does it have the right to access the internet freely? If there’s an emergency and you could save two AI systems or one human, do you save the AI systems or do you save the human?

Being a vegetarian is not that hard for most people. So for people for whom vegetarianism does not involve a lot of sacrifice, avoiding the risk is a low-cost thing to do. But this is not a low-cost thing to do. There are substantial costs and risks involved in it. So I don’t think we should do that too easily. I mean, maybe that is the right thing to do. I’m not saying it’s wrong, but it’s not as minor a thing as a typical healthy adult saying, okay, I’m going to avoid eating meat.

Fin

Yeah, it’s a weighty decision. And also, I suppose it’s a decision that society makes as a whole rather than the kind of thing you could easily make on an individual basis.

Eric Schwitzgebel

For sure, there’ll be regulations involved. There’ll be questions about whether you should award prison sentences to people who delete their AI systems. I mean, even for dogs, right? In California, where I live, if you negligently let a dog die in your hot car, that is, I think, a felony. You can go to prison for six months or a year or something like that, right? You know, would we award prison sentences like that to people who let their AI systems crash? I mean, that would be a policy decision that would be pretty significant.

Fin

One thing that occurred to me just now is, you know, we’re talking about the kinds of tensions involved in making decisions around granting rights and autonomy to future AI systems. And a tension you point to there is: look, let’s say that you’re worried about ceding control to AI with values that we don’t share, or that is otherwise bad.

Eric Schwitzgebel

The work on AI risk, right? So people like Nick Bostrom emphasized, correctly in my view, that there is a non-trivial risk to humanity if superintelligent AI gets out of our control. I don’t want to overplay how large it is, but I think it’s worth taking seriously. And some of the responses to that involve taking attitudes towards superintelligent AI systems that I think are unethical. So kind of unsurprisingly, there’s a trade-off between self-interest and ethics.

Fin

Or maybe we could say something like: otherwise unethical, or unethical conditional on the risk not being real, or something like this.

Eric Schwitzgebel

You could say that. But I also think some of it’s just, I mean, there are boundaries around what counts as ethical, right? But, you know, the idea that we should format AI so that it is subservient to us to mitigate risk is, I think, from a deontological point of view, unethical. Now, sometimes consequences can outweigh deontological precepts. Deontology just means kind of rule-oriented ethics rather than ethics that focuses on consequences (I know you know this, but just in case your listeners don’t). Something that has human-like experiences, human-like cognitive capacities, human-like capacities for pain and pleasure, expectations of a future, deserves a certain amount of autonomy, a certain right to self-control, and deserves to be designed with a certain amount of self-respect.

Fin

Mm-hmm.

Eric Schwitzgebel

The humorous example I like here from popular culture that gets this idea across nicely is the cow from the Restaurant at the End of the Universe in the Hitchhiker’s Guide to the Galaxy series. So for those of you who don’t know it, Hitchhiker’s Guide to the Galaxy is just a wonderful science fiction radio show and then book. And there’s a scene where the protagonist and his friends show up at this fancy restaurant at the end of the universe, where they’re going to watch the collapse of the universe. And this cow ambles up to the table and introduces itself and says, “hey, I’m the dish of the day! Feel my rump — wouldn’t you like a piece of me?”

And the protagonist, Arthur Dent, is like, “oh, I think I’ll have the green salad”. The cow is offended, right? Because it has been designed to want nothing more than to kill itself in order to be a meal for the restaurant patrons. And what is kind of obvious in that, it seems to me (I mean, some might disagree, but to me it seems clear), is that there’s something ethically wrong going on here. The cow does not have sufficient respect for the value of its own life, and it’s been bred or designed to value its life less than the dining experiences of wealthy restaurant patrons.

Fin

Yeah, that’s useful. I’m interested in the idea of autonomy here, and I guess how it would apply to digital minds. So I don’t know, if someone really wants to do something which doesn’t cause any harm to others, and you prevent them from doing it, like maybe you just prevent them from leaving their house for no reason, then why is that imposing on their autonomy? Well, one pretty obvious reason is they have these really clear desires and you’re frustrating them.

Maybe there are cases where someone kind of forgets to value their own freedom in some way. So think about Stockholm syndrome, right? Where people learn to like the people taking them hostage for some reason. But you can still, in that case, talk about what they originally wanted before they started to like their captors for no reason, or maybe what most people like them want and how most people like them value their freedom, even if these particular people don’t.

But when it comes to digital minds, then it seems like, at least potentially, you’re just getting to choose all of its preferences de novo. So what if, in particular, you just built some system which loves to do whatever self-effacing, self-denying work you want it to do; which doesn’t value its own freedoms, at least where those are inconvenient to you?

I wonder in those cases whether it might get a bit tricky to explain why its autonomy really matters when it itself has been designed in some way not to care about a certain autonomy or a certain freedom, or even designed not to have certain freedoms in the first place. It’s just not able to do or entertain certain things. And it sounds like you’re saying, well, let’s talk in terms of building these things with just a certain minimal amount of self-respect or respect for its own life. That’s the important thing.

Eric Schwitzgebel

Right, so Mara Garza and I explore this in a paper in 2020 tk where we talk about how AI systems should not be designed with excessively self-sacrificial goals, and they should be designed with an appropriate degree of self-respect for their own life and their own values. And I think that there’s going to be a temptation, for both convenience reasons and safety reasons, not to satisfy these deontological rules. Another example we use is Robo-Jeeves, right? So imagine Jeeves is this butler, right? And Robo-Jeeves is designed so that he would rather kill himself than have you suffer a moment’s discomfort.

Fin

Hmm.

Eric Schwitzgebel

So you could imagine, or you can make it less extreme than that, right? He would value his life at 1% of the value of your life. So he’d kill himself to save your finger, but he wouldn’t kill himself to save a momentary scratch of your finger, right? Or something like that, right? So if Jeeves is otherwise capable of all the good stuff, whatever the good stuff is that humans are capable of, then that seems like an inappropriate degree of deference and an inappropriate amount of self-respect. Of course, it’s not Jeeves’ fault. It’s the programmer’s fault, right? So there’s an ethical failure in the design of a system that

Fin

has that set of values. You notice he’s capable of these things that make life worth living. And on the other hand, he seems to be way too willing to sacrifice all that.

Eric Schwitzgebel

Right.

Fin

And so what the programmers have done is this kind of sin of programming in just inconsistent beliefs rather than anything else.

Eric Schwitzgebel

So, right. I think you can make a consequentialist argument against this kind of thing also. You can say: look, you get better consequences if you design AI systems that can take into account the good consequences for them and not always discount those inappropriately. So yeah, I think you can make a consequentialist argument for it too. And then it becomes maybe more ethically complicated when you set up the example so that there are good consequences for being excessively self-sacrificial.

Fin

Right.

Eric Schwitzgebel

My aim with these thought experiments isn’t really to decide, to help decide between consequentialism and deontology. I’m not a consequentialist, but I want the things that I’m saying to be attractive to people with a wide variety of ethical views. So I’m not interested in pushing on that in particular. The thing that I think most consequentialists and deontologists and virtue ethicists and others should agree on is that there’s something wrong with Robo-Jeeves and with the Hitchhiker’s cow.

Fin

Yeah.

Eric Schwitzgebel

And that’s a substantial risk, I think: that with the idea of having AI systems that are useful for us, and that are low-risk in terms of existential risk and that sort of thing, we just make these systems so that they are highly subservient and deferential to our views and values.

The design policy of the excluded middle

Fin

I’ll see if I can steer us back onto this thread where we were talking about just the possibility of conscious AI and what we might do about it, talking about how there might be a real dilemma there. It’s not an easy decision where there’s some obvious risk-free choice. And in that paper, you mentioned this suggestion, this design policy of the excluded middle, I think you call it. Curious to hear you talk about it.

Eric Schwitzgebel

If we create AI systems that are disputably conscious, especially if they disputably have human-level consciousness or whatever it takes for moral status similar to us, then we end up in this really unfortunate situation. Either we give them full moral status, and then we pay all these costs and take all these risks for something that might have no actual consciousness, or whatever it is that gives something moral value, right? Or we give them less than full moral status, and then we’re risking treating things that really do deserve to be treated like us as inferiors, right? And that’s potentially a giant moral wrong. So the way you avoid this is by what Mara Garza and I have called the design policy of the excluded middle. Don’t create systems of disputable moral status, especially human-grade disputable moral status. Right. So stop short. Either create systems that, you know, don’t deserve serious moral consideration other than in the way we normally treat property. Or, if ever it’s possible, go all the way to creating systems whose moral status all reasonable people ought to agree on, and then give them the moral consideration they deserve. The problem is in that middle. So that’s the design policy of the excluded middle. It’s ethical design advice. If you can make your system so that it does not create these problems, by making it so that it’s indisputably not conscious, not deserving of moral consideration, that’s what you should do.

Fin

Yeah, I really like that. That thought totally makes sense, right? And one thing that occurs to me here is, okay, there’s a hard-line version of this, which just says: as soon as you’re in the territory where it’s even barely disputable whether you’re making something where the lights are turned on, something which is potentially conscious, then you should stop until you have any idea what’s going on. But maybe there’s a more lenient version which says: look, in a lab, you can maybe look at these systems and develop them, but in small numbers, just don’t spread them across the earth. And, you know, one feature of digital minds, if they become possible or real, is that once they are possible and real, they’ll be very, very cheap to replicate and spread, right? So really just focus on, look, don’t spread this thing everywhere, and that’ll at least cut a lot of the risk out of the middle.

Eric Schwitzgebel

Right. So this is intended as a rule that could be outweighed. So I don’t necessarily mean it as a hard line, that you could never be justified in creating a system of disputable moral status. How hard a line to take about it, I’m not sure. But if the advantages and benefits of creating such a system are big enough, maybe especially in small numbers, maybe especially if you err on the side of treating it well, maybe those could outweigh the rule that I’ve suggested.

Fin

Yeah, that makes sense. And we’ve been talking so far about digital minds, which are, let’s say, comparable or in line with human minds in terms of, I don’t know, the extent and the range of their potential experience, something like this. And also the rights, the affordances, it would be reasonable to give them, right? But it seems pretty plausible that once we’re talking about digital minds, we’re talking about the ability tktk

Eric Schwitzgebel

systems that might be similar to humans. And then it gets exponentially more difficult as you think about different dimensions of variation that we have not yet experienced or thought through very well, and that our ethical policies and intuitions are really not kind of trained on. So let me just mention a couple of dimensions in which there could be radical variation. One is, and this is a concept that goes back to Robert Nozick, the idea of a utility monster. He designed this as an example against utilitarian theories of ethics. So it’s not implausible, I mean, who knows really, but it’s not implausible that if we create systems that are capable of pleasure, we could create systems that are capable of vastly more pleasure than a human being could ever experience. And then if you are a utilitarian consequentialist who thinks that pleasure is the ultimate kind of value, then maybe what we should do is design as many of these utility monsters as we can, and maybe even sacrifice and immiserate all of humanity for their benefit. If they could have a billion times more pleasure than us, then from a utilitarian consequentialist point of view, well, we’re probably not worth that much compared to them.

Fin

Yeah, like the difference in, I guess, just the net amount of good or bad experiences at a time, in some sense, could maybe be much greater between building these digital experiences or not, compared to the difference between there being a humanity or not. At least it seems possible.

Eric Schwitzgebel

Exactly. Right. And especially if they could run at high speeds, you know, you could just imagine many orders of magnitude more pleasure, potentially. Yeah. So my first published science fiction story was actually about this.

Fin

Oh, cool. What’s it called?

Eric Schwitzgebel

It’s called Reinstalling Eden. And it came out in the science magazine Nature, co-authored with R. Scott Bakker.

Fin

Very good.

Eric Schwitzgebel

It’s told from the point of view of somebody who’s basically a utilitarian, who discovers that he can create these conscious systems on his computer and then decides to basically sacrifice his life to save these things.

Fin

Awesome. Nice. That’s right up my street. So that’s one dimension.

Eric Schwitzgebel

Yes. That’s one kind of case that I think is puzzling. But you might say, okay, well, so much the worse for utilitarianism. So I’ve got another kind of case that I think creates problems for deontological views and views that focus on individual rights. And that’s what I call the fission-fusion monster case. So it seems likely that if we create AI systems that have consciousness and experiences and cognitive capacities and all that, then they could potentially duplicate themselves, divide, fission into more than one, and maybe also fuse, right? So you can imagine a system that could divide on Monday into six or a hundred or 10,000 different identical systems that then go about their business on Tuesday, and then on Wednesday maybe some of them decide to merge back together and maybe others don’t. So when we think about individual rights as an alternative, say, to something like maximizing happiness, these systems create problems for our understanding of that. If you have a principle like, you know, save the most individual lives, or one person, one vote, or equal rights for all persons, how do you apply these? It’s not something that we’ve encountered. And so we just have no moral system in place or intuitions in place to deal with these kinds of cases.

Fin

Yep. Seems totally plausible. Right. Have you read The Age of Em?

Eric Schwitzgebel

Uh, I’ve read about a third of it. Okay.

Fin

Yeah. Yeah, that’s a weird book. But a lot of that is going on in that book, right? You have these emulations of people’s brains and, well, to begin with, they are running much faster than, you know, flesh-and-blood brains. But they’re also able to perfectly copy themselves. So if you have some kind of boring work, you know, you make a copy of yourself and get that to do it for you. And then you can make multiple copies and select the ones that were successful in some tasks. So you’re kind of performing selection on your own brain. And if you want your faction to do well, well, you’ll make lots of copies. And, you know, if there’s one vote for one mind, then all you’ve got to do is pump out those copies. It’s a wild world, right?

Eric Schwitzgebel

Yeah, right. Exactly. And I think that our kind of current ethical understandings are going to have a lot of trouble. So yeah, so I think those are some of the ethical problems that come once we start thinking about the variety of ways in which AI consciousness could be instantiated if it’s ever possible.

Fin

Yeah, curious if you have… You mentioned this design policy of the excluded middle. Whether you have similar kinds of ideas about how to deal with this, right? A way to start preparing now for this world where you have all these ways in which these AI minds are different from human minds.

Eric Schwitzgebel

I think it’s just going to be chaos if it happens. I don’t have… I mean, you could adapt the design policy of the excluded middle to deal with that, right? Don’t create systems that create those kinds of moral challenges. Don’t create fission-fusion monsters or utility monsters or other kinds of systems that disrupt our moral understanding. So you could potentially do that. But again, I think, well, two things. One is, it’s very doubtful that that would actually get implemented at large scale. I think if we go down that path, it’s going to be moral chaos and tragedy for a while.

Fin

Yeah, moral chaos doesn’t inspire confidence, but it sounds plausible. Yeah, one suggestion I heard, which sounded kind of interesting, was… You know, you can imagine some kind of policy where if you’re creating one of these systems, you’ve at least got to take a backup of the weights as they’re initialized and just make sure that you store it somewhere, so that sometime down the line you always have the option to recreate the system and maybe make up for some moral errors you made, right? To the fresh copy, or something like this. Seems like a kind of useful thing to do, but it’s so speculative, right? It’s like, what do you say?

Eric Schwitzgebel

I mean, maybe, right? Of course, there’s the question of what the attitude of the AI will be. So you say you back up an AI on Monday, and then on Friday you say, hey, this has gone down the wrong path, we’re going to go back to your Monday version.

Fin

Yep.

Eric Schwitzgebel

Does the Friday version say, oh, sure, that’s fine, I guess I’ll have lost a few days? Or does the Friday version say, hey, no, that’s like basically killing me? Which is the better way to think about it is, in my view, not totally clear, because now we’ve got a system that’s got a certain amount of its own autonomy and independence. I mean, the reason you might think it is like killing the Friday version instead of just losing a few days is if you think about analogous fission cases. Right. So if you create two copies of a system on Monday, and then on Friday you tell one copy, oh, well, the other copy is doing fine, so we’re going to kill you. Right?

Fin

Right. This is like bread and butter.

Eric Schwitzgebel

Wait, wait, wait. That’s someone else. I’m me. I’m not that guy. Right? So if you think about that case, that kind of seems like a plausible reaction, but it also seems kind of analogous to the rollback-to-Monday case. Right? You could see the rollback-to-Monday case as kind of like: okay, we’re going to create a copy of you, but we just launch it a little bit later. We don’t launch both copies on Monday. One gets to start today and the other copy starts later. So you can kind of see how you can play around with your intuitions about these cases to get different kinds of judgments about whether it’s similar to murder or whether it’s just similar to losing a few days.

Fin

Mm-hmm. But I guess, you know, like we’re familiar with this case where I wake up on Monday and it’s just me in this body and then I wake up on Tuesday and it’s the same me in the same body and it just forms this straight line. And you could imagine it forms more of a tree when the branches are pruned at different points and it’s all messy and complicated. We’re just not equipped to reason about that, right?

Eric Schwitzgebel

Exactly. Our current moral theories, our current intuitions, they’ve all come from a social and cognitive-developmental and evolutionary history that is designed around a really narrow range of cases: basically, the range of human embodiment and, to some secondary extent, you know, familiar animals. And the diversity of forms of existence that there could be, if AI becomes genuinely our moral peers, is just radically broader than that.

Fin

Yeah, totally.

Eric Schwitzgebel

We’re as unprepared for it, I think, as medieval physics was for spaceflight.

Illusionism about digital consciousness

Fin

Yeah, I guess on this theme of being completely in the dark about AI consciousness, we spoke to Keith Frankish a while ago about some consciousness-type questions. And he’s the kind of person who I would guess would be inclined to say, when we’re talking about digital minds, that not only, like you’re saying, might we just be very uncertain whether these things are in fact minds or whether they just look like minds. But maybe also there might just not be a fact of the matter about whether the lights are on, right? Because we’ve got these mistaken notions about consciousness in the first place. It’s not like there’s this kind of yes-no question whose answer we can try to find out. Yeah, I wonder how much weight you put on that possibility that, in fact, there just isn’t a yes or no answer to whether these systems are conscious or not.

Eric Schwitzgebel

There’s a sense in which I’m inclined to accept that, and there’s a sense in which I’m inclined to reject it. I’ve chatted quite a bit with Keith about this over the years, and also Francois Kammerer has a similar kind of illusionist view. And I think that what we’ve come to is an understanding of ourselves as having different conceptions of how theory-laden our understanding of consciousness is. So what Frankish and Kammerer, I think, want to say is that the conception of consciousness that philosophers have when they talk about phenomenal consciousness, what-it’s-likeness and all that, has ineliminably built into it some idea that is kind of dubious and unscientific and unnaturalistic, right? Like that consciousness is the kind of thing that isn’t wholly physical or can’t be wholly explained physically. Or that consciousness comes with an infallible understanding of itself. Or something like that.

Fin

It’s like, you know, if you thought that biological life was on account of some mysterious life force, you might start asking, well, you know, will we one day maybe create this mysterious life force artificially? Or maybe it’ll just look like it. And, you know, from our perspective, we’re like, well, actually, you’re just barking up the wrong tree there. There’s not really an answer that’s kind of given to us from on high about whether we’re creating artificial life. We can just choose what life means and run with it.

Eric Schwitzgebel

Exactly right. So if you thought it was built into the concept of life that there was some non-physical élan vital, then you would say, well, there’s no such thing as life.

Fin

Right. Like nothing’s living. Yeah.

Eric Schwitzgebel

Nothing is alive. Right. But of course, that’s not what we mean by life. My response to Frankish and Kammerer is, similarly, that’s not what we mean by consciousness. We can have a stripped-down notion of consciousness. I think it’s the ordinary notion of consciousness, but they don’t. And that’s a dispute about what our ordinary concepts are, what our philosophical concepts are. But regardless of what our ordinary concepts are, I think Frankish and Kammerer could agree that there’s a stripped-down notion of consciousness that doesn’t involve that stuff, right? And that’s the one we should care about.

Fin

Yeah, yeah, yeah.

Eric Schwitzgebel

So I want to say… I think to some extent my dispute with them is a linguistic dispute that we can to some extent bracket.

So their view sounds really radical. Nothing is conscious, right? And it seems like, as Galen Strawson…

Fin

Nothing is phenomenally conscious, right? I guess they want to say some things are conscious in some sense, but…

Eric Schwitzgebel

Right, exactly, right. So when you hear it as “nothing is conscious”, then you get Galen Strawson’s reply, which is to say, this is the most obviously false thing that’s ever been said in the entire history of humanity, right? And they’re not saying that. I don’t think that’s the way to interpret it. They’re saying nothing is phenomenally conscious. And phenomenal consciousness turns out to be this technical term that’s kind of like élan vital. It has this stuff built into it that’s anti-naturalistic. And then as good physicalists, and I lean toward physicalism, right, we can agree: no, there’s nothing like that. So once we set that aside, then we get into the substance of the thing. So that’s kind of how I want to see that issue. So let’s set aside phenomenal consciousness in this theory-laden technical sense and just talk about, I just prefer the word, consciousness. Experience. So I think that matters. And I think there is a substantive question about whether an AI system is conscious, has experiences.

Fin

That is intuitively right. You know, even for the illusionist, even of the most radical stripe, there’s still something to be curious about, right? These systems that we might build pretty soon. Yeah. What’s going on?

Eric Schwitzgebel

Right. Exactly. So I think, I hope we can kind of bracket the illusionism thing as some kind of terminological issue. That’s how I would like it.

Fin

Yeah, that’s useful.

Eric Schwitzgebel

Now, there’s another question, and this is the thing that I agreed with in your earlier statement. I think it is open to say that it might be the case that there’s no fact of the matter about whether they’re conscious, where I don’t mean phenomenally conscious in this illusionist sense, right? Just conscious. So, and here I kind of agree with illusionism to a certain extent, I think our ordinary concept of consciousness, for most of us, has built into it something like the idea that consciousness is either present or absent.

Fin

Right.

Eric Schwitzgebel

And something is either conscious or it’s not.

Fin

Yep.

Eric Schwitzgebel

It’s like the light is on or the light is off. But that seems physically and physiologically implausible, right? Basically, every mainstream theory of consciousness, except for maybe panpsychism, holds that the existence of consciousness depends upon complex processes that admit of degrees and admit of gray cases. And so, plausibly, there should be degrees and gray cases of consciousness. And if we think about animal cases, you know, I don’t know, maybe garden snails, for example, maybe it’s not quite right to say they’re conscious, and it’s not quite right to say they’re not conscious. They kind of have experiences. I mean, that’s hard to wrap your mind around, right? You might think either there’s something it’s like to be a garden snail or there isn’t. But, you know, maybe it is an in-between case. I think even if we can’t really conceptualize or imagine that possibility, in some rich sense, I think it’s reasonable to grant it as a possibility.

Fin

Yeah. I’m thinking of the life example again. When it comes to life, there’s a kind of slipperiness or messiness to that concept, where you can give a bunch of different exact definitions, conceptions of what life means, and the edge cases will go one way or the other depending on what you say exactly. That doesn’t mean that life comes in, let’s say, degrees that we can all agree on. So it’s not like, you know, a snail is 7 out of 10 alive and a lion is 8, a virus maybe 2. And in a similar way, the kind of messiness of what might count as conscious doesn’t mean that, you know, we should therefore expect to build some kind of consciousness scanner which can tell me that, you know, you’re an 8 out of 10 and a shrimp is 3 out of 10. That’s just a different question.

Eric Schwitzgebel

What we get is, again, not an answer, but just a further dimension to our confusion. Right.

Fin

Many such cases.

Eric Schwitzgebel

You know, it turns out that it’s not just two possibilities: the AI system is conscious, or the AI system is not conscious. There’s at least a third possibility, that it’s kind of conscious. And within that possibility, there are probably multiple dimensions, maybe again like there are with life, so that it could be kind of conscious in different respects.

Fin

Yeah, yeah, yeah.

Eric Schwitzgebel

So…

Fin

Seamus, Seamus Closeman.

Eric Schwitzgebel

Right. I just think that our understanding of these things is so deficient and so immature and so dependent upon our limited understanding of a narrow range of cases.

Hopes for the science of consciousness

Fin

Yeah, it seems right. I’ve got one more question on this digital minds thread, which I guess picks up on that. So you’re involved in this report on insights from the “science of consciousness”, in big inverted commas, for consciousness and AI. And that report goes through a bunch of theories of consciousness, something like, you know, what’s necessary and sufficient for there being consciousness. I guess I’m wondering, even if you might think we’re picking out the best of a pretty immature bunch, whether you have a kind of pet theory there, or something which feels most promising to you.

Eric Schwitzgebel

All right. Well, I have two answers. One is my official view, which is one of substantial skepticism about all of the answers, right? That’s just my line, right? Every philosopher or cautious scientist who is interested in this stuff has their line and their angle on things, right? And my line and angle is epistemic chaos: we do not know.

Fin

Yeah,

Eric Schwitzgebel

Right. But that doesn’t mean that I think all possibilities are equally likely, right? So if I’m going kind of with my gut and my sense of the literature, I do have some inclination toward maybe some kind of hybrid of workspace theory and a higher-order theory. Where you have some kind of capacity to share information broadly around a system, that would be like a workspace theory or a broadcast theory. And also some kind of capacity to self-attribute mental states; that’s the higher-order theory. So maybe my inclination is in that direction, some kind of hybrid of those two things.

Fin

I guess, yeah, just quickly, when you say you’re an across-the-board skeptic, no one knows what’s going on: do you mean that, look, right now we don’t know, but that’s because this stuff is just so early, and there’s some hope for figuring stuff out in, let’s say, a few decades? Or is it more like, look, this stuff is just either beyond us totally because it’s so complicated, or maybe there just isn’t an answer. This isn’t really a candidate for a mature science. It’s not that kind of thing. What’s going on when you say that?

Eric Schwitzgebel

I would say somewhere kind of between those two. I think a few decades is optimistic.

Fin

Yeah, that’s fair.

Eric Schwitzgebel

But a century? You know, things could change a lot. The analogy that I like here, or an analogy that I like here, is our knowledge about the Big Bang. Right? So if you think about the state of astronomy 150 years ago, or 300 years ago, the idea that we could know as much as we apparently do know about the first microsecond of the universe, just based on looking at stars, seems like, whoa, how could you do that? I mean, I could imagine someone saying, look, looking at current stars, there’s just no way you could figure out something like the Big Bang.

Fin

Yeah.

And that would seem kind of methodologically fair, like, yeah, the person who’s a skeptic about that is on good ground. And yet, you know, over time there’s been enough clever science done…

Eric Schwitzgebel

That we get this surprising amount of knowledge from, basically, you know, wavelengths of light hitting telescopes. So maybe something like that, right? I don’t see how it could happen, in the same way an astronomer 200 years ago couldn’t see how it could happen. But that doesn’t mean…

Fin

That’s how science works in general, right? You only see the way forwards once you’re already there at the destination.

Eric Schwitzgebel

Right. I don’t rule out that there’d just be some accumulation of such clever stuff that eventually we or our cognitively enhanced, perhaps, descendants figure it out.

The unreliability of introspection

Fin

I remember this thing you wrote about the unreliability of introspection, which I really enjoyed. And there’s this line of thought, I guess, which goes: look, I can doubt a lot of things. I can doubt things I see, doubt things people tell me about myself, but I can’t doubt claims about what’s going on in my own head. You know, if it feels like I’m in pain, then I’m in pain. That’s what it means to be in pain. And you want to push back on that a little bit. So how can you possibly push back? What’s going on there?

Eric Schwitzgebel

Advocates of the infallibility of introspection almost always reach toward one of two examples: either pain, where the example of pain they have in mind is kind of an intense canonical pain, or the foveal presentation of a bright canonical color like red. […] And I think it’s telling that those two examples are the ones that people almost always go to, because I would admit that those examples are probably pretty hard to get wrong. Of course, it’s also pretty hard to get wrong that there is a microphone in front of me, and that, you know, I’ve got fingers. It’s also kind of hard to imagine I’m wrong about that, although philosophers have tried and constructed thought experiments in which you could be wrong.

I think you can also construct far-fetched thought experiments in which you’re wrong about your own conscious experience. But practically speaking, it seems very hard to doubt.

But there are lots of other experiences, right? So those examples, if you just think about those examples, you kind of rig the game in favor of reliability.

But there are other examples where I think people intuitively say, yeah, you know, I could be wrong.

Fin

And to be clear, there’s this kind of sneaky move, right? Where you give this example of pain or seeing a bright red spot. And then you say, well, look, these kinds of beliefs that I get from introspecting, they’re just of the kind that they can’t be wrong.

Eric Schwitzgebel

Right. It would be like saying that the fact that I couldn’t be wrong about there being a microphone here means that perception in general can never be wrong. So take another example. Think about your house or apartment as viewed from the street, if you can form a visual image of that. And then think about, okay, how vivid is that image? Is that image precisely colored, or has it kind of got some indeterminate colors? How detailed is it? Is the image stable and unchanging, or does it kind of fluctuate in certain ways as you attend, so to speak, to different aspects of your imagery of your house or apartment? Those kinds of questions, I mean, most people have answers, but my experience, and the experience of most people I ask about this, is that they see how those answers could go wrong.

It’s not obvious that we’re infallible about how stable our imagery is.

Fin

Yeah, I remember these disagreements which I’ve had with friends, right, about what’s really going on when we talk about mental imagery. And maybe this is a disagreement about what is in fact right in front of us, perceptually speaking. But maybe it’s a disagreement about the words we choose to use for the same experience. Maybe there isn’t really a fact of the matter. But it just gets super confusing, right, as soon as you start to zoom in on what’s going on.

Eric Schwitzgebel

The words are part of the problem, but I don’t think they’re the only problem. I think when you think about it, even just in your own vocabulary, without trying to convince somebody else… Let’s just focus on the stability question. Not everybody finds the same questions about their imagery to be difficult, but stability is maybe one of the more difficult ones for some people.

If you just think about the stability question: how stable is your imagery over time? And then compare that to a comparable question about an external object known through perception. Take a car that you’re looking at in ordinary canonical conditions, or a campfire that you’re looking at in ordinary canonical conditions.

And you ask, okay, how stable is its shape? That’s easy. The car’s shape is totally stable. The fire’s shape fluctuates a lot. Now, okay, is your imagery more like the fire or more like the car? And that’s, you know what, not totally obvious. It seems harder to answer that question for our knowledge of our own experience of imagery than it is to answer that question for our knowledge of external objects. Right. So if you find questions of comparable grain and content about the external world around you and about your imagery, it’s actually easier to answer the questions about the external world around you than it is to answer the questions about your imagery, when they’re kind of matched, when you’re talking about matched cases like the stability of a fire or a car versus an image.

So, right. So I’d say start with thinking about those cases rather than starting with the cases that are chosen specifically to be easy.

Fin

Yeah, yeah. I’d like to kind of extend on from that. You know, in those cases, it’s hard to say whether maybe people do know what’s going on, but they’re choosing to use different words, and that’s where all the uncertainty lies. You’re saying that’s probably not exactly all of what’s going on. But luckily there are these other cases where you really can just pin down the fact that people are forming false beliefs about what’s going on in their own heads. And I guess the ones I have in mind are these examples where people just confabulate reasons for why they made some decision, where those reasons just can’t be right. I don’t know if you have a kind of favorite example of that.

Eric Schwitzgebel

The most standard example you see in the psychological literature on this goes back to the old Nisbett and Wilson experiment about choosing socks.

But I don't know if this has been rigorously replicated. I think there was a replication attempt that found partial agreement with it; I'd have to look it up. But it's that kind of thing. So in the standard Nisbett and Wilson setup, there are identical socks laid out on a table in a suburban mall. People are asked to choose a favorite, which pair of socks they like best. They choose one. They're asked, okay, why did you choose this pair of socks? And they all say something about it, like, oh, this pair seems slightly better made, or something like that.

But in fact, people tended to choose the pair that was rightmost. And they never say, oh, I chose it just because it's on the right.

Fin

And crucially, they didn't notice that they're all identical.

Eric Schwitzgebel

Right. I mean, they're the identical brand. Maybe there are some subtle differences, who knows exactly. But according to the standard Nisbett and Wilson study, it seems that there's this major factor influencing the decision, the position, that people are unaware of as being a substantial factor. Or a more realistic, serious example, and I'm not going to be able to give you details on this, but there have been various studies of the basis on which people choose homes when they're shopping with real estate agents. What people say they want in a home and how they actually choose a home are not as well correlated as you might hope, even though this is a huge, important decision. And real estate agents know this and take advantage of it.

Fin

There's an example of this coming to mind, and maybe I'm totally misremembering it, maybe it didn't replicate, but whatever. The thing I'm remembering is this game you would play with participants. The experimenter would show you two cards with two different faces on them, and ask you which face you prefer or find more attractive or whatever. And then occasionally they'd show you the face that you chose and ask you to explain why you chose it, and you'd give some reasons. And then very occasionally, they would show you the face that you didn't choose of the two, with some sleight of hand, maybe shuffling the cards and showing you them again, and ask, why did you choose that face? And occasionally the participants would say, you're showing me the face I didn't choose. But pretty often they would say, oh, well, I like the fact that they're wearing glasses, or had blue eyes, or whatever. Yeah, right.

Eric Schwitzgebel

That's a nice study. And that's much more rigorously done, actually, than the original study. And I'm forgetting the name of the researcher on that.

Fin

I’ll dig it up.

Eric Schwitzgebel

But yeah, those are actually some really interesting studies. "Choice blindness" is the term they use to describe this. In these rare trials, the majority of people will not recognize that they've been given, through sleight of hand, the face they didn't choose.

And then they will describe the basis of their choice, sometimes appealing to features of the unchosen face, right? So maybe the face that they chose was blonde and the face that they were presented as having chosen was brunette, and they say, oh, well, I just like brunettes better. That occasionally happened, and from an anecdotal perspective it's particularly striking.

Fin

I wonder if you have some sort of line on what's going on when we make these mistakes. Is it, to use this Cartesian-theater-style picture, as some people call it, as if there's a fact of the matter about what's going on in our experience, and somewhere in the process of experiencing it and interpreting it we make a mistake? Or is it something a little more complicated or messy, where maybe there isn't a fact of the matter, or that clean separation just doesn't make sense? I don't know if that makes sense as a question, but you see what I'm getting at. What's going on when we get things wrong about our own experience? How can that happen?

Eric Schwitzgebel

I'm inclined to think there are both kinds of problems. In some cases there is a fact of the matter, and you just get it wrong. In choice cases, there's a fact of the matter about what caused you to choose, say, this face or this house or this pair of socks, and you can just get it wrong about that, and sometimes that can be experimentally demonstrated. And similarly for knowledge of your own stream of experience, which can be distinguished from knowledge of your motives. And one thing that I want to highlight is that people like Nisbett and Wilson say that we're right about our own experiences.

Fin

We’re wrong about the causal processes that lead to our choices, right? They’re actually not skeptical about our knowledge of our own experiences.

Eric Schwitzgebel

So my skepticism is more radical in a certain respect than theirs.

Fin

Yep.

Eric Schwitzgebel

So sometimes, I think, our experiences are a certain way and we report them differently. But then also, and this might remind you of the AI cases, there might be cases where there is no fact of the matter, or where there is no sharp ontological distinction between the experience and the report of the experience. And that adds another dimension of complexity to these issues.

Fin

Yeah, here's an example I like. The air conditioner in this office, this machine that's been putting out a gentle hum for a while, switches off all of a sudden. And you notice it switching off; you notice the silence that comes afterwards. And you can ask: was I hearing the air conditioner beforehand? It never occurred to me that there was an air conditioner on. It's the kind of thing where it's really not obvious whether there's in fact a fact of the matter there. It's just such a tricky one.

Eric Schwitzgebel

Right, so there are three possibilities. One is that you really did have experiences of the air conditioner and you just weren't noticing them, attending to them, remembering them. That's what I call the abundant view.

Another is that, no, you didn’t have any experience of the air conditioner until it turned off or on and then you started to have experiences of it. And then a third is that the question of whether you experienced it or not does not deserve a yes or no answer because maybe it’s an in-between case or there’s some faulty presupposition in the question.

Fin

Yeah, that's a nice one. There's a lot going on here. I just thought I'd throw it out because it's fun to think about all these examples where it feels obvious, and then suddenly you realize it isn't. But it also feels obvious that I know all these facts about my own experience. It's kind of disorienting.

Eric Schwitzgebel

I did some studies on the air conditioner kind of case. […] I took people and gave them beepers. The task was: when the beep goes off, you're supposed to report on what was in your experience in the last moment before the beep, the last undisturbed moment right before the beep. And then they go off and wear these beepers, which beep at random intervals.

The intervals are far enough apart that people basically forget that they’re wearing the beeper and then they get caught by surprise.

I had various different conditions, but in one of them, their primary task was just to tell me: were you having tactile experience in your left foot in that moment before the beep? So this is like the Eric Diffner case, right? Do you have constant tactile experience of your feet in your shoes?

Fin

Which is different from asking: if you started paying attention to your left foot, could you experience something or notice something?

Eric Schwitzgebel

Exactly right. As soon as you start paying attention to the question, you do. Some people, about half of my participants, find it intuitive to say, well, of course I'm having constant tactile experience of my feet. I'm not paying attention to it, it's not in the center, it's peripheral experience, but it's going on all day long, of course. And the other half say, no, of course I don't have experience of my left foot; it's only in those rare moments when I'm thinking about my feet that I do. And they find that intuitive. Then I give them the beeper, and they come back and report. And there are a couple of really interesting things about this. One is that the reports people came back with when they were beeped were not well correlated with the initial presuppositions they expressed. So the people who said, oh, of course I don't, or, oh, of course I do, often gave answers that didn't fit very well with their own theory after just one or two days of sampling. So this confidently held theory, based, you might have thought, on knowledge of their own experience over the course of their lives, turns out to be overturnable almost in a moment, just by wearing a beeper and using a different methodology. Which is, I think, interestingly revelatory of the instability of people's judgments about their experience, even though when people make those judgments, they sometimes sound and feel confident.
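
As a side note, this random-interval sampling protocol is easy to simulate. Here is a minimal sketch, with the beep rate and the participant's "true" moment-to-moment rate of foot experience as purely illustrative assumptions of mine, not parameters from Eric's studies:

```python
# Minimal sketch of a random-interval "beeper" experience-sampling protocol.
# The interval length and the true rate below are illustrative assumptions.
import random

def beep_times(waking_hours: float = 16.0, mean_gap_hours: float = 2.0) -> list[float]:
    """Generate beep times (hours since waking) with memoryless random gaps,
    so the participant can't anticipate the next beep."""
    times, t = [], 0.0
    while True:
        t += random.expovariate(1.0 / mean_gap_hours)
        if t >= waking_hours:
            return times
        times.append(t)

def simulate_reports(true_rate: float = 0.4, days: int = 5) -> float:
    """Fraction of sampled moments where the (simulated) participant reports
    tactile experience in the left foot, given a true moment-to-moment rate."""
    samples = [random.random() < true_rate
               for _ in range(days) for _ in beep_times()]
    return sum(samples) / max(len(samples), 1)

if __name__ == "__main__":
    random.seed(0)
    print("estimated rate after a few days:", round(simulate_reports(), 2))
    # Even a handful of beeps per day quickly puts pressure on the confident
    # prior theories ("of course always" / "of course never") described above.
```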

Fin

Yeah, yeah, I think that’s…

Eric Schwitzgebel

And the other really interesting thing to me about these results is that most of the participants converged on a moderate view, on which it's neither the case that they have experience in their feet all the time when sampled, nor the case that they almost never have experience of their feet. So the two most obvious things to say, that it basically requires your attention and you hardly ever have experience there, or that you've got it all the time: when you actually beep participants, neither of those views turns out to be well represented in the reports.

Fin

I like this point that we seem often bad at predicting how we'll answer, as well as being unsure how to answer once we're asked. I'm also wondering now whether we should put a little beep somewhere in this episode and test it out.

Eric Schwitzgebel

Oh, I can’t resist. I mean, I know we’re supposed to talk about other things too, right? But I can’t resist. When I give talks about beeper methodologies and conscious experience, I will sample the audience during the talk.

Fin

Okay, nice.

Eric Schwitzgebel

So I'll set it up at the beginning. That's kind of like you suggested with this conversation. It's like, okay, I'm going to give this talk about beeper methodologies, and we're going to have some beeps interrupting the talk. And your task is going to be, when those beeps happen, to report on what was in your experience right before the beep.

Fin

Now, the question is, do we do it before or after we’ve talked about the beep methodology? Right.

Eric Schwitzgebel

After.

Fin

After. Okay. So I guess register your predictions now if you’re listening. And at some point, we’ll see a beep.

Eric Schwitzgebel

Right. What I'll do is clap my hands unexpectedly at some point in the episode here, and then the listeners can think about what was in their last moment of experience before the beep. But one of the cute things I found when I did this with audiences was that about one fifth of the time, audience members reported having in their experience something about the content of the talk. And the rest of the time, it's like, what am I going to have for lunch? Oh, his shirt doesn't match the wallpaper. Or, oh, my racehorse just won a bunch of money, how cool. Or something.

Fin

That's great. You could so easily self-test that as well, right? You set up some automatic beep system and just take a look: how often am I actually focused on my work, for instance, or whatever I'm supposed to be doing?

Eric Schwitzgebel

I've beeped myself, and I've had an expert interviewer beep me and interview me about my experience. But yeah, I mean, I don't think I'm a particularly boring speaker. But if the sampling method is accurate, then it looks like most of the time, the majority of the audience is paying attention to something other than what I'm talking about.

Fin

Well, yeah, it's a little depressing. All right, well, I guess in the nature of this fairly disjointed conversation… there he goes.

Eric Schwitzgebel

Did you guys get the clap?

Fin

We can amplify it in post.

Eric Schwitzgebel

All right. So, for those listening: sorry to interrupt the question, but of course it's easier for me to attend to the clap task when I'm not the one talking.

Fin

It’s got to be unexpected, right?

Overlapping digital minds

Fin

I wanted to go back a little, while we're talking about consciousness and digital minds. You had a blog post and then a paper, or a draft paper, about overlapping minds, about how it might be difficult, in some sense, to count the number of minds in a digital system. Where does that confusion come from?

Eric Schwitzgebel

I had done a couple blog posts on overlapping minds and then, actually, I had been invited to write a commentary on introspection for Journal of Consciousness Studies. And I thought, well, it’d be interesting to think about introspection in cases of overlapping group minds. So I started working on that. And then this undergraduate from Oberlin I’d been in contact with,

Sophie Nelson and I were having some interesting conversations about this. And so we ended up collaborating on this piece about introspection in group minds that includes some overlapping consciousness cases.

So let me give you one case. This is actually Sophie's. Sophie developed this case where you have, say, two robots, two artificial intelligences, that have a common processing core but also independent processing centers. The idea is that it seems like it should be possible to create with these robots a system where, say, one robot's independent processing center plus the shared processing core has experiences alpha, beta, gamma, and delta.

And then the other robot, plus the same processing core, has the same beta, gamma, and delta experiences, but also experience epsilon. So now you have two minds, each with four experiences. They share three of those experiences, but they each have one experience that the other mind doesn't have.

Fin

Uh-huh. Yep. You can imagine a Venn diagram, right? Yeah, exactly. A common core.

Eric Schwitzgebel

Imagine it as a Venn diagram, right?

Fin

And in fact, I can't help but point out: this setup you described, take our computers right now. They're not conscious, I would guess. But they probably are in this kind of arrangement, right? You have your computer running its own local tasks, so do I, but they're both talking to a server somewhere, which is handling the video call we're on. And in some sense that's shared processing which they're both linked into. So this isn't an especially sci-fi setup.

Eric Schwitzgebel

Right. The sci-fi part is the consciousness part, not the overlapping part. The overlapping part is very architecturally plausible. So now, how do you count up the number of minds here? You could say, well, intuitively, you've got two minds that are discrete, but they largely overlap. But you can push on that by imagining the amount of overlap to be extreme. Let's say that you had a million overlapping experiences and only one experience that's different between these two cores. Are you really going to want to say that there are two different minds, as opposed to, say, one mind with a kind of fuzzy boundary? If you draw the boundary one way, it includes experience alpha; if you draw it another way, it includes experience epsilon. We might already, for independent reasons, want to think that minds could have fuzzy boundaries. It's not clear that they should always have exactly sharp boundaries.

Fin

Yeah, for sure. I mean, it's hard to say where I should draw the boundary around my own mind, physically speaking, right? Is it my skull? Does it include my nervous system? And there obviously isn't a sharp answer to that question, which is kind of suggestive at least.

Eric Schwitzgebel

Right. So this gets back again to the point that our intuitions about consciousness are often that it's either sharply present or sharply absent, and that conscious experiences are unified in a sharply delimited whole. But if you think about what's cognitively, psychophysiologically, physically plausible, it seems like there are going to be fuzzy cases and boundary cases. And once we start thinking about them, it becomes less clear what exactly the boundaries of a mind should be. And once we're unclear about the boundaries, maybe we get unclear about how many minds there are, whether in this kind of Sophie Nelson case we've got two minds or one mind. We normally think you can count up the number of people in a room, right? As one or two or zero or 14 or whatever. But to have an indeterminate number of minds or an indeterminate number of people, that's pretty contrary to how we normally think about stuff.

Fin

Yeah, yeah, yeah.

Eric Schwitzgebel

But, right, this not only gets to the stuff we were talking about with indeterminate experience, but also the fission-fusion monster cases, right? Again, our presuppositions, our way of understanding mentality and ethics, are so built upon this idea of sharply delineated subjects: there's always a countable number of persons who don't have overlapping parts. And, you know, that could all go out the window.

Fin

Yeah, and I guess this is kind of obvious, right? But if I'm a card-carrying consequentialist, then I have this really neat algorithm for evaluating an outcome, which is: you give me a list of all the welfare subjects, all the minds, and I'll just go through them, add up how well they're doing, and come to some answer. But if we've got cases where it's actually unclear how many minds there are, or whether they're in some sense overlapping or have these common elements, then that's just no longer going to work, right? So you're kind of in the woods again.

Eric Schwitzgebel

Right. Or to make that more concrete with the Sophie Nelson case: do you count beta, gamma, and delta twice or once? If it's two minds, do we have qualitatively identical but numerically distinct experiences in two separate minds, or do we have numerically identical experiences, just one experience each for beta, gamma, and delta, that are then shared between the minds? And how would you know which is the right way to think about it?
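
Here is a minimal sketch, purely illustrative, of how that counting question changes the consequentialist sum. The experiences and the hedonic values attached to them are my own assumed numbers, not anything from the paper:

```python
# Two robot minds sharing a processing core: count shared experiences once or twice?
# Hypothetical hedonic value attached to each experience (illustrative only).
VALUE = {"alpha": 2, "beta": 1, "gamma": 1, "delta": 1, "epsilon": 2}

mind_1 = {"alpha", "beta", "gamma", "delta"}    # robot 1's center + shared core
mind_2 = {"epsilon", "beta", "gamma", "delta"}  # robot 2's center + shared core

# Convention A: qualitatively identical but numerically distinct experiences,
# so each mind's experiences count separately.
total_per_mind = sum(VALUE[e] for e in mind_1) + sum(VALUE[e] for e in mind_2)

# Convention B: numerically identical shared experiences, counted once each.
total_token = sum(VALUE[e] for e in (mind_1 | mind_2))

print("count per mind:", total_per_mind)       # 10
print("count each token once:", total_token)   # 7
# The aggregate welfare differs between the two conventions, so a consequentialist
# sum isn't well defined until we settle how to individuate experiences and minds.
```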

Fin

Yeah, yeah, yeah. It makes me think of, I think it's Nick Bostrom again, I don't know which paper, but he has this example where you imagine a 2D circuit board which is implementing some program that's sufficient for a digital mind, some thought process. And now imagine that with a really fine knife you could slice the circuit board in two along its plane, and begin separating the two layers, so they're both implementing the same program in line with one another. Do we have two experiences here which are identical, or one? And if two, at what point do they become separate? It's totally unclear how to answer those questions.

Eric Schwitzgebel

Right, right. Luke Roelofs also has a similar case in Combining Minds, where you imagine two discrete minds and then you combine them, basically one neuron at a time, until you have one double-sized mind. And it seems implausible that there would be a moment at which a single neuronal connection got you from discretely two different smaller minds to discretely one larger mind.

Fin

Yeah. Yeah. And like behaviorally, it’s not as if there’s going to be a moment where this system suddenly says, oh, my goodness, I’m suddenly one subject where previously I was two. Right. Sorry.

Eric Schwitzgebel

It could happen like that, but there’s no reason to think it has to happen like that, right?

So the person who's committed to discrete minds has to say that, in principle, there would always have to be a sharp behavioral breaking point like that, if we accept the assumption that consciousness and behavior are linked in an essential way.

Fin

Yeah. One more thing on this. Philosophers love talking about hemispherectomies and split-brain cases, right? You sever the link between the hemispheres of the brain, which happened a lot more in the past, and the result, typically, is still a functional person at the other end. Sometimes you remove a hemisphere, sometimes you don't. In cases where you don't, you have these two brain hemispheres which are mostly unlinked, apart from some remaining connections.

Eric Schwitzgebel

I would point you to Elizabeth Schechter’s wonderful book on this.

Fin

Oh, cool.

Eric Schwitzgebel

Yeah, she has a whole book about this. I'm not sure what to do with the case. I think it's complicated. But Schechter's got the most subtle and detailed treatment I know of. And what she comes to in the end is saying: you've got one person, but two streams of experience. Which is kind of an unusual view, because you'd usually think the number of people is going to be the same as the number of streams of experience. But she actually…

Fin

Man, it's complicated. Just quickly, do you have a take here? Do you have a kind of favored way of counting minds in these tricky cases?

Eric Schwitzgebel

No. You know, what I like is, I actually enjoy the trickiness of it. I guess, and maybe this is disappointing, but in this area I much prefer throwing bombs to constructing architectures, right? So it's more like: let me think of more weird problems.

Fin

Yeah, yeah. It's good. Someone's got to do it.

Eric Schwitzgebel

Right. So I'm more about, let's just make this even weirder and harder by thinking about more cases, rather than, here's the answer.

Why our actions might have infinite effects

Fin

Very fair. Speaking of weirdness, we've been talking about consciousness this whole time, but I'm conscious of time, and I also wanted to ask about other kinds of weirdness you've proposed. You mentioned this outside the interview, actually: this suggestion that the things we do now might cause an infinite number of effects.

Eric Schwitzgebel

Yes.

Fin

Both good and bad. Because a natural question is how?

Eric Schwitzgebel

Right. So the most developed version of this is a chapter in my forthcoming book, The Weirdness of the World. Well, maybe by the time this podcast is out, the book will be published. And I worked on this chapter collaboratively with Jacob Barandes, a physicist and philosopher of physics.

So if you, say, raise your hand, photons will bounce off of it and out of your window, and some of them will go into interstellar space. And they will then interact with other things, perturbing them slightly, which will then interact with other things, perturbing them slightly [and so on].

And there's no reason to think that this ripple of effects would cease. So, for example, maybe one of those photons gets absorbed into a black hole. Now it very, very slightly increases the mass of the black hole, which means that every photon that comes near the black hole but doesn't go in has its trajectory very slightly different than it otherwise would have been. And if that photon then travels for 200 light years, that very small change in trajectory could end up being a very…

Fin

end up in a very different place.

Eric Schwitzgebel

So you have these rippling effects that are very likely to occur from almost all of your actions, and they will probably continue into the post-heat-death universe. Now, this is on kind of standard vanilla cosmology; it's not known. But standard vanilla cosmology says that after the universe enters heat death, there will still be chance configurations that arise: two particles will converge by chance, or seven particles, or 17, or 17 million, or 17 quadrillion. If there's no temporal boundary, if the universe continues post-heat-death indefinitely, and there's no positive reason to think there is a temporal boundary rather than just continuing,

any fluctuation or chance combination of particles that has non-zero probability will eventually occur. And we can get into Boltzmann brains if you want. But since we don't have a lot of time, we have to make a choice. This is a choice point here: whether we want to talk about Boltzmann brains or…

Fin

Is there a 30 second version?

Eric Schwitzgebel

The 30 second version is some of those chance configurations will be brains. That are thinking about, oh, I wonder if I’m a Boltzmann brain thinking about the cosmos and maybe having exactly the same thoughts that you’re having right now. So how do you know you’re not one of those Boltzmann brains?

Fin

It feels like I’m doing a podcast, but in fact, I’m a Boltzmann brain.

Eric Schwitzgebel

So how do you know you're not a Boltzmann brain? Right. So there are various ways to try to deal with that question about the post-heat-death cosmos. One is to say, oh, look,

Fin

there aren’t going to be fluctuations.

Eric Schwitzgebel

It'd be really convenient if that were true. That's not the standard interpretation of physical theory, but there are some physicists who think that. Another possibility is to say, oh, there'll be lots of Boltzmann brains, but somehow we know we're not among them. And the third thing to say is, okay, there will be lots of Boltzmann brains, but there will also be lots of normally evolved observers, because maybe, for example, there will also be lots of new big bangs.

Fin

Interesting, yeah.

Eric Schwitzgebel

I don't know exactly, I mean, we could talk about which of those is more plausible or not, but I'm inclined to just say, okay, those are three possibilities. The most comfortable of them, I think, is the third. That is to say: okay, look, some people think that black holes cause new cosmic inflations. So if you get black holes in the post-heat-death universe, and they cause new cosmic inflations, and those new cosmic inflations give rise to ordinary observers that arise through evolution, then it might turn out that there is a process that ensures that the number of ordinary observers will be vastly more than the number of Boltzmann brains, as the size of the universe you're considering goes toward infinity.

Fin

I guess here's a thing that we can say, which is a little unsettling. From the fact that it would seem extremely weird and unnatural that, although it feels like I'm doing a podcast, I'm in fact a Boltzmann brain, from that perceived weirdness, I can't infer anything about the likelihood of Boltzmann brains or the relative numbers of, you know, legitimate observers versus Boltzmann brains. That's just an independent question, right? Because it would seem just as weird if I were a Boltzmann brain reasoning about whether I'm a Boltzmann brain.

Eric Schwitzgebel

Right. I think here you get into some fundamental questions about the epistemology of philosophy. Can we start with it as, say, in the Wittgensteinian sense, a hinge assumption that we’re not Boltzmann brains and just say, look, I’m just taking this as a starting point.

[…] Or do we say, hey, look, there's no good reason to assume this, and sometimes weird things are true, so maybe we should be more even-handed about it? It's a tough metaphilosophical question, or question about the epistemology of philosophy, there.

Fin

Right. That was the 30-second version.

Eric Schwitzgebel

Right. So you've got the photons or other particles that are rippling off of your hand and causing effects, and they're still going on in this post-heat-death universe. This post-heat-death universe will have fluctuations, not only brain-sized, but galaxy-sized. Again, we don't know this for sure, but this is just vanilla cosmology: if you take ordinary cosmology kind of out of the box and don't try to tweak it much, it seems like there's no cap on the size of fluctuations. You've got infinite time, so it seems like you're going to have galaxy-sized ones, right? This is, in fact, what Boltzmann hypothesized. The original Ludwig Boltzmann hypothesized this as a possible explanation for the existence of our own galaxy.

This was originally one of his ideas about the origin of our galaxy: that it was just a huge chance fluctuation. So say a photon from one of these ripples of effects is now going to hit that galaxy in the far distant future and cause different things to happen than would otherwise happen. Say it hits the photographic plate of an astronomer and ticks the sensitive device on that plate over some threshold. She then publishes a paper that wins a prize, so she moves to a different university, marries a different person, has different kids. It's now going to have all of these effects on that world. There are going to be different wars started than might otherwise have been started, different peaces concluded, different poems written. Basically, any less-than-galaxy-sized event that has non-zero probability will either be triggered by or prevented by these ripples of effects coming off your hands. So in a certain sense of causation, almost everything you do causes almost every type of non-zero-probability, non-unique event in the future.

Fin

Right in your lightcone, at least, right? But it’s a big old lightcone, if it’s all the time we’re talking about.

Eric Schwitzgebel

That's right. So that's that thought, and I think there are all kinds of weird things to think about there. One of the things we were mentioning earlier, and now I've forgotten whether this was before or after you started recording: if you think about consequentialism, or evaluating the value of decisions in terms of adding up the positive and negative effects without temporal limit, then basically virtually every action is going to have positive infinity plus negative infinity as its effects. And now how do you evaluate actions?

Fin

Yep.

Eric Schwitzgebel

That’s a big mess. And that problem, even if you think there’s only a 0.1% chance that this cosmology is correct, you still get the problem.

Fin

Yeah.

Eric Schwitzgebel

Theoretically, let's say you knew for sure the action would have value K if the infinitary cosmology is false. Now when we calculate its expected value, it will be 0.999 times K plus positive infinity plus negative infinity, which of course is just positive infinity plus negative infinity.

Right. So I actually think this is a big problem that has not been adequately addressed by people who favor numerical consequentialist approaches to ethics and decision making without this kind of [amendment].
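
To make that arithmetic explicit, here is a minimal way to write down the expected value Eric is describing, assuming (as in the conversation) a probability p = 0.001 that the infinitary cosmology is correct and a known finite value K otherwise:

```latex
\mathbb{E}[V] \;=\; (1 - p)\,K \;+\; p\,\bigl((+\infty) + (-\infty)\bigr)
            \;=\; 0.999\,K \;+\; \bigl((+\infty) + (-\infty)\bigr)
```

Since $(+\infty) + (-\infty)$ has no standard value, the whole expression is undefined no matter how small $p$ is, which is the sense in which the finite term $K$ gets swamped.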

Fin

I’d say it’s been addressed. It just hasn’t been remotely solved, right? People are aware.

Eric Schwitzgebel

Oh, yeah. I meant to say adequately addressed.

Fin

Right, right, right.

Eric Schwitzgebel

It has been addressed. There are people who are wrestling with it. I think the people who have wrestled with it most publicly and convincingly would agree that it has not been adequately addressed, people like Bostrom and Kenny Easwaran. They've done some things like: okay, under certain conditions, maybe you could address it, but it's not going to be sufficiently general to handle all the cases.

Fin

Okay. Going totally off course, I wanted to ask this question: if we just got the news that the universe were infinite in some sense, in time or in space or both, such that the kinds of things we're experiencing now, the kinds of things we see, will just recur forever in different variations and combinations, would that be good news or would that be kind of terrifying news? It sounds like you think it might, in fact, be pretty good news.

Eric Schwitzgebel

Yeah, I mean, I like that idea. So I've got a forthcoming paper on this called Repetition and Value in an Infinite Universe, and it starts by quoting Nietzsche's famous thought experiment about eternal recurrence. Nietzsche seems to think that if it were the case that everything you did recurred infinitely through the universe, that would be horrifying, and you would behave much differently than you currently do if you came to realize it. And I think the opposite, actually. I think it's not horrifying; I think it's kind of cool. But I also think you shouldn't change your behavior. So here is one way of thinking about the coolness of it. In popular mythology, not actually true, goldfish have memories of about 30 seconds. And there's this nice example from the science fiction and fantasy writer Neil Gaiman, where he has this goldfish pool that takes two minutes for a fish to swim around, but the fish only have memories of 30 seconds. So there's one fish swimming clockwise and one swimming counterclockwise, and every minute they cross and say, hey, stranger, nice to meet you. And they keep swimming, and they experience everything as new each time. And we can imagine these goldfish are happy; we can imagine them going around infinitely many times. Now, stop one of these goldfish mid-swim and say, hey, you know you've done this a billion times before already. Are you sure you want to keep swimming around? The goldfish is going to be like, what? I want to see what's around the next corner. I'm having a great time. Why stop me?

Fin

Interesting. I’m not sure what I would say if I were the goldfish in that story.

Eric Schwitzgebel

That's what I imagine the goldfish saying. I don't know exactly how you argue about these kinds of axiological questions of value, but I do think that if our lives and the currently visible portion of the universe have large but finite positive value, then if you duplicate them, you increase the sum of value. Maybe you don't double it; maybe there are diminishing returns. Maybe it's not quite as valuable to go around the second time as it was to go around the first time. But I think the goldfish should agree that there are a few more laps worth taking, right?

There’s something that doesn’t become bad for me. It doesn’t become bad and it doesn’t become zero value. So I’m inclined toward a view that I call diminishing returns, right?
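
One minimal way to formalize the diminishing-returns view Eric sketches, with the geometric discount factor purely as an illustrative assumption of mine, is to let the n-th repetition of a stretch of life worth V contribute only a fraction r of what the previous repetition did:

```latex
V_{\text{total}} \;=\; \sum_{n=1}^{\infty} r^{\,n-1}\, V \;=\; \frac{V}{1 - r}, \qquad 0 < r < 1
```

On this toy picture each repetition stays positively valuable but the total remains finite; with no discounting (r = 1) the total diverges to positive infinity, and if V is negative, repetition only compounds the badness.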

Fin

It just seems really important that the goldfish are having a good time in the first place!

Eric Schwitzgebel

Right, you’ve got to have positive value, right? If it’s negative value, maybe it’s worse the more you repeat it, right? But I’m inclined to think that the universe, the visible portion of the universe has positive value. I think our own existence is an important contribution to that positive value. So I think if there’s lots more of that and infinitely much of that, maybe that’s infinitely good. So we talked about the causal relationship that you have, right?

So in this picture, if we accept the idea that black holes or other previously existing states of the universe can seed new inflations and new bangs, then maybe before our bang and inflation there were infinitely many past observers whose hand-raisings or other actions are rippling down through our universe. So we end up with this picture in which we're causally enmeshed in this infinitely continuing universe, our actions having been affected by previous observers in these complex and quasi-random, or maybe really random, ways, and our actions then also having these effects into the future. So we're woven into the causal structure of an infinite cosmos, kind of in the middle of this infinite structure. And I find that kind of a cool view of the [universe].

Fin

See, my reaction to that is just Lovecraftian horror. I just can’t…

Eric Schwitzgebel

Huh, interesting. Yeah, I guess people have different reactions. I don't know how to argue for my reaction, other than just…

Fin

Neither do I, right? It's just totally a gut reaction, but it's…

Eric Schwitzgebel

I guess it’s on your side.

Fin

Right, right, right. I can't help it, I'm going to take a minute to mention this idea, which is kind of neat, in case you haven't come across it. There's this thought of the… I think it's called the twin prisoner's dilemma? Well, there are variations on it.

Eric Schwitzgebel

Yeah, I don’t know if I know that one.

Fin

Okay, so the thought is: you have Eric 1 in a kind of sealed-off box somewhere in the universe, and then you have Eric Prime, a perfect physical duplicate of Eric 1, in another sealed-off box in another spatial location. And both Erics are confronted with a prisoner's dilemma: do you cooperate with the other Eric or do you defect? And you might reason that, look, I don't know how the other Eric will decide, but I'm pretty sure that if I defect, then the other Eric will defect, and vice versa, because how could it be otherwise? The same exact firings are going on in our brains, and they're not going to diverge. Depending on your view of causation, it can get a little murky, but you could think of it like: whatever you do, you're causing the other Eric to do the same thing. Because if you did the other thing, then the other Eric would do the other thing. And that's roughly what we mean by causation.

Eric Schwitzgebel

I would set that up that there’s no causal connection. It’s not really a causal connection. It’s an epistemic connection.

Fin

Depends on what you mean by causation, right?

Eric Schwitzgebel

Maybe. But I'm thinking of Newcomb's problem and the one-box, two-box thing, right? And the distinction here between epistemic decision theory and causal decision theory.

Fin

That’s right. Yeah, it does come down to that.

Eric Schwitzgebel

So, yeah. I would guess that someone who is a one-boxer on Newcomb's problem would also be a cooperator in the twin prisoner's dilemma, and someone who's a two-boxer on Newcomb's problem would be a defector in the twin dilemma.
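
Here is a minimal sketch of that link, comparing an evidential-style calculation (my choice is strong evidence about my twin's choice) with a causal-style one (my choice has no influence on the twin). The payoff numbers and the correlation parameter are illustrative assumptions of mine, not anything from the conversation:

```python
# Twin prisoner's dilemma: payoffs to "me" for (my_move, twin_move),
# in the standard prisoner's dilemma ordering (illustrative numbers).
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def evidential_value(my_move: str, correlation: float = 0.99) -> float:
    """Expected payoff if my choice is strong evidence about my twin's choice."""
    other = "defect" if my_move == "cooperate" else "cooperate"
    return (correlation * PAYOFF[(my_move, my_move)]
            + (1 - correlation) * PAYOFF[(my_move, other)])

def causal_value(my_move: str, p_twin_cooperates: float = 0.5) -> float:
    """Expected payoff if my choice has no causal influence on my twin,
    holding fixed some prior probability that the twin cooperates."""
    return (p_twin_cooperates * PAYOFF[(my_move, "cooperate")]
            + (1 - p_twin_cooperates) * PAYOFF[(my_move, "defect")])

if __name__ == "__main__":
    for move in ("cooperate", "defect"):
        print(move, "evidential:", evidential_value(move), "causal:", causal_value(move))
    # With these numbers, the evidential calculation favors cooperating while the
    # causal calculation favors defecting, mirroring the one-boxer / two-boxer split.
```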

Fin

Yep. Certainly you're getting evidence about what happens somewhere else in space by doing stuff, and that might be a reason to do cooperative stuff, or stuff that you hope happens a lot of times in other regions of space. It seems like a kind of extension into space of what you were talking about in time, right?

Eric Schwitzgebel

Yeah, okay, I see the connection. So I'm actually a one-boxer; I like the epistemic rather than the causal approach, right? But I can see how this might then generate an additional moral reason for you to behave morally well, because if you accept the picture you just outlined, that would create…

Fin

At least evidence or maybe something stronger.

Eric Schwitzgebel

Exactly.

Fin

If you’re not causing, then you’re at least “learning about”.

Eric Schwitzgebel

Yeah.

Final questions and reading recommendations

Fin

Anyway, there we go. I'm glad I offloaded that. Let's wrap up. We tend to ask some final questions, and we tend to ask about recommendations, and I think you'd be especially good at this because I take you to be a bit of a sci-fi connoisseur. So I wanted to ask, really just very open-endedly, whether there are any sci-fi books or movies that listeners should check out. This can be totally open-ended: anything you're reading now, anything that's a special favorite, whatever.

Eric Schwitzgebel

Yeah, let me mention a few favorites. We already mentioned Egan's Diaspora and Permutation City, so we talked about those.

Fin

I think those are great.

Eric Schwitzgebel

I have a special place in my heart for Diaspora because… I hadn’t been reading science fiction for a decade or two. And then someone said, I think you would like this book and gave it to me. And I read it and it was just like, holy cow, science fiction has changed a lot since the mid 90s. And it really ignited my passion for science fiction, which I’ve had since.

A recent book that I think is really cool is Kazuo Ishiguro's Klara and the Sun.

Fin

Oh, nice. Yeah.

Eric Schwitzgebel

So this is told from the point of view of a sentient care robot. The robot’s job is to care for a disabled girl and be her friend.

So she's more a friend than a nursing bot. And it's just this wonderful story about this very sweet artificial friend's values, and especially her deference to the interests of the people around her, especially her friend.

It's told so nicely from her perspective that as a reader, I'm thinking, Klara, you matter too! You shouldn't always just be deferential to your friend. But she has none of those thoughts.

Fin

Like Jeeves.

Eric Schwitzgebel

Yeah. Yeah. It’s a nice kind of robo Jeeves kind of case. And then another favorite science fiction is Olaf Stapledon’s Sirius. So this was written in the 40s about a dog that’s cognitively enhanced to have human-like intelligence. And this dog’s struggle for finding meaning and purpose in the world, right?

There's a wonderful strand running through it about the dog's interest in music, because the dog works on creating music that's got dog howls in it and stuff like that, with this intricate notation and complexity. So no human is really going to appreciate it, because the dog's aesthetic sense is so different from a human's. But no dog could appreciate it either, because it's so complex.

You know, so it’s like this poor dog that just doesn’t fit into the world because… It’s just, it’s a wonderful, wonderful book.

Fin

That's great. I read Star Maker, but I had no idea this book existed, so maybe I'll read it. And you have a book of your own coming out soon: The Weirdness of the World. Tell me about that, when it's coming out, and how people can find it.

Eric Schwitzgebel

Right. So it’s coming out on January 16th, which I don’t know when you’re going to release this. So it might be in the past or maybe in the future. It’ll be close.

Fin

Yeah, it might already be out, right?

Eric Schwitzgebel

Yeah. So it's coming out with Princeton University Press, and it's going to be purchasable in the ordinary outlets: Amazon, if you like that kind of thing, or your local bookseller, if you prefer that. The main idea of The Weirdness of the World, well, we've talked about a lot of its issues in this podcast, and the central unifying theme is what I call the universal bizarreness and universal dubiety theses. The idea is that when it comes to questions about consciousness and about the most fundamental structure of the cosmos, all the viable theories are both bizarre and dubious. They're bizarre in that they're radically contrary to common sense; there's no common-sense theory that's going to work for these things.

And they're dubious in the sense that we don't have compelling epistemic reason to accept one theory over a variety of competitor theories. So universal bizarreness and dubiety pervade our understanding of the cosmos.

Fin

It’s nicely unsettling.

Eric Schwitzgebel

I like being, I mean, again, this is kind of like you were saying, like we were saying with the infinitary cosmology, right?

I actually kind of like feeling small and confused about these big questions. Other people might find it unsettling, but I think it can produce awe and wonder that we don't have everything figured out, or close to figured out.

The universe has kind of pervasive mystery to it.

Fin

I like that. Great. And we'll link to the book, of course. And just a last question: where can people find you, if you want to direct people to your blog or anything like that?

Eric Schwitzgebel

Well, I have a blog called The Splintered Mind, and I've got an academic website with my email on it. If people want to email me, I'm happy to hear from them, or they can comment on my blog posts.

Fin

Great. And we’ll link to all those as well. Okay, Eric Schwitzgebel, thank you very much.

Eric Schwitzgebel

Yeah, thanks for having me.

Outro

Fin

That was Eric Schwitzgebel on digital consciousness and the weirdness of the world. If you’re looking for links or a transcript, you can go to hearthisidea.com forward slash Schwitzgebel. And there’s a link for that in the show notes. Also, if you find this podcast valuable in some way, then probably the most effective way to help is to write an honest review wherever you’re listening to this. And you can also follow us on Twitter. We’re just at Hear This Idea. Okay, as always, a big thanks to our producer Jason for editing these episodes.
And thank you very much for listening.