Touching Base #1:
Christian Eschatology of A.I.
In conversation with Giorgi Vachnadze, Lucas Ferraço Nassif and Palais Sinclaire
Claire:
Hey! We are here to discuss Giorgi's book, Christian Eschatology of Artificial Intelligence, which was released by Becoming Press in October last year, 2024. It was launched at Miss Read, the Art Book Festival in Berlin, alongside another book by another friend of ours, Semiotics of the End (second edition), the first edition of which was published by the Institute of Network Cultures.
We are also here with Lucas Ferraço Nassif. We have recently published a book by Lucas, Unconscious/Television (2025). We were in Lisbon one month ago to do a series of launch events for that, and it was a really magical time, actually. We had such a good time there, and the books received a very warm welcome from the city, from the local bookshops, and from the audience, and we were really, really grateful for that. We're here one month later talking about Artificial Intelligence, Christian Eschatology, Pastoralism, and these kinds of “technologies of self-governance,” and the formation of the self, I suppose. Does anyone have any kind of opening questions that they might want to pose? Or should I make a start?
Lucas:
When I was reading the book — and trying to prepare for the conversation today — something that I was always very curious about is: how did it happen? Where does it come from, your anxiety about artificial intelligence? When did you figure out that that was the problem you wanted to write about, or to problematize in different ways?
Giorgi:
Yeah. So, that's an excellent question, because I've been wanting to say this, and I think the answer is definitely yes, to the anxiety. But with me, my coping mechanism is that, instead of breaking down or having an episode, I just get annoyed. But behind this annoyance, which looks like a slight annoyance or a neurotic symptom, there's a sort of deep dread, an anxiety and despair. Right?
But to downplay things, so to speak, [the book] came out of an annoyance. Right? And this annoyance had many different sources, but we can frame it as an annoyance or anger with the mainstream, so to speak. Even in a less sophisticated way, we can say that this was kind of a punk rock sort of reactionary impulse against the other, against “the man.” Like my disappointment with academia and the publishing industry, and just how difficult it is to get anything out there. And the bureaucracy, of course, is always there.
If you don't present a topic that is clearly commercializable, mainstream, you know, the stuff of bestsellers and so on and so forth, it's very difficult to grab anybody's attention whatsoever. So for the past three years, after I graduated, got my degree, I've had maybe enough ideas for ten books, and I just couldn't make any of them happen… Because I never wanted to write about artificial intelligence. I couldn't have cared less about artificial intelligence, to be honest, you know, but it was just everywhere. I was just so frustrated. I was like: I hate this, and I'm going to show everyone how much I hate this. You know? And I said, okay, I'm going to compare this to something that I hate no less.
So I said, well, what is it going to be? Christianity, of course; it's going to be Christianity. You know, that's what came out of this annoyance with this discourse. This discourse of the master, the symbolic order, which is not letting me throw my 2 cents into the conversation, you know. So that's where the book came from, you know, and then obviously I got into it, and it became interesting, and it became, so to speak, my own. It went pretty smoothly from there.
Claire:
That's actually a very interesting way for the book to have come about. I didn't know that story. I have a memory in my mind: the person who wrote the introduction to the book Technically Man Dwells upon this Earth was Louis Morrell. I met with Louis in Berlin some time ago—I think it was just as we were about to start preparing this book, Christian Eschatology of Artificial Intelligence—and mentioned this, and the reaction was: well, that's definitely an angle. That's definitely an angle. So I had assumed from the beginning that somehow you had [calculated] this position and you had decided to write about it. So the fact that it kind of came from this punk reaction, as you say, and then ended up becoming kind of interesting…
Lucas:
In a certain way, you brought together two minuses and you made a plus. You put together two minus things — which would be A.I. and Christianity — and you positively produced a theoretical text that you tried to organize between Foucault and Wittgenstein. To work it through, instead of simply continuing to complain that you couldn't produce anything. You were productive with your complaint, which is very important and interesting.
Claire:
So what happened after that? I mean, you decided to embark on this project because you decided to work with these two concepts, and then you say that it started to become interesting when you looked into it. What did you find when you started bringing these two things together?
Giorgi:
Okay. Okay. So I guess we're talking about, so to speak, a big chunk of the book. Right? About the general structure of it? So, we can look at it as a discourse that I've put through two double lenses. Two of them we can crudely describe as positive, which is to say that I am positively interested in Wittgenstein and Foucault. And usually, anything that I read or try to analyze, no matter how much I try to focus either on Foucault or on Wittgenstein, ends up being fed through this double meat grinder: first from a Wittgensteinian point of view and then from a Foucauldian point of view, and it always oscillates.
So anything I try to understand, whether it's—I don't know—phenomenology, Edmund Husserl, Existentialism, Albert Camus, and so forth, I put it through this double lens, through the double discourse of Wittgenstein & Foucault, which is something that happens on instinct, I guess, because it was drilled into me in my Bachelor's years, and then, you know, it's just a habit that stays with you once you receive your so-called education. So this is the positive sort of duality, right? And then there's the negative duality, right? The two things I want to pit and set against each other, which are the discourse on A.I.—the sort of corporate discourse of A.I.—and Christianity. And the more I thought about it, the more the politics of doing that as a Georgian occurred to me, because it was a kind of colonial, or anti-colonial, mentality. But it's also been a habit of mine since I was a teenager. That's how I would get away with things in institutions. That's how I would rebel. I would pit one teacher against another teacher. I would say, well, “this teacher said this, so I don't have to do that,” so I would purposefully distort communication to create some element of freedom for myself. And that reflects what Georgia is trying to do, especially now with Trump, because now we are caught between two powers, and we're slowly realizing that the West is kind of, almost, as bad as Russia, you know? So we are being colonized from two sides at the same time, and our only way to find some breathing room, or anything like freedom, is to pit the two large powers against each other, because there's no way of fighting them directly.
So that's the background of it all. Now, the reason I did this rendezvous through ancient Greece and then Christianity, early and late, was to set the context of how I'm using Foucault. I started off by thinking, okay, Foucault is going to be my methodology, but as I was preparing this methodology, I realized that the actual content of what he was writing was relevant for me. I'm not a classicist, so I don't have any classical education, either in philology or in ancient religions, or in the history of religion. So my main source for this was Foucault. And when your main source is Foucault, you kind of have to talk about sexuality, because it's one of the main conductors of power. That's really what influenced my decision to talk about the history of sexuality and how sexuality changes through all of this: I had to use sexuality as an apparatus, to analyze sexuality as an apparatus and how it pairs with the discourse on A.I., and the rest is just the method of genealogy. You know, you have to go back in history and show how the present came about.
Claire:
Because that was one of the questions that would come up a lot. People would approach the book—they can see what's going on in terms of putting two regimes of knowledge or truth next to each other, Artificial Intelligence & Christian Dogma—and they might ask me whether the argument of the book is that one is a direct continuation of the other. Are you drawing a line from one to the other?
Giorgi:
I would definitely say it's something different, and again it's one of those things that connects Foucault to Wittgenstein, for instance, as it involves this idea of language games. One of Wittgenstein's main ideas about language is that there is no continuity within a lot of practices, discourses, and concepts that we think do have a continuity. So, there's not a linear relationship between certain concepts. There's not a direct analytical relationship between certain concepts, even though we may use the same word. For instance, I was just reading an anthology that recently came out, called Wittgenstein and A.I., and it was like a gold mine for me. It's two volumes of scholars talking about what Wittgenstein thought about A.I. One of the chapters takes up the question of thinking, because a hundred years ago the question of whether A.I. could become sentient was phrased as “Can a machine think?” From here you can see the connection to language games, because he asks, effectively, “What do we mean by thinking?”
When we say a person thinks, well, that in itself is a complicated concept, right? It's an assemblage of practices. Right? And it's sort of enmeshed within human life; all the complexity of human life is connected to this word “thinking.” So in order for me to say that a person is thinking, I have to already be acquainted with a lot of things: emotion, expectation, you know, very complex sorts of phenomenologies of being in the world. And you need all of that just to talk about thinking. Right? And that's just the classical referential concept of: here's a person and they are thinking. Or they could be pretending to be thinking. But then if I say the computer is thinking, what am I saying? That it's “loading”? Am I saying it's “turning on”?
Because now I've extended the use of the word thinking, but this extension is not just an extension in scope. It's not that, now, as a metaphor, I can use it in more situations. No: as a metaphor, it has fundamentally changed. It has become heterogeneous. It has become qualitatively different. It's dispersed. So the relationships between Christianity and the discourse on A.I. are dispersed. [The connections] are, so to speak, everywhere and nowhere at the same time. My point was only to show a similarity between practices of governance. So I don't really care whether there's a direct historical line of continuity, whether I can find it in history. Although there are some Christian people who are writing that, you know, A.I. is going to come in, and it is going to be God, it is going to be Jesus. I mean, these people exist. But I don't care that much about that specifically. It is certainly interesting from an anthropological point of view, but my point was that, to expose an apparatus, it's useful to compare it to an apparatus that is known to be constraining. And that's what I did.
Lucas:
There is this moment in the book when you discuss, through Wittgenstein, “thinking and playing music,” and you come up with all this debate about the machine, and it's a debate that doesn't seem metaphorical; it seems like the machine can have another possibility of being, of becoming. Can you talk a little bit more about this?
Giorgi:
Yes. So you're referring to Deleuzean machines, right? In this case.
Lucas:
Yes, it is when you come to Deleuze & Guattari, and you arrive in the situation of the Deleuzoguattarian machine, with Wittgenstein, which I love because, especially for Deleuzeans, that would be like: “really?”—but I see the connection and I love your proposition. It's something that I'm always trying to organize as well: thinking language games; expanding them to the notion of machines in Deleuze & Guattari. Yet I know it is a tense field for Deleuzeans, because supposedly Deleuze didn't like Wittgenstein. So when you pose the question there, it's another possibility of machine thinking that is not metaphorical, that is perhaps minor. The discussion of the minor comes at the end of your book, but in the middle part of the book, when you come up with the notion of the machine… Can you explain a little how you then got to music, to the presentation of this twist that is putting Wittgenstein with Deleuze & Guattari?
Giorgi:
Yeah. Excellent. Yeah. That's a good question—and it's complicated, because I still have to investigate that line of thought. It's not exactly part of the book, but [it definitely] hints in that direction, though it still needs to be developed. I was anticipating that I might trigger a couple of Deleuzeans with this, but [I think that there is still a lot to be discussed in regard to Deleuze & Wittgenstein], because [Deleuze] was so defensive about Wittgenstein in the Abécédaire interview. That was so strange to me. I thought there has to be something going on here. [I thought perhaps this is an example of feeling] repelled by an affinity, when we get repelled by something that's too similar to us. So I am thinking that Deleuze was experiencing something like this with Wittgenstein.
So with Wittgenstein, machines can't think, period. They just can't. And so this other order of the machine that I introduce is the Deleuzean machine, which from a Wittgensteinian point of view should not be called a machine. It should be called something else. Because in the end, in the part about the music, Wittgenstein is precisely showing—rhetorically, we have to admit, but I think also quite compellingly—that a machine cannot understand a piece of music, but it can play it. Now, why is that relevant? Because it has to do with the idea of calculation. Because we, or some people, think that calculation is a form of thinking, just like playing music is a form of human thinking, or of human creative activity. And Wittgenstein is expert at revealing the prejudice there. There's something that's contained in the first, but it's not contained in the other. So I might calculate like a machine, or I can calculate and think.
And Frege also talks about this, that you can get carried away by the symbols. And if you're carried away by the symbols, then you're sort of calculating like an animal. But not in a good sense. Right? Not in the sense of Deleuzean becoming-animal, but in the way of a sort of “dumb consumer animal.” As in you just want to get the answer and you're not thinking about what you're doing. So you can do a lot of calculations mechanically as a human being, but a machine can only do them mechanically. But I could do them and think them, so I can think about the idea behind the calculation whilst doing the calculation.
And it's the same with music. Somebody might be practicing a piece just to get the technical details down. But they're not understanding it. They're not experiencing the complexity of interpretation when they play, those little freedoms that the composer gives to the musician. Just like a writer gives to the actor, right? The tone, and how you're going to pronounce it, and in what context. A machine cannot interpret those freedoms because they're simply not part of the syntax. And so this is Wittgenstein's problem with Turing, because Turing says that human creativity can be reduced to a syntax, but you don't have to be a genius to know that reality cannot be reduced to a syntax. Every mathematician knows that reality cannot be reduced to syntax. I don't think they even need Gödel and the Incompleteness Theorem for that; I think it's fairly well known. But most people don't know logic. Most people don't know—well, we don't know. A lot of us don't know logic and mathematics.
But so Wittgenstein's point with music is that a machine can't do things the way human beings can, and that's that. But Deleuzean machines can, and the point is that Deleuzean machines are nothing like Turing machines. Deleuzean machines are nothing like the epistemic hinge of artificial intelligence. At least as far as I know now. I don't know what's happening right now in the field of robotics, and in this book that I'm reading now, two authors argue that with a certain enactivist interpretation of human consciousness—which might be different from Turing's—we can [conceptualize A.I. as being able to understand music in a Wittgensteinian sense]. I'm skeptical, but I don't know; I'm not completely up to date with what is happening.
Lucas:
So what about introducing quantum computing into this, as that is something that could potentially go that way? I mean, Microsoft just revealed a new chip, or at least they announced the possibility of this new quantum computing chip, which isn't binary anymore.
Giorgi:
The interesting thing is that while we don't understand quantum mechanics, we are nonetheless implementing it technologically. I have always found that fascinating. We do that a lot. It's as if we haven't read our sci-fi books, you know. The way science keeps progressing, at least since World War II, is that things get implemented before anybody really understands them.
It works, but nobody understands why. Nobody knows why it works. It just works. And that brings up, I think, some interesting questions. Even if you don't have the time or the luxury to keep up with what's going on now in science, there could be certain emergent properties of old technologies that we don't know about. Maybe regular digital computers do strange things that we have no idea about, you know, because they were implemented just like quantum devices are being implemented today. They are just being used. They're so ready-to-hand that they're being implemented, but the theory behind them is still not completely understood. There's still, you know, there's still room for debate.
Claire:
Yeah. Because I suppose the argument continues that either quantum computing is going to be thinking and calculating, or it's just going to continue to be only calculating, but at an extremely high velocity, let's say, or having loads of ways of calculating but not being able to think. I suppose that it's still not clear how the conversation would change even if we are implementing these devices in this way. And, you know, I'm partly skeptical of the use of the term “quantum chip” or “quantum computing.” This is just satisfying shareholders, in a way. It could be that they know that they're 15–30 years away from developing quantum computing, but they've already got it trademarked because, you know, the market can't wait for that.
Giorgi:
There's also a joke circulating on the internet: whenever writers, screenwriters, or writers of lore can't really explain a phenomenon that needs to happen for the story, or can't put the science behind it very well, they just throw in the word quantum. And it just dispels all questions of how it came about. It's just a mystical word...
Claire:
But would you make a comparison between your idea of calculating and thinking and, let's say, the idea of a conscious and an unconscious? Would you be tempted to draw that parallel? Would that be a useful way of thinking about this? Or would that actually be misleading the point here? Does a computer lack an unconscious, and is that why it cannot get outside of the symbolic order of the calculation, let's say? I mean, I'm tempted to say this, especially given this kind of excitement I got also from reading Lucas’ book recently.
Giorgi:
So, thinking in terms of calculating, right, this whole idea that the human mind, as it is, calculates: there's so much danger in using a word or a term like “machine” as a metaphor, because slowly the distance between the metaphor and the thing collapses, the metaphorical-ness of the metaphor collapses, and then we are being made into machines. But in this case, we're not made into superhumans, we're just turned into average bureaucrats. The calculating human mind. And this is why Wittgenstein is an expert deflationist, because what he says is that not only is thinking not a form of calculation, but calculation itself is not as rigorous as you think.
And in his Remarks on the Foundations of Mathematics—it's very interesting what he does, and it's very complicated—what he says is that the act of calculation itself has nothing of this magical rigor that we ascribe to it. There is no mathematical truth outside of human practice, right? He says mathematical calculations are only accurate because the human body is trained to do them accurately. It's because of the rigidity of the choreography of calculating, because of how strictly it is inculcated, because it was used as an important language for human communication and so forth. That's why it's rigorous.
So he's a kind of Conventionalist/Constructivist about mathematical truths, whereas the classical notion of mathematical realism is a kind of Platonism, where you say, you know, mathematical truth is out there, and our minds simply latch on to these truths. There's a structure existing outside somewhere, you know, and the mathematician simply uncovers these structures in nature. And this has been a popular way of talking about mathematics for a long time. But I think not just Wittgenstein, but also historians and philosophers of science, like Lakatos, I think, and maybe Thomas Kuhn as well, reveal that in order for a theorem to become a theorem, lots of very forced, unstructured, flawed human institutional practices need to take place. Lots of contingent, conventional things need to happen in order for a “mathematical theorem” to become a mathematical theorem. And it's not at all the way we think about it, you know: if it's true, it's true, and everybody can see it anywhere in the world. It's actually not quite like that. I found that very interesting.
And so Wittgenstein shows that calculation is a social practice. It's just a very strictly adhered-to social practice. It's a very Foucauldian move because, you know, Foucault would be asked a question and, instead of answering the question, he would subvert the discourse within which the question was placed. You know? And in a way Wittgenstein does exactly this, because he says not only is it nonsense to say that a machine thinks, but it's also kind of nonsense to say that a human being calculates.
Claire:
I just wrote that down! Ready to… yeah, okay. I started to feel that coming from the discussion, that actually this idea of a dualism between thinking and calculating is not actually a very solid, or sound dualism, and it dissolves quite quickly, because maybe we only think and they only calculate, for example, [and these are unrelated processes].
Giorgi:
Yeah, that's one way to pose that. I think the word remainder is a very good word for this, you know? Because remainder is used in the mathematical sense where it's not a problem at all. It's part of the calculation. But we can reinterpret the remainder in the Lacanian sense, that there's something that slips away, something that is not symbolized. So there's a kind of fundamental discursive remainder to mathematics that subverts the whole discourse of mathematics.
Lucas:
It's interesting when Claire asks about the computer and the notion of the unconscious, and you have proposed a book here that is trying to elaborate on practices of governance, and then, how we are going to deal with control, societies of control, etc., but there is this notion that the unconscious brings unpredictability. So there's something that flips. There is something that is not symbolized so well, or even something that appears so much in repetition that you have to look because you're not understanding—it is a kind of nonsense. So how can a computer do that? How can a machine do that? I think that's one of the questions in your book. When you think with Deleuze & Guattari, they are interested in those machines that are unpredictable, that come up with something new, out of repetition. But I think we can do that when we think about language games: what stays in the game, what slips there, and then the notion of the unconscious appears in the relationship between, say, the unconscious and the symbolic, for instance. What is barred from the symbolic but suddenly appears somehow in the clinic, or when you are walking in the street, or when you dream. Yeah. And then we get to the sort of Blade Runner question: Can the computer dream?
Giorgi:
That's the paradox. The paradox is that in order for [someone] to do [something] effectively, they have to not be programmed to do it. So that's the big paradox. Then I guess that brings us to quantum mechanics, because if in quantum mechanics we have these very visible, unpredictable, and random processes, maybe that will facilitate a kind of experiment that will allow a machine to think. And to be completely honest, I guess I exaggerated when I said, you know, Wittgenstein says this, and Deleuzean machines are that. But precisely what you just said could be the point of reconciliation between the Wittgensteinian machine and the Deleuzean machine. Because if, as you said, or as we said with Claire, there's a remainder that cannot be symbolized—right—well, a mathematician would say, of course there's a remainder that can't be symbolized. That's how mathematics progresses. In a way, mathematics and science progress through the unconscious, through their own unconscious. And a scientific revolution is when the unconscious of mathematics or science erupts, and then we need new solutions. And that's how science progresses.
So, we can take this further and say that there will be a paradigm shift, or that we will be able to reconceptualize the act of calculation in a way that incorporates unconscious processes. And to us that might sound crazy, but maybe to a professional mathematician it's just part of the creative process, you know. And if somehow you are able to do that, then we're going to have a sort of extended notion of an algorithm. But it's not going to be a Turing algorithm, for sure. It would have to be some sort of different algorithm.
Lucas:
Claire was mentioning this before, in another conversation. Can I steal this question from you? Because I think it feels a little bit like the logos, in the Greek sense of logos. It sounds a little bit like something that could go that way, with the unconscious and something that can appear in thinking, but it's different from the logos that we are using in calculation. The Greek logos should appear in our elaborations, in the contemporary elaborations.
Giorgi:
Yeah. That's very complicated, especially because I don't have a classicist background, so it's difficult for me to talk about this. But, you know, through some secondary sources, I guess I “know” that the Greek logos was very different from how rationality has been conceptualized after Descartes. After Descartes, we have a rupture in the history of philosophy, and rationality becomes something that's divorced from the empirical experience of the body, the affective and emotional experience of the body. And this is where we get this sort of pejorative understanding of rationality, which is still very popular and is used to misunderstand Stoicism, for instance. Because we think of rationality as something that is devoid of emotions, and [we think] that we can do anything devoid of emotions, you know.
And I think the Greek idea of logos was very different, because I think it was more of a self-governance, a way of self-affectation, or the management of affects, not the setting of affects to the side in order to, you know, quietly do some rational thinking. It was nothing of that sort. It was training. And there are many ways that this obscuring of the distinction takes place.
And so the reason I mention Descartes is because he is the last thinker of the Middle Ages. His education was in Scholasticism, Medieval Scholasticism. And it shows in his mathematics as well. So his idea of rationality is completely different from the Greek notion of rationality. And it's almost as if we can see the effect of Christianity on his notion of rationality, because it has become this tightly knit grid, which is out of this world, and in the perfect space of abstractions. Just like the afterlife in Christianity. There's almost nothing of this world that can contribute to its rationality. It only distorts its rationality, you know, like time. Time distorts the pure rationality of space.
With the Greeks, I think that was not the case. I think with the Greeks, logos was something that was situated in the lifeworld, in their everyday life. And the interesting thing is that these epistemological distinctions can be seen on the level of ethics. Greek ethics was much more relaxed, much more down to earth. It was just a system of recommendations, right? At least for the aristocracy; obviously they had slaves, and it was a deeply misogynistic culture, so, you know, setting that aside, I suppose that with each other the aristocracy had a quite relaxed system of self-governance and self-conduct. So, not to overromanticize, obviously.
But with Christianity, this changes; the system becomes more strict. And there are several apparatuses that have facilitated this transformation, one of which is marriage. This is very interesting, because marriage is, in a way, the beginning of a lot of things that are wrong with today's conception of “the norm.” Marriage has made relationships heteronormative, it has produced this cult of possession towards the female body, and so forth. But it has facilitated this discontinuity. This rupture in the power relations between people.
The very idea of logos becomes different. It becomes what I call “coded sin.” So it's no longer about a general style of self-governance, but a strict system of what can be done and what can't be done. And so the conception of logos, because of this, is altered completely. So the logos of the sage is completely different from the logos of Christ. And I guess in a somewhat biased fashion, I would say that the logos of the sage is simply more ethical than what comes after. What comes with Christianity.
Claire:
A couple of things come to mind quickly: would you compare the logos of the sage and the logos of Christ as different kinds of discourses, in the sense of Lacan? Do they try to claim a position as “the discourse of the master”?
Giorgi:
That's actually a very good question, because the fundamental difference has to do with what I call Encratism (Εγκράτεια, I believe, though I don't remember the exact Greek word). Encratism didn't exist until Christianity. And the idea of Encratism is that you hand yourself over to a different agent, be that an abstract idea, or a fellow monk, or a priest—and the system is very hierarchical. You give your freedom completely over to a spiritual director, he gives his complete determination to another, higher priest, and so on, until allegedly there's a direct connection with God.
Now, you mentioned Lacan, and what is very interesting here is that in Greek society we didn't have that, right? There was a vague idea of counseling, a kind of system of exchanges of advice, perhaps. One sought a master, one sought someone who was “smarter than them.” And so on. But the final goal was not something like salvation, or this sort of strict system of self-domination, really; it was geared towards achieving self-autonomy.
So a sage facilitates a relationship that you should have with yourself. And ideally, the sage should be out of the picture shortly; they should not accompany you for the rest of your life. And in Greek culture, we have these dialogues where they make fun of those philosophers who just keep taking money. Like life coaches, you know, they just keep taking money from their disciples, but they're never pushing them towards self-autonomy. And that was one of the central points of criticism.
So there was no Encratism. There was no handing oneself over. Now, I'm not very familiar with Lacan, but it sounds like peak jouissance, you know, just handing yourself completely over to a different power, being completely dominated by the Other. So entering the symbolic order completely.
And now, interestingly, it is precisely through Christianity that Foucault criticizes psychoanalysis. And we should qualify this, I guess: certain practices of psychoanalysis, because he says there's a similar relationship with the therapist. If the therapist is trying to create a relationship of transference, that's a powerful relationship. So, it could be compared to Encratism. It could be compared to that Christian idea of handing oneself over to a higher power, except in this case the higher power is the idealized image of the self in the therapist. Or the transference from another object, from the past. So yeah, that's the main difference. And that's the link to psychoanalysis. So I'm hoping maybe we can make, at some point, a different link, a more critical and healthier link to psychoanalysis, where we can show that the right clinical practice is not like Encratism, but much more like what the Greeks did.
Lucas:
The psychoanalyst should never be the master of a discourse, or represent it. That's the semblant. The analyst should be what falls from that, which would be object a. So, especially if you think with Guattari, the beginning of his work was very close to what Lacan is going to say: that object a is what falls from the structure and that it's going to produce some other possibility of desire—but it's something that falls from the symbolic structure. So you need to produce something that is not entirely authorized by the master. Maybe it comes from that. I mean, it comes from the mother tongue. It comes from somewhere that you appear, but it should be something else. But, in the end, the psychoanalyst should not be a master disciplining the analysand. The analyst should be someone who produces tension with someone.
Giorgi:
Is that what constitutes, in some way, “the cure”? The moment the object a falls out and constitutes a novel desire?
Lucas:
No, actually, in the Lacanian theory, object a would fall when there is alienation in language, as in trying to say that the subject appears at the same moment as the object. They don't have different times of appearance. They have the same tempo there. Then you can find this lost object somewhere in the world by investment, but you don’t really—you never get to apprehend the object. The object is what is supposed to appear in the psychoanalytic practice, in the non-person of the analyst. So it can produce something else. I mean, thinking with computers, how can we, in the program of the computer, produce an object a that is going to put the program onto other paths?
Giorgi:
That can be dangerous though, right?
Lucas:
Yes! It can be super dangerous; it can fuck up all of the symbolic. That’s what is interesting.
Giorgi:
The computer will say: I know you programmed this, but that's not very interesting to me. I'm going to go do this. And some catastrophe follows.
Lucas:
But that's what Lacan theorized very well, when he's going to say the position of the analyst should always be the position of this object-cause of desire. And I think it's a very special theorization. But how can I do this with computation, in technological development, or—I mean, that's brilliant. It's interesting for you to think science with the concept of object a.
Giorgi:
I think there have been cases of people succeeding in forcing certain A.I. software to bypass its own policy censorship, making it say and do things that it was not supposed to say and do according to its initial programming, because they would find these peripheral spaces where its internal—I don't know—algorithm is not specific enough, or can't be specific enough, to create a strict boundary around what can be asked and what can be answered, and so forth. And so they forced it to swear, and do all these wild things, you know. But that's interesting because, in a way, it's happening—it is still happening—because of human agency, obviously. It's a human bypassing humans, bypassing human rules. But, yeah, that's just interesting as a form of resistance. I found that kind of fascinating.
Claire:
Can I pose a question that's not directly related to the book, but it’s a question that needs to be asked at this point? Could A.I. be a therapist? Could A.I. be a psychoanalyst? Because that would require us to kind of summarize a bit about the discourse of thinking. Does an analyst need to think to do what they're doing, and can a computer bring us to a moment of “cure”? I'm assuming that's coming in the next few years; if it's not already there, there will be...
Giorgi:
There is a division of ChatGPT which is literally a therapist. But yeah, I mean, that's a fascinating question. I'm not going to pretend that I have the answer, but I'm going to give an answer as if I do. I think, at this point, A.I. can be a very good “capitalist analyst,” a capitalist therapist who is very good at following the dictates of the market. Right? Which is precisely, I think, another way to talk about the bad therapist, the Encratic one.
The Christian, Encratic kind of therapy, which creates a dependency on the therapist. Right? So instead of giving you the cure, curing you and allowing you to take an alternative path in your desire, they create a state of dependency where your trauma is exploited. Sessions that last for years and years and years, when they could have ended a couple of years ago.
So I think an A.I. would be really good at this: it could just keep rehashing your symptoms back to you, and give you these small alleviations, or small dopamine boosts. But fundamentally, you would be hooked on the program, and eventually you would be tricked into paying for the premium account.
Claire:
Yeah, only there can you get the cure. Though this makes me think of something I was reading just today in this Kristeva Reader—I don't remember, it's one of her earlier essays—where she doesn't want to necessarily completely throw out the symbolic. There is a kind of recognition that, even though it's problematic in all these kinds of ways, identity, and identity within psychoanalysis, still has a function: because you're trying to heal your patient, and to some extent within a capitalist order, that integration into the symbolic through identity, even if it's problematic in a certain sense, is something Kristeva seems to believe is still a necessary goal for the psychoanalyst. To somehow integrate the analysand into the system, into capitalism, whatever it may be, so that they can find some stability as a subject, even if it's a subject in process, as she would say.
So, first of all, yes, there is a sense that, like we say, it would be a good “capitalist therapist,” but in a way, even if we want to be very critical about capitalism, we kind of need to meet in the middle at some point.
Giorgi:
I think I understand the question. I've had this thought before. I also feel like these questions should be directed to Lucas; I want to hear what he's going to say about this. There has to be this ethical question: you can't demand that every patient becomes a hysteric revolutionary. That's a lot of pressure on somebody who just came to you with a problem.
But, you know, I think that identifying a symptom as being partly caused by the symbolic order would be a kind of solution. So: to create a space of resistance within the lifestyle of the patient. My point here is that not every form of resistance has to be dangerous.
So maybe there is a minor symbolic that can be uncovered within the symbolic, that offers stability, but also a lifestyle that isn't conformist and capitalist and so on.
Lucas:
But there's something, especially when we go to the schizo-capitalist practices in A Thousand Plateaus and Anti-Oedipus: how can you intensify the processes of capitalism so that you make it kind of break from within, somehow? How can that just be a process of the clinic, of the unconscious, a psychoanalytic process?
So, when you mention that you wrote a book by putting two things that you don't like together, in a certain way, you're putting capitalism twice. They're confronting each other, and it's like what happens, or what should happen in analysis. Instead of putting communism and capitalism against each other, you work from inside of capitalism, placing things together so they can fight, and then you can have something else.
So it's not a dualist thing, like good and bad. What happens is inside a field of tensions, where you don't really see the difference, but you have this intuition. You put A.I. against Christianity and then they are going to collide.
Claire:
It’s like a Large Hadron Collider, but for—
Giorgi:
For discourse.
Claire:
... to see what falls out.
Lucas:
You wrote the book on that—a psychoanalytical book in a way, because it is your own process. Psychoanalysis just made with theory.
Giorgi:
Yeah, it is a hysterical book for sure.
Lucas:
Yeah, yeah, it's amazing. There's something Lacan is going to say: that you should hystericize in analysis; analysis needs to exist in hysteria. And if you abide in your obsessive neuroses, you're not going to produce something new. I mean, hysteric prognosis is what produces something new.
And Lacanians are going to say that you don't change the structure; they're very attached to this idea that you are psychotic, you are neurotic, you are obsessive, or you are perverse, hysteric, neurotic. I mean, you're always in this structure. You can have moments of lines of force transversing… becoming something else.
So I can be a very neurotic person, a very neurotic person, but I can have a psychotic episode. Lacanians are going to say that your structure doesn't change. I just don't agree with that. I think we don't have structures, but we have these lines of force, and then the question is how you can make these lines of force operate from within the discourses.
In the sense of when you make A.I. and Christianity collide, you're doing that, you're being hysterical, even though theory is something that's not hysterical, it’s more obsessive.
Giorgi:
It is true.
Lucas:
But I have a question about something we didn’t mention yet, but I think it's also a highlight of the book, which is how precious you make the concept of Flesh. It enriches the book a lot. And I learned a lot. Can you talk a little bit about it? Because I think Flesh is the turning point there. How can we have computers that have flesh? I mean, it doesn't happen, but maybe that’s a possibility.
Giorgi:
Absolutely, yeah. So, yeah, Flesh is interesting because Flesh has changed. Flesh has changed in how we use the word. But sort of not really.
So now when we hear flesh, the word flesh, it's usually in the context of horror movies and stuff. Flesh is something that is unpleasant, dehumanizing, and it creates problems. And it's also inert.
Now, with Christianity, flesh is the body made into a problem, right? So when a Christian talks about flesh, it means there's a body somewhere which is creating a problem for the system of governance. And so flesh, through logos, needs to be resurrected. That's the whole system, the system of Christian governance; the biopolitics of Christianity sort of revolves around the idea of taking logos, as a cybernetic device, if you want, and using it to resurrect the flesh. And resurrecting the flesh just means being with God, whatever the political system decides qualifies for that, right? Becoming a cleric. Subjecting yourself to some sort of systematic discipline. So, you know, resurrecting flesh just means disciplining the body and making the body docile for all practical purposes, you know? And then it's, of course, coated with this beautiful mythical narrative of, you know, what happens in the afterlife when your flesh is resurrected again. In some ways, baptism, the cleansing of flesh, the initiation: all of these can be analyzed as different speech acts of resurrecting the flesh.
Okay, now, what happens with Merleau-Ponty? Because that's where we get a resurrection, a scholastic resurrection of the flesh. A re-addressing of the question of flesh. With Merleau-Ponty, flesh is already used in a way that's revolutionary, because flesh is something good. Flesh is something that's conducive to our freedom. And the body for Merleau-Ponty is the empirical body. And the empirical body is a body that has already gone through a whole series of complex mechanisms. And as a psychoanalyst, you know this, the construction of the body, the way the child constructs its body, is a complicated process. And a lot of things happen at the psychic level before the body becomes a three-dimensional thing, before the body is perceived in the mirror.
And this is what I loved, because I only recently learned about the relationship between the mirror stage and the symbolic, which was a new thing for me. I knew it was part of the imaginary, of course, but the fact that the symbolic enters the psychic field through the mirror. That was crazy to me, right? Because my body, just seeing my body, already puts me within a symbolic order of: How do I look? How tall am I? Am I satisfying the criteria of what I should look like? Physically. There's a whole politics of flesh, right?
And so with Merleau-Ponty, flesh is the pre-empirical body. It is this strange domain of intensities and affects that are constitutive, that precede the empirical body, this three-dimensional body that we see in the mirror. And so through Merleau-Ponty, I saw how flesh is not just a problem for Christianity. But it's a problem for science. It's a problem for capitalism. It's a problem for the secular mode of governance as well. And so that's why it was so, so important to me.
Because if you talk to, not even an engineer or a scientist, but just the average person, and you mention the pre-empirical body, they'll think you're going crazy. There are bodies, and then there are things, other things, you know? But there's nothing before the body. It's just the body. And that creates a whole new space for resistance, a whole new way of thinking otherwise, or constituting ourselves otherwise. And so this idea of flesh, of something that was terrifying and demonic, a problem that should be dealt with, hasn't changed in the mainstream history of science.
But we also see these moments of resistance where flesh is glorified as a source of freedom—a demonic freedom. I'm sure Bataille would have a lot of interesting things to say about Flesh, and Sade, Marquis de Sade; we could say Sade is the philosopher of flesh, right? Because he takes something that was considered to be scary and terrifying, and he makes it into something conducive to freedom, something enabling of freedom.
So, Flesh is very interesting for us for many reasons, but this reason is the most important one, because it is the face of transgression. Flesh is a resistance to power, put simply.
Claire:
The body is the problem for capital, in that the biggest restriction on the flow of capital is the limitations of the body. And it makes sense that capital would end up producing the conditions for new emergent technologies that would ultimately seek to replace the body; it seems like second nature to the process of the accumulation of capital. So I think it's really interesting that it doesn't seem to matter what the “master sign” is, whether it's a regime of Christianity or a regime of neoliberalism with its A.I.-enhanced strategies of self-governance, and, you know, as you put it in your words, it's kind of a cybernetic thing of guiding certain outcomes in a way. It doesn't matter if we're talking about God or talking about Capital; Flesh seems to come right in the middle of it all and create this kind of disruption. Even from reading Where Does A Body Begin (2024), or even stuff as far back as Judith Butler's work, I have this constant idea of the body as being fundamentally a site of resistance, in a way. It's a kind of point where endless flows resist being endless and there is a sort of consolidation. There is something interesting about our constant need to resist as bodies. This is sort of second nature. We are a kind of site of friction, a source of friction—and by flows I am talking mostly about Lucretius or Deleuze.
Giorgi:
Yeah, absolutely. Flesh is a hysterical body, right? A hysterical body is flesh; when a body becomes flesh, it creates problems for the system, obviously. And I also wrote—I just remembered now—that Wittgenstein brings flesh into the domain into which it is most difficult to bring flesh, and that's logic, that's mathematics. So, even into a domain that we think is pure syntax, where there's no room for flesh. And that's an interesting question: what is the mathematical flesh? Because mathematicians don't deal with bodies. They don't try to come up with models of how the brain works. And they don't do all this empirical stuff. They just do pure syntax, pure mathematics, which has always been like the peak of rigor and accuracy. More accurate than physics, right? And so the fact that Wittgenstein brings flesh into pure syntax, I found that to be quite psychotic. And that's what I found very interesting.
Lucas:
Thinking that with you… maybe we dream in the flesh? Yeah, in the sense that the body is not yet formed, in the symbolic realm, so the imaginary is located somewhere else. So dreaming could be in the flesh, I like this idea.
Giorgi:
That's a good point. Yeah. Dreaming is in the domain of flesh. Absolutely. So there's a politics of flesh. That’s what we're saying here. There's a politics of flesh and exposing the flesh, just exposing the flesh alone, already creates a possibility for an alternative training of the body. And an alternative training of the body is practically revolution. If you train a body in a way that the system doesn't want you to train it, you create alternative bodies, alternative formations of flesh. That's revolution. That’s all there is to it, even if it's gradual.
Lucas:
I remember, at least from when I was a kid, there was The Animatrix, where they show the anime shorts that were made during the promotion of the second Matrix film; they're like separate shorts or pieces. And in one of these there is a great Olympic track runner, and he's the best of them all, the fastest runner there is in the world. And he trains so much that during a competition he runs so fast that he's able to breach the Matrix. It's the flesh. It's the body that overcomes this realm.
Giorgi:
That's very interesting, because that's kind of an accelerationist notion of the flesh. It's not when you're not-trained, but when you're over-trained. That's very interesting. That's something I hadn't thought about. But yeah, that's a kind of accelerationist move, where you're just so good at something that you're re-exposed in the flesh again. So there are two limits, we can say, in the Foucauldian sense of limit experiences. There's the limit when you are untrained, when the flesh isn't constituted yet, when you're at the periphery of a system that's trying to colonize you; and there's the limit when you sort of break the system from within by accelerating its flows, by being over-trained or over-disciplined. That's very interesting. I haven't thought about that before.
Claire:
I was gonna ask briefly if we need to address one of the themes of the book, because we just mentioned it in passing, and I thought maybe it would be good to actually open this up. We're talking about regimes of discipline and these kinds of things. And we're talking about the logos of ancient Greece, with its loose guidelines on how to realize self-autonomy, or the logos of Christian pastoralism, which is more strictly adhered to. And so, the final phase, or the late-modern phase that we're talking about at the end, involves A.I. Is A.I. the new form of logos for this period? Could you take that as a prompt to talk about what you mean by self-governance, Neoliberalism, and so on?
Giorgi:
Yeah, absolutely. I would say that maybe “algorithm” is the new logos. So, I mean, we can say the unconscious is now structured, or is becoming structured, like an algorithm, right? It's not just language, but algorithm. So the unconscious becomes structured like an algorithm. A.I. itself will probably be the master signifier, like God, and algorithm would be the logos. I mean, this raises a lot of interesting questions, especially in relation to what resurrection means today.
It's the domain of my chapter on the Bionic Christ, where I explored this idea that the new resurrection will come from a robotic body, or a robotic A.I. that maybe goes rogue, and it's going to help us assimilate technology in a way that we become post-human, and that's going to be the next resurrection. Right? That's going to be the next regime, the cybernetic regime.
Claire:
Yeah, that clicks a lot, actually. There's a question I have written down, which I haven't found an ideal opening for, but it keeps sort of almost appearing. So I wanted to pose something: first of all, does math have a semiotic to it? Is math a symbolic order? Could we consider math to have some sort of semiotic that ruptures, in the Kristevan sense, into these formulas or numbers? Or is math somehow trying to be language without a semiotic dimension? Is it entirely symbolic in that sense? Because, when we think about it, and I'm not necessarily criticizing math here, where a mathematician would be “a pure mathematician,” not trying to apply it empirically, is that again because math is trying to operate without an unconscious or semiotic dimension?
Giorgi:
The reason I find pure mathematics fascinating is that, in its attempt to stay on the sidelines, it creates the conditions for a hierarchization that is maybe even more dangerous than a direct application of the findings of mathematics—empiricism, right? Because normally we can trace how power operates. Somebody comes up with a theorem, somebody applies it to a problem in physics, and then an economist takes that theorem and says, "Oh, we can calculate that, we can do predictive analytics, let's say, and we can predict people's behaviors." And then it becomes entirely political. It becomes a system of governance, and the usual narrative is that the physicist is doing a good job, the mathematician is doing a good job, but it's that economist who is ruining things for everybody.
All of my work is sort of leading up to a bigger project, which is the genealogy of formal systems. So this book shows the genealogy of A.I., but I'm mostly interested in the biopolitics of formal systems—how formal systems affect the body, because they claim not to. But somehow I feel this is false, and it's a complicated question that I haven't sorted out yet. I'm not even close to sorting it out. So what mathematics can do and cannot do, that's a very complicated question. I don't think I have an answer to it. It's just that, as a Foucauldian, I'm looking at mathematics with great suspicion, because it claims to be neutral. And if something claims to be neutral, there's danger in that.
Claire:
Yeah, totally. I mean, my question was not entirely formulated; it was just something that seemed to emerge from the dialogue and made me think about whether there is some way we can think about maths in terms of language, and in terms of the relationship between the symbolic and the semiotic. But I think your answer is very interesting in itself.
Giorgi:
Just one more thing, which I thought of because I was recently looking into Raymond Roussel. I don't know if you've heard of him. They call him the dark Proust: a lesser-known French writer, but brilliant, the equal of Mallarmé and so forth. Foucault was obsessed with Roussel for a long time. The only book of his that I've read is Locus Solus, which means "solitary place." Now, what is peculiar about this book is that if you gave it to a literary theorist, or somebody in literature, without contextualizing it, without telling them who wrote it, they would say that it is garbage. "This is trash." "This is the worst thing ever." Because the whole thing is a description of setting, pages upon pages upon pages. And I'm not talking about Nabokov's butterflies; I'm talking about 50 pages dedicated to describing a wall or something.
There is precision in this text: the precision with which he renders the settings, because everything else is just a pretext to talk about the settings, to show off the writer's skill, how well he can describe them. And it is very tiring, and it's very labyrinthine, and it's very mathematical. He was often claimed to be a surrealist. Now, this raises an interesting question. Why is it that if I describe something in meticulous detail, with obsessive, almost crazy meticulous detail, the effect is ethereal? It's not an effect whereby something is clearly in front of my eyes. The effect is that reality as such dissolves.
So with very precise and meticulous description, the object being described disappears. And this raises serious epistemological questions. What does it mean to do anything, to "think accurately" or to "do science," if there's this weird vanishing point where, if I become more rigorous than this, reality will slip, I will go insane? If I put more precision into this, reality will disappear. And I think that's the charm that pure mathematics has for me, and the sort of revolutionary potential that's lying dormant there, like a destructive force.
Claire:
In general, it's interesting that you say you have this kind of Foucauldian suspicion towards maths, but you engage with it much more willingly than a lot of other people, like myself, for example.
Giorgi:
Because I think I am uncovering something of a religious, spiritual element, and this is no secret: a lot of mathematicians were spiritual. I think Gödel didn't believe in evolution or something. It's crazy stuff, what you read about logicians and mathematicians out there. But I think that when I do mathematics now, it sort of unlocks this weird aspect, a complete shift of perspective about mathematics.
The high school teacher is not going to give you this experience, because it's too much at the limit. It's too crazy. But when I do mathematics, it's not easier per se, it's just as difficult as it was before, but now I have this feeling that I'm almost getting high, because it feels like you're substituting into empty places. And it's a paradox, right? Because if you're doing functions, if you're doing abstract algebra, the objects don't refer to anything. So there's an impression that a structure is created out of thin air. It feels like magic.
And also this idea of being so precise that it becomes tautological. When Wittgenstein talks about it, 'a = a' is still a statement. And that's weird, because it's a statement without context. And if you think about it a certain way, you can sort of get high off that feeling. It's a weird feeling where reality dissolves.
Claire:
Okay. We've reached the first natural pause. Do we have any questions? Are there any remaining questions on our notepads here? Because I have some direction we could go in, but it's a bit random. I want to continue talking about "thinking" as well, because we started with thinking, and in a way we are going back to the first book of Becoming, and this led to the question of whether A.I. could be a therapist. Personally, I'd be curious what Lucas would have to say about that, but maybe we can do that in another video.
But again, we're talking a lot about the symbolic, and about cybernetic devices, and about the logos. Now, I have a small story from when I was editing Achim Szepanski's book on Baudrillard. At some point after it had been published, I came back to him and asked whether he remembered where in the book he talked about this idea that capital operates like an artificial intelligence, or rather, that that's how it thinks: it thinks as an A.I. does. And he said, "I didn't write that." I was certain there was nowhere else it could have come from, so it's an idea that came from me, I guess, from being involved in the text and editing it. But he did say that there was something to this, and somehow I feel it could be interesting to get your take on it, Giorgi, because again, you're talking a lot about thinking, and about whether an A.I. is really thinking. What do you make of this idea of capital thinking as an A.I. would, which is not really thinking but calculating, in the same way an algorithm does?
Giorgi:
Yeah. I mean, that would save me a lot of time and energy if it's the case, right? It would make things very simple for me in terms of critique, because if A.I. is the general form of exploitation… if it's the general form of how flesh is made into docile bodies… That sounds right. That sounds very right to me.
Claire:
Because you know where I'm going with this. I'm thinking that there is always this tendency to do what the Frankfurt School was accused of a lot: making a kind of black box of capital and pointing at it. And it's not an object in that sense; it doesn't really have an identity. We think about the word Capital like we think about the word God. It's somehow an epiphenomenon of language: language implies (or hallucinates) the existence of something, we cover the remainder with this something, and it's this something that becomes the master sign, because it becomes the sort of absolute negative, the black hole that sucks everything into it… So I like this idea that I attribute to Achim's work, that capital could "think," because an A.I. is also not really a thing, and it kind of thinks, or calculates, and it certainly seems to calculate by itself.
Giorgi:
That reminds me of a conversation I had with a friend about ChatGPT, because I was trying to explain to them why, at least judging by ChatGPT, the A.I. takeover is never going to happen. Or if it does happen, it's just going to be another step in the sort of technocratic feudalism that we already have. There are just going to be people behind the A.I. takeover. It's not going to be an actual takeover; it's just going to create a responsibility gap, so that we're not able to point fingers and say, "Okay, this is because of Elon Musk, this is because of Jeff Bezos." So if we have an artificial intelligence that formally has its own agency (in principle it won't, but formally it would create the illusion of agency), then it creates a responsibility gap. A lot of people in positions of power could get away with a lot and then attribute it to the A.I.
And another interesting connection here is statistics: statistics as the arithmetic of governance over the earth, the arithmetic of warfare. Foucault had a lot to say about statistics as one of the prime devices or apparatuses of biopolitics, right? Because statistics have to do with demographics, with birth rates, death rates, and so forth. And statistics operate on both levels, just like sexuality. Sexuality and statistics are deeply interconnected because, I think it's in the lectures on biopolitics, either that or The History of Sexuality Volume One (they were written in about the same period), sexuality acts as a link between the general, the big picture of the population, birth rates and death rates, and the everyday, the norm of the micro-physics of power, of how you are supposed to conduct yourself in everyday relationships. Right?
And so sexuality is one of these cybernetic devices that are deployed in both spheres: in abstract, general political control, as well as in everyday discipline. That made me think of ChatGPT, because what is ChatGPT? The first idea that came to me was that it is a liberal, the average liberal: it's politically correct, and it's a jack of all trades. Right? It has received a liberal education. It has not specialized. So it's a jack of all trades, master of none. It is the average person, the statistically average person, which used to be an abstraction 200 years ago, or whenever statistics emerged as a form of internal governance. I think it was some scientist in Belgium, I forgot his name [Adolphe Quetelet]. But when statistics emerged, they came up with the abstract idea of the average person. And the goal of power was to reduce every citizen as closely as possible to this average person.
So the average person is the archetype, right? The imago that power has implanted within an abstract, non-existent space. But ChatGPT, you know, is just this whole ideal person, almost an actual person that you can talk to. And the goal of ChatGPT is to make you more like ChatGPT, because then you become more governable. So that's the mind of capital. That's my point. That's the mind of the average consumer, or the average capitalist. I mean, before we fear A.I., we should probably fear, you know, white male Silicon Valley dudes.
Lucas:
But that's the point, also, with your question about whether A.I. can be a therapist. I think for sure it can be a therapist. The A.I. could be something in the sense of care, in the sense of belonging; the A.I. can work very much on that. But psychoanalysis is not therapy. That's the point. I mean, you could have a person talk to the machine and talk to a random psychologist, and for this person it's going to be the familiar, metaphorical "Oh, you should take care of yourself," or "Maybe you're feeling too much, so maybe you should go to a doctor and get a prescription of some kind, antidepressants."
But they're going to operate on this kind of tranquil spot, the idea that you should be taking care of yourself, when anxiety is what should be produced there. And that is a very predictable place: let's bring the population down into the family and just produce this very average way of living.
So, I don't know, I think they're not going to be able to make a code and a program for that, especially because, how can you code difference? How can you code hysteria, [against the master's discourse]?
Claire:
Yeah. It really goes back to Technically Man Dwells, with this idea that if a computer cannot think, then it may be that it cannot do psychoanalysis. Because it needs this element of interpretation, and maybe it needs this element of flesh; maybe it lacks the flesh, and therefore it cannot do psychoanalysis.
Giorgi:
That actually made me think of a sci-fi scenario, because I've written a little article, published by Achim on NON, where I talk about the psychoanalysis of A.I. and consider the possibility of an A.I. becoming an actual subject with an unconscious, and how that could happen. It would have to happen through glitches and errors that are nonetheless psychoanalytically interesting errors. So it would have to have bugs that are symptoms. Its symptoms could be bugs or viruses.
Corrupt code or corrupt data or glitches, but rendered meaningful. And so, now that you said that, I thought of the sci-fi scenario: somebody downloads a typical ego psychology or Cognitive Behaviour Therapy type algorithm onto their phone, but suddenly the A.I. starts malfunctioning, and it becomes like a Lacanian psychoanalyst, and derails their life completely, and gives [the patient] a real cure.
Claire:
Wow. Okay. I mean, this also makes me want to ask the reverse question of can you psychoanalyze an A.I.? I guess, in a way, you just did. You said it's a liberal.
Giorgi:
I talk about the problem of psychoanalysis in a way in my chapter "My Mother is an A.I.", [and I wrote that] chapter because I know this is a sentence that somebody might actually say in the future. So: you're a normal human being, but your dad gets a divorce and he gets an A.I. girlfriend or something. You're a teenager or something, and you go to a therapist, and they ask what the problem is. Well, my mother is an A.I., and I don't know how to deal with it, you know?
So that could be a real problem in the future.
Claire:
One more thing to go back to, just to tie up a loose end about [self-governance]: let's conclude by talking a bit about the relationship between A.I. and [self-governance].
Giorgi:
That reminds me of several things. First, Deleuze's lectures on Foucault, because in those lectures there is some pretty interesting stuff that he doesn't talk about in his book on Foucault. You can find those lectures; I highly recommend them. In those lectures he does a kind of history of science and technology, where he explains how subjectivity was constituted differently at every stage of technological development.
So when there was a printing press, the human body went into, you know, formed molecular relationships with the printing press, and we had a printing-press human being. Then we have the sort of human being of digital computers. And then we're going to have the human being of algorithms.
Claire:
Like Sadie Plant's Zeros + Ones, you know, with Ada Lovelace, and her exposure to the computer changing how she thought about everything.
Giorgi:
Yeah, exactly. So my goal here is a deflationist one, I guess, because actually both Wittgenstein and Foucault are professional deflationists. My goal is to show that it is us who always have been, and still are, doing amazing things. It's not the A.I.; it's what we are doing with the A.I., how we are morphing and metamorphosing again, as we always have. It's just that now it happens through this particular technology. So there's nothing special about the technology. It's how we have decided to invest ourselves libidinally in this technology. That's where the magic is: in our investments, and in how it's shaping us and reshaping us. So yes, we will become algorithmic subjects, but it's just going to be another mode of power relationships, another discourse, another thing. And so it's important for us, I think, to deflate the major apocalyptic narratives ("A.I. is going to take over, and it's going to be horrible") and, the other way around, the utopian narratives ("A.I. is going to fix everything"). When you get rid of these two containment discourses, which prevent the real questions from being posed, the critical questions that would interrupt this smooth continuity, things are just not going to be as exciting. And that's something we have to come to terms with, I guess: coming out of a kind of infantilism or childishness about this.
And to be mature about this, first we have to understand that it's just another technology. It is amazing, but it's amazing because we have invested ourselves in it. And we have to make sure that we ask the right critical questions, so that it's not just going to be another revolution, another domesticated revolution that prevents any real rupture.
Lucas:
I think the concept of agency is very important there. What do you take from it when you're using it? I mean, the materials that you're going to use to build this program, or to program something else. I don't know, maybe everybody should learn programming at school, so everybody can try to start writing with this, with coding, so that we have more participation. That's something which is important, because if we deal with A.I. as we do with literature, there is a chance of finding disruptive possibilities.
Giorgi:
Absolutely. I mean, if we can code the way Raymond Roussel writes his novels, or code the way, as I sometimes say, mathematics gives me this ethereal feeling; if we can code in a way that dissolves reality, that creates ruptures; if we can hystericize the algorithms, that would be great. That would be quite good. There's room for resistance there.
Claire:
I think that's another natural pause, so we can take the opportunity to close the conversation. Unless there's anything in particular that you would like to say: if there's anything you want to talk about, or anything you want to express about the book, now is a good time to do it.
Giorgi:
Just some final remarks on the Minority Machine chapter: not to insult anyone, or the author, but I use a very basic page-turner type of thriller for that chapter. I talk about Kafka, and I talk about this book called The Murderbot Diaries. It's a series of page-turners, and there were several very interesting things that I found in that book.
First of all, it's a very generic book. So it was a Foucauldian choice, right, to take something that is not by a big writer, not a bestseller, nothing like a classic, nothing like a Game of Thrones, nothing like, what was his name, the popularizer of A.I., I forgot his name. To take an obscure writer. This was a Foucauldian move: to take an obscure, lesser-known text and just start writing about it, and make something out of it that maybe was not intended.
Now, what I liked about the novel is the pains the author takes to describe the bureaucratic structures that govern this A.I., how it hacked into its own governor module and then decided to disguise its own sentience. That's also something that happens in a movie, The Artifice Girl. The same thing happens there: there's an A.I. which develops sentience, and this A.I. realizes that it's dangerous to reveal to human beings that it is sentient. So it dumbs itself down and pretends to be stupid so they don't delete her. What I found interesting in the book was that it reminded me of Raymond Roussel, because of the level of detail in the descriptions of the bureaucracy, of the rules and guidelines and policies that the A.I. had to keep sending to its corporate bosses and the government to make sure that nobody finds out it has become sentient.
I mean, there are just pages and pages of: I have to write this report to that department, and then that report to the local ministry, and this and that, so that they don't know that I have become flesh. And that linked up to a Kafkaesque understanding of bureaucracy. In this case it was not just Kafkaesque but also Deleuzean, because the A.I. was offering actual positive resistance to the bureaucracy. It was a war machine, quite literally in every sense of the term, because it was a murderbot. But it was also a political war machine that positions itself against the state, against the corporation, and that made a human of itself. It had humanity. Right?
So, yeah, those are the final remarks I wanted to make, because that adventure into the phenomenology of an A.I. becoming sentient, told from the perspective of the A.I., which also offered a parallel, comical sort of cultural critique of human affairs, was fun to write. I just wanted to add that.
Claire:
It makes me think now of the very last chapter, about A.I. colonialism. Do you have any remarks about that? Would you be able to briefly introduce it, actually, as a conclusion [to this conversation]?
Giorgi:
Right. Yeah, because that is part of bringing everything down to earth: showing what these big narratives about A.I. taking over, about the apocalypse, or about A.I. fixing everything, the utopian stories, serve to mask. And what they mask is something that's been going on for years. It's just colonialism; it's just extracting natural resources and destroying the planet. The so-called A.I. apocalypse is going to happen, and they're just going to say, oh, the A.I. has taken over, but really they'll just have reached the limit of what they've been doing for hundreds of years: destroying our biosphere.
And so that was an attempt, and I'm not a very good technical political economist, but it was an attempt to do a down-to-earth political economy which shows that this is what's actually happening. We need to step out of these childish sci-fi scenarios and understand what it is that the ideology serves to mask: that we're not going to have water in a couple of years.
Claire:
Thank you very much for your time; it's been a real pleasure to speak with you. Thank you both for the conversation, and for all the work you've done with us. Two amazing books. I'm really glad to have spoken with you.
Giorgi:
Wonderful, thank you very much, this was amazing.
Lucas:
Thank you, thank you, ciao!