Cliff Anderson is Vanderbilt’s associate university librarian for research and digital strategy, and he’s back on the podcast interviewing another author of a fascinating book Cliff read recently. This time, he speaks with Cathrine Hasse, professor of Learning at Aarhus University in Denmark, author of the 2020 book Posthumanist Learning: What Robots and Cyborgs Teach Us about Being Ultra-Social from Routledge Press.
Cliff and Cathrine have a wide-ranging conversation, covering such topics as posthumanism, Lev Vygotsky’s learning theories, why teaching humans is harder than teaching gorillas, and cyborgs.
- Cathrine Hasse’s faculty page
- Posthumanist Learning: What Robots and Cyborgs Teach Us about Being Ultra-Social
- “In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation”
Derek Bruff: [0:05] This is Leading Lines. I’m Derek Bruff. Leading Lines producer, Cliff Anderson, can’t stop reading books. Cliff is Vanderbilt’s associate university librarian for research and digital strategy, so maybe it makes sense that he reads a lot of books. He’s back on the podcast interviewing another author of another fascinating book that he read recently.
[0:27] This time he speaks with Cathrine Hasse, professor of Learning at Aarhus University in Denmark. She’s the author of the 2020 book, Posthumanist Learning: What Robots and Cyborgs Teach Us about Being Ultra-Social. Cliff and Cathrine have a wide-ranging conversation covering such topics as posthumanism, Lev Vygotsky’s learning theories, why teaching humans is harder than teaching gorillas, and why I may already be a cyborg. (music)
Cliff Anderson: [1:00] Welcome, Professor Hasse.
Cathrine Hasse: [1:03] Thank you so much for inviting me.
Cliff: [1:06] Maybe we can begin by introducing yourself to the audience and telling us a little bit about your institutional affiliation and your primary fields of research.
Cathrine: [1:16] Yeah. I’m Cathrine Hasse. I’m an anthropologist with a special interest in what I call cultural learning processes and technology. I’m a professor at the Department of Learning at Aarhus University, based in Denmark, and I’m heading a research program named “Future Technology, Culture and Learning.” In this program, we discuss and explore a broad range of technologies, for instance, digital technologies and robots, and what they bring to culture and learning, not just in the educational system, but in general.
Cliff: [2:02] Wonderful. And I think your book attests to just the incredible range of different fields that you bring together and synthesize in such a fascinating way. So the title again is Posthumanist Learning: What Robots and Cyborgs Teach Us about Being Ultra-Social. And it already contains two terms, posthumanism and ultra-social, that are central to your argument. Could you help us understand the meaning of both, maybe just in a nutshell?
Cathrine: [2:30] Well, in a nutshell, I’m not sure. (both laugh) These are not easy terms to explain, but I’ll try. So posthumanism is a term that basically questions the self-evident status of human beings by exploring, for instance, how technology, but also other materials, can change what is categorized as being human. However, the term can cover two different and even contradictory interests. On the one hand, people are using it to refer to what I call a technical post-human. And that refers to a figure in which humans and machines are merging, including through AI and robotics, where we increasingly become merged with technologies.
[3:33] And some have made the claim that this technical and not very social, but very autonomous and hyper-intelligent being is what human beings eventually will become when we transgress what human beings are today, and gradually merge with more and more intelligent machines until we reach what Ray Kurzweil has called the singularity, where human intelligence is surpassed by, for us, incomprehensible machines.
[4:11] However, this post-human approach has been criticized by others for being very humanist, in the sense that it actually builds on a very old-fashioned conception of what a human is, namely an autonomous, super-intelligent figure very capable of standing alone. And contrary to that, we find another way of understanding the post-human in what I call posthumanist theories, which actually no longer take the category of the human for granted, but try to dive into new understandings of what human beings are and what they can become.
[5:00] And in these theories we find a completely different kind of human that is constantly entangling with each other and with the material environment. And in these entanglements, we are not coming across as particularly intelligent on our own, because we actually become what we are through these ways of engaging with the material and social world around us. We are certainly not autonomous. In a way you could say that this posthumanist understanding of the human does not accept the human as a fixed category. We are all the time evolving and becoming something else as we engage with our physical environments.
[5:57] And what is interesting here is that this is tied, in my work, to the concept of posthumans as ultra-social beings in learning. Because in order to engage with environments and each other, it’s very important to understand that we are ultra-social learners. This concept I take from Michael Tomasello’s work, where he actually defined ultra-sociality as a very deep, defining characteristic of what it is like to be a human. So the human species is characterized by this ultra-sociality.
[6:48] Some posthumanists, people who make post-human theories, would deny that we can actually talk about a species, since we are evolving all the time. But nevertheless, following Tomasello, there’s something very particular about our human capability for learning with each other and with material environments. Other animals, like the big apes, can live in big social groups and also learn from each other, and some animals build their own environments like we do. But none of those other creatures are ultra-social in the way that they can create the very diverse cultures that we can create.
[7:45] Within these cultures, we are becoming, in a learning process that makes some environments very understandable to us. Some environments we grow up in and we love. But at the same time, as human learning creatures, other environments we hate. And this very same environment can be loved by other humans. So this kind of very diverse becoming with our social and material environments is really special to humans and very important when we talk about differences between humans and robots, for instance.
Cliff: [8:20] Thank you so much. You’ve touched on some of the really fascinating themes of your book that I learned so much from. And I think it’s extremely helpful to resolve that ambiguity about these different senses of posthumanism, because people may come to it with expectations. In fact, it’s all about this relationality and being entangled in different ways with our environment that I think makes your work distinctive and somewhat different from some of the other senses of posthumanism, as you said. Thank you for that clarification.
[8:53] There is a theorist that you foreground at various points, Lev Vygotsky, if I’ve got the name correct, and his name occurs frequently throughout the book. And maybe this is a good point to just ask: what is it about Vygotsky’s theory of learning that differs from other approaches that are more familiar to our audience, like the behaviorism of B.F. Skinner or the constructionism of Seymour Papert? Why Vygotsky? What made him of interest to you?
Cathrine: [9:24] Yeah. First of all, I have to say that connecting Vygotsky with posthumanist theories is actually not something that you would see very often. I think I’m one of the few people who have done that. And I do it because I find him particularly relevant for understanding how learning and our conceptual understanding of the world is entangled with not just social aspects and materialities, but also how this, to a certain extent, makes us collective. So ultra-sociality here also points, with Vygotsky, to the fact that through learning we become aligned in collectives. We become with our material environments and each other.
[10:25] And that is something that I have not really seen emphasized so much in learning theory, and not actually in posthumanist theory either, whether we talk about philosophers, anthropologists, or psychologists for that matter. So the interesting thing is that among my technical colleagues, and I do have very good colleagues with an engineering background, for instance, there’s a very deep interest in learning theory that I don’t really find to be the case in a lot of the more post-humanist environments. And here there is a particular focus on behaviorism, and also on Piaget-inspired work like CMAP habits.
[11:22] And they are very good for calling up some very big issues about how we learn as individuals, but they are not so good at explaining how we can become more collective, how we can be understood as more collective beings in material environments. And here I find, as I said before, Tomasello’s concept of ultra-sociality important. And Tomasello actually also builds on Vygotsky. I think he does that because he, like me, has been inspired by this educationalist who lived and worked in Russia in the twenties and thirties, died very early, and only late became well known to the Western world. But Vygotsky has a lot to offer when it comes to understanding these collective material processes of learning.
[12:39] I then take him a bit further by connecting his work with posthumanist theories. And by doing that, I think we get a better understanding of how we as humans are constantly in a learning process; it never stops. It’s not an instrumental aspect that we can just take up and use and then leave. We’re constantly learning with each other and our material surroundings. And in this process, Vygotsky has shown us that not just our concepts are transformed, but also what he calls our verbal thinking about the world: the very way we perceive the world, the very way we engage with the world and understand the world in meaningful ways, are transformed by learning in these processes.
[13:39] So it’s a very fundamental way to understand learning, but also understand that these processes of transformation are collective as we learned with our material environments in ways that tie us together. So in a way you can say the materials become a focus point for how we learn. It’s not just an instrument you pull out and use for learning. It’s always a learning process that involves materials.
Cliff: [14:16] That’s really helpful. And I’m thinking to myself that Derek Bruff, who’s the editor of our podcast, always talks about as a mathematician, he never finds an adequate replacement for the chalk that he uses on the blackboard. That that’s the way that he loves to teach it in the very kinetic sense of like drawing out the equations is important to him in some way of communicating the ideas.
Cathrine: [14:40] That’s very interesting because I used to study also physicists and their education. And the physicist also use the blackboard or a whiteboard of course, all the time because they needed to visualize their thoughts. So they do it very explicitly. But it’s actually something we all do all the time, pointing to things in our environment, showing each other. These are the potatoes we need to cook for dinner. You know, a lot of things about engaging with the materials of the world and not just the visual sense, however, also, the whole essential body is involved in these processes. So our hearing, the way we smell the world, all these aspects are involved.
Cliff: [15:38] And I should say that your book, although it has a very strong theoretical component, is also extremely practical, and you have a lot of interesting vignettes in which you discuss different experiments and show how this theory works in practice. So maybe this would be a good point to come to some of the different concepts that you talk about and how you apply your theory to them. One I’m thinking about in particular is the massive open online course, or MOOC. And one of the things you say that’s really interesting to me is that the promise of massive open online courses, to provide a university education to anyone with an interest, hasn’t materialized.
[16:20] In fact, in your book, you argue that MOOCs often inadvertently worsen educational inequalities by privileging the already educated while idealizing, in some sense, the situated realities of the non-educated. That might come as a surprise to some of our listeners. So could you maybe explain your view, in particular, about why MOOCs have failed to achieve the sort of democratic revolution that their founders intended?
Cathrine: [16:41] Yeah, thank you for that question. First of all, I think it’s fantastic we have MOOCs. I mean, it’s a great opportunity and it’s wonderful. And it’s caught on like a bushfire, you know, in the educational world, because it’s so full of promises. I think what I try to point out here, with this theoretical underpinning, is that we shouldn’t get our hopes up too high. Because there are reasons why it’s really difficult for a lot of people to take the gift that’s offered. So you can say the MOOCs are a kind of gift, a gift of education and learning that’s offered. But we need to understand that humans come with different potentials for learning. And that’s not about being less or more intelligent.
[17:47] It’s about the material learning environments and social learning environments you have already been brought up in, so to speak, what I call your preceding learning. So what you bring to the, to the table or to the MOOC is really important for how you can continue to learn. And in my program, we actually made an interesting experiment that I don’t think I have in the book. It was made later with colleagues from Thailand and Malaysia. And that really very clearly again showed how we differ in our ultra-social learning potentials. So the MOOC was about learning how to become a movie script writer. And potentially it could be reached by people all over the world. But our conclusion was that it actually gave a kind of false expectation of everybody joining this course and being able to learn from it which was claimed in the beginning of the course.
[19:01] So I and my Danish PhD student very easily followed the course and took a lot of notes as we went along. And the same was done by our colleagues from Thailand and Malaysia. But it quickly turned out that it was a hard task for our colleagues from Thailand and Malaysia to follow the course and learn from it, because most of the examples were tied to movies and issues that they had never heard about, whereas they were familiar to us, to me and my Danish PhD student. So this is an example of how cultural learning processes matter, and how we often overlook the importance of our own cultural upbringing, and what makes certain things, certain materials, certain kinds of meaning-making completely self-evident for us which are completely new to other people.
[20:02] And that means their learning simply has to start at a different place. It has to begin with, for instance, showing materials, showing things, making other references. And that’s easier, and also more easily done, if you’re sitting in the same room, if you share a material surrounding, if you can make what I have also called social designation: you can point to something, you can pick it up, you can show it to people. That helps learning a lot if you come from very different cultural backgrounds.
Cliff: [20:45] And I mean, I think it’s also true to say that even if you come from similar cultural backgrounds, I know when I’ve been teaching computer science, coming around beside someone, pointing to the screen, showing them a set of keystrokes, it’s so much more effective and it’s frustrating in a way not to be able to do that when you’re teaching online. I think it’s a frustration that some of our colleagues have had.
Cathrine: [21:10] But you’re absolutely right. And that ties again to my concept of culture. So my concept, it might have come across here as I refer to culture as a national culture. That’s by no means the case. The culture, for me is defined by the collective learning processes we have gone through that makes certain materials available to us and certain materials understandable to us and others not so. Culture can be also within a classroom, for instance, where people come with different potentials for learning. So definitely it’s not, it’s not something that is just shared in a country.
Cliff: [21:56] I also want to come, because it’s in the title of your book, to questions of robots and cyborgs. So maybe we can talk a little bit about both. You have some really interesting discussions about robots. Sometimes robots are seen as a kind of new educational technology, and they may well be. But you also point out that robots like Jibo, which was a commercial robot, and NAO, which is perhaps how you say the other, have a kind of emptiness at their core that can lead to human beings becoming frustrated when they engage with them. Can you talk about learning with robots and why that’s both a boon and also a frustration?
Cathrine: [22:37] Yeah, it might be a bit unfair to the robots. I’m actually using them as a kind of inverted mirror for what humans are not. So you can say that in our minute studies of human learning processes, we can reveal some everyday dynamics and connections that continuously bring about new taken-for-granted events in our everyday lives. And with Vygotsky’s learning theories and the notion of ultra-social humans, again, we can say that humans are continuously learning. They learn constantly. And in this process we adapt what is meaningful to us in the environment, and we learn new ways of perceiving the environment as we engage with it and with others.
[23:39] And of course, robots today can easily be programmed to speak and form sentences like humans, and machine learning can also be employed so the algorithms adapt in a certain way. So we might think that when we have robots in our material environments, we could engage with them like we do with humans. But all of our research shows that that’s really not the case. And I think that’s because the robots are not ultra-socially responsive in the way they transform. So Jibo, which is one of the devices I talk about in the book, though I could also have talked about others, in a way pretends to be human.
[24:38] Jibo was pretending even more than Echo and the Google device, because it had a very human-like face that would be smiling at you and so on. In a way, that can make humans really angry, because when they then do not get the kind of response they expect from a human-like creature, they get even more frustrated than if it just looked like a washing machine. But even bringing a human voice to a device can create expectations that we will get human-like answers. And people get frustrated with these devices because they are not responsive in the way we are. And that’s why I talk about the robots’ empty curiosity, for instance.
[25:34] If you and I were going for a walk and I looked out the window and said, “oh, what a pity, it’s raining today,” you and I would already share a lot that would make that sentence meaningful, precisely on this day and at that point in time and in the situation that I say it. Maybe we were going for a walk in the woods and we don’t want to get wet. But if Jibo said, “oh, it’s a pity, it’s raining today,” that would come across to me as a fake concern, as an empty statement, because I know it will not share the engagement with me in making this statement meaningful. And that’s the kind of frustration we can have with these robots: their kind of engagement is not meaningful in the same way as a human being’s engagement.
Cliff: [26:36] Yeah, I think this is really an important point to reflect on. And I think one of the lovely parts of your book is the experiments you conduct with children who try to engage with these robots and try to imagine them, in a sense, becoming more relational. And then the robots disappoint them again and again, and the humans learn that the robots don’t live up to the expectations they had of them. And I think that’s fascinating too, because there are so many educational technologies that come out promising one thing, putting a very friendly human face on what’s in fact an algorithm, and it leads to disappointment in similar ways.
Cathrine: [27:15] Yeah, and in relation to our experiments with humans, that was the children and the robots, that was really funny, because we have also seen that it takes a long time for humans to accept that the machines are not like themselves. They become frustrated because they try and try and try engaging. And that’s also part of the frustration process: they don’t give up. Humans really, really want these machines to respond to them as they would to each other. So we’re willing to go a long way. I’ve called it stretching; we’re willing to stretch ourselves a long way to include these robots in our sociality. But over time, eventually, people give up, because they don’t really get the responses they hoped for.
Cliff: [28:14] And cyborgs are also an interesting phenomenon that you discuss, the phenomenon of blending human and artifact. But in a way you make it seem much more mundane than science fiction authors tend to suggest. For example, an old person walking with a cane can count as a cyborg in your perspective. I want to ask you maybe to talk a little bit about the role that you see for phenomenology here, going back a little bit to theory, because you have this interesting quotation on page 268 of your book, if I could just read a few sentences here.
[28:53] You say, “Body one, body two and technology are united in ultra-social learning. What phenomenology brings to cultural historical learning theory is the acknowledgment of the vagueness and indeterminacy of relations, a potentiality with phenomena that is resolved in momentary collectives when word meaning, already learned and embodied, is entangling in and transforming our immediate senses and perceptions. Becoming a cyborg is, in a learning perspective, more than merging flesh, metal, and categories. It’s about the vagueness of learning merging into habits.”
[29:26] And I think that’s so beautiful, that last sentence. But also, I’d love to hear you talk about it a little bit more. What do you mean when you say that it’s the transition from vagueness to habit?
Cathrine: [29:38] Yeah. So first of all, I’ve been very inspired by my colleagues in a particular theoretical direction called postphenomenology, Don Ihde, Robert Rosenberger, Peter-Paul Verbeek, and others, who actually explore the relation between technologies and bodies. And that is of course also inspired by phenomenology, but it’s building on it, moving beyond it. That’s why it’s called postphenomenology. A lot of posts here. And here, for instance, we could take a point of departure in Merleau-Ponty’s work, the Phenomenology of Perception, where he actually describes this process of perceiving the world: we’re not perceiving the world as fixed categories, where it takes just one look and then we understand everything there is to understand.
[30:44] And when we put learning into that kind of discussion, it is an ongoing learning process that makes things intelligible to us. And we don’t start from scratch every time. But it’s not that the world is completely determined either. We’re always, in a way, in between. But at a certain point, we learn to recognize, for instance, a mobile phone as a mobile phone. We don’t have to start learning that all over again; that becomes habitual. But the way the mobile phone is engaging us, the way we are engaging it in our social relations, that is in constant transformation. So habit is something we form in a process that actually never stops. But what becomes habitual is that we don’t start from scratch.
[31:48] So the learning is about building up a kind of basic understanding of what we are dealing with here, and then it keeps evolving from there. So things are vague and indeterminate at first, and it’s actually an ongoing process, because we can never be sure how we’re going to use the mobile today; it’s not up to us alone. It’s also the material environment. We can experience that the weather is so cold the mobile will not work, for instance. We can experience a call that we didn’t expect that makes us treat it in a different way, that makes us want to smash it to the ground. So a lot of things can engage us that transform what we have already learned. But it’s still a habit, how to open the mobile, for instance. And that goes for a lot of other issues as well.
Cliff: [32:54] And I just think that it relates to the thesis, I guess, of the extended mind that comes from Andy Clark: that these objects have in some sense become part of who we are. They become part of our habitual way of interacting with other people and the world around us, and we don’t think about it anymore.
Cathrine: [33:14] No, definitely. But the point is, in relation to a robotic being, this indeterminacy in these relations is crucial, because it’s very, very difficult for machines to understand this kind of indeterminacy and how it becomes something meaningful. Because we strive towards meaningfulness in our engagements with the world. So things do not continuously stay indeterminate; at some point, we begin to put a collective meaning into them. And then it might be turned over again.
Cliff: [33:59] This is actually a really good time, I think, to talk about a story that made the news that we’re all familiar with, but you have a very interesting take on: the case of Tay, Microsoft’s ill-fated social bot on Twitter. Because it seems to me this is precisely what happened there, according to your reading. As our audience may remember, Tay started out as a very happy, friendly bot, and then began spouting hate speech because of what it learned from interacting with Internet trolls. And you write, “the reason why Tay and the trolls differ is because Tay’s capability of differentiation is not normative but algorithmic.” I think this is similar to the point you were just making. But could you expand on your reading of that incident and the insight that you drew from it?
Cathrine: [34:43] Yeah, it’s a really fantastic incident. And I’m sure that people in machine learning also have learned a lot from this because they really did this in the best of meaning and ends. I think it came as a shock to them, actually, how human beings take into their own procession, the materials offered, and transform it into something completely different. But this is actually what humans do. I mean, this is what ultra-social humans do. And in this case it was a group of ultra-social human trolls who, who decided to, to, to attack you could say, this innocent Tay teenager and transform her into an instrument for their vile intentions. Which by the way, I think also included having a very low opinion of women and stuff like that, coming out from this innocent teenager’s mouth.
[35:58] So what they do is, in this community of trolls, they have certain values and certain norms for how to be a good troll, so to speak. That’s in their culture. And in the material words they use, they put particular meaning. The algorithm was also set up to learn from people’s words. But of course, these words were not meaningful to the algorithm, so it couldn’t really make an analysis: why are they saying this all the time? Why are humans all the time trying to tell me that women are stupid and not good programmers, or whatever they said, and even worse things? The machine was not able to, so to speak, read the situation. And of course, the programmers had not envisioned that people could be so vicious and use it this way.
[37:10] So Tay actually had algorithms that kept connecting new meanings with words, just like humans in a way. But it was not meaningful. And the situation was not something that the algorithm could realize or learn from, because the situation is tied to what we want to do in a particular situation, and the algorithm doesn’t really want to do anything. So when we say a material word, and this is tied to the B.F. Skinner learning process again, these words keep evolving as meaningful to us in particular situations when we learn. So when we learn that trolls are saying a particular word, we will put it in a situational understanding that teaches us, ah, this must be trolls speaking, you know. But in the machine, the only meaning there is, is between 0 and 1. As far as I understand the programming language, basically. It might change; I don’t know if these things change.
[38:27] But I know that today, whatever comes out as something meaningful from a robotic mouth or an algorithmic machine is not in fact meaningful, but mechanical. And I doubt, as I also say in the book, that machines can actually eventually become ultra-social like humans. But I might be wrong there, so I won’t go into an area I don’t know enough about here. But today you can say machines don’t care if it rains or not. Even if there were robots going for a walk with a physical body in the woods, the rain wouldn’t matter to them. And ultra-social people can come up with all kinds of new ways of dealing with what matters to us.
[39:24] For instance, getting wet. If we go for a walk, maybe we could say to each other, okay, let’s go for a walk even if it’s raining. And I could say, let’s go singing in the rain. And that would just be something that would come to me from the situation, from watching the rain, or maybe because yesterday I saw an old movie called Singin’ in the Rain. Then a machine might learn that it’s possible to repeat this funny sentence, and next time you and I say, let’s go for a walk, it says, “oh, let’s go singing in the rain.” It will fall flat, because the situation is completely different. It’s not funny anymore; it has been said. It could be, you know, another way of showing itself as false and empty. And we’d be really annoyed that it took my funny sentence like that. So humans are simply responsive to the situation and to each other and to the materials in another way.
Cliff: [40:26] Yeah, I think that’s really your point at heart. We’re dealing with matrices of numbers in the machines’ algorithms, and the sense of purposefulness, of normativity, of intention, these are things that are missing from those algorithms. And you know, even if you could sort of patch it, saying don’t connect these numbers with those numbers because that leads to hate speech, and maybe that can be learned through the algorithmic process, it’s still not the same thing as the kind of evolving of meaning that you’re talking about in a human way. That’s how I understand your point here: it’s a kind of alien learning that we mistake for human learning, but it’s a distinct type of learning, if I understand you correctly.
Cathrine: [41:15] Yeah. Yeah, exactly. First of all, what we need to do is explore human learning much more, because it’s taken for granted that we know how humans learn. And behaviorism is very big in machine learning. But my point is that there is more to learning than behaviorism. And we have to explore these other kinds of learning theories, also in machine learning, to understand what’s actually possible or not to imitate in the machine. And I think that if we do that, we’ll come across this barrier of things being situated and meaningful and localized in ultra-social cultural settings that involve materials we constantly conceptualize in new ways. That will be a very, very big challenge. And maybe we can’t ever turn that into something useful in machine learning.
Cliff: [42:26] As I take your point, in some ways the development of machine learning and robotics is helping us understand ourselves better by not confusing ourselves with these other processes of learning. It’s bringing up things that were latent, that we hadn’t paid attention to, and that you’re now calling attention to and saying, here’s this whole other dimension of learning that we’ve just taken for granted.
Cathrine: [42:46] Yeah, exactly. So in this sense, they become a very important field of research, like this inverted mirror. And we find out new things about ourselves all the time. And also, I have to say that the developments in machine learning have been amazing. For instance, in terms of what they can do with language, how they can speak, how they can help translate words, and that kind of thing. So we learn a lot about how we are aligned with machines, but also how we may fundamentally differ. Or whether it’s fundamental, I don’t know, but we’ll see.
Cliff: [43:34] So given your friendly critiques of MOOCs, robots, and algorithmically driven learning tools, what advice do you have for educational technologists as they seek to build environments that are more suitable for post-human learning?
Cathrine: [43:51] Yeah, I think in many ways we can embrace all these technologies just like we can embrace all kinds of materials in teaching. We shouldn’t hold back from giving our students hands-on experience with building robots, but also building a chair or things like that. I mean, it’s good to use yourself. It’s good to be in a state of doing something as a student. And that would be part of taking this notion of human ultra-social learning seriously. But that would also entail an attention to students’ different potentials for new learning, both in terms of material resources, social resources, and in relation to what I call their preceding learning. So it could be useful in teaching to remember all this.
[44:57] Another thing I think is really important is that we shouldn’t, in teaching, pretend the machines as they are today can learn like humans or can be teachers like humans. They can’t. We can use them for a lot of fun and exciting new learning possibilities that they can give us. A MOOC, for instance, would be a fantastic tool to have in class: watch parts of a MOOC and then discuss it together, and find out all the different resources children bring to bear in these situations. Just like they did in our research with the robotic drawing sessions with the children. They had such a lot of very different resources to bring to bear in the learning situation. So basically, don’t think of humans as machines and don’t think of machines as humans. Which means that we should see humans in a different way than what I said was part of the post-human definition, the technical post-human.
[46:12] Humans are not individual. They’re not autonomous, and they’re not something we can just enhance with a quick fix. They are responsive, collective, and sensitive beings who come across, eventually and always, as culturally different in the ways they learn. But we can align these processes with the materials offered if we are thinking about and taking ultra-social learning seriously.
Cliff: [46:42] I think that’s a wonderful answer. I feel like it’s a very humanist version of posthumanism, if you get my sense, the humanity of posthumanism. (both laugh) As we wind up our conversation, would you share, on a personal level, the ways that your teaching and research have changed since the onset of the COVID-19 pandemic? Has the experience of moving your education rapidly online, as maybe you had to do like we did in the United States, altered or underscored your theoretical perspective?
Cathrine: [47:15] Yeah. But first of all, I would like to comment on the fact that the post-human, in my version, is really not about giving up on humans, not at all. On the contrary, posthumanist learning theories really make us much, much richer as humans. So the idea is not to give up on humans at all. I don’t discuss COVID in the book for the simple reason that it did not exist, in my mind at least, when I wrote it. But it’s been really interesting to follow how we have evolved with these technologies as COVID has expanded. And I think we can learn a lot from COVID, because it’s taught us how globally connected we actually are as humans. And also that we are not the closed, bounded, solipsistic individuals that we might have thought before. We are not just confined to what we normally think of as human.
[48:32] Now we are also becoming aware that we consist of bacteria and viruses, things that can leave our bodies or be attached to our bodies as we move in physical space. That social interaction has consequences in many ways that we had not foreseen. And that materials are involved in everything we do. So even taking a plate from somebody who has COVID will now become a new tie between you, because something is transferred between you: the virus, but it could also be other things. So in that sense, COVID has opened our minds. And I think it’s important to note that I think we have aligned with each other in a more collective way than I expected. Because as cultural beings, we could have just enclosed ourselves in our own normative cultural settings and hedged our environments against the virus. And of course, to some extent, we have done that. But I think the world’s communities have also reached out and taken a certain concern for each other.
[49:57] And definitely, in terms of being a concept that is constantly transformed, COVID has also shown us new ways of what it is to be human. In a way, it is doing what a lot of technologies have done before: removing us from each other, removing us from the deep social encounters we took for granted before. So we’ve learned a great deal from being on Zoom and Teams and this app here. But I think what we have mostly learned is that we really are very ultra-social beings. After a session on a screen like this, or teaching online, we really long to meet other people, really long to share a physical space with other humans. We really long to have new kinds of social engagement: giving each other things, going out in the world, sensing the same air, smelling the same surroundings. All these things that we took for granted before are now becoming something we really long for. So in a way, COVID shows us that we are ultra-social humans.
Cliff: [51:23] I think that’s a beautiful way to end this conversation. And I want to thank you Professor Hasse for such an engaging talk. And I encourage all of our listeners to read your book. It’s very worthwhile. So thank you again for joining us on Leading Lines.
Cathrine: [51:37] Thank you, Clifford, and I really, really appreciate what you have done here. It’s wonderful to be heard by a new audience. So thank you very much. (music)
Derek: [51:57] That was Cathrine Hasse, professor of Learning at Aarhus University in Denmark and author of the new book, Posthumanist Learning: What Robots and Cyborgs Teach Us about Being Ultra-Social. Thanks to Cathrine for sharing her time and expertise here on the podcast. And thanks to Cliff for the great interview. I was struck by Cathrine’s use of the term ultra-social, a new term for me. As I understood it, ultra-social refers to the fact that humans aren’t just social creatures, but we’re social in a remarkably diverse set of ways. I gather that any group of a dozen gorillas will socialize in pretty much the same ways, whereas any group of a dozen humans is likely to socialize in different ways, especially if you consider human cultures from around the globe. That makes sense, although I have to admit, I don’t know much about the social lives of gorillas.
[52:46] But this ultra-social concept also helps explain why teaching is such a challenging and complex practice. Every student we encounter is different, yes, but every group of students is different too, in the ways that they learn socially. This is, I think, why efforts to automate teaching fall flat. You can’t funnel all students into the same learning channel and expect to get good results. Learning isn’t that simple, not for individuals and certainly not for groups. Teaching effectively is an endlessly changing and interesting challenge.
[53:18] Leading Lines is produced by the Vanderbilt Center for Teaching and The Jean and Alexander Heard Libraries. You can find us on Twitter @leadinglinespod and on the web at leadinglinespod.com. This episode was edited by Rhett McDaniel. Look for new episodes the first and third Monday of each month, and sometimes you will find them. I’m your host, Derek Bruff. Thanks for listening and be safe. (music)