Episode 60:
Future Of Digital Literacies Faculty Panel

We have something special for this final episode of the academic year. Usually, we talk with educators, researchers, and technologists about what they’re doing now, and ask them a question or two about where they’d like to see educational technology go in the next few years. In this episode, however, we’re going to camp out in the future.
The Vanderbilt Center for Teaching recently convened a faculty panel on the future of digital literacies, where we asked our panelists to gaze into their crystal balls and engage in a wide-ranging and wildly speculative conversation.
You’ll hear from Doug Fisher, associate professor of computer science and faculty head of Warren College, one of Vanderbilt’s residential colleges; Corbette Doyle, senior lecturer in leadership, policy, and organizations; and Jaco Hamman, associate professor of religion, psychology, and culture.
This episode is a little longer than usual, but it’s worth it. And stay tuned after the panel for a couple of programming notes.


Transcript

[0:01] (music)

Derek Bruff: [0:05] This is Leading Lines. I’m Derek Bruff. We have something special for this final episode of the academic year. On Leading Lines, we explore creative, intentional, and effective uses of technology to enhance student learning. Uses that might point the way to the future of educational technology and higher education. Most of the time, we talk with educators, researchers, and technologists about what they’re doing now in very concrete terms. And maybe ask them a question or two about where they’d like to see educational technology go in the next few years.

[0:38] In this episode, however, we’re going to camp out in the future. I recently convened a faculty panel at the Vanderbilt Center for Teaching to discuss the future of digital literacies. We know that technologies will change and as they do, the ways that we create and consume information and media will change. As will the ways we connect and communicate with each other. What will digital literacies look like in five or ten years? And what does the answer to that question mean for how we teach digital literacies today?

[1:07] I asked our panelists to gaze into their crystal balls and engage in a wide-ranging and wildly speculative conversation about the future of digital literacies, and they delivered. I’m excited to share this fun conversation here on the podcast, featuring three Vanderbilt faculty who are often several steps ahead of their peers in how they think about technology.

[1:28] You’ll hear from Doug Fisher, Associate Professor of Computer Science and faculty head of Warren College, one of Vanderbilt’s residential colleges. Doug was one of the first faculty anywhere to wrap a course around a MOOC instead of a traditional textbook.

[1:41] Our second panelist is Corbette Doyle, a senior lecturer in Leadership, Policy, and Organizations, where she teaches courses both on campus and online. She’s also, I believe, our first repeat guest here on Leading Lines. I interviewed her way back in episode two.

[1:57] Our third and final panelist is Jaco Hamman, Associate Professor of Religion, Psychology, and Culture, and the author of the book Growing Down: Theology and Human Nature in the Virtual Age. This episode is a little longer than usual, but I think it’s worth it. And stay tuned after the panel for a couple of programming notes from me.

(music)

Derek: [2:19] My opening question is this, what will it mean to be digitally literate in our culture in the year, say 2025? And what implications does that have for how we teach digital literacies today? And I’ll ask Doug to get us started.

Doug: [2:40] Well, I think one important thing is that students will have to choose among the many digital tools available to them. I mean, I can read, write, and do arithmetic. And for most people, reading and writing is pretty straightforward. Arithmetic is pretty straightforward. But if you go into mathematics and follow Derek’s lead, there’s all kinds of mathematics. You hand me a paper that Derek has written, and I am illiterate. And so even in today’s world, what we mean by literacy is highly conditioned on my life.

[3:17] And I think that’s going to be maybe one of the biggest challenges, because it’s no longer just reading, writing, and arithmetic, even though those have a multitude of values. It’s going to be an even greater set of choices that students are going to have to make. And navigating those choices, for technologies that are appropriate to their lifestyles, is, I think, going to be one of the most important skills that students are going to have to have. What do I look at? What do I learn? What do I not look at?

[3:51] John Bransford, who used to be a professor here, would talk about inert knowledge. Inert knowledge is knowledge you have, but you don’t know when you should use it. So, you can know what logarithms are, but when you encounter a problem that can involve logarithms, you don’t recognize it as such. And so that knowledge is inert. And I think that speaks to how these technologies, too, are going to be relevant to our lives.

[4:22] Do we understand the connection of this tool to our lifestyle, and whether it’s something we want to use or not? I’ll just say, and we can get to this later, that I think increasingly, and I say this with some confidence because we’re already seeing it, artificial intelligence is going to be involved in the choices we make, the vetting we perform on sources. And so that’s going to change, I think, the landscape of how we vet materials. I’m one of these people who stays in touch with my old high school friends on Facebook. And most of them are polar opposites of me on the political scale.

[5:01] But I use it, I interact with them on politics, as a way of practicing civil discourse under tough circumstances, and sometimes I gotta walk away. But we see all kinds of memes out there. They’re not just political memes. There’s a popular meme going around about a line of wolves traveling through a snowscape. And the meme says the first three are the weakest and they set the pace for the pack. The next few are the strongest, then the middle of the pack, and then the dominant female at the back to defend it all.

[5:38] Doesn’t sound political, but it probably is, because it’s too good to be true, and it’s not true. That was, you know, a meme that was created. But I didn’t know to look for that for a long, long time. I know when I need to look up the truthfulness of memes, but sometimes you don’t even think about it. And we gotta get that skill, not just how to do it, but when to do it. So that’s all I’m saying for now.

Jaco: [6:08] Thank you, yeah. And I appreciate Doug; we’ve worked together on some other projects in-house. By 2025, I think students will have a deeper awareness, and we should have that too, that they are extended selves. By which I mean that there will be no distinction between who they are as people and the digital world. Today, we don’t really have a person. We only have a person with a phone in hand. That’s the extended self.

[6:37] The extended self was maybe argued for the first time by William James, in 1890. Coming from Harvard, his example was that if you have a beloved horse, and your beloved horse dies, you’ll go into depression, because your horse was an extension of you. And today, phones are extensions of people as well. None of us will come to work without our phone: halfway around the block, we recognize, “Oh boy, my phone is on the counter,” and all of us will turn around to go and get it.

[7:08] But by 2025, I think we will be much more aware that our students will be extended selves. And of course, that has huge implications for how we teach and even what we teach, things like that. I think a second awareness that I would say by 2025 is that we will be much more attuned to the fact that digital literacy is an ethical project. It’s an ethical project.

[7:35] I’m gonna give you two quotes. The first is from Melvin Kranzberg, arguably the biggest historian of technology in the US; he taught at Georgia Tech. His quote is, “Technology is neither good nor is it bad, nor is it neutral.” Technology is neither good: it’s not going to save the Earth, or the world, or your classroom. It’s neither bad: it’s not going to ruin us, either. But it’s not neutral. It’s a power relation, it’s a source of power. It shapes people, it shapes communities. And we will have to figure out, well, how is that shaping taking place? So that’s the one quote.

[8:14] The second quote is from Audre Lorde, the African-American feminist scholar, who said, “The master’s tools will never dismantle the master’s house. The master’s tools will never dismantle the master’s house.” The context in which she said it was actually a feminism conference where she wasn’t invited. She was kind of ticked off, and she fought her way onto a panel. And when she used that line, she was saying that feminist theory cannot undo feminist theory. That’s really her context.

[8:48] But of course, we see the people who were enslaved, and a bunch of other things, behind that quote. So I think we will have to be really mindful of this ethical project. Because the companies who will give us these literacies are not going to police themselves. They are not going to be the ethical gatekeepers. So how do we do that, then?

[9:07] And then the third thing that I would like to say: by 2025, I hope that we will have recognized that who we are as human beings has totally been transformed, and that we need a new set of intelligences to guide us. All of us in this room are aware of, and even work with, Howard Gardner’s multiple intelligences, for instance. Or we will do Goleman’s emotional intelligence, or something like that.

[9:26] But I believe all of those intelligences are moot at this moment. In my book, I identify six intelligences that we have to grow into, but I’m going to identify just two at this moment. The one is what Derek mentioned: we have to cultivate in ourselves and in our students playground intelligence. Playground intelligence asks two questions. How am I playing? That will include all the digital world. How am I playing? And the second question is, how am I being played?

Corbette Doyle: [10:12] Mm. (laughs)

Jaco: [10:13] And unless they can answer those two questions, they will probably be in trouble, because they will be played. Doesn’t matter what platform they use. How am I playing? How am I being played?

[10:25] The second intelligence that I explore in my book, and that was actually alluded to already, I just call technological intelligence. Technological intelligence is the wisdom to be able to evaluate digital content, but also to recognize how that is impacting yourself, as you are online all the time, with screens around you very, very frequently.

[10:54] So I hope that by 2025, we will have a much deeper awareness of who the student in front of us is, and ultimately who we are as faculty as well. All of these things that I’ve mentioned have huge impacts, of course, on how we teach. Just one that I’m thinking of: I think we will have to create some kind of a rubric that we can agree to.

[11:19] For instance, how do you evaluate digital content? And it’s important for multiple reasons. Not only how do you evaluate student work, that would be important too, but also how will we as faculty be evaluated in our tenure and promotion process? A campus such as ours is highly ambivalent about digital processes. And if you engage in the digital humanities, for instance, as a project…

[11:45] Say, for instance, I’ve got colleagues at the Divinity School who have created databases that thousands of people have used, hours and hours of work. Is that half a book? A third of a book? Two articles, as you go up for review? I think we will have to really encourage our administration to reevaluate the way they evaluate us as faculty, so that the digital humanities and digital literacy can actually flourish. Thanks.

Corbette Doyle: [12:14] So there are advantages and disadvantages of going last. (everyone laughs)

[12:20] The advantage is, I had more time to think about what I wanted to say. The disadvantages, plural, are that I have to follow both of you, and that many of the things I wanted to talk about, you’ve said more effectively than I planned to say them. But just linking to your last point, which was actually very similar to my first point: Mike Caulfield, one of the people Derek mentioned who’s been on the podcast, asked an important question when you ask about digital literacy. He says, which digital literacy are you talking about?

[12:52] And one of the things he includes in this blog article that I love, which is actually two years old, is multiple rubrics. They’re rubrics used by others, not rubrics he created, but they provide a framework for at least thinking about: what might the different components of digital literacy be that I want to bring to my teaching? What do they look like, and what might my rubric be?

[13:27] The second topic I’d planned on talking about was artificial intelligence and ethics. As Derek mentioned, I mainly teach analytics-focused classes, both in person and online. But I also teach a master’s-level class in diversity, and that is where I do all of my work outside of the classroom. In the beginning, I was very excited about the potential for artificial-intelligence-driven text solutions to improve organizations’ focus on creating a more diverse student body, a more diverse workforce, et cetera, and to improve the rate at which individuals who do not represent the majority are able to advance in an organization.

[14:26] And I haven’t given up on those tools, but I am increasingly concerned that they bring more problems to the table than they do solutions. There was an article, in March, I think: Amazon had launched a major use of technology to improve the diversity of their recruiting. And what they found was that it actually backfired.

[14:52] And so one of the problems with the use of artificial intelligence, and I think it plays into so many different ethical issues, is that artificial intelligence, like predictive analytics in general, is based on an analysis of the past. And if you are evaluating talent based on who’s been successful in the past, even if you’ve stripped identifiers, we all bring our identity to everything that we do.

[15:26] And there are likely an array of biases that have crept into that use of artificial intelligence that will set us back rather than advance us, if we aren’t aware of the fact that those biases are baked into the technology that we are building. And I think this applies to the whole concept of what digital literacy might look like. What do we need to be aware of in the classroom? What do we need to be teaching our students?
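
[*Editor’s note: The dynamic Corbette describes, historical bias surviving even after identifiers are stripped, is easy to demonstrate. Below is a minimal, hypothetical sketch in Python; the feature names and numbers are invented for illustration and are not drawn from the Amazon case. A model trained on biased past hiring decisions, with the protected attribute removed, still reproduces the bias through a correlated “neutral” feature.]

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two groups; the group column itself is NOT shown to the model.
group = rng.integers(0, 2, size=n)

# A "neutral" feature that happens to correlate with group
# (think: a hobby, a zip code, a word choice on a resume).
proxy = rng.normal(loc=group, scale=1.0)

# A genuinely job-relevant skill score, identical across groups.
skill = rng.normal(size=n)

# Historical hiring labels: driven by skill, but with a penalty
# applied to group 1. This is the bias baked into the past data.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# Train on "anonymized" data: skill and proxy only, no group column.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The model recovers the group penalty through the proxy feature:
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2%}")
```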

[16:05] You know, in a similar vein, I use active learning in all my classes. And I have always been a huge proponent of using external platforms for creating dialogue outside of the classroom. Because in particular, in an active learning environment, I’m really looking for opportunities for students to make connections between what they’re learning in the classroom and what they are encountering in the real world, and then make that connection visible for their classmates and create a digital dialogue.

[16:38] Well, now I am increasingly concerned. And one of the things I’ve always loved about that is… You know, I’ve always had a relatively meaningful number of international students and/or students who are less vocal in the classroom. And so I’ve always tried to use those digital opportunities as a way for people who want more time to think about what they’re going to say to still be able to contribute to the dialogue. So I view that as an inclusive strategy.

[17:11] But increasingly, with all the things that are happening in terms of privacy in the digital world, I’m now concerned that I may be forcing students to leave a digital footprint that may not be safe for them to leave. And so, yeah, those are some of my ethical dilemmas that I’m just continuing to deal with.

[17:41] You know, just in terms of 2025, I hope that by 2025, all of the faculty, at least, understand that digital literacy isn’t one and done, and that it is continually evolving, in the same way that we try to teach our students to be lifelong learners. Generally, that’s, I think, the most obvious benefit of getting a college degree, regardless of what career path you might be choosing. You know, if we’re doing our jobs correctly, our students know how to continually learn.

[18:15] Well, we have to apply that to digital learning and digital literacy as well. Because whatever technology is going to look like, and this was part of your point in terms of that extended self, none of us can predict it. We can make all the predictions we want; by 2025, we’ll all be off the path. And I think one of the things that is possible, I’m not saying it’s likely, but possible by 2025, is that we will actually have seen a pullback from the digital life.

[18:48] I have a 25-year-old daughter, and when I was moving and trying to weed things out, I was going to digitize a lot of the old photos, because nobody looks at the old photos. “Oh, no mom, these are great. Don’t get rid of the physical photos.” So I just wonder if by 2025, we won’t have a generation that values the physical more than we might otherwise project.

Derek: [19:16] I’m really hoping postcards make a comeback.

Corbette: [19:19] I love postcards.

Jaco: [19:20] A lovely book that explores those, if you’re intrigued, is David Sax’s book, The Revenge of Analog.

Corbette: [19:27] Ooo.

Jaco: [19:28] Real Things and Why They Matter; it came out in 2016. David Sax, The Revenge of Analog: Real Things and Why They Matter. He traces, for instance, the resurgence of vinyl, film, and self-publishing, things like that. Board games are another one that he mentions. Yeah, just The Revenge of Analog. I love that title. It’s a beautiful title.

Derek: [19:54] This is Derek, speaking from the panelists’ near future. At this point in the conversation, I asked for questions from the audience, and one person asked the panelists if they ever see any pushback from students as they try to teach digital literacies.

Corbette: [20:08] But I have had pushback from students who say, well, I’m really not on social media; I don’t comment digitally. And you know, the first time I heard that, I’m like, “What?” Just because of the age. And so I no longer require it. I do require students to provide evidence at the end of the semester; they have to evaluate their participation. And they have to provide evidence, so they have to be thinking about it. If you’re not going to participate in that forum, how else are you going to contribute to the collective and collaborative learning in the class?

Jaco: [20:50] Yeah, likewise, I have found students are very hesitant, at best, to do that. I teach a class called Play, Subversion, and Change, where we look at play theory and how you can use play theory to subvert power structures, but also to inform your leadership, so that you are a different kind of leader. And the assignment for that class is that we start off with ever-increasing presentations in a TED style. And then for the final, instead of a paper, they have to do a 20-minute TED talk. And the option is, let’s record this and put it out there.

[21:30] Because one of the things that I think we need to do in this digital age is re-imagine the classroom in service of the public. And how do you do that? How do you do that well? In the last class, maybe about 13 students if I remember correctly, not a single one took me up on the invitation to record it and put it out there. It wasn’t always just in terms of the ethics; it was also self-protection. They don’t want future employers to see what they may see as an unrefined project of some sort. But not a single student took me up on doing that in an online format, which was the intention at the beginning. So again, you give choice, and then you discover that’s not the place they would go to.

Doug: [22:18] I was interested to hear this. I teach the ethics of AI, and we’re very concerned about the ethical implications of artificial intelligence and technology generally. And each of us talked as though digital literacy in 2025 is going to be about understanding the implications of the technologies we’re using. It’s not going to be about using advanced Photoshop. You know, it’s going to be: what are the implications of what we’re creating?

[22:49] And I’ll just point out that in the IEEE code of conduct, which is something computer scientists and engineers are supposed to live by, one of the tenets is that you are to be educated to an extent that you understand the societal implications of the technologies you create. And that is codified alongside things like, you can’t bribe us. That’s one tenet, and this other one is at the other end of the scale. I’m really glad that’s in there, and I like this idea that literacy is about understanding those greater implications, not just about being able to get on and use something.

Derek: [23:31] I just want to add, I think it was probably seven or eight years ago when I saw that our first-year undergraduates knew not to put embarrassing photos of themselves on Facebook, right? Like twelve years ago? No, or ten years ago. But at some point, they learned: “Okay, I need to be a little bit careful with the photos of me with this cup of whatever it is.” But what I’m hearing is dangers or concerns that are not nearly so obvious, right?

Corbette: [24:03] Right.

Derek: [24:05] And I think as the technologies become more complex, as there is artificial intelligence and machine learning under the hood, and algorithms that are so massively complex that no one human can really understand what they do, it’s hard to know.

Corbette: [24:20] A machine could.

Derek: [24:21] Right, a machine maybe.

Corbette: [24:21] That was a joke.

Derek: [24:23] But it’s hard to know what’s being done with what we contribute and what we put out there. So I may post a comment to a discussion board that, on its face, is not embarrassing in any way, but it’s being assembled in a massive database that can be sliced and diced. And people can use it to learn lots of things about me that I might not want them to know, or to be used against me. And I think that’s where things get a little bit scary for me: again, when the complexity is such that you just don’t know how you’re being played.

Corbette: [24:58] Yeah. And then just linking both sets of comments to the being played: the notion of unintended consequences in digital literacy. It has become so easy to share information. We receive something, we see it, whether it’s on Facebook or we’re just following a Twitter feed, whatever it is. And it sounds good, likely because it resonates with whatever we believe. And then we start sharing it broadly, and we don’t understand the unintended consequences. We may think it’s only going to a limited distribution, but we don’t know where it’s going and how it will be subsequently used or interpreted. Same with technology; I think that’s one of the hardest things.

Jaco: [25:52] I’m actually working on a book project at the moment; the working title is Caring Virtues for Artificial Intelligence. I’m trying to explore how, as we have AI today, it’s built around efficiency, cost savings, principles that ultimately are not serving you and me in life-giving ways. Don’t we want AI to be patient with us, for instance? Or to have compassion towards us? I truly believe that within the next ten years, most, if not all, medical decisions will be made by AI.

[26:31] Your diagnosis will be typed in. In many places, like radiology, I think that’s already happening. Your diagnosis will be typed in by a physician, and AI will kick out: two million people had these exact same symptoms; here’s the protocol that was followed; this is what worked for X percentage; this didn’t work. And then the physician is going to side with the bigger percentage, because should you die and your family sue, the physician is going to say, “We did what worked for 80% of people; we followed the best protocol we could, the best practices we had.” And so it’s really troubling, if you think of how you gain the kind of intelligence that can shape even AI to go in different directions.

[27:17] But AI is going to change us radically. One of the questions Derek prompted us with was, you know, sort of a wild dream of 2025, and what do you expect? I actually expect that we will have a deeper awareness of place because of augmented reality. Virtual reality just takes you somewhere, but augmented reality gives you much more of a feel that this is my immediate surroundings and environment. And I can see by 2025 that augmented reality is going to drive us in ways that we don’t even recognize.

[27:56] It will be in our glasses; it’ll probably be on our contact lenses. And everything we see will be superimposed. That means that we will shift away from wisdom to knowledge. I don’t need to carry embedded knowledge with me. All I need to know is how to access the knowledge I need, which is not wisdom, of course. And I think what we would love to do in all of our classrooms is empower our students to be a little bit wiser when they leave, so that they can be better citizens, better partners, better parents, and better persons.

[28:54] But I think with augmented reality, that is going to shift radically, away from wisdom as something deeply embedded, something that slowly grows in you over a lifetime, to just knowledge that is readily available. And I think technology has been really good at selling knowledge to us. And then wisdom is just getting eroded.

Corbette: [29:06] And on the ethics front: I have taken to reading a lot of sci-fi, which is not something I did previously; it came out of a lot of suggestions. A study had been done of some of the top thinkers, and what was identified as the thing they had in common was reading, not watching, but reading science fiction. And so lots of these ethical issues really come into play in futuristic sci-fi.

[29:38] But one of the big issues, in terms of the loss of wisdom and a focus on knowledge, is that you are trusting the source of the knowledge, and the generator of the knowledge, and whoever determines the access to knowledge. And without wisdom, you lose the ability to ask those really important questions and to evaluate the answers. And as we look at elections around the world, just in the last three and a half months, and the shift toward authoritarian regimes with tighter controls over information, it becomes really scary.

Derek: [30:20] This is Derek again. The next audience question was about consent. We’re sensitive to students’ ability to decide what to post and what not to post online. And yet, a future with pervasive augmented reality seems to imply that consent of this sort is no longer possible. The panelists responded to this remark.

Corbette: [30:38] So yeah, that’s a really, really good question. But even just going to your point, I’m not sure that, even in the scenario you just painted, consent disappears. It’s just reduced, and far more becomes visible. Although, if you go back to ten years ago, you know, I had teenagers ten years ago, and one in particular had no filters. I mean, everything was visible: into the head, out of the fingers, you know. And so I think that there will still be some control, but just far less. Yeah, consent becomes an important issue.

Doug: [31:22] One thing in the future, and I don’t know if this would be my wild speculation or not, because I think it’s relatively short-term: in addition to these platforms like Facebook that hide the algorithms that are used to feed our walls and whatnot, we may see AI advocates, individual little AIs that are designed to protect me. The simplest kind: it might read a terms of service and say…

Everyone: [31:53] (laughs)

Doug: [31:54] …you need to look at this point, this point, and this point. You’re not going to read it, but I just read it for you; pay attention to these two or three. We can do that now. That’s within today’s technological possibilities. It’s a matter of getting some students to sit down and do it.

Corbette: [32:10] It’s a good business model.

Doug: [32:14] Yeah. I mean, one thing I’d like to see in social media is discussion mediators. “Doug, you’re a hothead, don’t send this right now. Sleep on it.” And mediating the discourse with others, across the political divide or something else, suggesting alternative language. These kinds of advocates for me, I think, can be something that we see in the future. And that would be one approach, the technologically optimistic approach, to dealing with issues like, I suppose, consent would be one of them.
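
[*Editor’s note: Doug’s terms-of-service reader is indeed within today’s technological possibilities, at least in crude form. Here’s a minimal sketch in Python; the red-flag phrase list is invented for illustration, and a serious advocate would use an actual language model rather than keyword matching.]

```python
import re

# Phrases a hypothetical terms-of-service advocate might flag;
# this list is invented for illustration.
RED_FLAGS = [
    "third party", "third parties", "sell", "arbitration",
    "waive", "retain", "perpetual", "biometric", "location",
]

def flag_terms(tos_text, patterns=RED_FLAGS):
    """Return the sentences in a terms-of-service text that mention
    any flagged phrase: 'pay attention to these two or three.'"""
    sentences = re.split(r"(?<=[.!?])\s+", tos_text)
    hits = []
    for sentence in sentences:
        matched = [p for p in patterns if p in sentence.lower()]
        if matched:
            hits.append((matched, sentence.strip()))
    return hits

sample = ("We may share your data with third parties. "
          "Disputes are resolved by binding arbitration. "
          "You can delete your account at any time.")
for matched, sentence in flag_terms(sample):
    print(matched, "->", sentence)
```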

Jaco: [32:55] If we go the way of Alexa, Google Assistant, or Siri, though, then we’ll have to say that consent is probably not a major concern for most people. Just in these very days, we discovered that Amazon actually had live people listening in to folks. And I am almost convinced people are not buying fewer Alexas in these very days. So consent, I think, will require enormous levels of education on our side, to make people aware. Do you really want somebody to listen to every word and every sound that’s around you? And if you do, at what cost are you doing that? Can you discern the cost? But we may be behind the eight ball here. We will have to work hard to educate.

Corbette: [33:48] Same thing with the iPhone, that was listening to you, even when you weren’t on a phone call.

Doug: [33:52] Well, face recognition is another big issue.

Corbette: [33:55] Yes.

Doug: [33:56] You walk into a store, a large store, and chances are very good, almost certain, that you’re being surveilled, and many are using face recognition algorithms, too. They’ve been approached and asked, “Are you using face recognition software in your monitoring?” And they’ve refused to say. They won’t tell you. So there’s no consent there. And now there’s a privacy debate going on about whether you should tell people not simply that they’re being surveilled, which they know, but that they’re potentially being recognized as individuals.

Corbette: [34:40] Well, this is a big issue inside organizations as well. You know, using organizational network analysis, reading your e-mails, reading your calendar invitations, what order people are included in meeting invitations, and then assessing all the dynamics inside an organization. There are positive implications of it, though. For example, there’s a startup company; I had the founders speak to my class a couple of weeks ago. The whole company is focused on organizational network analysis, ONA, for diversity and inclusion. It’s finding who’s left out, and what can you do about it? Or who is very good at influencing and connecting people? And how do you bring people in who might have been left out otherwise? So there, again: positive implications and negative implications. And do you tell people you’re doing it?

Derek: [35:26] Here’s a question. I was trying to think, how can we talk about the future? And I remembered there’s this paper that came out by Marc Prensky back in 2001 that got a lot of attention, where he talked about digital natives and digital immigrants. Those were problematic terms for a lot of reasons, but it also struck me that that was 18 years ago, and that his digital natives, these kind of children of 2001 who grew up with the Internet, are now in their thirties, right? I didn’t realize how dated that was. But it occurs to me: think about today’s third grader. When they come to college, when they start their first year as an undergraduate, what relationship to the digital world do you think they will have?

Corbette: [36:15] Well, this is the generation that started on iPads at three, forget where they are at third grade. So I think that we will see certain types of intelligence that will be superior to what we’re currently dealing with, and others will have atrophied. Same thing with, you know, abandoning cursive writing. The research that’s being done is showing, and I’m not a neurologist by any means, that there’s brain development that takes place because of the physical process of cursive writing, and we are losing that development. And this is a generation that will have read very little, if anything, in a physical medium, depending on the school they went to. Well, our artifacts are physical, and so what do we lose in terms of history, and what is the potential impact on the brain, from the loss of a very different kind of tactile interaction with knowledge?

Doug: [37:23] The only thing I would add to that is, I wonder if AI will change that. I mean, you know, I’ve been around for a while, and every few years some new technology comes out, whether it’s Texas Instruments calculators, and there’s some complaint that, oh, they’re not going to be able to add and subtract anymore. But typically the response is, I think, that new skills will emerge because they’re no longer having to do this kind of low-level activity; they can now move to higher-level activities.

[37:55] So the only thing I would say is, AI might change that, because we’re not just moving from simple tools; we’re moving to tools plus some chunk of intelligence that is going to be manipulating those tools for us. Some of ourselves is being moved into an AI. And I wonder, to go to your atrophy point: everyone know the movie WALL-E? Will we enter a WALL-E kind of society because of AIs? Purely speculative. Derek said we could. (laughs)

Derek: [38:32] Yeah, right. And in that movie, the people basically go around in hover chairs, and the technology does everything for them, and they don’t have any physical movement, right? And they’re all kind of blobby.

Doug: [38:45] But they’re intellectually blobby too.

Derek: [38:47] Well, yeah, right. That’s true. Yeah.

Jaco: [38:50] I think we will encounter students whose relational skills will look very different from the relational skills we probably long for in classrooms. Um, I’m thinking of a conversation I had with a colleague just this past week, where we were talking about social activism and community organizers. And the colleague said, “So what do you think would be the difference between a community organizer and some of our students, many of whom embrace the social activism label for themselves? They would love to change the world.”

[39:22] And I said, you know, I think the difference would be this: both would be at an event and would listen to something. The community organizer will go out and think, “Who are the seven people in my community that I need to be with in the next few days to make the change happen? Who are those seven people?” And I think our students are thinking, “How can I blog about this, and what can I tweet about this?”

[39:52] Now, there’s a big difference between those two worlds, in terms of doing the relational work that’s needed to actually change society, versus just putting another digital form out there and thinking that that’s going to change society. Now, I don’t want to diminish the fact that online platforms can give voice to the voiceless; we’ve seen that around the world in wonderful ways, both on the positive side and the destructive side. But I think relationships would be one challenge. The bigger challenge for me, I would imagine, would be around something like reading. I know Google is working on AI, which probably is ready already, because they know us so well through our algorithms: should I want to read Doug’s book, the AI will say, here are the five paragraphs that you would have highlighted in Doug’s book.

Corbette: [40:45] Oh, jeez.

Jaco: [40:46] Here’s the paragraph that will speak to you. And I’m convinced that by that time, a student will be hard-pressed to read a book, because all the highlights will be given to you by an algorithm that knows you extremely well at that point. And since they will have cultivated that algorithm from the very beginning, I think the challenge would be: how do we help students read? How do we help them read deeply?

Corbette: [41:19] And write?

Jaco: [41:20] And write?

Corbette: [41:22] They came out with an open source artificial intelligence. They pulled it back because of the ramifications. So sophisticated, it could write a student’s essay for them, in a voice that sounded like their voice. Open source, free.

Jaco: [41:36] My hope does reside, though, with this theme that we’ve already identified, the revenge of the analog: that maybe students will find a deep longing and a yearning for ancient narratives. I’m thinking of the media theorist Douglas Rushkoff, and it’s my paraphrase, but he says people need an opera with many acts to live into, when most people live by hit singles. And his reference is literally living in the Twitterverse: you live so reactively to what’s coming in, but you don’t have a bigger arc that you can place yourself in. I would hope that in 20 years’ time, or even ten years’ time, folks still have a longing for large narratives. And how do you find those narratives? How will they be given? Libraries become very important then; the textbooks that you ask people to read become important, especially in a world where snippets will be very powerful, and already are.

Chelsea Yarborough (attendee): [42:46] So I feel like we were talking about students as if they’re all on the same playing field. But what we know with education is that educational disparities start at birth.

Corbette: [42:58] Pre-birth.

Chelsea: [43:00] Yeah. So, particularly in the K-12 educational system, there’s so much disparity of access, and we know there’s classism. All of these things are deeply embedded into our systems. And so I’m interested in what you all think. Two questions that kind of work together. The first one is: do you think that the kind of future, if you will, will start to close some of these chasms, or do you think they are going to continue to expand?

[43:34] And then my question after that is: how does that affect our pedagogy? And what are we responsible for, in creating the space where all students that enter can actually learn? Because I think part of the assumption is, oh, yes, students obviously love an Instagram post as their assignment, because this is the age. It’s like, actually, everybody doesn’t do that. Actually, there are a lot of social activists who aren’t blogging. So there’s this kind of assumption that’s actually not the lived reality of a lot of people. So how does that work in our classrooms, to create a good space?

Corbette: [44:07] The reality in this country, and I’ll speak just about this country, is that income disparity is increasing at a very dramatic pace. And so, absent some intervention, which is certainly not on any horizon I can see, it will grow dramatically worse. Read Limbo (*Editor’s note: Corbette later realized she actually meant to reference the book, Autonomous), sci-fi, set 100 years in the future and focused on health care disparities and what happens in this high-tech world. It’s frightening.

Jaco: [44:50] Yeah, I would agree. AI does intersectionality poorly, because the folks who create AI are not into matters of intersectionality. So power and race, and creed and class, and how those things intersect with our worlds, AI doesn’t do that well at all. And so how do we teach? I think we can actually learn from the disability world, where if you’re a person with special needs in a classroom today, you will have an individualized learning plan. And I think we will move into a future where all of our students will actually have an individualized learning plan. So that means, in reality, that maybe the first five weeks of a semester, everybody reads something and there’s a common base, common lines that everybody gets for the first five weeks.

[45:48] But then, and I’ve been using this in my classes already for the past two years, I go into an agreement with every student to read according to his or her interests, especially vocational interests, but also intellectual ones. And we come up with mini reading lists for them to follow for the rest of the weeks; I minimize some of the common readings for those remaining weeks. And then for the last few, we invite them to come back and educate us, because now every student has read about something else, and we really become a community of learners, where the classroom is just so vibrant, as somebody who has read four or five weeks on a topic of their passion can come back and educate us all about that.

[46:37] And I think one way, Chelsea, to bridge those disparities is probably thinking about individualized learning plans, and what it means to teach according to those principles. Actually, even here on our campus, the Kennedy Center can educate us quite a bit on how to use those plans.

Doug: [46:59] So, a quick comment. I think a lot about disparities, and usually I will preface my comments by saying, in the materially wealthy world, this is the case. But in AI, I mean, it is the case that AI development teams are terrible at intersectionality. You may have seen demos online where the face-tracking software is tracking the Caucasian face and not the African-American face. It doesn’t even recognize the African-American face as a face. That development team was probably extremely embarrassed by that release. They probably complained before it went out, if they even knew.

[47:46] But my guess is they did not have an African-American on the development team. And so AI development teams have got to become better. But intersectionality is a problem that I think a properly designed AI can do better than any human being can. It can take all these factors into account; it can do better than simply looking at already-biased historical data. I mean, the nice thing about using machine learning in uncovering these biases is that it doesn’t just shine a light on the biases in the machine’s learning. It shines a light on the bias that existed before. It makes the bias evident.

Corbette: [48:31] As long as someone’s asking the right questions.

Doug: [48:33] And I think the fact that they’re putting it out there, at scale, is getting those questions asked. So I’m optimistic, actually, about the possibility that AIs can be used to take all these factors into account. And I’m also enthusiastic about the fact that so much of our material is in the cloud. You know, we’ve come full circle: I used to go in and use a dumb terminal to interact with a big computer, and now I’m going in and using, for most people, a really overproduced, overdesigned piece of equipment, but I’m still going to the cloud. And things like Khan Academy, all these materials, are freely available, and I’m really excited about that. So the cloud, and the possibilities of AI if only it would be designed well, I think are great promises for getting around these disparities.
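
[*Editor’s note: “Asking the right questions” of a model can itself be made routine. Here is a minimal sketch of one such question, assuming the `pred` and `group` arrays from the earlier editor’s-note sketch: a selection-rate comparison across groups. The four-fifths threshold comes from U.S. employment-discrimination guidance; the function itself is an illustration, not a standard library API.]

```python
import numpy as np

def disparate_impact(predictions, groups):
    """Selection rate per group, plus the ratio of the lowest rate
    to the highest. The 'four-fifths rule' in U.S. employment
    guidance treats ratios below 0.8 as a red flag worth auditing."""
    rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical usage, with `pred` and `group` from the earlier sketch:
# rates, ratio = disparate_impact(pred, group)
# if ratio < 0.8:
#     print(f"Potential disparate impact (ratio = {ratio:.2f}): {rates}")
```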

Corbette: [49:30] And I know we’re late, but I’m going to mention one tool that I thought might come up earlier, just in terms of trying to create a more inclusive environment. I haven’t used it yet, but there are lots of videos you can watch. Have you used Hypothesis?

Derek: [49:42] I haven’t used it, but I’ve heard a lot about it.

Corbette: [49:43] So Hypothesis is a free tool you can put in your browser; students can all put it in their browsers. And if you are assigning something that’s digital, everyone can comment on and annotate the reading. And the videos really highlight a wonderful way to bring different identities to how we interpret something that we’re all reading, in a more dynamic environment than we might otherwise be able to accomplish, particularly if there isn’t time in the classroom to have some of the robust conversations you were talking about.

Jaco: [50:25] I think another teaching impact will be that flipped classrooms will be the norm. Because students consume so much digital material online, they will consume the class outside of class, and when you do get to class, then what do you do? Discussion, maybe. I can see minimal lecturing in that regard. But we will have to change our expectations then, because to create a flipped classroom, the summer that I have set aside for research and writing, I now have to spend creating digital content. And I don’t think our system is ready for that yet, because I’m still being evaluated by what I write and my research.

[51:15] But I think that in ten years’ time, flipped classrooms will be the absolute norm. And part of that will be collaborative activities, because I think today’s students are way better at collaboration in the classroom than I was, for instance. I got pretty angry when I got teamed with somebody who didn’t pull their weight, you know? But today’s students know how to figure that out, because they’ve been doing it for a long time. Flipped classrooms would be something for us to explore. But we have to explore that in the big system, too, not just as an entity in itself, because the impact on faculty is huge, I think.

Chelsea: [52:00] I also wonder, to the earlier point, if a pedagogical responsibility will be engaging ways to pull forth wisdom, as opposed to just knowledge. And that’s not new in pedagogy, to not just be an infuser of content, but I’ve sat in a lot of classrooms. (laughs) I think it’s possible that thinking through the difference between wisdom and knowledge, and how to pull different types of wisdom, particularly from places that people might find unexpected, might be an essential tool in the classroom. It’s a lot of work for teachers, it sounds like to me. I wonder if the pay is gonna go up.

Everyone: [52:40] (laughs)

Derek: [52:44] I’d like to see if you guys have any other wild speculations. I’ll phrase it this way: imagine how we use technology ten years from now. And by the way, we’ve been saying 2025; that’s just six years from now, right? Let’s push it back. Let’s go 10-15 years out, right? What’s one aspect of that world that most people would not predict today? You’re going to make a long-shot bet on something. What might that be?

Doug: [53:13] So I think my wild speculation is that AIs will be embedded in collectives of students, collectives of soldiers, as positive role models. Within a collective of students, the AI will serve as a positive role model in terms of scholarship, demonstrating what good scholarship is: reading the whole book, and reading at a pace that’s slowed down for the humans. In a collective of soldiers, it will illustrate restraint, and it will serve as a positive role model in that context. Jaco talked about this too, and I’m going to talk to Jaco after this, because I want to talk about his book.

Corbette: [53:54] I know.

Doug: [53:55] And about my recent presentation. But, AIs as positive role models, actually embedded within groups of humans.

Derek: [54:02] My cultural reference would be Star Trek: The Next Generation. Data, the android, often was the voice of reason, of caution, of ethics in those environments. Because, you know, I mean, it’s all fiction, but he was bringing this kind of skepticism to what he saw in front of him.

Doug: [54:20] Yeah, my talk referenced a lot of those. (laughs) But I think it’s possible; it can come to reality.

Corbette: [54:28] Well, it’s funny that you said that, because what I was thinking as I was listening to Doug was the potential that part of this extended self we talked about earlier would be androids. Students wouldn’t just have, you know, an artificial intelligence on their phone. They’re literally going to have their android.

Jaco: [54:50] Their companion.

Corbette: [54:51] Who’s their buddy, their companion, that’s the conscience and the prompt for wisdom and…

Derek: [54:59] And maybe also a scooter.

Corbette: [55:02] Then we need the physical. We don’t want the blobs, right?

Derek: [55:06] This was my example: two years ago, I would not have expected to see electric scooters all over our campus, right? That just came out of nowhere. And I don’t know that it’s transforming our lives or anything, but the pace at which that innovation went mainstream was kind of surprising. And I can imagine, you know, really powerful small drones that are just… I mean, already when I go to big events at this point, I expect there to be a drone flying around getting some really cool video footage, right? So that’s been a very rapid shift. I guess I should also say, what I’m hearing from this panel is that as we have these rapid shifts in technology, the ethical questions, the concerns, the problems that they generate, it’s hard to know what they are. And I think part of our responsibility, as educators, is to help our students learn to grapple with unexpected challenges and questions as the technologies change rapidly.

Corbette: [56:09] I agree.

Jaco: [56:10] I would agree. And I think also, if we go 20-25 years out, then the importance of our school as a leader in the world will have increased, because I think the majority world is going to be left behind in the majority of these things that we’ve talked about. And so the gap between rich and poor is going to be more pronounced. How do you deal with Africa, which is barely online, if the rest of the world is governed by AI?

[56:41] And I think if we want to be a leader in the world, it will force us to be more involved in remote places in the world, where people do not live as close to technology as we do. What can we learn from them? Because we would be able to do that, hopefully. And what can we teach them, too?

Derek: [56:59] Any other wild speculation?

Jaco: [57:04] Just to say that I actually look forward to that. I love technology, and I think technology is neither good nor bad, nor neutral. And that will remain in place.

(music)

Derek: [57:16] Thanks to Doug Fisher, Corbette Doyle, and Jaco Hamman for sharing their thoughts on the future of digital literacies. My biggest takeaway from the discussion is that I should be thinking about how to prepare my students, now, to tackle ethical challenges that result from technologies that don’t even exist today. My second biggest takeaway is that I should read more science fiction.

[57:37] On that note, I should add that Corbette told me after the session that she referenced the wrong sci-fi novel during the conversation. She meant to reference the book Autonomous, by author Annalee Newitz. And I’d like to take this opportunity to recommend Gnomon. That’s Gnomon, with a “Gn,” by author Nick Harkaway, which provocatively imagines a future surveillance state. You’ll find links to those books, as well as information about our panelists, in the show notes for this episode.

[58:05] Before I do my usual closing, I wanted to share two programming notes. The first is that we’re about to start our summer break, so don’t expect to see any new Leading Lines episodes in June or July. We will be back with more interviews with educators, researchers, and technologists in August.

[58:21] The second is that I want to say goodbye and thank you to one of our longtime producers. Gayathri Narasimham has been on the Leading Lines team since the beginning, helping to shape the format and direction of the podcast and conducting some really great interviews for us on such topics as open education, citizen history, and virtual reality.

[58:39] Gayathri is moving on from Vanderbilt, and I wanted to take this moment to say that she’s been a joy to work with and that I wish her the best. To listen to Gayathri’s interviews, as well as other past episodes, visit our website, leadinglinespod.com. Folks, this episode you’re listening to now is number 60. I can’t believe we’ve been doing this for three years. Back at the beginning, I was just hoping we’d make it to episode ten.

[59:04] Leading Lines is produced by the Vanderbilt Center for Teaching, the Jean and Alexander Heard Libraries, and the Associate Provost for Education Development and Technologies, John Sloop. This episode was edited by Rhett McDaniel. Look for new episodes the first and third Monday of each month, when our next season begins in August. I’m your host, Derek Bruff. Thanks for listening.

(music)

 
