The Futures Initiative Speaker Series
A Conversation with Dr. Alondra Nelson
October 8, 2024
Dr. Nelson discusses equity and public access to science; AI governance and policy; race and technology; and the artistry of discovery.
JASON NABI: Good evening, everyone. Is this thing on? I'm Jason Nabi. I'm the project manager for the UVA Futures Initiative. And on behalf of the Futures Initiative, I'd like to welcome you to a conversation with Dr. Alondra Nelson.
[CHEERING]
[SCATTERED APPLAUSE]
ALONDRA NELSON: Thank you, my two friends.
MELODY BARNES: It's the fan club.
JASON NABI: I have a bold prediction about the very near future. Tonight will certainly be one of the most pleasant and edifying Tuesday evenings for many weeks to come. Thanks, y'all. And well beyond that, of course, tonight's illuminating conversation is sure to inspire and sustain us for many futures to come.
A quick word about the Futures Initiative, and then we're on to it. We were launched in January under the auspices of the Provost's Office. We scan the higher education horizon in search of ways to proactively position UVA to thrive in a radically evolving world.
Toward that end, the members of the Futures Initiative working group, a task force made up of representatives from all 12 of UVA's schools, several of its administrative and academic divisions, and four of its Pan-University Institutes, have been asking far-reaching questions about what UVA might do to achieve its strategic goals in striking and innovative ways, and in doing so, to become the University of the future.
Part of that process, through the Futures Initiative speaker series, involves bringing thought leaders from a variety of sectors to Grounds to share their visions of the future in their respective fields. We are excited to continue this series with today's event, in which we focus on a future that will require us to better harmonize the great promise of scientific and technological advances with the good of societal well-being.
For this, it is our great pleasure to be hosting Alondra Nelson, joined by our own Melody Barnes and Yael Grushka-Cockayne. Please join me in warmly welcoming them.
[APPLAUSE]
YAEL GRUSHKA-COCKAYNE: Hello and good evening. I guess I can say good evening. It's wonderful that you all joined us on this beautiful day. I'm honored to be sitting here on this stage, and I'm excited about the conversation ahead of us.
My name is Yael Grushka-Cockayne, as was introduced. I'm a professor at the Darden School. My area of expertise is in data analytics and decision sciences. And I think one of the reasons I'm here tonight, although I haven't actually confirmed this, is because we just announced the new LaCross Institute for Ethical AI in Business.
And I am honored to be one of the academic co-directors of our new institute. And I'm excited about the conversation tonight.
MELODY BARNES: And I'm Melody Barnes. I'm executive director of the Karsh Institute of Democracy. And I am thrilled to be here with Yael and to be in conversation with our friend, Dr. Alondra Nelson.
And you all heard the screams and cheers when Alondra's name was mentioned. I'm going to tell you why. I get the pleasure of giving you a brief overview of her bio. And normally, you just say, oh, refer to the printed materials.
But this gives me real pleasure because Alondra is, and I'm just going to say it, a badass. And here's what I mean by that. We're going to have a conversation this evening about the transformative changes that technology will bring to our society and to the University and to our lives.
And there is no better person to have this conversation with than someone who has the expertise that Alondra has, and who also has a bio that is as mind-blowing as that technology itself.
She is a scholar at the intersection of science, technology, policy, and society. And she is currently the Harold F. Linder Professor at the Institute for Advanced Study, which is a research center in Princeton, New Jersey.
She's also a distinguished fellow at the Center for American Progress, a think tank and action tank in Washington, DC. Alondra served in the Biden administration, and her final role was that of Deputy Assistant to the President and Director of the Office of Science and Technology Policy, or OSTP.
She was there from the beginning of the administration until the fall of 2022. She is the first African American and the first woman of color to set science and technology policy for the country, as in ever.
AUDIENCE: Woohoo!
[CHEERS, APPLAUSE]
MELODY BARNES: And in that role, Alondra spearheaded the development of the Blueprint for an AI Bill of Rights. She provided guidance to expand public access to federally funded research. She served as an inaugural member of the Biden Cancer Cabinet, strengthened evidence-based policymaking, and galvanized a multi-sector strategy to advance equity and excellence in STEM.
She's also had a distinguished career in the nonprofit and academic fields as well. She was the 14th president and CEO of the Social Science Research Council, and led academic research strategy at Columbia University, where she was the inaugural Dean of Social Science.
You won't be surprised to know that she's the author of several award-winning books, essays, and articles that have been translated into multiple languages. She's currently working on a book about science and technology policy in the Biden administration; an essay collection, Society After the Pandemic, which I can't wait to read, that sounds really fascinating; and research on the social power of platforms and the governance of AI.
As you can imagine, Alondra holds many honorary degrees and awards. She was also named to the inaugural TIME100 AI list of the most influential people in the field of AI. So trust me when I tell you, I could go on and on. Literally, I could go on and on.
But I will close by saying that I'm thrilled that this very, very busy and accomplished woman is also on the Karsh Institute of Democracy advisory board. So thank you so much. And join me again in welcoming Alondra.
[APPLAUSE]
MELODY BARNES: So I want to start with the first question. And I think the first question should be the obvious first question, which is: how did you get interested in, and how did you start, your work in science and technology?
ALONDRA NELSON: Yeah. So thank you for having me. Thank you for that incredible introduction. I'm both here and on the Karsh advisory board because when Melody Barnes calls, you just say yes. You don't wait. You don't wait to find out what the ask is. You just go, yes. And then you figure it out and hope you don't get yourself in trouble. I'm delighted to be here with both of you.
I think, like probably many of the students I'm looking at in the room, I was a STEM kid. But I was never fully satisfied; that was never the kind of world I wanted to be in fully. I grew up with parents who worked in technology and in science. I grew up in San Diego, California, a biotech and science place. So my childhood was going to the Salk Institute for after-school programs and going to the Scripps Institute. That's a kind of San Diego kid: that, and surfing, or going to the beach.
Then I got to college. I was supposed to go to medical school, like so many of us. You get tracked; you're supposed to go to medical school. And then I got to college. I went to UC San Diego, where I grew up. And I realized I was much more interested in people and problem solving and other sorts of things.
And I found myself, luckily, at an institution that had what's still called, but is rare these days, four-field anthropology. To finish my undergraduate degree in anthropology, I had to do physical anthropology, which we now call biological anthropology, so I was doing biology. I also had to do courses in archaeology, which meant soil chemistry and geology, in addition to linguistic and sociocultural anthropology.
So the undergraduate degree that I did had both science and social science together. And that perspective, that sort of old-fashioned take in anthropology, was that you can't understand human societies without understanding all of these things. So not the science in its isolation, and not the people in their isolation. You need to understand them together.
And that felt so right to me. So I kind of immediately went from being on a science track to studying that. But I was still at an institution that allowed me to take amazing physics classes and chemistry classes and the like. So that, I think, is where it begins. And I have always thought about those things together.
I also should say, my mother was an Army cryptographer, if you can imagine, of all things. I mean, she was a WAC or a WAVE, whatever the thing was, before women could actually be full members of the Army.
But as a child, I grew up with her working on big IBM mainframe computers. My mother was a computer programmer and systems engineer. And so there was also a kind of childhood in which I didn't think that women and computation were opposites. When you're in the back of the Chevy Vega with the punch cards that Mom was just using on the IBM, you don't think, God, if only women could work in computing.
You're just kind of like, computing is like--
MELODY BARNES: This is normal.
ALONDRA NELSON: It's like detritus in the back of the station wagon. And so I think I just had the benefit of having this extraordinary, path-breaking mother, who made things that I now know, as a teacher and as a mentor and as a policy advisor, to be extraordinary seem very commonplace. And so that just gave me, I think, a different perspective.
And then, thinking more immediately about my work in the Biden administration: I had been working on a book about the Obama administration. I started in 2016 interviewing people who had worked in the Obama administration, because I was fascinated by this administration that you worked in, which was clearly trying to take us back to a kind of bold era of big science.
So it was under the Obama administration that you had the Precision Medicine Initiative, which was this initiative to get a lot of Americans' data into a database and think about how we can do genomic analysis. It was under the Obama administration that you had the BRAIN Initiative, which was trying to map all the neurons of the brain, much like the Human Genome Project.
And that administration was also thinking, as I think the Biden administration does, about how science and technology and innovation are important drivers for the economy and for education and other facets of the social world that we care about.
So I was absolutely fascinated by what I saw as a shift in how we did science and technology policy. And it was through that work that I came to work in the administration.
MELODY BARNES: And so you're now in the administration. The Blueprint for an AI Bill of Rights: tell us how that got started and what you wanted to accomplish with it.
ALONDRA NELSON: Yeah. So as a student of the Obama administration, I knew that administration, over the course of 2012 to 2016, had published some very smart white papers, which you can still find online, around, first, what we called big data, and then, by 2016, AI.
And these were broad think pieces and what you'd formally call guidance in government, about what the United States needs to do if it's going to be ready to really leverage and interoperate data sets, how we need to think about the privacy implications, and what the implications are for work and for health care, et cetera.
And then by the time you get to 2016, there are white papers on AI and civil rights, AI and job opportunities and employment and the like. So there had already been all this prior thinking.
And then there was the Trump administration, which didn't do a lot of science and technology policy. I mean, we can talk about that in the Q&A if you like. But the OSTP that I arrived at on the first day of the Biden-Harris administration had only 30 people in it. I mean, we had--
MELODY BARNES: Contextualize that for people.
ALONDRA NELSON: Yeah. So the Obama administration had about 150 people working in that office. And by the time I was working in the Biden-Harris administration, we had built back to about the same. OSTP does a lot of things, including just the grunt work of congressional mandates.
So every time there's a piece of legislation, some of it says something like, and OSTP, every year, will submit data on the thing or a report on the thing. And so we came in after four years with Congress yelling, because there weren't enough people, actually, to fulfill all the mandates that all of this law had specified. The office had just shrunk.
But what the Trump administration did quite aggressively, and well, was around AI policy. So in the last NDAA, the National Defense Authorization Act, of that administration, there was something called the National AI Initiative Act. And it stood up a few pieces of infrastructure that we were then able to move with. And the AI Bill of Rights idea, I think, really emerges from work that the Obama administration was doing.
But also, I think, from some of the work that the Trump administration was doing that we felt, in the Biden administration, was kind of under-realized. So you had an executive order or pieces of guidance that said things like, AI should abide by democratic values, or whatever. It's like, what does that mean in practice? How do we begin to implement and think through what that means?
And so the AI Bill of Rights was an attempt to make that granular. What does it mean to say that we've got shared values as a society, bipartisan values? And how do we make those real? That is the way we are beginning, slowly, to advance AI governance in the United States.
MELODY BARNES: I'm going to turn it over to Yael and come back later. I have another question on that.
YAEL GRUSHKA-COCKAYNE: So we're going to tag team it. And we also threatened that we might go rogue at some point. So as I mentioned, I am a professor at the business school. And we often think about the various drivers of change and strategy. We think about them as either top-down or bottom-up.
Top-down, it comes from the leadership. It's a vision. And it's shared with an organization, which then has to follow suit. Bottom-up means pressures from employees or customers or consumers. It comes from the bottom, and therefore some kind of adaptation or change occurs as a reaction to pressure coming from the bottom up.
When we think about AI governance, is your sense, in the United States or even globally, that this is going to be more of an evolving top-down approach or a bottom-up approach? Is it something that is going to come from users, from corporations, or will it come from, for instance, the White House?
ALONDRA NELSON: So I love that you use the term "AI governance" and not AI regulation. Because I think AI governance is both. And that if we are going to get the use and applications of AI to a place that it's mitigating risk and maximally beneficial for the most people, we need to have a suite of tools and levers.
And so those include things that are, hopefully, if we can get some laws out of Congress, top-down, which would be formal regulation, new regulatory authorities and regulatory agencies, actual laws and the like.
But there are also standards, technological standards. What are the ways that we should be thinking about how technologies are built and used? What is a high-capability AI model? What is low? How do we think about those things? How is everyone using the same language, both in the United States and abroad? These are international standards.
And so those are not quite laws. Those are agreed-upon definitions and ways that we think we're going to move in the world. And then there are norms. We've got these new tools and systems, so how are we going to use them? Since we're at a university, one of the world's finest universities, we might talk about how AI tools and models should be used in the classroom.
You can create a kind of university rule, but it's not a law per se. It's actually a norm that we're slowly creating. So I think we need the bottom-up and the top-down, a whole suite of things. I will say, the United States needs a bit more top-down right now.
I mean, we have not been able to get any kind of systematic regulation around AI in the United States, although certainly, the president's executive order on AI is extraordinary in a lot of ways, in part because, as colleagues and former colleagues in DC say, the president said we should pull every lever. And so you've got federal agencies using all of the tools at their disposal to try to make sure that these tools are used appropriately and beneficially.
But I think we need a whole panoply of things. And so I don't want us to get overindexed on regulation; there are lots of other things we can do as well.
YAEL GRUSHKA-COCKAYNE: And so maybe, if I may dig deeper: what happens when the movement that is coming from the ground, from the industry, from the users, clashes with the regulation that eventually gets introduced?
ALONDRA NELSON: I think it's actually a moment where those things are converging. So we hear, both in political theater and performance, from big tech executives when they go to Capitol Hill saying, please regulate us. My God.
MELODY BARNES: It's what we dream of.
ALONDRA NELSON: If you watched the hearings last summer, or two summers ago. But they do kind of mean it. I mean, if you talk to folks in industry, they do feel like things are a little bit out of control. And I think it creates a lot of both organizational risk and financial risk for companies when you don't know the basic terms of engagement for the field that you're operating in. So there's that.
I also think, when AI moved out into the world in November of 2022, it was the shot that went around the world. Those of us who had worked on AI for many, many years, people thought we were the most boring people in the world. And then all of a sudden, at the Hanukkah table, at the Christmas table, everybody was like, AI. And everyone wants to talk to you about it, and thinks you're very, very interesting.
So that moment--
YAEL GRUSHKA-COCKAYNE: I know everybody in this room has been doing it for a long, long time.
ALONDRA NELSON: But all of you can remember those moments when you were just like, I'm working on this AI thing, whatever, and your whole family was, oh my God, shut it down. But that moment of these behind-the-scenes technologies becoming consumer-facing meant that companies want consumers, whether those consumers are the federal government or individuals. And the question becomes, how do you get adoption?
And adoption, whether the consumer is the University of Virginia procurement office or the federal government procurement office or us as individuals, the kind of adoption that all of these companies want, requires there also to be rules of the road.
And if you look at things like the Edelman Trust Barometer, the trust of Americans in artificial intelligence tools and systems is low, some of the very lowest in the entire world. And if this is the market you're counting on to live up to your valuation of 5 billion, or probably a trillion, whatever it is, it's not going to happen unless people feel that the tools are safe, that they're responsibly used, that they're not putting their privacy or their families at risk by engaging in the use of them.
YAEL GRUSHKA-COCKAYNE: OK. Fantastic. So thank you for that vision, and for helping us understand how these pressures can hopefully coincide.
ALONDRA NELSON: Yeah. I think we have this kind of rare moment of overlapping incentive structures, for a little bit of time.
YAEL GRUSHKA-COCKAYNE: I'm going to change topics just a tad and talk a little bit about the fact that in the Bill of Rights, one big chapter, or some of the discussion there, is around algorithmic discrimination protections: preventing biases from occurring and protecting various individuals.
There is conversation around race, but it's also about color, ethnicity, sex, religion, and so on. Is this a new tension around the concern of AI and technology with regard to those dimensions, or is this something that has been there all along with other technologies, historically? You've studied this for a while.
ALONDRA NELSON: Yeah. Well, thank you for raising that. You raised, I think, two of the issues in the AI Bill of Rights. It has five prongs, and protections against algorithmic discrimination is one. It's significant because it's a through line in the whole policy document. But there is also the fundamental issue that AI tools and systems should be safe and effective.
I mean, that's just fundamental consumer standards. Looking at Professor Citron here, they should have some modicum of data privacy. And, shocking, you should have notice when an AI tool or system is used for a consequential decision about your life. And you should have some sort of fallback if a decision is made using an AI tool or system and you have a question about how that decision was reached.
YAEL GRUSHKA-COCKAYNE: And an ability to opt out.
ALONDRA NELSON: And an ability to opt out. And those things were all distilled from almost a year of engagement that we did with academic researchers, industry researchers, folks in civil society, and just regular folks. I think the opportunity and challenge that artificial intelligence presents us, and let's take it down a level and say generative AI, is that it is a tool that brings together lots of different dynamics, which makes it different from past technologies.
So it's dynamic, it's iterative. It often uses historical data. The data is not transparent. We don't have a lot of accountability around the tools and systems. The people who make the systems tell us that they don't understand them. We can come back to that. I call that algorithmic agnotology, and to me, it's a kind of learned ignorance that one wants to have around one's systems, which should, I think, be pushed back against.
So it does present a different challenge for us. And some of the big challenges we've seen immediately, and we can talk about both near-term and farther-term risks and harms, but what we are seeing already are harms to people with dark skin from the use of facial recognition technology, people being falsely identified, falsely arrested, in some instances falsely convicted. That can't be how we want it. That's not the society we want to live in.
We know that generative AI makes it easier to do a cut-and-paste job and to create material for cyberstalking or sexual violence; that is becoming a real challenge.
So it increases concerns that we already have, the scale and velocity of doing these things. And I think it's different from other technologies in that you don't worry about this with the toaster or the car. You might have worried about it with the introduction of the computer. And obviously, let's be very clear and not prudish about this: the very first innovation with any new technology is pornography.
I mean, when people are like, the sex AI chatbots and all that, it's like, of course. Of course. Every wave, if you could create pornography with it, that is what it's been used for. And so that's always been a special risk in those spaces. But I think this technology in particular presents those problems. I also think we learn more and know more now.
So I think in 1989 or '94, when personal computers were beginning to trickle through our lives, I don't think we even knew how to think about what that might mean. And that transition was pretty slow, actually, if we think about it. It happened over the course of maybe a decade, or eight years, or something like that.
We woke up one day in November of 2022, or whatever day it was, and chatbots were a consumer-facing product, and everyone in the world had access to one for free. That was an overnight transition. And that makes all of these risks in some ways more acute, and makes this a little bit different from prior technology paradigm shifts.
YAEL GRUSHKA-COCKAYNE: Yeah. So maybe in the past, we've had an opportunity to think about it; the academic world, which moves a little bit slower, had time to develop some thinking around it. And in this case, we don't have time because it's moving so fast. I'm going to pass it over.
MELODY BARNES: And I want to come back, given the conversation that we've been having, to something you said a few minutes ago about the AI Bill of Rights and our shared democratic values, and putting, these are my words, meat on those bones.
And I'm curious about the struggle and the challenge of doing that at a moment when it feels as though we don't all share those same values, when one group's or one person's values may not look like another person's values, and how you go about the process of policymaking given that challenge.
ALONDRA NELSON: Yeah. The crew of us that were there on day one of the Biden-Harris administration came into a really crazy scene. I mean, I'll leave it for others and for historians to write this book, but we did a presidential transition amid contestation and political violence.
So when I first came to DC, every building near the White House and near the Capitol was double fenced. Every store was closed because of the pandemic, and everything was just fenced. It was unbelievable, actually.
And we did the transition during the height of a pandemic, before the White House Counsel and the national security apparatus had approved Zoom for use in the White House. So when I started in the Biden administration, we were still doing conference calls. And the only thing that we could use with video was Skype, for some security reason I didn't know.
YAEL GRUSHKA-COCKAYNE: I thought you were going to say Webex.
ALONDRA NELSON: No. No. We couldn't even use Webex. I mean, that's new. That's newfangled. You're getting crazy, Yael. I mean, it's crazy, crazy talk.
MELODY BARNES: I can empathize. I remember going in in '08 and '09. We were in a big meeting, and they were like, and you can't take your laptop home. I was like, what? We didn't have the security for it. They've got it now. But yeah, I mean, technology comes late.
ALONDRA NELSON: You can take them home now, but you still can't use the Google Suite. They're just like, oh, no. So we were trying to figure things out in that context, which was already different, I think, from any other transition context.
And moreover, the question that we were facing, particularly in OSTP, was: how do you do science and technology policy in a moment when American trust in democratic institutions, in science, and in our ability to wrangle this pandemic is low? Low, low, low.
And we came in, the leadership, with a philosophy of, we're going to try to do it differently. So by the time I left, we had science communicators on the staff for the first time, people whose expertise was translating science for the public.
So instead of OSTP doing as it had traditionally done, putting out a policy document that we wonky people would write, and if people could read it or not, shrug, we turned the job back upon ourselves. Our job, as people who work for the American public, is to make these documents clear to them.
So part of what you see in the AI Bill of Rights is a commitment to that: what is clear communication, to people beyond ourselves, about the stakes of these issues? But I'll say something about the process. The AI Bill of Rights is a curious document because it's guidance; it's not formal policy. But we did a formal policy process.
And we also engaged the public. So it was announced in an op-ed in Wired that has an email address at the bottom that goes to the White House. It was like, if you have anything to say about this, write to us at OSTP.
We took a page from the FDA and did a series of town halls. If you've ever gone to an FDA hearing, you get a two-minute timer, and there's a facilitator, and they say, anyone can speak. So we did that. We had several of those. We did them at different times of day, so people in different time zones could come.
We had panels on topical issues like AI and health care, AI and the workforce. And we had weekly standing meetings that all of the staff working on this workstream had to set aside in their calendars. And we just met with anybody who wanted to meet with us, many of whom we met through this email address in the Wired op-ed.
I mean, we met with high school students. We met with rabbis. We met with other kinds of clergy. We met with just regular folks. I was new to DC, and it felt a little weird to me that the typical approach was that you talk to the big civil society organizations and the big lobby organizations. Those are the ones you talk to. Like, is that how it works? Why? Why do we do that? So we really did have a broad swath of people that we engaged.
And the AI Bill of Rights is really a distillation of all of those conversations. It's not breaking any new ground. It says nothing, I think, that you wouldn't find in prior documents, including Trump administration documents; a lot of that work was led by Michael Kratsios, who's quite good.
People want their systems to be safe and effective. These are very common sense claims, I think. We also talk about issues around discrimination. We talk about vulnerable and marginalized communities. And that's very much the imprimatur of the Biden-Harris administration in thinking about these things, but at a high level.
I think that we felt comfortable saying, you should know when AI is being used. It should be safe and effective. If it's being used to screen resumes for a job or in a health diagnostic tool, you should have a fair shot. You should not be discriminated against in the use of those tools if you're trying to rent a house or get a mortgage or get a job.
And you should have some sort of fallback. It took almost a year of that process. But we did try very hard to get to a place where most people would, I think, find it quite common sense, and moreover obvious, and in many instances a kind of restatement of what they had said to us about what they thought should happen.
MELODY BARNES: I want to ask a question. We've been talking a lot about AI; let's think about science and technology generally, as they affect other aspects of people's lives.
So thinking about climate change, thinking about public health: how do you see, over the course of the next decade or so, the use of science and technology evolving in the policymaking process to try and tackle those big issues, particularly at a moment when there's sometimes a struggle over data and facts, and what does that mean for policymakers as they are also hearing from their constituents?
ALONDRA NELSON: Yeah. That's a lot of hard issues clustered in that really good question. I mean, I think one of the things that, at least in my time at OSTP and working in the White House, we were committed to was continuing, from the Obama administration in particular, things like the Open Government Partnership and the first White House GitHub account.
Hello, did you know the White House had a GitHub account? That's for all the data nerds in here. There was an attempt to make data available to the public. And we also wanted to do that, understanding that people might take that data and do crazy things with it.
But I think as a policymaker, you want to be able to say, we've provided you the facts. We can also provide some interpretation. But to the extent that we can restore trust in the work of government, part of that is just giving the data to people.
And you can't control how they interpret it. All of us have been on social media. You see the diagrams with strings connecting one White House memo to another, leading to some kind of conspiracy. So you can't control that. But what you can control, as a policymaker and as a leader, is giving people high quality data.
We also had support for that: Paul Ryan was a big sponsor of the Evidence Act, which President Trump signed into law, and which has other obligations around evidence-based policymaking, data that's supposed to be provided to the public, and the like.
So there's been, in the last decade, maybe not visible to people who have not worked in government, a real sea change in how government thinks about its obligations around data to the broader public. And I think that was really important.
I will say, the other challenge is that facts alone, or the science alone, don't solve the thing. That's the work of policymaking. And that's the hard stuff. So coming into government during the pandemic, the science and the engineering were miraculous, on the level of the miracle.
We had the genome of SARS-CoV-2 decoded in less than a month. We had an operational, viable vaccine in less than a year. That has never happened in the history of the world. But then we had all sorts of challenges. How do you keep it cold enough? There were all these infrastructure challenges. And then, how do you get people to take it?
And those are questions of social science, behavioral science. Those are questions of whether or not people trust the government, whether or not they trust the research because it was done so quickly. Part of why we engaged folks with science communication expertise is that it was clear government had to be a lot better at saying things like, we got it done more quickly because we did all of these other things more quickly than before, not because we cut any corners or risked people's health to get this vaccine out very quickly.
So it's just a kind of different philosophy. And then the other thing I would add: the amazing work that you did in government at the Domestic Policy Council, all of that work now consists of science and technology policy issues.
So not just climate change, but how we're thinking about DHS and immigration, which is using the CBP One app, which you have to have a smartphone to use, which many refugees and asylees probably don't have, if they have a phone at all, and which requires very good lighting, a problem if you're a dark-skinned person seeking asylum. All of those are technical questions.
Ditto the health portfolio; ditto the education portfolio. So in retrospect, I think my interest in the Obama OSTP was also that it was a real awakening to the fact that all significant domestic and international policy issues are also science and technology policy issues. And I think that really is where we are now and where we're heading.
MELODY BARNES: Yeah. And listening to you and to your bio leads to this next question, and to your reflection on the fact that science is not just a collection of facts but a deeply social process. Given your background and interest in the humanities as well as the sciences, I'm wondering if you could talk a little bit more about that social process.
ALONDRA NELSON: Sure. Let me offer a couple of examples, and we can think about gender. All of this is just a more granular way in, but you should think of it as a metaphor for training data and AI systems. I want to be less abstract before getting more abstract and talking about AI.
For example, think about clinical research and the protocols we have for clinical research and for creating new drugs. Traditionally, pretty much up to the present, clinical research subjects have been almost exclusively male.
And we've created a whole drug ecosystem and diagnostic ecosystem around men and male biology. We can problematize men and women, all of that; we can have that conversation. But it means that we have created things that we say work for all without actually probing that as an empirical question.
So those are early design choices, made in part by the scientists who were working, like, I'm going to test it on this guy and see if it works, if we think about 18th-century science or something.
So that's one example. If we think about science, or biomedical science, as not just being the drug but the process through which you arrived at it, then we shouldn't be surprised if we have pharmaceuticals that work for some people and not for others.
On the engineering side, think about our crash-test dummies. Some of you might know this: they are modeled on men, on male physiology. And there was a headline two or three years ago, something like, Swedish scientists create a female crash-test dummy.
So you think about that design choice, and that somehow this was supposed to stand in for everybody else. Obviously, there are all sorts of reasons why that doesn't work. You can take the male crash-test dummy and make it smaller or whatever, but it's still not quite a mainstream or normative, or whatever phrase you want to use, woman's physiology.
I share those as more obvious examples. But take it to the space of AI, and think about historical training data sets around employment or around housing. If you're going to do hiring in computer science, who is traditionally in the data set?
If you look at all the resumes of all the successful computer scientists in the history of the world up until 2010, what does all that data tell you? It tells you they're male, that they went to Caltech or Berkeley. Those are a few of the data points.
So that kind of data is being used, we know for certain, in resume screening in 2024, depending on the company. Some companies are much better; I'm a big fan of Indeed and their CEO, who is actually very engaged in these conversations about bias in training data.
But we're pulling all of these historical constraints and limitations into our design choices around how we're doing AI in the contemporary moment. And so part of our mission, if we want to do it better, and do it in a way that benefits more people, is to be willing to ask those social questions, those philosophical questions, about how we're getting to the data and the decisions that we're making using these tools and systems.
MELODY BARNES: That's great. Thank you.
YAEL GRUSHKA-COCKAYNE: We're here with the Futures Initiative. And a few weeks ago, we had a very stimulating talk in the same series, related to the future of higher ed. And obviously, AI and generative AI and STEM more broadly play a key role in the future of higher ed.
In your mind, what are some risks and some challenges, and maybe even some hopes, related to how AI affects higher education?
ALONDRA NELSON: Yes. This is a fascinating space to talk about these issues. First of all, I have a wish, which I have said to Sam Altman, so he's well aware of it: that they had not been racing to market, and had taken another day, another week, another month to talk to teachers about this tool before they released it.
The freak-out that happened, with schools banning it and penalizing students and analyzing their papers with bad AI detection tools that do not work, this whole thing just did not have to happen, even if you had just given teachers a little bit of a heads-up.
So whatever market incentive or market desire was behind that, it was irresponsible, and I think we need to be able to say that. We could have imagined a rollout of ChatGPT even a month later. People had seen GPT-3. We'd seen GPT-2 and GPT-3.5. It wasn't as though this thing just came into the world and people didn't know there were increasingly capable tools.
You could have imagined a partnership with teachers, where they had different kinds of tools, teaching modules, things that brought it into the world in a way that was less adversarial and less traumatic for the classroom.
YAEL GRUSHKA-COCKAYNE: Or at least wait until after winter break.
ALONDRA NELSON: There's that. Right. Exactly. Let the parents deal with it when everyone goes back home. So let me just say at the beginning: I think we have to teach differently. And I can say this with some liberty because I don't have students right now at the Institute. And I think that's OK.
One of the things I've been interested in over the last couple of weeks is Google's introduction of NotebookLM, which has all sorts of problems, including that it hallucinates. So if you need something factual, do not use it for that, because there is a degree of hallucination.
But I think one of the takeaways from how people have been responding to it is that we learn differently. Go back to biblical times, which were oral cultures. So should we be surprised that students are like, oh my God, this essay that I'm supposed to read, that's supposed to have some virtue, my teacher said it's supposed to be more virtuous to read it, has come to life for me through this bizarre chatbot podcast thing, which is how I learn, how I take in information? Ditto video and YouTube and TikTok, and all the things that people who are younger than us are much better at and much more interested in.
I think we need to be open to those kinds of conversations. And also remember that all of our cultures started as oral cultures. The reason we know anything about anything is because we used to talk about it. That's how people took in information.
And that should be OK. I also think we don't know exactly how these tools can and will be used. We're still figuring out the extent to which they should be used, and we can talk about that. I mean, I have concerns, particularly around K-12 education, around tracking and data surveillance of young people in schools, that we should really be wary about.
But students are also going to show us how these tools are useful. And we can call that plagiarism, or we can call it something else and give them other ways of learning and other sorts of tools in the classroom, other kinds of assignments that are different and that incorporate the fact that this technology exists in their lives. And they're going to use it.
YAEL GRUSHKA-COCKAYNE: So that's very optimistic; that's the hope and the vision. Do you have any concerns, or do you want to name some risks that are important?
ALONDRA NELSON: Well, I think the risk with any of these chatbots that are free to use is all the data collection and surveillance that happens. We don't know how the data leakage works. We don't know enough about the systems to know what happens as your queries get folded back into the training data, as just part of how the systems work.
So all of that, I think, people should be mindful of. And if you have access to enterprise software that has more data protection, or even a slightly paid tier that has more privacy protection, I think yes, use that. I'm particularly worried in the K-12 space, because we already have demonstrated harms to young people.
I mean, all of their biometric data is being tracked: eyes, hands, all of that. That's ridiculous. And that's in addition now to the things that young people might input into a chatbot that you don't know about. A child is not going to think, I shouldn't put my mom's Social Security number, or some other kind of sensitive information, or the symptoms that a parent is having, or that I'm having if I'm ill, into this chatbot.
And so I do worry about that. And then there are the bigger harms; we haven't talked about any of those. If we're trying to mitigate climate change using AI systems, it matters that we are using 10 to 50 times more energy to operate these systems.
How do we want to think about that? Maybe it's not worth it. And this is where I think norms are really important. How do we want to talk about and think about the fact that, if you're just going to make a basic query, do we need to say to people: if you care about the environment, use Google, or use DuckDuckGo for your privacy?
Don't use the chatbot, which is more fun, but which consumes a great deal of energy that we haven't, right now, figured out how to supply. We're firing up the nuclear facilities again to try to get enough energy on the grid to figure out how we're going to do all this.
YAEL GRUSHKA-COCKAYNE: So, related a little bit to the idea of being responsible in how you leverage generative AI, or AI in general, and going back to our conversation earlier: what role does higher ed have in enforcing and educating about AI policy, regulation, and governance?
ALONDRA NELSON: Huge. Huge, huge, huge. We don't know anything about these tools and how they work in our world. We're being told that the way to know about them is to be a good prompt engineer, those sorts of things. But there are scores, hundreds, of academic research questions around these tools and systems.
Energy use: what does it mean? Do people learn better if you give them an article as a podcast, versus distilling it into bullet points using a chatbot, versus reading the article? There are all sorts of those kinds of questions.
What does it mean to have systems that output sentiment? There are all sorts of fundamental science and social science questions. And then I'll come back to where we began, with the thing that drives me crazy: the creators of these tools saying, I don't know. We made them. We have no idea how they work.
And I think, in part because I work at a research center where, historically, mostly scientists, and not only social scientists, work on really hard problems, I just think that is an unacceptable answer. And it's only acceptable if you take the market, and wanting to race ahead in the market, as the only outcome that you're supposed to have.
So I'm heartened that DARPA and other research agencies, ARIA in the UK, are trying to understand the foundational, fundamental mathematics and science of these systems. But that's taking resources away from other things.
The companies should be working to understand more about these models, but it's more just, we got it working well enough to run as a product, we're going to push it out the door. We don't know anything else about it. We don't know what it's going to do. But it's getting us the valuation that we want, and it works well enough to send to the consumer.
I mean, that is really problematic. And so I think part of the role of the university is to continue to help figure out some of those problems. But there's a whole swath of other problems. Like, how are we going to think about a labor transition, if one is coming, and not just accept the framing of companies, which have a certain responsibility to their stakeholders? OK. But what are the other pathways for thinking about work?
How can work displacement be mitigated? Are there other models for thinking about hours of work per day? There's a whole bunch of research that needs to happen that some people are doing, but not nearly enough.
And I think the university is so important. And my worry for universities is that we will be captured doing the cleanup work for companies, as opposed to doing the blue-sky, innovative, experimental work. So I always want to make the case that there's so much research to be done, but it can't just be answering the questions that the companies don't want to answer, don't want to pay for.
YAEL GRUSHKA-COCKAYNE: I've heard a lot of calls to action for business school students, so I'll take that back to Darden and work with our students on some of it. I know that we want to leave time for questions from the audience, so I'm going to end with my final question to you. We have plenty of students here. What is your advice to the students in the room, both in terms of their studies and as they start their professional careers in the workplace?
ALONDRA NELSON: Yeah. I would say, study what you want to study. For the last 10 or 15 years, the message has been that everyone has to study CS and learn to code. So I think you should take these very imperfect robots, which can now produce sometimes good, sometimes crappy code, as a get-out-of-jail-free card. Escape the prison of thinking that the only way you can live happily in this society is to learn how to code, unless that's what you want to do.
If coding is your heart's desire, go at it. But now there are other ways to think about that. There are other opportunities. And as critical as I am of claims like everyone needs to become a prompt engineer, I do think, fundamentally, that is a kind of coding that we will all have to learn how to do to engage with these systems.
I also think it is incumbent upon the companies to create systems with interfaces that are easier for people to use, so you don't have to come up with crazy phrases to get the output you want. That's just product design, and right now, it's not up to par.
But I hope that this moment of generative AI, for all of its challenges, is also opening up all of the things that we need to work on. There are huge philosophical questions, huge humanities questions, around AI.
Some of them are in the realm of philosophy, but some are just: how do we think about literature in this moment? How do we think about literary theory in this moment? How do we think about communications and media studies? All of these kinds of questions. There are huge social science questions, and obviously, there are lots of science and research questions.
So my hope for the students is that this feels like a moment of broadening out from what I saw become a far too narrow path for their lives and their creativity.
YAEL GRUSHKA-COCKAYNE: Wonderful. And I think we're going to open it up for Q&A. We have a couple of mic runners, I believe. And so we encourage you to raise your hand and ask us some questions. Fantastic.
MELODY BARNES: Next question here.
YAEL GRUSHKA-COCKAYNE: Extra points go to the first question.
AUDIENCE: Hi. Thank you for speaking with us. I'm curious what government policy and regulation can do to address the profit-making incentive and the race to market for AI systems.
ALONDRA NELSON: I think regulation can help a little bit. So there can be some friction placed in a pipeline that is sending things out too quickly, without what we would require for any other product: verified pre-deployment testing that these systems are safe. For any other consumer product you can imagine, that is the case. These tools do not meet even that low bar before they're released.
So I think that's pretty important. Someone has to sign a compliance checklist; someone has to verify it. There are a lot of different ways you could think about that. And the incentive there is not to slow down the race to market. The incentive is to ensure that the public has tools that are safe. And we're not there yet. Even as we're talking about catastrophic risks, new releases and versions of these tools are coming out at a regular interval, and nobody is testing to make sure that they're safe.
I mean, that's shocking. And it's not true of literally any other consumer product. So I think that is a fundamental thing. I also think about some of the hype from the companies about what the tools will do: cure cancer forever, fix climate change, all of that. Some of these outcomes are never going to be things that the market will deliver, at least initially.
And that's part of what government can do distinctly, and why, I think, across people's different ideological and partisan divides, we need to figure out a way for government to invest in public goods around AI, to model responsible AI, and to use its procurement levers to shape what it buys.
To go back to Yael's question and a conversation we were having: think about the personal computer, and think about the US federal government, which is the world's largest consumer and, including DOD, the world's largest employer. Lots of people use lots of tools for their work.
Personal computers came into government over time, through vendor contracts over time. The US federal government is about to make huge, all-at-once, massive investments in AI tools and systems.
And that is a really important moment for saying to the market and to companies: we're only going to give you the trillion dollars, or whatever it is, that the federal government is going to spend over the next two years on these tools and systems if they meet certain bars, if they're safe and effective, if we can have some transparency about what's in the training data, if we know members of the American public and protected classes are not going to be discriminated against. The government can play that role.
And then, to the extent that we want public goods, like, can AI help cure cancer? I don't know. But if there's not a market for it, it really falls to government and government research to invest in research that gets us to a place where there might be some commercialization potential.
And so those public goods and public benefits of AI that are, right now, market failures are super important. And I think, again, however people might think about government expenditures and the role of government, this is a super important place in technology, a super important role, for government to play.
AUDIENCE: Thank you so much for your talk. And I'm so happy that I finally get to talk to you. I actually use the AI Bill of Rights. I worked with a team of researchers at NIEHS. And there are some quite consequential questions that we discovered in research we did comparing the public's use of AI and experts' use of AI.
These consequential questions are really, I don't know if policy sometimes speaks to the public about this. Because while America is really open in the Bill of Rights, compared to Europe and probably other countries with their regulations, America has a lot of open conversations around innovation.
And some of these things have to do with America's leading edge in the market. Technology is America's business. How, then, do government and policy have these conversations with people?
ALONDRA NELSON: How indeed?
AUDIENCE: And there's also one piece that we added to that paper about technology being the reflection of its creators. That's a common point that we found in the data about the public and experts: people do create things in their own reflection.
So these harms that we find in these technologies, are they not inherently harms that already exist in the people, now translated onto the technologies? Even if we stop the technologies, do we really stop the harms? Those are some of the insightful, consequential questions that we found in that research. And perhaps we can share that paper.
ALONDRA NELSON: Yeah. Well, thank you for your comment. There was a lot there, and we can continue the conversation, but the one thing I might tease out is the tension in the United States in particular-- your phrase was something like, technology is America's business.
I mean, the United States has right now-- and we're trying to keep it-- a pretty significant asymmetrical advantage with regard to AI. So that puts legislators in a tough place. One illustration of this: if you watch the first big congressional hearing that we had around AI, the one Sam Altman was at, at one point during that hearing Senator John Kennedy says to Sam Altman, we should have a regulatory agency for AI. Do you want to run it?
I don't know if you remember that moment, but I watched it very closely-- this is exactly the tension. On the one hand, we really understand that leveraging these tools and making sure they're used responsibly will lead to greater adoption, will lead to expansion of markets-- a lot of the things we want.
But right now, the status quo has led to the United States' asymmetrical advantage, which is a market advantage and a national security advantage. And so there's the tension between those two things: wanting to regulate, versus don't rock the boat. If we regulate, we might rock the boat, and we don't know what's going to happen with the broader ecosystem. So we're going to grudgingly, inch by inch, maybe do some legislation-- except we're not.
One of the things I participated in: Senate Leader Schumer had this AI Insight Forum, which was a series of, I think, eight or nine meetings. So ChatGPT is introduced in November, I think. By May or June, Sam Altman is on the Hill; all this stuff is happening, happening, happening.
And then Leader Schumer says, we're going to have these meetings, we're going to talk to people, and then the Senate will be ready to move on legislation. You know how this story ends. We have these nine meetings. I think Elon Musk is at meeting one-- there's cameras and things. I'm at the second meeting, on innovation.
Marc Andreessen is there, and others, at the second meeting. So there's a series of them. And then nothing happened. Then we get a framework that's basically, in my interpretation, status quo: let's give some more money to NSF to do some AI R&D. And I want to be very clear, I'm not disagreeing with that. But nothing came out of that process that was going to be any different from what we're already doing.
So you identify, I think, a very challenging tension. And on the other side of that, it means we are left with the Brussels effect: the EU regulation that does bind the companies becomes, by default, US regulation. And so we're in a tricky place-- a bit of a regulatory prisoner's dilemma. And so thank you for raising that.
AUDIENCE: Thank you for the great talk. You mentioned that the US government is now interested in buying these services from technology companies, and I want to follow up on that. I've been following how there's a huge recruitment of AI experts within federal agencies like the US Digital Service and DHS. And there's a big interest in, OK, what are some AI opportunities for the government, for public services?
And I'm curious what you think about how we identify the right use cases. How do we make sure we're not just saying, we're going to train refugees-- let's just put a chatbot on it? How do we make sure we're building the right things?
ALONDRA NELSON: Yeah, such a great question. And this is where it helps that I studied the Obama administration, which hatched really important new institutions like the US Digital Service-- an interesting model for now, but a different model. That model was: how do we get more technologists, which effectively meant engineers and some computer scientists, into government?
President Biden's AI Executive Order mandates this AI talent surge. So it's not just DHS, and it's not just USDS. But the people that you need are different. Given what it takes-- capital investment, data investment, and compute-- to make foundation models, it doesn't necessarily make sense for the United States government to build them itself.
Part of what the USDS was doing was building websites, building small algorithmic systems, building scripts to make things run better in government. It's not clear, when you've got Llama 3-- when you've got three or four or five foundation models, including some open source ones that can be built upon-- that it makes sense for the federal government to build these systems.
So if you're thinking about an AI talent surge, you don't just need builders. You need a whole suite of other people who know how to do a lot of other things: who know how to think strategically about data, whether or not they use the data themselves; who know where to find data and systems; who know how to think about the responsible and ethical issues around the data, the privacy issues, the procurement issues.
So what is the skill set for negotiating, as an agency, with a company to buy or license or use a foundation model for a certain use case? You need to know something about AI. But do you need to have built a foundation model yourself to conduct that negotiation? I don't think so.
So I think there needs to be a bit of a shift in the philosophy of how we're thinking about talent, even though government needs a lot of technical talent, including AI scientists and engineers-- that's not what I'm questioning. But there's a broader aperture we need, one that I think the Biden-Harris administration is just beginning to think through with these talent surges, and I think it's important.
The other thing I would say to you is, if you have any wonk in you at all, there are two beautiful Office of Management and Budget memos-- one that came out in March and one that came out just recently-- on government use of AI. They're gorgeous. Shalanda Young and her team, and Jason Miller, have done a magnificent job on these memos.
The first one is about how government should think about the use of AI technologies for government services-- things that gatekeep people's access to everything from the FEMA benefits that people desperately need right now, to Medicare and Medicaid. What threshold of safety, and of abiding by and preserving rights, should these technologies meet? That's what the first memo does.
The one that's just come out late last month is about procurement. And it's about what are the rules that federal agencies should use? What level of transparency? What kind of vendors? How do you keep the transparency iterative since these systems are dynamic and changing?
So it's not just buying software off a shelf and doing a compliance check at the beginning. There needs to be, as this memo recognizes-- as government finally understands-- a more iterative, dynamic relationship with a vendor, because these AI tools and systems will keep changing over the course of the contract. So they're very interesting.
YAEL GRUSHKA-COCKAYNE: And we have time for maybe one more question.
AUDIENCE: Hi. Thank you for coming to speak with us.
ALONDRA NELSON: I'll just note that only, I assume, self-identified women have asked questions so far.
YAEL GRUSHKA-COCKAYNE: I know. There were a couple of men wanting to ask questions.
ALONDRA NELSON: OK.
YAEL GRUSHKA-COCKAYNE: Don's been waiting patiently. I'm OK with that.
ALONDRA NELSON: Yeah. No, I know. It's really shocking, actually.
AUDIENCE: So my question is, how do we go about making algorithms objective and equitable when oftentimes, it's the data sets that are biased or have gaps, especially when these AI systems are coming out and changing so quickly? Whereas with computers, we had more time to react.
ALONDRA NELSON: Yeah, that's such a great question. Beautifully said. We just have to ask that question again and again and again. It's not a one-off question. You don't just ask it at the beginning, before a system is deployed. You have to ask it after it's deployed, as it's being used, in each context and use case.
Sometimes having a data set that's not representative, or that's incomplete, doesn't matter for certain uses. For other uses, it matters and the stakes are very high. As we move toward-- and I put this in quotes-- "general purpose tools," because these tools cannot actually be used for every purpose.
I think we've got to ask that question again and again and again, at many stages in the life cycle of an algorithm and its use, and also in the context of the specific use cases. So a use case around sensitive data with very high stakes, like health care, is distinct from, I don't know, asking a chatbot to write a poem for you in the voice of Shakespeare.
So I think we've got to open ourselves up from this one-time, one way of doing things. I wrote an essay for Foreign Affairs on how to think about AI governance. And one of the models I used-- a colleague mentioned this earlier-- is the National Institute of Standards and Technology.
Starting in cybersecurity, and then with their AI risk management framework, they began to issue guidance documents with version numbers-- 1.1, 1.2-- almost like software has versions.
So even as we're thinking about governance tools and levers-- I was talking to Danielle about Section 230. I mean, that law is almost 30 years old. And that's the way the Senate likes to work: we want to create a law, it's supposed to last forever, there'll be parsimony around the language. How many words is Section 230? 26 words. That's the name of that great book, actually. Yeah.
So there's been this obsession, I think, in lawmaking, with parsimony of language and with something that's supposed to endure. And I think, around new and emerging technologies, we've got to give up this sense that the law has to endure unchanged. We've got to build in iteration and versioning, much like software versioning. And I think NIST, in government, provides a good example of that.
YAEL GRUSHKA-COCKAYNE: We have plenty of time afterwards for some questions. So those who didn't ask questions, any men in the room, are welcome.
ALONDRA NELSON: All are welcome.
YAEL GRUSHKA-COCKAYNE: Anybody, welcome to ask. But gosh, we've talked about so much. It's been fascinating. We started with the fact that women have always been in computing-- this is not new. We've talked about the AI Bill of Rights, of course, and your amazing work and leadership there. We've talked about why we're feeling this anxiousness about AI and generative AI: because it is moving faster than any other technology.
And we have not totally digested and figured out how to stay on top of it. We've talked about whether regulation and governance should be bottom-up or top-down-- and maybe it's a little bit of both-- about the safety checks, and about the responsibility of a lot of our business school students who are on their way to lead some of this innovation.
We've talked about technology at the White House, from Skype to GitHub-- quite a lot of movement there. We've encouraged higher Ed to play a role in answering some key questions on the research front and to embrace the technology in a variety of different ways.
And finally, we've encouraged our students to study what they want and to recognize that there are many different ways to find their way into the tech conversation. Not all of them are obvious up front, and not all of them require computer science.
But please, I hope you join me in thanking, not only my co-moderator, Melody Barnes. Thank you very much--
[APPLAUSE]
--for being here. I'm so honored to meet you. I've waited for this for many years. And of course--
ALONDRA NELSON: We all feel that way.
YAEL GRUSHKA-COCKAYNE: And of course, for Dr. Nelson for visiting us here in Charlottesville, thank you.
[APPLAUSE]
ALONDRA NELSON: Thank you, Yael.
[CHEERS, APPLAUSE]
Futurizing Higher Ed
September 5, 2024
Four trailblazing academic leaders discuss how their institutions are each boldly taking on the future. UVA President Jim Ryan moderates a panel with Presidents Michael Crow (Arizona State U), Harriet Nembhard (Harvey Mudd College), and Santa Ono (U Michigan).
Read transcript
JASON NABI: Good afternoon, everyone. My name is Jason Nabi. I am the Project Manager for the Futures Initiative. We are so excited to be hosting this event, especially in this magisterial space. So thanks to each of you for joining us today.
Special thanks also to our visiting presidents, our presidents, all of our panelists for being here as well. One quick note. We will have some Q&A at the end. So if you have a question, please use one of the provided note cards to write it out, then pass it to the closest aisle, at which point one of our ushers will collect that note card. You are free to do that at any point in the program.
Also consider writing your name and your email on the card so that if we don't have time to get to your particular question, we might circle back to it later to you directly. So with that, it is my pleasure to introduce to the podium Lori McMahon, UVA's Vice President for Research.
[APPLAUSE]
LORI MCMAHON: Good afternoon, everyone. It's a great day for us. We will be speaking with several presidents who will enlighten us about the future of higher Ed. So on behalf of the Futures Initiative, I'd like to welcome you to today's conversation about what's going on in higher Ed.
A quick bit of background on the UVA Futures Initiative: it was launched at UVA in January under the auspices of the Provost's Office, supported by a UVA Strategic Investment Fund award. The goal of the Futures Initiative is to scan the higher education horizon in search of ways that UVA can proactively position itself to thrive in a rapidly evolving world.
Toward that end, the members of the UVA Futures Initiative working group-- a truly pan-university task force made up of representatives from all 12 of our schools, four of our pan-university institutes, and several of our administrative and academic divisions-- have been asking very ambitious, far-reaching questions about what we might do to achieve our strategic goals in striking and innovative ways, and in doing so, to become the university of the future.
Part of that process, through the Futures Initiative speaker series, involves bringing thought leaders-- our panelists today-- from a variety of sectors to grounds to share their visions of the future in their respective fields.
We are excited to kick off this series with today's panel, with its focus on higher Ed futures, and to be hosting three distinguished presidents from trailblazing institutions: Santa Ono, President of the University of Michigan-- welcome. Harriet Nembhard, President of Harvey Mudd College-- welcome. Michael Crow, President of Arizona State University.
They are joined by our own president, Jim Ryan, for what is sure to be an illuminating and inspiring conversation. I think we're in for a real treat today. But before we get to that conversation, I want to pause to appreciate the unique significance of this moment.
It's really hard to think of a time-- not even in 1817, when Thomas Jefferson, James Madison, and James Monroe attended the cornerstone laying ceremony at Pavilion 7-- when we've had so many presidents on grounds at the same time.
And while it might seem an apples-and-oranges comparison to set American presidents next to university and college presidents, when you think of the sheer collective reach of these institutions, and the hundreds upon hundreds of thousands of people they educate, employ, serve, and otherwise impact annually, perhaps the comparison with the leaders of a young America isn't so far off.
Indeed, Jefferson, Madison, and Monroe in 1817 could only dream of what America and American higher education would ultimately become. The states our visiting presidents hail from-- Michigan, California, and Arizona-- hadn't even been formed in 1817.
So in the 200 plus years intervening, it's humbling to ponder just how vast America has become geographically, of course, but also socially and technologically. And how higher education has played a crucial role in the spectacular growth of our nation.
So perhaps it's no coincidence that our visiting presidents, during their time with us on grounds, are staying at the Colonnade Club, which, nestled between Garden 7 and Pavilion 7 in the Academical Village, is a literal stone's throw from the historic cornerstone.
In a sense, we are coming full circle today as these presidents are here to join us in laying a new foundation, a cornerstone of ideas, and the possibilities for a monument to the future of higher education.
We know that our mission is to harmonize the concepts of being both great and good. Across grounds, we keep asking ourselves several questions. For example, how might we integrate the great of scientific and technological advances with the good of humanist legacies?
How do we align the great of academic research and discovery with the good of public-facing impact? How do we balance the great of fast moving innovation with the good of institutional stability? How might we unify the great of profound scholarship with the good of enabling young people to make their way in a complex world?
Each of today's panelists is poised to explore with us answers to these questions to empower us with ideas and inspiration as we learn from them about the many impressive forward-looking initiatives at their respective institutions.
At the University of Michigan since 2022, President Ono's signature effort has been to deliver a life-changing education, especially in the fields of human health and well-being, civic and global engagement, and climate action, sustainability, and environmental justice.
Through Vision 2034, a comprehensive strategic visioning process, President Ono has set the University of Michigan on an aspirational path to imagine what it might achieve in the next 10 years, with a special commitment to defining the overarching goal of a public university as being in service to humanity.
President Nembhard joined Harvey Mudd College last year as president, and she is singularly positioned to lead it forcefully into the future. A renowned voice at the national level for transforming undergraduate STEM education, President Nembhard has been recognized by the National Science Foundation, the National Academies of Sciences, Engineering, and Medicine for her expertise in this area.
This makes her the perfect leader for Harvey Mudd College, not only because it is widely regarded as one of the nation's best undergraduate science and engineering colleges, but also because Harvey Mudd distinguishes itself by challenging the norms in STEM education.
Toward this goal, President Nembhard's initial focus is on inclusive pedagogy, increasing diversity in STEM, and empowering students through campus-level infrastructure for civic learning and engagement.
President Crow has spearheaded Arizona State University's ambitious evolution into one of the world's best public research universities. At the helm since 2002, he has orchestrated a dramatic rise in students, schools, people, and programs in order to further ASU's mission to be student-centric, technology-enabled, and focused on global challenges.
In President Crow's 22 years at ASU, enrollment has gone from 55,000 to 144,000, while at the same time, 25 new interdisciplinary schools have been established, including the School of Earth and Space Exploration, the School for the Future of Innovation in Society, and the School of Human Evolution and Social Change.
President Crow has also launched multidisciplinary initiatives like the Biodesign Institute, the Julie Ann Wrigley Global Futures Laboratory, and the nation's first School of Sustainability. Finally, our own President Ryan has ushered in an era of transformative growth at UVA:
crafting the Great and Good strategic plan, coordinating the Grand Challenges research investments, and establishing a new School of Data Science and a new Karsh Institute of Democracy, plus a new performing arts center.
All of this, while making UVA shine as a welcoming beacon of opportunity through initiatives that have heightened the university's affordability and accessibility, that have enhanced undergraduate life and that have fostered genuine community engagement.
This is but a small sampling of the accomplishments of these distinguished panelists. But what is abundantly clear is that we have before us four presidents who are not just making history, but more crucially, shaping the future. Help me welcome these presidents.
[APPLAUSE]
It is now my pleasure to welcome President Ryan to the podium.
JIM RYAN: Can I sit here?
LORI MCMAHON: You may sit there. Yes.
JIM RYAN: Thank you. Thank you, Lori, for that introduction, which is very kind. And I'm glad you set expectations low by comparing this group to founding presidents of the country. So we'll try to live up to that.
And thanks to the UVA Futures Initiative for sponsoring this event, and especially to Phil Bourne, who has spearheaded this from the beginning. Thanks. The most thanks go to our panelists. It is a genuine honor to be among three of the most accomplished leaders in higher education.
And I'm delighted that you're joining us, and I am eager to hear your view of the future of higher education. I know you all brought your crystal balls. And over the next hour and a half or so, we're going to be able to see what you see.
JIM RYAN: But first, I want to make sure-- is it OK if we go by first names?
SANTA ONO: Sure.
JIM RYAN: OK. Feel free to call me President Ryan.
[LAUGHTER]
So I would like to start with a general question about the future of higher education and how you and your institutions are preparing for it. You can pick whatever period you like, but I'm not thinking of the next few years-- think 10, 20, 30 years and beyond.
And Michael, I'd like to start with you. You are, in effect, the dean of presidents, having served for more than 20 years, which is truly extraordinary and deserves a round of applause.
[APPLAUSE]
MICHAEL CROW: You just need to stay low and keep moving.
JIM RYAN: You've written about the New American University, so I wonder if we could start there and how that relates to the future of higher education.
MICHAEL CROW: Well, I mean, the premise of that book, and of the theory our university, Arizona State University, is now a prototype of-- after 22 years of effort by our faculty, our staff, our team, our donors, and everyone else that's been involved-- is that the US has been through several very successful evolutionary phases in the development of these institutions in our democracy called higher education institutions, which are essential to our success.
So there were the colonial colleges, there was the initial conceptualization of the public universities, all of which were in the south initially, including Virginia, Georgia, South Carolina, North Carolina. Virginia coming on because they didn't have the religious communities that built the schools that were in the northeast.
And then the two truly unique American phases that followed: the land grant colleges and universities, including the historically Black colleges that emerged, and then the emergence of the American research university.
And I think what happened along the way was that we won World War II. We became this superpower. We began thinking we were educating all kinds of people. And we didn't realize that the country was going to grow to 350, 400, 500 million people, that it would become unbelievably diverse.
And that the old antiquated models of a few students in a classroom with a few brilliant professors would be necessary but insufficient. And so we started out on a task of constructing a University which could have a faculty equal to the faculty anywhere and a student body that represented the totality of our population in terms of its socioeconomic diversity, access, and excellence being the phrases that we use.
And that you could design an institution using technology and using innovations that could become not a replacement for the schools that already existed, but a new version of a university. And we call that the New American University.
We've been at this long enough that we have found a way to do that. We just broke $1 billion of research expenditures, according to NSF calculations, without a medical school. While at the same time, last year, graduating almost 40,000 graduates of the most diverse student body that you can possibly imagine.
And we did that by basically assuming that in this design, different than other designs, faculty members could be projected. So we basically calculated that our faculty, if empowered with technology and innovations on campus, could deal with a large and complex student body.
It didn't mean that courses couldn't be small, et cetera-- they can be. But if you did that right, that same faculty could graduate two or three times more students than they have on campus by using a technology mediation function.
So we built thousands and thousands and thousands of courses and spent hundreds of millions of dollars and built hundreds of degree programs. And our faculty now are projected in a way where we've graduated 100,000 people with college diplomas of the highest quality who were unable to finish college in the system that makes up the United States, where more than half the people that go to college never finish.
So the idea was to basically do the following. The idea was that higher education in the United States has been unbelievably powerful, unbelievably successful, and is wholly inadequate to the future. We need more people to be educated, not fewer people to be educated.
We need more research and discovery activities, not less. We need more and different kinds of institutions. And so we're trying to move to the point of what we call a breakout. So we built 40 new transdisciplinary schools. We used to have 15 astronomy majors; we now have 500 astronomy majors. We have 9,000 biology--
JIM RYAN: Meaning 500 different majors in astronomy?
MICHAEL CROW: No, we have 500 students majoring in astronomy. We have 800 majors at the university-- 800 degree programs. So the notion was that in the American plethora of institutional types-- I used to be a trustee of Bowdoin College, which is a wave one school in Brunswick, Maine--
you can have those colonial colleges that are still thriving, still advancing as America's Greek academies. And you can have the great examples like the University of Virginia, which is what we call a wave four research university.
We're also a research university. And some research universities and some others can then also scale. And that means scale and differentiate. This will be my last point, but I've made this point to some of your colleagues this morning.
The last thing the world needs-- there's two elements to this. The last thing the world needs is more smarty-pants at universities telling the rest of our society: we have all the answers, you may not graduate if you come here, we're going to charge you a lot of money to show up, and we have all the answers. We've got to stop all that.
We've got to figure out how to be more connected in many, many, many ways. And we also need new kinds of universities-- universities that are not just replicants of each other. Why does everybody have the same engineering school, and everyone has the same political science department, and everybody has the same geology department and anthropology department?
And then faculty like mercenaries just move from one to the other, trying to get the best advancement opportunity for themselves. I used to be on the faculty at Columbia University, and we went through this all the time.
So my point about the future is that we need more differentiation, more design to specific sets of outcomes on a national scale, more ability to connect to the people. And we also need new kinds of universities and colleges to emerge.
JIM RYAN: OK. You raise a lot that I'm going to come back to. But I just want to clarify one thing. You're not suggesting that all schools should become like Arizona State-- Arizona State is just one example of different possible types.
MICHAEL CROW: Yeah. So one of the books I wrote about this is called The Fifth Wave. And it basically argues that we need a new kind, another wave of institutional design that will someday be followed by another and followed by another and followed by another.
True to higher education, the earlier waves are all still here. Bowdoin and Amherst-- and my wife went to Oberlin-- those colleges are fantastic institutions. They're all doing fantastic things. They're not going anywhere. Now, they're very expensive, they're small in scale, they're a certain kind of environment.
We need all of these things in a country of this size and this robust set of cultural differences. We just became very satisfied with ourselves. And I will say one other thing that happened: on my desk is a catalog of the University of California at Los Angeles from the summer of 1950.
Their admission standards in 1950 are our admission standards today. We admit every single student who has a B average from high school and took 15 particular courses. And that means, if you're going to do that, you're going to have a lot of students. So if you want to control enrollment, you keep raising your admission standards.
Well, we're not ever going to raise our admission standards; we're going to keep that as the qualification, and we're going to adjust. Now, some universities had better figure out how to do that, and they'd better figure out how to graduate the students who go to those universities.
And some of those had better be research universities, or only elite high school performers will be able to go to a research university, and the country will decline over time as a function of that. So we maintain those standards.
And at the time, in 1950, at the University of California in Los Angeles, there was no tuition. So at our institution, for our students who don't have the ability to pay, there's no tuition. And we've made this work at scale: we've got 43,000 Pell-eligible undergraduates attending our institution who have substantial needs and have to be engaged with us in certain kinds of ways. That requires a new design. This is not a system. This is an institution.
JIM RYAN: Right now, yeah, it's remarkable. So, Harriet, if I'm doing the calculations right, you could fit about--
HARRIET NEMBHARD: Oh, don't do that calculation. Don't do that calculation.
JIM RYAN: 150 Harvey Mudds into Arizona State. So a very different model.
MICHAEL CROW: We'll take you.
[LAUGHTER]
HARRIET NEMBHARD: Careful there, Michael.
[LAUGHTER]
JIM RYAN: So how are you at Harvey Mudd preparing for the future?
HARRIET NEMBHARD: Well, I think it is important to situate what Harvey Mudd is as a project-- very different from either of these institutions, and I'm an alumna of both. But Harvey Mudd is an interesting project. It's less than 70 years old. It was founded in 1955, during the Cold War, when there was still a race to put a human on the moon.
There was a recognition that the types of problems that we were facing as a country in STEM would keep coming, and we would need to have an institution that would meet those challenges with a focus on humanity and society.
So Harvey Mudd was established as one of the Claremont colleges. The Claremont colleges are a consortium of seven colleges in Southern California. There are five undergraduate institutions and two graduate institutions together that are about 8,000 students in total.
Harvey Mudd itself is about 915 students-- or 915 nerds, as I like to say, because all of our students have STEM majors. And we have just 12 STEM majors. That's it.
But the college is structured as 1/3 of the curriculum is based on the fundamentals in mathematics and sciences based on the formation of the grande écoles. 1/3 of the curriculum is on a program of humanities, social sciences, and the arts. And 1/3 of the curriculum is in the major.
We have 8,000 alumni. So that gives you an idea of the very small, unique scale of the project. But as Michael was saying, we don't need isomorphism. It is a different project with a different intention.
And as we think about the future at Harvey Mudd, I made the remark that I'd gone to JPL and had a little reception for the 20 or so Mudders who are currently working there. And again and again, as I meet students from Harvey Mudd, they speak about how they were able to really distinguish themselves in their career because they were the engineer who could write or the physicist who could write.
And one of the things that we've thought about and that I've posited is that in a handful of years, that same story may be because I was the engineer who could still talk to humans and relate to humans amid all of the technology that we see advancing.
And so what does that mean for us at Harvey Mudd? When we say our vision is STEM for the world, it really operates on two levels. The first level is that we have a rigorous, hands-on, nearly Socratic approach to teaching very rigorous, difficult subject matter.
And at the second level, we have an approach that says that we are engaged in this as humans. Wherever you come from in the world, you have another home at Harvey Mudd where people care about your well-being and your ability to be a good citizen.
So in our imagining of the future, it really is about, how do we continue to build out these very intentional, close knit relationships between students and faculty at this 1 to 9ish or so teacher-student ratio, about 100 faculty, 900 students? And how do we use that to advance the very best of what we need to meet the challenges in the world that we see today?
JIM RYAN: Santa, what do you think? President of the University of Michigan in between the two in terms of size, the state's flagship, how are you thinking about the future?
SANTA ONO: Well, just like you, we went through a visioning process, and we called it Vision 2034-- a decade-long vision. And we have an accompanying campus master plan, which is a few decades in length, so that we can strategically allocate resources and build a campus that supports that vision.
I would say that through that visioning process, a couple of things came through. And it was really not top-down; it was a ground-up kind of process, where we had about 30,000 inputs from students, faculty, and staff.
And I really believe in shared governance, and that's one of the very worthy commitments that I made when I started. And so we heard a couple of things. One is, as you know, University of Michigan is a comprehensive research university, very large graduate school. And one of the things that we heard was that there was a strong desire to maintain disciplinary strength across not only STEM disciplines, but arts, humanities, the social sciences.
And that's a commitment we're going to make. Because no matter where knowledge creation goes, or where the job market might go, each of those disciplines matters for critical thinking and for creating an educated human being and citizen. It's really important for universities like Virginia and Michigan to have disciplinary strength.
It may be viewed as a traditional model, and it is in many ways. But we at the University of Michigan believe that there's a very important place for universities like Virginia and the University of Michigan. So: a commitment to disciplinary strength, and to the importance of the arts and humanities for perspective and intercultural understanding-- all those sorts of things we believe in.
And so that's one thing that was loud and clear. And we actually share almost the exact same founding: even though Michigan wasn't a state at the time, we were founded in 1817. So pretty much the same age.
JIM RYAN: Did a former president found your university, though?
[LAUGHTER]
SANTA ONO: We get up too early for it. We get up too early for it. But that was not a sound. But we have a huge amount of respect for UVA. And I've been a fan for a long time, by the way, because I grew up in Baltimore, Maryland. And remember Ralph Sampson and ACC and all that.
But to answer your question: beyond disciplinary strength, there are two things we're focused on, with that as a foundation. One is how we serve Michigan and how we serve the United States. We believe, as a public university, that we have an important role in contributing to the innovation economy-- to job creation and attraction.
And so innovation is something which is one of the focuses of our current vision. And we think that in addition to creating an environment and milieu where innovation can be supported by students and faculty and students are central to that innovation, we believe that one of our responsibilities is to be part of the ecosystem.
And so we would love to, for example, partner strategically with the University of Virginia. We would love to collaborate with Harvey Mudd and Arizona State because we think that that's important for the national competitiveness and also regional competitiveness.
If you think about, for example, the CHIPS and Science Act, and the fact that we're losing ground in the semiconductor industry, it's because we're not meeting the workforce need. And we're not sharing; it's viewed as a competition between regions and hubs and things like that.
And one of the commitments of the vision is to be a partner with other universities, community colleges, other kinds of institutions, liberal arts colleges, but also to be a strategic partner with government and with industry. And I think we can do that. We can do that better. And I'm here because, like I said, I have huge respect for UVA and I'd love to see what we can do together.
JIM RYAN: Thanks for that. I want to come back to the topic of disciplines when we talk about research, but first I'd like to talk a little bit about teaching and the future of teaching. And Harriet, let me start with you, either at Harvey Mudd or more generally-- and I'll ask all three of you the same question: how do you see teaching changing, if at all, over the next 10 to 20 years, in terms of what's taught, where it's taught, how it's taught, and when it's taught?
So anything from online education to new disciplines or new courses. It seems like a key feature of Harvey Mudd is the intimacy of the teaching experience, and that seems core to it. But do you have a sense of whether that's destined to change? Is that something you're going to hold on to? Or more generally, when you think about teaching on college campuses, do you see huge changes in it?
HARRIET NEMBHARD: Yeah. I think that there will be huge changes in how teaching is done, how education is delivered. But let me situate this for just a moment and say, I did my undergraduate at a small liberal arts college, one of the Claremont colleges, Claremont McKenna College.
And I've had an education and a career across large R-1s since then. Coming back to this campus now, and taking another look at what a liberal arts education means, has been very significant for me. And I can maybe describe one experience that speaks a little bit to your question.
In the second semester of my first year there, I decided that I was going to go to a Harvey Mudd class. And I spent a week attending the first week of our sophomore Core Impact course.
This is a course that has been recently redesigned by the faculty to be an infrastructure that will allow us to be flexible and teach new interdisciplinary courses as they come to be recognized and needed.
So the way this Core Impact course works is that, for the first three years, it's focused on climate. And it's taught in the second semester of the students' sophomore year to the whole cohort of 200 students-- 228, to be specific. And it's taught by seven professors collaborating to teach this one course: four STEM faculty and three humanities faculty.
And that, just on the surface, really bent my mind. I mean, how many times have I co-taught a course in my entire career? Maybe twice. We have this very rigid way of focusing on here is your teaching credit, how many courses you teach for the year, and so forth, that I think precludes a lot of co-taught courses.
But I really saw the power in it when you see faculty coming together focused on teaching and delivering this course, as I said, on climate change. And so this was week 1. First day of class, go in. Here is data on all of the wildfires that have occurred in California over the past 50 years.
Acreage burned, buildings burned, lives lost, source of fire if it's determinable, and so forth. OK, now, students, turn to your neighbor and write Python code to visualize and map this data. Do, do, do, do. 20 minutes later, the students have mapped 50 years of data. And so forth through that first class.
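[EDITOR'S NOTE: For readers curious what that 20-minute in-class exercise might look like, here is a minimal sketch in Python. The dataset, column names, and sample rows are invented for illustration; they are not the course's actual data.]

# A minimal sketch of the in-class exercise described above: map
# 50 years of (hypothetical) California wildfire records.
import pandas as pd
import matplotlib.pyplot as plt

# In class this would be something like pd.read_csv("ca_wildfires.csv");
# a few made-up rows keep the sketch self-contained and runnable.
fires = pd.DataFrame({
    "year":      [1975, 1988, 2003, 2018, 2020],
    "latitude":  [39.8, 34.1, 32.8, 39.8, 37.5],
    "longitude": [-121.6, -118.3, -116.8, -121.4, -119.3],
    "acres":     [25_000, 12_000, 273_000, 153_000, 380_000],
})

# Plot each fire at its coordinates; marker area scales with acreage,
# and color encodes the year so change over time is visible at a glance.
fig, ax = plt.subplots(figsize=(7, 6))
points = ax.scatter(
    fires["longitude"], fires["latitude"],
    s=fires["acres"] / 2_000,   # scale acres down to a readable marker size
    c=fires["year"], cmap="viridis", alpha=0.7,
)
fig.colorbar(points, ax=ax, label="Year")
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
ax.set_title("California wildfires over 50 years (illustrative data)")
plt.show()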
The next class, the humanities faculty are leading. Now let's think about the decisions that are made, and perhaps the biases introduced, by the data that are not there: we don't have data on access to capital to rebuild; we don't have an understanding of whether this was a rural community or an urban one; and so forth.
And then a very sophisticated conversation, guided by the humanities faculty, to pair these ideas together. So again, when we talk about 1/3 in your math and science fundamentals and 1/3 in humanities, these are not distinct.
The idea is to have fluidity between them, so that we are really arguing and synthesizing toward the challenge that has to be met, in a very broad way. And this, again, speaks to a change-- I would say, perhaps especially at universities, a sea change. I have never experienced that in all of my days in higher Ed.
What does it mean to be able to pull that kind of power of teaching in to help students to build their formation as competent STEM leaders and as citizens? And so this is the kind of work that I think will continue to be challenging, that will continue to challenge us.
And what are the things that we'll need to do as a college and I would say indeed as a sector, a higher Ed sector to meet the needs that students have to explore these very complex competencies in new and insightful ways?
And so I think the things that have to change, that continue to change in that type of paradigm could speak to a lot of things that we would have to renegotiate across higher Ed, like I said, from teaching assignments and teaching credit to promotion and tenure, all of these sorts of things that would be underneath that.
JIM RYAN: That's a little bit like bringing interdisciplinary research into the classroom.
HARRIET NEMBHARD: Right. Very much so.
JIM RYAN: Santa, what are you seeing at University of Michigan in terms of how teaching is changing and how you think it might change over time?
SANTA ONO: I think it's an incredibly exciting time. This is going to be noted as a period in the history of education when there was a significant transformation in how we teach. I'm going to tell you about two things we're working on that I'm very excited about-- and more importantly, that the students and faculty are excited about.
One is, about 10 years ago, 2014, the provost of the university, Martha Pollack, who just stepped down as President of Cornell, launched something called the Center for Academic Innovation. It has grown. There are now about 400 faculty that are involved in projects. They're funded by the university.
And they're using all kinds of new approaches: creating new programs, using virtual and augmented reality to enhance the learning experience, bring it to life, and put the student inside it. In addition to the 400 faculty, we have about 50 to 75 students, undergraduate and graduate, who are called Fellows. And they're critical to thinking about new ways of teaching. I'll give you a couple of examples in a second.
And the third thing is that we have a dedicated staff of about 125 people. We have about a 100,000-square-foot facility, about 50,000 of which is new. And we have Hollywood-style virtual reality studios where people can create content.
And so, the kinds of things you can do: if you're a music professor, you can create a virtual or augmented reality situation where a conducting student can, on the same day, conduct the Berlin Philharmonic and the New York Philharmonic, and feel like they're in Carnegie Hall.
If you are an architecture student, you can have a simulation in augmented reality where you can see what it looks like to actually create a structure with different media, see the pluses and minuses of it, and simulate what might happen if you have too much weight in one area or another of a building.
You can actually go back in history: you can be a history professor and go back to ancient Greece, and you can have a situation where you're in the midst of a conflict and it brings the Iliad to life, for example.
And so it's really remarkable. That kind of innovation, which students and faculty are driving-- not the administration; we fund it, but they drive it-- is a game changer. 100% of our undergrads now have that kind of experience. And we know that about 11 million learners globally experience it through Coursera, which UVA is part of too. We'd love to collaborate with you on that. And the other is artificial intelligence. ASU--
JIM RYAN: I was waiting to see how long that would take for that to come up.
SANTA ONO: We were early adopters in embracing that. I really think it's here to stay. And so the important thing is to have a whole, university-wide conversation. And we did that: we had a committee, with representatives from all of our 19 schools, really think through what we're going to do.
And so we invested quite a bit in that. And it's pretty exciting. We're going to announce very soon something that's going to be open source and available to all students, where you can put in your characteristics-- male, lacrosse player, where you were born, what you're interested in, what your hobbies are-- and in seconds you'll get a comprehensive list of all the scholarships you can apply for.
And the last thing I'll say is what the students and the faculty have created. We've given them the sandbox, all the tools, and they've created something called a 24/7 advisor.
So you still have a real advisor. But if you're an advisor, you're happy, because you're not woken up at 3:00 in the morning. You have a virtual advisor that can answer a lot of the questions-- should I take this course or that course if I'm interested in being a chemistry major and I want to go to medical school?-- and you can even get professor ratings on that.
And so I think it's here to stay. And I think that, like the internet, or like going from the slide rule to the calculator, these are things we should embrace, and the academy should come forward with the right way to use them to augment the educational experience.
JIM RYAN: Michael, you've already talked a little bit about teaching innovations at Arizona State. What do you see on the horizon?
MICHAEL CROW: Well, so it's hard for people to grasp the complexity of the democracy. The full realization of the democracy is that we have people from every cultural background, every ethnic background, every religious perspective imaginable.
We have people here from all over the world who have gathered together, and then we decide to cram them through British- and English-model schools and hope that somehow we can get hundreds of millions of people through that, with the great professor standing at the front of the room-- which certainly every student should experience.
And so, with the first paragraph of our charter being that we'll measure the success of our university based on whom we include rather than whom we exclude, and how they succeed, the only way to do that-- even in a small school; size has nothing to do with it-- the only way to do that, given the population, is to assume the following: that everyone is a learner of abundance rather than a learner of deficit.
That if they have a deficit, it's a function of something that's been precluded from them along the way. And therefore, one has to find a way to personalize the learning to the largest extent possible. Now, sometimes that's with the professor, sometimes that's not with the professor. Sometimes that's with a tool.
And I'll use three short examples. We had a student recently-- our first such student-- because we now encourage the use of every learning tool that we have, of which we have developed more than 500. Every learning tool that we have-- on campus, off campus, online-- all the tools are applicable everywhere.
So that means then we have tens of thousands of double majors. We just had a student who was actually in my class, the spring of '23, graduate with five degrees in four years. Now people say that's not possible. You're wrong. It is possible. It's possible because we have devised learning--
JIM RYAN: Five degrees in four years?
MICHAEL CROW: He also won a Rhodes Scholarship that semester.
JIM RYAN: He deserved it.
MICHAEL CROW: No. But what I'm saying, though, is that this kid was not an extraordinary kid. He was a kid who could take advantage of every tool that we had built. So for ultra-high-performing students, we now have the opportunity for them to major in lyric opera and biochemistry while minoring in philosophy at the same time-- no problem.
Using all of these tools for enhanced learning, including new virtual reality tools and everything else you can possibly imagine. Then a second category of student would be a young woman I talked with recently who wasn't able to finish high school because she had two babies while she was high school age.
In our system, in our society, it's like: well, too bad, you're up the creek without a paddle, I'm really sorry for you. And yet one person in 10,000 in that category can, like her, be admitted to the Mayo Clinic's medical school-- which she was, after she finished our online biochemistry degree, developed by our biochemistry faculty in our School of Molecular Sciences with every tool you could possibly imagine.
And the third person-- and just to put this in perspective, this was a hugely humbling thing for us relative to teaching. I'm a professor. I got tenure at Columbia. I grew up writing and teaching like many of the people in the room. Yeah, OK.
That's not the only way to get things done. It turns out you might not be able to go to school. So this young woman from Afghanistan wrote us a couple of years ago. She said, I found your online universal learner courses.
I took four of them. They're all taught by robots-- robots developed by our faculty that enhance the learning outcomes. If you take these four courses and you get a B-plus or better, you have a 95% probability of being able to do very well in the curriculum at the university.
She hadn't been able to go to school since she was 10 years old, lived in Taliban controlled areas. She wrote us a note and said, I took your courses. I got an A in all of them. I'd like to come to your university. We're like, OK. So we admitted her. We got her out of the country, we got her money, we got her a visa, we got her all this. And then she comes to the university and kicks everybody else's butt.
So she's like Frederick Douglass, who didn't go to school, or Abraham Lincoln who didn't go to school and who didn't take some tests. But Frederick Douglass and Abraham Lincoln couldn't be admitted to our universities today.
So we took that model now using every tool that you can possibly imagine. We have 30,000 students enrolled in our pathways program that she was on taking our universal learner courses, and we've admitted 6,000 of them to the university. So 6,000 of her are now students of ours.
JIM RYAN: Which is remarkable. So the idea of five degrees in four years prompts a question about how much time it takes to get an undergraduate degree. Do you see that changing?
MICHAEL CROW: Hopefully.
JIM RYAN: Right. So right now, four years is the default at almost every institution. Some students can graduate a year or so early in the US. Do you see that going by the wayside, and students being able to go at their own pace and finish in the time that makes the most sense for them? And do you think there will be more flexibility in terms of where you actually take the classes, even if you stay within the same school?
HARRIET NEMBHARD: Yeah, I do. And again, to Michael's earlier point, I hope that we have a host of institutions that would be able to offer this sort of flexibility-- from a three-year path to a degree, to less, where it's possible.
However, I would say that I don't immediately see us at Harvey Mudd taking that up. We describe it not as a 120 credit hour experience, but a four-year residential experience. All but six of our students live on campus.
And there is a lot of co-curricular education there, very much with intention-- everything from your ability to continue being tutored in the outdoor classrooms by your faculty members after class, to research: 100% of our students do research with faculty as they matriculate through. So this is a very different model. And a part of it is also being in community with each other.
So, as I said, while I hope that there will be different opportunities and different pathways, Harvey Mudd, a lot of our focus on the mission of building STEM leaders kind of would lead us to explore in other ways first.
And by the way, as a student at Harvey Mudd-- again, all of the majors are in STEM-- you'll go into any class and half of your fellow students will be women. 50% of our engineering graduates are women; 50% of our physics majors are women. And so we have these many foci on what that experience means and what inclusive excellence means in this environment.
[INTERPOSING VOICES]
SANTA ONO: I'd answer in quite a different way-- a lot of respect for Harvey Mudd and that philosophy. We're actually tracking completion rates, and when students attain their degree, whether it's a baccalaureate or a master's degree.
And we hold ourselves accountable to getting students their degrees as quickly as possible. There's a lot of levers to doing that. One is for the university to reach into K through 12 schools and provide them with more options to obtain course credit before they land on campus.
And second, we really are investing in advising, so fewer people fall through the cracks and they can finish their degree sooner. It's not that we want them to leave quickly-- we believe in the importance of the experience-- but we also believe in affordability.
It's a real burden for students and parents to pay for every year, every semester, of education and books. And so we don't apologize for the fact that we're trying to get kids out of college as soon as possible.
It's something that has eroded some of society's trust in education: there are too many students who do not finish their baccalaureate degree. We have to--
MICHAEL CROW: This is the point. More than half who start.
SANTA ONO: We have to seriously explore credit, stackable credit, because if you don't complete your baccalaureate degree but you've finished three years, there are real skills depending upon what your major is that you should get credit for.
And so I think it's incredibly important. The last thing I'll say is that there's an American Academy of Arts and Sciences committee that's looking at all the places where there are leaks in the pipeline-- lost opportunities for articulation with high school, lost opportunities for transfer credit from community colleges or between universities.
MICHAEL CROW: I guess what I would say is that universities are hugely designed for the faculty. Most people don't even know what the word semester means. They don't know why it exists or why it runs when it runs. They don't know why we don't go in the summer, other than that farmers used to not be around.
And so what we haven't done is think about this at a different level. How do we empower our faculty to deal with all different kinds of students, rather than self-selection, self-selection, self-selection for certain pathways?
And so I don't think time is the unit of analysis we should be worrying about. The unit of analysis we should be worrying about is, what does the person know? And so we have this theory that we've now moved toward, which is that our job is to produce a master learner.
How do we do that? How do we find a way to produce a master learner? It will be different in different subjects, but there will be certain overlapping things. And so what we did is change the nature of our semester, which freed our faculty up immensely.
Our faculty have become unbelievably productive-- research, scholarship, community service, teaching more students, teaching online, teaching on campus, picking what they want.
We took the semesters and broke them up into six academic modules over the year-- fall A, fall B, and so on, which are real things. You can teach your face-to-face class in 7 1/2 weeks, be done, and go to your lab in Bermuda and do whatever it is you want to do. We don't care.
JIM RYAN: You have labs in Bermuda?
[LAUGHTER]
MICHAEL CROW: We actually bought the Bermuda Institute of Ocean Sciences--
JIM RYAN: Really?
MICHAEL CROW: Yes, to flesh out the global--
JIM RYAN: Any Virginia faculty here, don't pay any attention.
[LAUGHTER]
MICHAEL CROW: We also have planes and ships in Hawaii and all kinds of other places. So we have faculty that are distributed, faculty that are all over the place, faculty and students that are all over the place.
The notion is, how do you empower this student in whatever time they are able to devote? If they have the resources and they can stay for four years, basking in all of the glory of the institution, have at it. That's what I loved when I was an undergraduate.
I had two majors and three minors. And it was just like a hugely important thing for me. And there was no way I could have done that in less than four years. But we're obsessed with the clock. We're obsessed with the models. We're obsessed with the faculty being the drivers of everything.
Eventually-- and we've talked about this earlier in our group-- society is becoming increasingly unhappy with us stuck-up people at the universities. So now: let's tax the endowments. Let's attack the universities. Let's take away their privileges.
Let's cap the salaries on the NIH grants. Let's do this and this and this and this, and they're just warming up. Why don't we go after four or five Ivy League presidents and execute them in public-- politically speaking?
And that just happened within the last six months. And there's more of that on the way. That's all a function of our inability to focus sufficiently on what our real job is, which is empowering the citizens and the democracy to be successful. The fact is, they are the means, not the end.
JIM RYAN: So I want to come back to that and the point about flexibility. But I'm going to turn to research for a second and talk first about disciplines, which have come up before. One of the oddities I've seen in higher education is the contradiction between how much we emphasize interdisciplinary research and how universities are organized.
So everyone talks about collaborative research, team science, how important it is to bring together experts from different disciplines if we really want to tackle these thorny challenges like climate change.
But then when you look at how universities are organized, they're still organized by schools and departments and disciplines-- most of them, anyway. I know. So this is a footnote-- an asterisk-- for Arizona State.
And then if you look at tenure standards, new faculty are told, you're going to be judged based on your own independent work. So we're working at cross-purposes. And in the meantime, you see a proliferation of interdisciplinary centers and institutes.
And so clearly, there's some tension. And I appreciate the idea that it's impossible to have interdisciplinary work without having disciplines. So it may make some sense. But I am not sure it can last.
If the answer is, well, we need to create a center every time we want to bring together folks who are studying the same topic, what does that do to departments? And is that really sustainable? Yeah-- it sounds like you have the answer.
SANTA ONO: Try to do both. So when we thought about this, like I said, we decided to commit to the breadth of our disciplinary strength. And someone I talked to at UVA just earlier today said that's really important, because if you're a mathematician or an immunologist like I am or a philosopher, you tend to go to those meetings.
And that critical mass to speak about this field is incredibly important to me as an immunologist. If I didn't go to the immunology meetings-- that field moves really, really fast, and I'd miss out. So the way we solved it, and this is the last thing I'll say real quick, is I did an experiment at the University of British Columbia.
We created this research building the size of a couple of football fields. And we said, here are all these benches, here's all this core equipment. Choose where you want to go, whether you're an immunologist or a chemist or a biologist or whatever. Choose where you want to go.
And they came together as interdisciplinary clusters around topics. So they might be focused on juvenile diabetes or pancreatic cancer or neurobiology, whatever. And so they clustered for research collaboration, even though they had different disciplinary homes. But guess what? We thought maybe they would just stay there and not go back to their departments. The opposite.
Their attendance at departmental meetings actually went up, because they knew if they weren't there, they would miss out on the milieu and the reinforcement of being within the department. So I think--
JIM RYAN: So you think it's sustainable--
SANTA ONO: You can solve it with space, but you can also enhance the disciplinary strength by maintaining that community as well.
JIM RYAN: But do you two agree?
HARRIET NEMBHARD: Well, I think if we're going to really support faculty in their interdisciplinary work, it has to not rely on the heroic efforts of the one person who keeps it all together. There has to be some systematic way that we recognize that work for faculty on the tenure clock, on the tenure track.
This is really a part of the academic progression. What can we do to enhance and support their work as interdisciplinary scholars as they go through the organization? So some of the things that we've really focused on at Harvey Mudd have been, in some ways, very technocratic things.
Looking at the promotion and tenure criteria-- what we refer to as the faculty notebook-- and being very specific about giving recognition for interdisciplinary work in ways that are supportive of the faculty.
Similarly, as we look at our budgets, to be very intentional about supporting with resources these interdisciplinary efforts. So those have been some of the things that have been undergirding those efforts so far.
MICHAEL CROW: I think what I would add quickly is that we decided that the last thing the world needs is another generic public university doing the same thing that the other 800 public universities are doing, and doing it inadequately, net-net, overall.
There are some exceptions-- a few dozen exceptions. But beyond that, there are a number of performance issues. So having said that, we said, OK, why don't we rethink the logic? Not everything is as simple as Don Stokes' Pasteur's quadrant fight between the four realms of science. So we said, is it OK for us to have exploration as one of our science areas?
Yes. How about outcomes? Yes. How about just pure reductionism? Yes. How about pure creativity? Yes. So we decided to allow schools to evolve in directions where they broke down their departmental barriers. And so we now have schools that are focused on outcomes as their objective.
They're not in the medical school, which is an outcome-oriented design school, but in other areas-- like sustainability and ocean futures and conservation futures, the new school that we're designing. We've started 40 of these and did away with 85 academic units along the way, all with consensus votes by the Academic Senate.
JIM RYAN: So you're moving away from the disciplines to a certain--
MICHAEL CROW: Well, no. What we're moving toward is the idea that the world doesn't need more disciplines only. So a person can write in multiple disciplines, be recognized in multiple disciplines, while also advancing in these new areas.
So in our School of Earth and Space Exploration, that process has allowed us to increase the number of majors attracted to exploration, as opposed to geology or astronomy-- attracting many more people to the program, much more investment and philanthropy to the exploration objective, and much more research funding.
We have a 10x increase in research funding and research expenditures in that unit, and more majors than we've ever had, more diversity than we've ever had. And we have the traditional degrees-- geology and astronomy and astrophysics and astrobiology.
But we also have a degree in space exploration and space strategy and all kinds of other things, filled with all these students and faculty who want to learn in that modality. So for us-- I'm not suggesting everyone has to do this.
But what I am suggesting is that most places are too rigid. Here's a story from when I left Columbia. I was the founder of a thing called the Earth Institute at Columbia, among other things that I worked on. And I helped get a new ecology, evolution, and environmental biology program going.
And I remember, as I left-- it was one of the first new departments in 50 years-- they said, finally, you're getting out of here so we can get rid of that thing. And so I will note that it's still there 22 years later.
HARRIET NEMBHARD: I'd like to add to that, Michael. I think that approach gives us at least a couple of to-dos. I think it means that we need more experiences for graduate students to have access to these opportunities in these types of schools.
And we talked about perhaps some ways to have some exchange experiences. But I think it also means we need precision in how we hire interdisciplinary faculty and bring them into our organizations as well.
Right now, there are still a lot of faculty lines defined by disciplinary area. And I think that approaching those two parts of the pipeline, if you will, in ways that facilitate interdisciplinarity is also important.
MICHAEL CROW: So one visualization I have for this: there are the reductionists, who are boring down into the understanding of nature in every possible way, down to the finest subatomic particle. And then there's the systems-level thinker-- a discipline like sustainability-- who is boring up to grasp the broadest interconnected whole. We now produce both of those.
And it's not a wise thing for a person on the upward screw to go to a reductionist department and hope to have any chance of success, because they'll be annihilated. What we have now is a way for our PhD students and our faculty to move in both directions. And it's a conscious, planned thing on our part.
JIM RYAN: So I have a lot more questions to ask you, but I want to make sure we leave time for audience questions. So someone is collecting questions.
HARRIET NEMBHARD: From the questions, it looks like we might have homework, guys. [INAUDIBLE] Exciting to see all the cards flying around.
MICHAEL CROW: The first question is, why did they let you on this panel?
[APPLAUSE]
JIM RYAN: And the next question is, who do you mean by you?
[APPLAUSE]
MICHAEL CROW: There you go.
JASON NABI: So it's only dawning on me at this very moment, the irony that the requested method of delivery for the Q&A was not in and of itself very futuristic. But thank you. There's a lot here. And I'm thinking that we will have time for one.
[LAUGHTER]
But this is wonderful. And we will read each and every one of these and circle back to you. This question seems to be more of a choose-your-own-adventure. So it looks like you take it one direction or the other.
One, what is the most existential threat we will face in the next decade? And what will be the role that higher education plays in addressing that threat? And/or, what excites you most about the future?
[LAUGHTER]
And how does this affect your vision for the future of higher education?
HARRIET NEMBHARD: Yeah-- go for a hot planet or AI? Which way do you want to--
MICHAEL CROW: I would say accelerated technological change with no mechanism for our society to educate quickly enough for people to stay up and keep up with it before they begin to lose context.
JIM RYAN: That's the threat or the opportunity?
MICHAEL CROW: It's the threat.
[LAUGHTER]
HARRIET NEMBHARD: And the opportunity.
MICHAEL CROW: It's both.
HARRIET NEMBHARD: I like it. Like I said, I think this is very encompassing. What we're really focused on is helping-- have you all heard of the book Generation Dread? I recommend it. It really taps into the anxiety that a lot of students of college age have right now about the planet that they are inhabiting.
And I think it's very important for us to be able to listen to the students and help them to navigate their way through solutions for a warming planet.
SANTA ONO: I would say, looking back to when I was in college a long time ago, we were just trying to figure out how the immune system works. It was a black box. A lot of things were black boxes back then.
And if you think about what a sophomore at the University of Virginia now knows, it's quite far advanced from what we knew back then. It's a credit to what universities have done-- much of that occurred at universities like the University of Virginia. Now things that we only dreamed about are going to happen. There are drugs being generated that will reverse chronic diseases.
We can now edit genes. We can now take cells and propagate them in vitro, put them back and repair diseases that result from degeneration. It's remarkable. So I'm going to end with an optimistic note. I can't wait to see what the students of today can do with the knowledge that didn't exist when I was their age. And the last thing I'll say is that we hosted Barack Obama when I was in Vancouver.
HARRIET NEMBHARD: He was at Harvey Mudd, too.
[LAUGHTER]
SANTA ONO: And it was--
MICHAEL CROW: An issue.
[LAUGHTER]
SANTA ONO: Jefferson founded UVA, of course. But what Obama said at the end-- looking at the mountains, because you can see beautiful mountains there-- he was talking about all these difficult geopolitical challenges that are real. But what he said is, I have a belief in the future of civilization because of the quality of the youth at our universities. So, something optimistic.
[APPLAUSE]
JIM RYAN: Well, I would underscore that point, and I want to thank all the panelists. I'm sorry we didn't get to more questions. But I want to turn it over to Lori for a few closing remarks. Please join me again in thanking the panelists.
[APPLAUSE]
LORI MCMAHON: I'm sure you now understand why I equated these four outstanding presidents to Jefferson, Madison, and Monroe. So please help me congratulate them again on a great panel.
[APPLAUSE]
And I just wanted to share a few of my thoughts and the lessons that I learned today. Hopefully, you heard the same messages. There are several important points that we heard. Serving students in varied, diverse ways is our job.
There are so many learners, and innovation in teaching includes being flexible and finding many ways to engage with our students and to deliver education. And we must be inclusive. Inclusivity is important, and personalized education is critical for all of the learners we have the responsibility of educating.
We must provide opportunities for students and faculty to be the drivers of innovation-- they have to have opportunities to engage and to interact. And collisions are important in driving that innovation. The question we heard was about fear: what is the biggest threat to higher ed? But we also heard a positive spin on that question, too.
But it seems to me that many of us have some fear of the future. And so when we have fear of the future, we may not think hard about how to get to solutions. So being inspired by the future is where the Academy comes in.
And we, in the Academy, are in an extremely privileged position. Ultimately, it is our responsibility not just to predict the future, but to work to create the future. Thanks again, and we hope to see you at the reception. Thank you, panelists.
[APPLAUSE]