Monday Musings from The Kravis Center
May 3, 2021
Focus: How can we prepare students to be "future proof"?
Dear Colleagues,
How can we prepare students to be "future proof"?
A recent paper by MIT and Boston University researchers reveals that "for every robot added per 1,000 workers in the U.S., wages decline by 0.42% and the employment-to-population ratio goes down by 0.2 percentage points". But it is not just robots — as an article in the New York Times also observed, "White-collar workers, armed with college degrees and specialized training, once felt relatively safe from automation. But recent advances in A.I. and machine learning have created algorithms capable of outperforming doctors, lawyers and bankers at certain parts of their jobs."
How can we equip our students (and ourselves!) to be, in a sense, "future proof"?
Kevin Roose, author of that NY Times article and of a new book, Futureproof, explained in an interview on NPR that "we have been preparing people for the future in exactly the wrong way: We've been telling them [to] develop these technical skills in fields like computer science and engineering. We've been telling people to become as productive as possible to optimize their lives, to squeeze out all the inefficiency and spend their time as effectively as possible, in essence, to become more like machines. And really, what we should be teaching people is to be more like humans, to do the things that machines can't do. ..."
Before they were machines, computers were people. Alan Turing, in an attempt to describe digital computers before their widespread use, leaned on the analogy of "human computers," or people performing operations and "following fixed rules; he [the human] has no authority to deviate from them in any detail." As technology has advanced and society has evolved, we have dropped any qualifiers — computers are machines, and from dating to politics to policing, those machines are affecting more and more aspects of our lives in ways that are less and less visible.
Of course, as educators, our charge is not simply to prepare students for the job market, but also for a fulfilling, well-adjusted life (e.g. our mission statement). So what does the "best self for the common good" look like in an algorithmic world? Perhaps it means letting machines do what they do best — and understanding how they function. As we saw from The Social Dilemma, algorithms run amok can have dire consequences; we need to understand the ways in which AI is creeping into our daily routines. ("Program or be programmed," as media theorist Douglas Rushkoff advises.) But any 21st century curriculum must emphasize our uniquely human traits like collaboration, creativity, persistence, and passion. It is unlikely that there is space for traditional will-this-be-on-the-test approaches to teaching and learning in such a curriculum. We need to be quite unlike Turing's human computers: comfortable breaking the rules and upending the status quo.
After all, it seems unlikely that technology alone will create a more just, inclusive, and equitable society. (It may well do the opposite.) We cannot wait for some AI to solve the climate crisis, homelessness, or mass incarceration. These human problems require human solutions. At a time when so much of our world feels like it is becoming less human, asking our students to examine and uphold the human experience, and to value the characteristics that make us human, will be more necessary than ever.
Matt, on behalf of the Kravis Center
Click here for professional learning opportunities for this spring and summer. Follow us on Twitter!