Open educational resources, or learning critical evaluation

Open educational resources (OER) have become a sort of holy grail in ed tech circles. They combine the sharing and openness of the web with carefully curated resources designed to supplant the textbook. They provide additional options for instructors and cost-savings for students. They showcase the best aspects of a community-oriented pedagogy.

Or do they?

I like the idea of open educational resources. I really do. But I still find something troubling about the way they are used and discussed. OER still represent a carefully curated set of instructional resources aligned to a particular way of thinking about the world, often conforming to particular standards – what I might call the “textbook view” of life – and are thus implicated in providing an overly sanitized picture of the world. They insulate students from information “out there” by pre-digesting it and packaging it for ready consumption. This makes sense given the way we think about school: the place where one goes to get the right information about the world and to learn the things one needs to get a job. But if learning is actually something else, something more difficult and trickier to teach, then OER fall into the same trap as textbooks: they don’t help students learn how to learn. They don’t help students learn how to critically evaluate the sources and arguments that exist “in the world.” They take the possibilities of the web, a fundamentally open system (at least in theory), and offer a sanitized, curated view of it. It’s great that OER save students money, but it’s not so great that they don’t advance critical faculties or pedagogy.

How do we help students learn how to critically engage with the kinds of messages that exist “out there”? How do we help them flourish by stoking their innate sense of curiosity? How do we break out of the “textbook view” of life and help students explore the world beyond the walled gardens of our own creation?

But of course, I’m not the first person to think about this. For a much more eloquent statement of this issue, I urge you to watch Gardner Campbell’s “Ecologies of Yearning” talk at Open Ed 2012. And please use the comments section if I have missed some examples of OER that really do expand the boundaries of student learning rather than reinforcing the textbook view of life.


Intentionality, or minding one’s goals

Routines can be very comforting. On most days I wake up, make coffee, pet the dog, read the news, and chat with my wife for a few minutes before really starting the day. I find this a perfect introduction to being conscious. Similarly, when I get to work I generally open my calendar, respond to any pressing emails, and put together a game plan for the workday. Both of these routines are beneficial for me and have become a sort of natural flow.

Other routines that are just as comfortable, however, might actually keep me from doing the things I want to do. Mondays and Fridays are not terribly dissimilar, as I generally follow the same routine on each of these days. Mondays in April look very similar to Mondays in October. Following the academic calendar is straightforward, with peaks and valleys occurring around the same time every year. There is a certain hypnotic rhythm about it.

I imagine that one could get lost in this rhythm, find that five years have elapsed, and be no closer to one’s long-term goals. While it would be nice to say this hasn’t happened to me, the reality is that I spent the better part of six years in a corporate IT job and came out the other side having made only slight progress on my long-term goals. Perhaps my goals at that time were unfocused and slightly out of touch with reality, maybe they were the wrong goals, or maybe I just didn’t make time for them. At any rate, I don’t want to find myself in the same position in another five years.

The best solution to this problem that I can come up with is both obvious and simple: build “goal-reaching time” into my routine. I’ve been saying for years that I would like to write more, especially in this space. The only thing that will put words on this metaphorical page is writing them, and the only way to write them is to make time to write them. Like many people, I try to excuse my lack of effort via the “I don’t have enough time” retort, but I no longer believe this to be a legitimate excuse. Saying “I don’t have enough time” is the same as saying “I don’t value this thing enough to allocate time to it.” If I set attainable goals and truly valued them, I would spend time trying to reach them.

Competency and credit

I worked as an IT support technician for several years in college and after completing my bachelor’s degree. I liked the work, for the most part – helping users figure out why their Windows [version redacted] machines weren’t working as expected and helping to administer a few servers – but it was clear to me almost from day one that the vast majority of the content I learned in college wasn’t going to be particularly helpful in this business (not counting the ability to speak, write, and think clearly, which are invaluable anywhere). This may have something to do with the fact that I was an interdisciplinary humanities major rather than a computer science major, but it didn’t change the fact that I needed further training in order to do my job well. Some of that training came informally, by learning on the job. Some came more formally, through training seminars and workshops – some provided by my employers, others that I sought out myself.

This is the traditional path in the IT support world. Some higher-level techs come out of college with degrees in computer science and hit the ground running as programmers, network administrators, or DBAs. But I would guess that most IT folks work through tutorials and study for an exam like A+. Passing such an exam demonstrates competency in a particular set of skills. Other competency exams await those looking for further certifications or career advancement, even those who do graduate from college with computer science degrees.

This model seems very effective to me. If I study for and pass an exam with a known competency undergirding it (like understanding how to troubleshoot PC hardware and software), then a potential employer has a verifiable way of knowing that I understand how to perform certain types of work. When someone graduates from college, though, what does a transcript really offer to a potential employer? In my interdisciplinary studies degree, I learned to write much better than I had in high school, but there was no writing exam that certified my competency in writing academic research papers. Though I tried to absorb Weberian sociological analysis, when I finished my degree my transcript did not say that I was “well versed in the Protestant work ethic.” My transcript was so useless, in fact, that none of my potential employers ever even asked for it. Though the bachelor’s degree is itself a type of credential, it is a murky one; can we assume that a diploma from a reputable four-year institution certifies solid thinking, writing, and problem-solving skills?

This leads me to wonder if college should be more like the IT exams. I’m not suggesting that high-stakes examinations should become the core of the higher ed system, though this is often the case anyway (by virtue of the dreaded Cumulative Final). The model I envision would include evaluation of specific skills, likely related to both the discipline in question and more general thinking and writing skills, demonstrated through project- or test-based assessments. If a student does well in calculus class, we have a general sense of the quantitative skills he or she might possess. But if that student had instead demonstrated the ability to calculate a derivative and apply it to a real-world problem, wouldn’t that be of more use to both the student and those who might look at the transcript?

To take it even further in this direction, suppose that graduation from any given institution of higher learning required a student to demonstrate a series of competencies. Rather than accumulating a specific amount of in-class time, measured by the Carnegie unit, students would be able to demonstrate those competencies at their own pace. Southern New Hampshire University is already doing something like this with its “College for America” program, so it will be interesting to see the results of that experiment. The benefits of a system like this seem obvious, though – especially for job seekers and their potential employers – because the competencies would be listed directly on the student transcript and would become public knowledge about anyone graduating from that school (or, taken to scale, any school in the country). Colleges could work (and some already are working) directly with employers to understand the types of skills they desire in employees, thus closing the loop between educational attainment and employability.

I don’t mean to suggest that all higher education needs to be career-focused. I think there is immense personal value in taking courses simply because one is interested in the material. But the fact remains that the vast majority of students who complete a degree are headed to the workforce in one way or another, so doesn’t it make sense to ensure they’re properly prepared?

Educause, Day 2

I thought I had published this post a while back. Here’s the second installment from Educause.

Day two of my Educause experience was just as interesting as the first. Games and gamification were the theme of the day. The very short version of the story: designing games is hard, and taking game design elements and throwing them at non-game contexts until something sticks is not a good way to approach gamification.

My day kicked off bright and early with a good session on using badges to recognize granular student achievement. Perhaps the most interesting thing to come from it was the connections I made with others who were undertaking similar projects. It was the most discussion-based of the sessions I attended, with many different people in the audience weighing in on various aspects of developing a badge system. Two notable insights emerged for me: first, the presenters’ experience showed that badges could actually drive interest in a course. Second, using badges forces a self-conscious alignment of course activities with learning outcomes – for what is a badge but a demonstration of competency around a learning outcome – and thus there is some question about whether learning gains come from the badges themselves or from the thoughtful redesign that generally accompanies their addition. Food for thought, for sure.

Then came the second keynote presenter of the conference: Jane McGonigal. For those not familiar with Jane, she is a gifted game designer committed to doing real societal good through the games that she creates. She gave an inspiring talk about the future of games in education. Two things stood out quite strongly to me. First, as part of a discussion about the ways that games make people feel, I was struck by the most common feeling experienced by gamers: creative agency. I’m willing to bet that this is not the first thing one would think about or talk about in the context of higher education, but it is obviously a powerful motivator for gamers (as evidenced by the now one billion gamers on this planet).

Second, gamers fail an average of 80% of the time. Think about that for a second. When you’re playing a game and trying to accomplish a goal, you feel engaged in that game. If you fail the first time you attempt the goal, it actually seems to increase your motivation to try again. To make this more concrete, my wife has been playing Candy Crush for a few months. There are times, now that she’s in the really hard levels, when she will try a level twenty or thirty times before mastering it. While this can get frustrating, it seems that the act of failing also teaches: trying out a strategy and trying to understand why it fails is a great way to learn (and is not dissimilar, incidentally, from the hypothesis-experiment model in most science disciplines). Once again, this is not the model of higher education. We are all about high-stakes exams, where failure is not an option at any point in the process. What would it look like if we embraced a model more similar to gaming, one that encouraged students to try something until they mastered it? Our assessments would need to get more sophisticated, for sure, but perhaps rethinking the multiple-choice model would be good for us anyhow.

Educause 2013, Day 1

I’m having a great time at my first Educause. It turns out that being surrounded by thousands of IT professionals dedicated to making their organizations better is a heady experience. Sir Ken Robinson’s keynote this morning was enchanting. He is the kind of speaker who can mix demographics with science fiction and educational paradigms to create a captivating experience for his listeners. Though his message was similar to other talks of his I’ve seen, I was struck by the shared vision of creativity and innovation that he inspired in his listeners. My main takeaway from his talk was that we’re still at the beginning of the digital revolution and, given the kinds of challenges that we are facing with population and climate, we need to become even more creative in our endeavors. Educational systems need to reflect this priority rather than holding on to an outdated, industrial-revolution-era model.

This theme continued with a talk by Mimi Ito of UC Irvine. Mimi is a cultural anthropologist who studies the uses and cultural meanings of technology among young people. She made a great distinction between the ways that learning has changed, especially its new networked and socially-mediated dimensions, and the way that education has not changed. Like Sir Ken, she sees the educational system as based on an outdated paradigm, though she defines the old paradigm slightly differently: information scarcity. Thus she thinks that our current transition to information abundance should cause us to reconsider our educational models. She made an analogy that I found particularly insightful: when human beings lived in an age of caloric scarcity, our biological systems for storing calories and using them sparingly made great sense. In an age of caloric abundance those strategies no longer make sense (and, indeed, can be maladaptive), and our job as eaters becomes one of choosing our food wisely. Likewise, information scarcity leads to the kinds of educational models we still have today, like lecture-based classes and expensive textbooks. But this can (and perhaps should) change with the recognition that scarcity is no longer the reigning paradigm.

Two other things stood out to me in her talk. The first was her call for educators to use insights about how students learn outside of school to change the way that students learn in school, hopefully mediating the culture clash in learning modalities between the two areas. It turns out that students are learning quite a bit through outside-of-school online interactions, including an increasingly important technical literacy. And this idea is not new: Dewey talked about the potential seamlessness of education many years ago.

The second point that stood out to me was the distinction she drew between learning motivated by interest and learning motivated by friendship. The latter is learning that comes from one’s peer group, from keeping up with one’s friends on Facebook and Twitter and other social media outlets. The former is learning that students seek out because they are interested in something, be it web comics or programming or how to build a pumpkin-launching device (no, seriously). This learning does not necessarily have anything to do with their in-person social group, and in fact often creates a sort of distributed peer community organized around a certain topic. Anyone with a particular hobby can tell you how deep this rabbit hole can go. Her point here, though, was that it may be important for educators to target the interest-based engagement rather than the friendship-based engagement in order to avoid the “creepiness factor” and engage students where their interests lie.


Though I went to three other sessions, the two referenced above were the highlights for me. I’m excited to see what Day 2 has in store – I think it will likely be great.

Learning curves (or, the value of a good mentor)

I have been in my current role at Boston College for about six months. The position had just been created when I started, so I found myself with the exciting and somewhat tricky task of determining how to fulfill the stated job responsibilities. Though I had been working at BC while pursuing my Master’s degree and had developed many great relationships, I’m not sure that I could have been fully prepared for the jump to full-time, project-based work.

The learning curve was steep. I had a lot to learn about the ongoing operations of our department and the wider university. I had a lot to learn about working with faculty, about communication, and about managing multiple simultaneous projects. At one point, after surveying my tasks for the upcoming week, I remember thinking, “Uh oh. I’m not sure what I got myself into.” My hunch is that this happens to everyone at some point, but that is little consolation in the face of rapidly approaching deadlines. I think that one of the greatest challenges for any aspiring project manager is learning to manage the impending sense of panic when looking at a project as a whole, and to focus instead on concrete, solvable tasks. If I’m being totally transparent, I still have a ways to go with this.

I have made several mistakes along the way, and I’m sure I will continue to do so. I still have a lot to learn. I guess “learning curves” is doubly appropriate because life and work are inherently unpredictable. There’s a real sense in which I don’t know what I don’t know.

All of this would be totally overwhelming were it not for the support of my coworkers and, especially, my boss. The word “mentor” springs immediately to mind in reference to her role in my professional development. She has created the space in which I can continue to learn and develop, all the while offering myriad insights and constructive criticism. To use an extended (double, and probably overworked) metaphor, she has both flattened the learning curve significantly and regularly tipped off the hitter about which pitch is headed his way.

And for that I am (doubly) grateful.

Remembering and forgetting

Lots of people keep diaries. My wife’s grandmother kept a record of nearly every day of her life, preserved in perfect journals to which (I am told) she sometimes referred in order to refute something her husband said. I do not know if this is true, having seen no actual record of it, but I take it on good authority that this would sometimes happen.

Many people, myself included, are also keeping these web-journals or web-logs, both in an effort to communicate with others and to chronicle the present. They make for great records of what one was thinking at a given time. Smaller snippets of virtual remembering are also embedded in various social networks – what I thought about the latest cat meme, the sardonic comment about some piece of inane news that didn’t quite land, or the virtual argument I started with an acquaintance I haven’t seen since 1999.

My Gmail account has so much storage that I no longer need to delete messages. Everything is archived and instantly searchable. With online note-taking tools, I can quickly unearth nearly anything I was thinking in the last several years. We appear to be rapidly approaching a time when life itself will be streamed live and recorded for posterity (see Google Glass). Anything that hits the web will likely be kept in perpetuity.

I recently finished a book about the aftermath of the American Civil War in which the author argued that specific kinds of forgetting and a creative remembering helped to bring the North and South back together, at least politically, after the war. “Both sides were valiant in battle, and both sides fought for that which they believed” – this was the constructed memory of the war. If they had had YouTube back then, I doubt the fiery political speeches that each side made would have been forgotten.

We all tell stories about ourselves, and remembering itself is often an act of creativity. Our brains are wired to see patterns where they might not exist, to make our past more linear than it actually was, and to de-emphasize difficult or painful episodes. We tell stories, in the process re-encoding those memories and subtly changing them. We experience hindsight bias, the feeling that we knew exactly what was going to happen in a given situation, but only after the fact. We experience cognitive dissonance, or something akin to “making the best of it,” in which we tell ourselves that a decision that we made or a situation in which we find ourselves really is what we wanted all along. We rationalize our decisions to ourselves and others by remembering in creative ways – and by selectively forgetting or downplaying conflicting data.

All of which leads me to wonder if the mass “archivization” (to coin a neologism) of our lives – the near-perfect, instantly-accessible, fully-searchable record of everything we have said and done – is really as wonderful as we seem to think. Is there something to be said for uncertainty, especially the uncertainty of the past and the creative (and, perhaps, psychologically beneficial) ways in which we remember? Or is this simply the next way humanity will augment its intellect through computers?

Innovating within constraints

I have been reading some pretty heady stuff recently, especially around creativity and innovation. I have been thoroughly convinced of the additive and connective nature of innovation – the idea that innovative thinkers are usually those who can combine existing elements in novel ways, thereby advancing the field in which they work (see Everything is a Remix for some great examples). I find this a compelling way to think about innovation. Rather than the isolated genius struck by a new thought as if by lightning, innovation by connection and addition allows us lesser mortals to practice the type of thinking that fosters innovation and actually improve at it. The fact that this idea is strange – the idea of “getting better” at creativity – is itself a sign of how deeply the idea of the isolated genius permeates our perception of innovation.

But almost at the same time that I think this, I find another thought interrupting me: we all work within constraints. I want to be very clear here: I think some innovations do (and should) break open constraints and expand the boundaries of the possible. One look at the types of connected networks we’re building for ourselves should be enough to convince just about anyone of this fact. But in other contexts we face defined constraints that we cannot innovate our way out of. Work constraints, budget constraints, time constraints (the list goes on and on) all help delineate the space in which we can innovate.

I find myself wondering if innovating within defined constraints can be understood as a subset of this idea of additive or connective creativity. I think that finding innovative ways to accomplish goals that seem out of reach certainly fits this paradigm. Likewise, coming up with new ways to use existing tools certainly shows creativity. But what about things that we might call process improvements, like concocting new methods to improve the efficiency of a process or devising new approaches to time management (things like GTD come to mind here)? Are these really innovations, or are they something else? Is “innovating within the realm of the possible” really innovation? Or does innovation contain within itself the idea of moving beyond the boundaries of what is understood to be possible?

In a certain respect I am beginning to think that constraints are a natural complement to innovative thinking. Physicists run up against lots of constraints in their work – the so-called “laws” of nature – and still manage to be innovative. I wonder if taking the attitude that constraints are drivers of innovation rather than its natural enemy would be a better way to foster the idea that one can get better at creativity by practicing. Or does this turn into a severely limiting frame of mind that detracts from one’s ability to “think big” and try to move beyond one’s constraints? Something tells me I might be thinking about this question for some time to come.

Intentional Connection-making

I haven’t spent much time on Brainpickings because I was unsure about the “7 somethings to make you better at something else” type of posts that seem to characterize a lot of the content there. But after some good linking from Twitter, I took a look at a book review about the intentional practice of creativity, or what others have described as unusual connection-making. I often feel like any writing that I do needs to be polished and professional, avoid strong opinions, and be generally palatable to whomever might read it. There are a few reasons I feel this way, not the least of which is the eighteen or so years I have spent in formal education of some kind. I do think it is important to communicate one’s ideas clearly, or at least make a good attempt at it, but I think that these three constraints have become excuses for me to hide behind in this format. How does discussion start if everyone agrees with what I say? Throttling my own opinions to make them more palatable also cuts into the authenticity of what I say – and what is this format supposed to promote but authenticity?

All that to say that I want to start practicing connection-making on a regular and intentional basis. Because writing is a great way to think through ideas, and writing with at least some audience in mind (hopefully) helps one to write clearly, I am going to ramp up my activity in this space.

On Data

I have come across several articles recently that mention the rise of so-called “big data.” I have a certain hazy notion of what this means: the collection and collation of lots and lots of information about everything from shopping habits to traffic patterns to learning styles. For instance, if the data show that a lot of customers tend to buy Gatorade when they buy Clif bars, it might make sense for retailers to put these items next to each other on the shelves in order to increase the purchase of both items. If traffic at a particular intersection is calm between the hours of 10am and 3pm, but picks up during both rush hours, pothole repair could be scheduled for the calm part of the day. And so on.
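To make the co-purchase example a bit more concrete, here is a minimal sketch (in Python, with entirely made-up transaction data) of the kind of counting that underlies it: tally how often pairs of items show up in the same basket, and surface the most frequent pairs. Real retail analytics are far more sophisticated, but the basic intuition is just co-occurrence.

```python
from itertools import combinations
from collections import Counter

# Hypothetical transaction data: each sale is a set of item names.
transactions = [
    {"Gatorade", "Clif bar", "bananas"},
    {"Clif bar", "Gatorade"},
    {"bread", "milk"},
    {"Gatorade", "Clif bar", "milk"},
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# The most frequent pairs are candidates for shelving near each other.
for pair, count in pair_counts.most_common(3):
    print(pair, count)
```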

Big data is making inroads into the educational context as well. Platforms like Coursera and edX are collecting an enormous amount of data about the students who take their courses, including information about when students tend to study, how they tend to interact in discussions, whether cross-cultural interactions spur increased reflection, how students collaborate, etc. These data will then help inform the design process for the next iteration of courses offered on those platforms, and perhaps traditional classroom settings as well.

I think the results of this research could be very beneficial to educators of all kinds, whether working in a traditional setting, a blended setting, or an all-online setting. The question I wonder about, however, is what kinds of data we should be collecting, not what kinds are now possible to collect. I’m not referring to privacy concerns, though that is an important discussion in its own right. I’m referring here to the development of metrics that will best allow educators and those who serve them to make informed decisions regarding pedagogy, especially technologically-enhanced pedagogy. What do educators need to know in order to improve education for their students? And how do we measure it? To say that this is a large question is like saying the Pacific Ocean is a big body of water. Nonetheless, I want to put down a few introductory thoughts about how we might try to approach the question, geared specifically to the higher education context.

First, I think that the answer to this question must reflect the goals of the discipline or course. These goals need to be stated explicitly and should probably be formulated as specific learning outcomes. In an American history course, for example, one goal might be civics-oriented: “Students will engage critically with the founding documents of the United States and be able to articulate the relevance of these documents at different stages of American history.” A goal like this may also belong to a larger category that stresses the development of critical thinking. The goal formulation process would then inform the course design process – once the goals are made clear, assignments directed toward meeting those goals would be easier to develop. Some disciplines may be focused primarily on process-centric learning, like engineering or computer science, while others may rely more heavily on memorization or concept-based thinking. This is why articulating the goals for each discipline or course is so important: it allows specific metrics to be designed and implemented. The use of the data collected through these customized metrics will then allow professors to gauge the effectiveness of new tools or pedagogical approaches by comparing data semester-over-semester or year-over-year. Rather than wondering whether a new technology is actually facilitating student formation, professors would be able to clearly see its impact in the data.
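As a toy illustration of the semester-over-semester comparison I have in mind (with hypothetical rubric scores, not real data from any course or platform), something as simple as the following would already let a professor see whether an outcome metric moved after a change in tools or pedagogy:

```python
from statistics import mean

# Hypothetical rubric scores (0-4) on a single learning outcome,
# collected before and after a new tool or approach was introduced.
scores_by_semester = {
    "Fall 2012":   [2.5, 3.0, 2.0, 3.5, 2.5],
    "Spring 2013": [3.0, 3.5, 2.5, 4.0, 3.0],
}

# Report the mean score for each semester.
for semester, scores in scores_by_semester.items():
    print(f"{semester}: mean outcome score = {mean(scores):.2f} (n={len(scores)})")

# A simple semester-over-semester difference on the same metric.
change = mean(scores_by_semester["Spring 2013"]) - mean(scores_by_semester["Fall 2012"])
print(f"Change: {change:+.2f}")
```

Obviously a real analysis would need to account for sample size, grading consistency, and everything else that changed between semesters, but the comparison is only possible once the metric exists in the first place.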

Though this sounds simple, I am relatively certain that only a handful of professors ever go through this entire process. Goals are certainly articulated on some syllabi, and some professors do spend significant time matching their assignments to these goals. But do they then collect data based on a well-designed series of metrics to measure the effectiveness of their approach? Do they formulate their assessments so that critical thinking skills or process-oriented skills are effectively measured? If not, why not?

While this might not be “big data,” it certainly is important data.