I have come across several articles recently that mention the rise of so-called “big data.” I have a certain hazy notion of what this means: the collection and collation of lots and lots of information about everything from shopping habits to traffic patterns to learning styles. For instance, if the data show that a lot of customers tend to buy Gatorade when they buy Clif bars, it might make sense for retailers to put these items next to each other on the shelves in order to increase the purchase of both items. If traffic at a particular intersection is calm between the hours of 10am and 3pm, but picks up during both rush hours, pothole repair could be scheduled for the calm part of the day. And so on.
Big data is making inroads into the educational context as well. Platforms like Coursera and edX are collecting an enormous amount of data about the students who take their courses, including information about when students tend to study, how they tend to interact in discussions, whether cross-cultural interactions spur increased reflection, how students collaborate, etc. These data will then help inform the design process for the next iteration of courses offered on the platform, and perhaps inform traditional classroom settings as well.
I think the results of this research could be very beneficial to educators of all kinds, whether working in a traditional setting, a blended setting, or an all-online setting. The question I wonder about, however, is not what kinds of data are now possible to collect, but what kinds we should be collecting. I’m not referring to privacy concerns, though that is an important discussion in its own right. I’m referring here to the development of metrics that will best allow educators and those who serve them to make informed decisions regarding pedagogy, especially technologically enhanced pedagogy. What do educators need to know in order to improve education for their students? And how do we measure it? To say that this is a large question is like saying the Pacific Ocean is a big body of water. Nonetheless, I want to put down a few introductory thoughts about how we might try to approach the question, geared specifically to the higher education context.
First, I think that the answer to this question must reflect the goals of the discipline or course. These goals need to be stated explicitly and should probably be formulated as specific learning outcomes. In an American history course, for example, one goal might be civics-oriented: “Students will engage critically with the founding documents of the United States and be able to articulate the relevance of these documents at different stages of American history.” A goal like this may also belong to a larger category that stresses the development of critical thinking. The goal formulation process would then inform the course design process – once the goals are made clear, assignments directed toward meeting those goals would be easier to develop. Some disciplines, like engineering or computer science, may focus primarily on process-centric learning, while others may rely more heavily on memorization or concept-based thinking. This is why articulating the goals for each discipline or course is so important: it allows specific metrics to be designed and implemented. The data collected through these customized metrics would then allow professors to gauge the effectiveness of new tools or pedagogical approaches by comparing results semester-over-semester or year-over-year. Rather than wondering whether a new technology is actually facilitating student formation, professors would be able to see its impact clearly in the data.
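The semester-over-semester comparison described above can be sketched in a few lines of code. The following is a minimal illustration only: the 0–4 rubric scale, the score data, the 3.0 benchmark, and the `semester_summary` helper are all hypothetical, not features of any real platform’s export.

```python
# Hypothetical sketch: comparing rubric scores on one learning outcome
# ("critical engagement with founding documents") across two semesters.
# The data shape and score scale are assumptions for illustration.

from statistics import mean

# One score (0-4 rubric scale) per student assessment.
fall_scores = [2.0, 2.5, 3.0, 2.0, 3.5, 2.5]
spring_scores = [3.0, 2.5, 3.5, 3.0, 4.0, 3.0]  # after a new tool was introduced

def semester_summary(scores):
    """Return the average score and how many students met the 3.0 benchmark."""
    return mean(scores), sum(1 for s in scores if s >= 3.0)

fall_avg, fall_met = semester_summary(fall_scores)
spring_avg, spring_met = semester_summary(spring_scores)

print(f"Fall:   avg {fall_avg:.2f}, {fall_met}/{len(fall_scores)} met benchmark")
print(f"Spring: avg {spring_avg:.2f}, {spring_met}/{len(spring_scores)} met benchmark")
print(f"Change in average: {spring_avg - fall_avg:+.2f}")
```

A real comparison would of course need more care (comparable assignments, enough students, attention to confounding changes between semesters), but even this simple shape – a per-outcome score, summarized the same way each term – is enough to replace guesswork with a trend line.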
Though this sounds straightforward, I am relatively certain that only a handful of professors ever go through this entire process. Goals are certainly articulated on some syllabi, and some professors do spend significant time matching their assignments to these goals. But do they then collect data based on a well-designed series of metrics to measure the effectiveness of their approach? Do they formulate their assessments so that critical thinking skills or process-oriented skills are effectively measured? If not, why not?
While this might not be “big data,” it certainly is important data.