
Monday, July 13, 2009

on the community-source model for open educational software design

For all my fascination with all things open-source, I'm finding that the notion of open source software (OSS) is one that's used far too broadly, to cover more categories than it can rightfully manage. Specifically, the use of this term to describe collaborative open education resource (OER) projects seems problematic. The notion of OSS points to a series of characteristics and truths that do not apply, for better or worse, to the features of collaborative learning environments developed for opening up education.

While in general, open educational resources are developed to adhere to the letter of the OSS movement, what they miss is what we might call the spirit of OSS, which for my money encompasses the following:

  • A reliance on people's willingness to donate labor--for love, and not for money.
  • An embrace of the "failure for free" model identified by Clay Shirky in Here Comes Everybody.
  • A loose collaboration across fields, disciplines, and interest levels.

Open educational resources are not, in general, developed by volunteers; they are more often the product of extensive funding mechanisms that include paying participants for their labor.

There are good reasons for this. As Christopher J. Mackie points out in Opening Up Education, while the OSS movement has produced some "runaway successes" (Perl, Linux, and Firefox), it has had less success tackling certain types of projects, including the development of products designed for widespread institutional use (rather than adoption by individuals). This limitation is no accident, he argues; and his explanation points to both the weaknesses and the strengths of the open education movement:

This limitation may trace to any of several factors: the number of programmers having the special expertise required to deliver an enterprise information system may be too small to sustain a community; the software may be inherently too unglamorous or uninteresting to attract volunteers; the benefits of the software may be too diffuse to encourage beneficiaries to collaborate to produce it; the software may be too complex for its development to be coordinated on a purely volunteer basis; the software may require the active, committed participation of specific firms or institutions having strong disincentives to participate in OSS; and so on.


Perhaps the two most significant weak spots Mackie points to are the unglamorous nature of developing OERs and the strong disincentives against institutional participation in developing and circulating these resources. OERs require sustained, consistent dedication at all levels, from programmers all the way up to administrators and funders; and this type of dedication is difficult to attain for the following reasons:

  • While OSS is primarily affiliated with the movement itself, OERs are by their nature affiliated first with an institution or funder; as project affiliates change institutions or roles, their commitment to developing the OER can shift or disappear.
  • OERs require institutional buy-in, and the notion of openness, on its surface at least, appears at odds with institutional goals. (Universities survive by offering something unique, something you can only get by paying your money and walking through the gates.)

Mackie suggests an alternate term for OERs designed in keeping with the open source ideals: community source software (CSS). He identifies the following characteristics as key to the CSS movement:

  • Multiple institutions band together to design software that meets their collective needs, with the ultimate goal of releasing the software as open source;
  • Development of the software is conducted virtually, with employees from each institution collaborating;
  • The collaboration follows a corporate, sometimes even hierarchical, structure, with project leaders, paid staff, and experts in a range of design and development categories;
  • Everybody is compensated for their expertise, and this supports a systematic, targeted approach to software development that is often lacking in OSS projects.



Embracing the notion of community source software instead of open source is more than a semantic choice, in my view. It opens up new avenues for participation and the possibility of new affiliation structures across institutions of higher education. Just as higher education institutions have historically affiliated around various community markers (cf. the Association of Writers and Writing Programs, HASTAC member institutions, the Doctoral Consortium in Rhetoric and Composition), colleges and universities--and their affiliates--might unite around the notion of opening up education by opening up technologies, access, and information.

After all, let's take our heads out of the clouds for a second and think about what sorts of factors might motivate a university to align with the open educational movement. Asking institutions to relinquish their monopoly on whatever they think makes them unique (cf. the college ranking system at U.S. News and World Report) requires that we offer them something in exchange. "For the good of humankind" is a sweet notion, but you can't take it to the bank.

Sunday, June 14, 2009

the harrison bergeron approach to education: how university rankings stunt the social revolution

I've been thinking some lately about the odd and confusing practice of comparing undergraduate and graduate programs at American colleges and universities and producing a set of rankings that show how the programs stack up against each other.

One of the most widely cited sets of rankings comes from U.S. News and World Report, which offers rankings in dozens of categories for both undergraduate and graduate-level programs. Here, the magazine offers its altruistic rationale for producing these rankings:
A college education is one of the most important—and one of the most costly—investments that prospective students will ever make. For this reason, the editors of U.S. News believe that students and their families should have as much information as possible about the comparative merits of the educational programs at America's colleges and universities. The data we gather on America's colleges—and the rankings of the schools that arise from these data—serve as an objective guide by which students and their parents can compare the academic quality of schools. When consumers purchase a car or a computer, this sort of information is readily available. We think it's even more important that comparative data help people make informed decisions about an education that at some private universities is now approaching a total cost of more than $200,000 including tuition, room, board, required fees, books, transportation, and other personal expenses.

(To access the entire rankings, developed and produced selflessly by U.S. News and World Report, you need to pay. Click here to purchase the Premium Online Edition, which is the only way to get complete rankings, for $14.95.)

The 2009 rankings, released in April, are in the news lately because of questions related to how the magazine gathers data from colleges. As Carl Bialik points out in a recent post at the Wall Street Journal, concerns over how Clemson University set about increasing its rank point to deeper questions about the influence of rankings numbers on university operations. Clemson President James F. Barker reportedly shot for cracking the top 20 (it was ranked 38th nationally in 2001) by targeting all of the ranking indicators used by U.S. News. Bialik writes:
While the truth about Clemson’s approach to the rankings remains elusive, the episode does call into question the utility of a ranking that schools can seek to manipulate. “Colleges have been ‘rank-steering,’ — driving under the influence of the rankings,” Lloyd Thacker, executive director of the Education Conservancy and a critic of rankings, told the Associated Press. “We’ve seen over the years a shifting of resources to influence ranks.”

Setting aside questions of the rankings' influence on university operations and on recruiting (both for prospective students and prospective faculty), and setting aside too the question of how accurate any numbers collected from university officials themselves could possibly be when the stakes are so high, one wonders how these rankings limit schools' ability to embrace what appear to be key tenets emerging out of the social revolution. A key feature of some of the most vibrant, energetic, and active online communities is what Clay Shirky labels the "failure for free" model. As I explained in a previous post on the open source movement, the open source software (OSS) movement embraces this tenet:
It's not, after all, that most open source projects present a legitimate threat to the corporate status quo; that's not what scares companies like Microsoft. What scares Microsoft is the fact that OSS can afford a thousand GNOME Bulgarias on the way to its Linux. Microsoft certainly can't afford that rate of failure, but the OSS movement can, because, as Shirky explains,
open systems lower the cost of failure, they do not create biases in favor of predictable but substandard outcomes, and they make it simpler to integrate the contributions of people who contribute only a single idea.

Anyone who's worked for a company of reasonable size understands the push to keep the risk of failure low. "More people," Shirky writes, "will remember you saying yes to a failure than saying no to a radical but promising idea." The higher up the organizational chart you go, the harder the push will be for safe choices. Innovation, it seems, is both a product of and oppositional to the social contract.

The U.S. News rankings, and the methodology behind them, run completely counter to the notion of innovation. Indeed, a full 25 percent of the ranking system is based on what U.S. News calls "peer assessment," which comes from "the top academics we consult--presidents, provosts, and deans of admissions" and, ostensibly at least, allows these consultants
to account for intangibles such as faculty dedication to teaching. Each individual is asked to rate peer schools' academic programs on a scale from 1 (marginal) to 5 (distinguished). Those who don't know enough about a school to evaluate it fairly are asked to mark "don't know." Synovate, an opinion-research firm based near Chicago, in spring 2008 collected the data; of the 4,272 people who were sent questionnaires, 46 percent responded.
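To make the arithmetic behind this concrete, here is a minimal sketch of how a weighted composite score of this kind might be computed. Only the 25 percent peer-assessment weight and the 1-to-5 rating scale come from the methodology quoted above; every other indicator, value, and weight in the sketch is a hypothetical placeholder, not U.S. News's actual formula.

# A minimal sketch (Python) of a weighted composite ranking score.
# Only the 25% peer-assessment weight and the 1-5 peer rating scale are drawn
# from the U.S. News description quoted above; all other indicators and weights
# are hypothetical placeholders for illustration.

indicators = {
    "peer_assessment": 3.8 / 5.0,   # mean peer rating on the 1-5 scale, normalized to 0-1
    "retention": 0.92,              # hypothetical indicator
    "faculty_resources": 0.75,      # hypothetical indicator
    "selectivity": 0.81,            # hypothetical indicator
}

weights = {
    "peer_assessment": 0.25,        # the one weight the post actually cites
    "retention": 0.30,              # arbitrary split of the remaining 75%
    "faculty_resources": 0.25,
    "selectivity": 0.20,
}

composite = sum(weights[k] * indicators[k] for k in weights)
print(f"Composite score: {composite:.3f}")   # schools are then sorted by this number

The point of the sketch is simply that a quarter of the final number rests on subjective peer ratings, which is precisely the channel through which playing by the established rules gets rewarded.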

Who becomes "distinguished" in the ivory-tower world of academia? Those who play by the long-established rules of tradition, polity, and networking, of course. The people who most want to effect change at the institutional level are often the most outraged, the most unwilling to play by the rules established by administrators and rankings systems, and therefore the least likely to make it into the top echelons of academia. Indeed, failure is rarely free in the high-stakes world of academics; it's safer to say no to "a radical but promising idea" and yes to any number of boring but safe ones.

So what do you do if you are, say, a prospective doctoral student who wants to tear wide the gates of academic institutions? What do you do if you want to go as far in your chosen field as your little legs will carry you, leaving a swath of destruction in your wake? What do you do if you want to bring the social revolution to the ivory tower, instead of waiting for the ivory tower to come to the social revolution?

You rely on the U.S. News rankings, of course. It's what I did when I made decisions about which schools to apply to (the University of Wisconsin-Madison [ranked 7th overall in graduate education programs, first in Curriculum & Instruction, first in Educational Psychology], the University of Texas-Austin [tied at 7th overall, 10th in Curriculum & Instruction], the University of Washington [12th overall, 9th in Curriculum & Instruction], the University of Michigan [14th overall, 7th in Curriculum & Instruction, and 3rd in Educational Psychology], Indiana University [19th overall, out of the top 10 in individual categories], and Arizona State University [24th overall, out of the top 10 in individual categories]). Interestingly, though, the decision to turn down offers from schools ranked higher than Indiana (go hoosiers) wasn't all that difficult. I knew that I belonged at IU (go hoosiers) almost before I visited, and a recruitment weekend sealed the deal.

But I had an inside track to information about IU (go hoosiers) via my work with Dan Hickey and Michelle Honeyford. I also happen to be a highly resourceful learner with a relatively clear sense of what I want to study, and with whom, and why. Other learners--especially undergraduates--aren't necessarily in such a cushy position. They are likely to rely heavily on rankings in making decisions about where to apply and which offer to accept. This not only serves to reify the arbitrary and esoteric rankings system (highest ranked schools get highest ranked students), but also serves to stunt the social revolution in an institution that needs revolution, and desperately.

In this matter, it's turtles all the way down. High-stakes standardized testing practices and teacher evaluations based on achievement on these tests limit innovation--from teachers as well as from students--at the secondary and, increasingly, the elementary level. But the world that surrounds schools is increasingly ruled by those who know how to innovate, how to say yes to a radical but promising idea, how to work within a "failure for free" model. If schools can't learn to embrace the increasingly valued and valuable mindsets afforded by participatory practices, they will fail to prepare their students for the world at large. The rankings system is just another set of hobbles added to a system of clamps, tethers, and chains already set up to fail the very people it purports to serve.