
Tuesday, October 27, 2009

The Void Between Colleges of Education and the University Teaching and Learning

In this post, I consider the tremendous advances in educational research I am seeing outside of colleges of education and ponder the relevance of mainstream educational research in light of the transformation of learning made possible by new digital social networks.

This weekend, the annual conference of the International Society for the Scholarship of Teaching and Learning took place at Indiana University. ISSOTL is the home of folks who are committed to studying and advancing teaching and learning in university settings. I saw several presentations that are directly relevant to what we care about here at Re-Mediating Assessment. These included a workshop on social pedagogies organized by Randy Bass, the Assistant Provost for Teaching and Learning at Georgetown, and several sessions on open education, including one by Randy and Toru Iiyoshi, who heads the Knowledge Media Lab at the Carnegie Foundation. Toru co-edited the groundbreaking volume Opening up Education, of which we here at RMA are huge fans. (I liked it so much I bought the book, but you can download all of the articles for free; ignore the note on the MIT Press site about sample chapters.)

I presented at a session about e-Portfolios with John Gosney (Faculty Liaison for Learning Technologies at IUPUI) and Stacy Morrone (Associate Dean for Learning Technologies at IU). John talked about the e-Portfolio efforts within the Sakai open source collaboration and courseware platform; Stacy talked about e-Portfolio as it has been implemented in Oncourse, IU’s instantiation of Sakai. I presented about our efforts to advance participatory assessment in my classroom assessment course using the newly available wiki and e-Portfolio tools in Oncourse (earlier deliberations on those efforts are here; more will be posted here soon). I was flattered that Maggie Ricci of IU’s Office of Instructional Consulting interviewed me about my post on positioning assessment for participation and promised to post the video this week (I will update here when it appears).

I am going to post about these presentations and how they intersect with participatory assessment as time permits over the next week or so. In the meantime, I want to stir up some overdue discussion over the void between the SOTL community and my colleagues in colleges of education at IU and elsewhere. In an unabashed effort to direct traffic to RMA and build interest in past and forthcoming posts, I am going to first write about this issue. I think it raises issues about the relevance of colleges of education and suggests a need for more interdisciplinary approaches to education research.

I should point out that I am new to the SOTL community. I have focused on technology-supported K-12 education for most of my career (most recently within the Quest Atlantis videogaming environment). I have only recently begun studying my own teaching in the context of developing new core courses for the doctoral program in Learning Sciences and in trying to develop online courses that take full advantage of new digital social networking practices (initial deliberations over my classroom assessment course are here). I feel sheepish about my late arrival, and embarrassed that the tremendous innovations I found in the SOTL community have mostly been ignored by educational researchers. My departmental colleagues Tom Duffy, who has long been active in SOTL here at IU, and Melissa Gresalfi have recently gotten seriously involved as well. The conference was awash with IU faculty, but I only saw a few colleagues from the School of Education. One notable exception was Melissa’s involvement on a panel on IU’s Interdisciplinary Teagle Colloquium on Inquiry in Action. I could not go because it conflicted with my own session, but this panel described just the sort of cross-campus collaboration I am aiming to promote here. I also ran into Luise McCarty from the Educational Policy program, who heads the school’s Carnegie Initiative on the Doctorate.

My search of the program for other folks from colleges of education revealed another session that was scheduled against mine and that focused on the issue I am raising in this post. Karen Swanson of Mercer University and Mary Kayler of George Mason reported on the findings of their meta-analysis of the literature on the tensions between colleges of education and SOTL. The fact that there is enough literature on this topic to meta-analyze points out that this issue has been around for a while (and suggests that I should probably read up before doing anything more than blogging about it). From the abstract, it looks like they focused on the issue of tenure, which I presume refers to a core issue in the broader SOTL community: that SOTL researchers outside of schools of education risk being treated as interlopers by educational researchers, while being treated as dilettantes by their own disciplinary communities. This same issue was mentioned in other sessions I attended as well. More significantly from my perspective, it looks like Swanson and Kayler examined this issue from the perspective of Education faculty, which is what I want to focus on here. I have tenure, but I certainly wonder how my increased foray into the SOTL community will be viewed when I try to get promoted to full professor.

I will start by exploring my own observations about educational researchers who study their own university teaching practices. I am not in teacher education, but I know of a lot of respected education faculty who seem to be conducting high-quality, published research about their teacher education practices. However, there is clearly a good deal of pretty mediocre self-study taking place as well. I review for a number of educational research journals and conferences. When I am asked to review manuscripts or proposals for educational research carried out in classrooms in the college of education, I am quite skeptical. Because I have expertise in motivation and in formative assessment, I get stacks of submissions of studies of college of education teaching that seem utterly pointless to me. For example, folks love to study whether self______ is correlated with some other education-relevant variables. The answer is always yes (unless their measures are unreliable), and then there is some post hoc explanation of the relationships with some tenuous suggestions for practice. Likewise, I review lots of submissions that examine whether students who get feedback on learning to solve some class of problems learn to solve those problems better than students from whom feedback is withheld. Here the answer should be yes, since this is essentially a test of educational malpractice. But the studies often ignore the assessment maxim that feedback must be useful and used, and instead focus on complex random assignment so that the study can seem more “scientific.” I understand the appeal, because such studies are easy to conduct and there are enough examples of them actually getting published to provide some inspiration (while dragging down the overall effect size of feedback in meta-analytic studies). While it is sometimes hard to tell, these “convenience” studies usually appear to be conducted in the author’s own course or academic program.
So, yes, I admit that when that looks to be the case, I do not expect to be impressed. I wonder if other folks feel the same way, or if perhaps I am being overly harsh.

Much of my interest in SOTL follows from my efforts to help my college take better advantage of new online instructional tools and to take advantage of social networking tools in my K-12 research. While my colleagues at IU Bloomington and IUPUI are making progress, I am afraid that we are well behind the curve. Although I only managed to attend a few SOTL sessions, I saw tremendous evidence of success that I will write about in subsequent posts. Randy Bass and Heidi Elmendorf (also of Georgetown) showed evidence of deep engagement in live discussion forums that simply can’t be faked; here at IU, Phillip Quirk showed some very convincing self-report data about student engagement in our new interdisciplinary Human Biology Program, which looks like a great model of practice for team-teaching courses. These initial observations reminded me of the opinion of James Paul Gee, who leads the MacArthur Foundation’s 21st Century Assessment Project (which partly sponsors my work as well). He has stated on several occasions that “the best educational research is no longer being conducted in colleges of education.” That is a pretty bold statement, and my education colleagues and I initially took offense at it. Obviously, it depends on your perspective; but in terms of taking advantage of new digital social networking tools and the movement towards open education and open-source curriculum, it seems like it may already be true.

One concern I had with SOTL was the sense that the excesses of “evidence-based practice” that have infected educational research were occurring in SOTL as well. But I did not see many of the randomized experimental studies that set out to “prove” that new instructional technology “works.” I have some very strong opinions about this that I will elaborate on in future posts; for now I will just say that I worry that SOTL researchers might get so caught up in doing controlled comparison studies of conventional and online courses that they completely miss the point that online courses offer an entirely new realm of possibilities for teaching and learning. The “objective” measures of learning normally used in such studies are often biased in favor of traditional lecture/text/practice models that train students to memorize numerous specific associations; as long as enough of those associations appear on a targeted multiple-choice exam, scores will go up. The problem is that such designs can’t capture the important aspects of individual learning, or any aspects of the social learning, that are possible in these new educational contexts. Educational researchers seem unwilling to look seriously at the potential of these new environments beyond having “proven” that they work. So, networked computers and online courses end up being used for very expensive test preparation…and that is a shame.

Here at RMA, we are exploring how participatory assessment models can foster and document all of the tremendous new opportunities for teaching and learning made possible by new digital social networks, while also producing convincing evidence on these “scientific” measures. I will close this post with a comment that Heidi Elmendorf made in the social pedagogies workshop. I asked her why she and the other presenters were embracing the distinction between “process” and “product.” In my opinion, this distinction is based on outdated individual models of learning; it dismisses the relevance of substantive communal engagement in powerful forms of learning, while privileging individual tests as the only “scientific” evidence of learning. I don’t recall Heidi’s exact response, but she immediately pointed out that her disciplinary colleagues in Biology leave her no choice. I was struck by the vigorous nods of agreement from her colleagues and the audience. Her response really brought me back down to earth and reminded me how much work we have to do in this regard. In my subsequent posts, I will try to illustrate how participatory assessment can address precisely the issue that Heidi raised.

Thursday, October 1, 2009

Positioning Portfolios for Participation

Much of our work in our 21st Century Assessment project this year has focused on communicating participatory assessment to broader audiences whose practices we are trying to inform. This includes:

  • classroom teachers whose practices we are helping reshape to include more participation (like those we are working with in Monroe County right now);

  • other assessment researchers who seem to dismiss participatory accounts of learning as “anecdotal” (like my doctoral mentor Jim Pellegrino who chaired the NRC panel on student assessment);

  • instructional innovators who are trying to support participation while also providing broadly convincing accounts of learning (like my colleagues Sasha Barab and Melissa Gresalfi, whose Quest Atlantis immersive environment has been a testbed for many of our ideas about assessment);

  • faculty in teacher education who are struggling to help pre-service teachers build professional portfolios while knowing that their score on the Praxis will count for much more (and whose jobs are being threatened by efforts in Indiana to phase out teacher education programs and replace them with more discipline-based instruction);

  • teachers in my graduate-level classroom assessment course who are learning how to do a better job assessing students in their classrooms, as part of their MA degree in educational leadership.


It turns out that participatory approaches to assessment are quite complicated, because they must bridge the void between the socially defined views of knowing and learning that define participation, and the individually defined models of knowing and learning that have traditionally been taken for granted by the assessment and measurement communities. As our project sponsor Jim Gee has quite succinctly put it: Your challenge is clarity.

As I have come to see most recently, clarity is about entry. Where do we start introducing this comprehensive new approach? Our approach itself is not that complicated, really. We have it boiled down to a more participatory version of Wiggins' well-known Understanding by Design. In fact, we have taken to calling our approach Participation by Design (or, if he sues us, Designing for Participation). But the theory behind our approach is maddeningly complex, because it has to span the entire range of activity timescales (from moment-to-moment classroom activity to long-term policy change) and characterizations of learning (from communal discourse to individual understanding to aggregated achievement).

Portfolios and Positioning
Now it is clear to me that the best entry point is the familiar notion of the portfolio. Portfolios consist of any artifacts that learners create. Thanks to Melissa Gresalfi, I have come to realize that the portfolio, and the artifacts it contains, are ideal for explaining participatory assessment. This is because portfolios position (where position is used as a verb). Before I get to the clarity part, let me first elaborate on what this means.

It turns out that portfolios can be used to position learners and domain content in ways that bridge this void between communal activity and aggregated attainment. In a paper with Caro Williams about the math project that Melissa and I worked on together, Melissa wrote that

“positioning, as a mechanism, helps bridge the space between the opportunities that are available for participation in particular ways and what individual participants do”

Building on the ideas of her doctoral advisor Jim Greeno (e.g., Greeno and Hull, 2002), Melissa explained that positioning refers to how students are positioned relative to content (called disciplinary positioning) and how they are positioned relative to others (called interpersonal positioning). As I will add below, positioning also refers to how instructors are positioned relative to the students and the content (perhaps called professorial positioning). This post will explore how portfolios can support all three types of positioning in more effective and in less effective ways.

Melissa further explained that positioning occurs at two levels. At the more immediate level, positioning concerns the moment-to-moment process in which students take up opportunities that they are presented with. Over the longer term, students become associated with particular ways of participating in classroom settings (these ideas are elaborated by scholars like Dorothy Holland and Stanton Wortham). This post will focus on identifying two complementary functions for portfolios that help them support both types of positioning.

Portfolios and Artifacts
Portfolios are collections of artifacts that students create. Artifacts support participation because they are where students apply what they are learning in class to something personally meaningful. In this way, they make new meanings. In our various participatory assessment projects, artifacts have included

  • the “Quests” that students complete and revise in Quest Atlantis’ Taiga world, where they explain, for example, their hypothesis for why the fish in the Taiga river are in decline;
  • the remixes of Moby Dick and Huck Finn that students in Becky Rupert’s class at Aurora Alternative High School create in their work with the participatory reading curricula that Jenna McWilliams is creating and refining;
  • the various writing assignments that the English teachers in Monroe and Greene County have their students complete in both their introductory and advanced writing classes;
  • the wikifolio entries that my students in my graduate classroom assessment course complete, where they draft examples of different assessment items for a lesson in their own classrooms and state which of the several item-writing guidelines in the textbook they found most useful.

In each case, various activities scaffold student learning as students create their artifacts and make new meanings in the process. As a caveat, this means that participatory assessment is not really much use in classrooms where students are not asked to create anything. More specifically, if your students are merely being asked to memorize associations and understand concepts in order to pass a test, stop reading now. Participatory assessment won’t help you. [I learned this the hard way trying to do participatory assessment with the Everyday Mathematics curriculum. Just do drill and practice. It works.]


Problematically Positioned Portfolios
Probably the most important aspect of participatory assessment has to do with the way portfolios are positioned in the classroom. We position them so they serve as a bridge between the communal activities of a participatory classroom and the individual accountability associated with compulsory schooling. If portfolios are to serve as a bridge, they must be firmly anchored. On one side they must be anchored to the enactment of classroom activities that support students’ creation of worthwhile portfolios. On the other side they must be anchored to the broader accountability associated with any formal schooling.



To keep portfolio practices from falling apart (as they often do), it is crucial that they rest on these two anchors. If accountability is placed directly on the portfolio artifacts, the portfolio practice will collapse. In other words, don’t use the quality of the actual portfolio artifacts for accountability. Attaching consequences to the actual artifacts means that learners will expect precise specifications regarding those artifacts, and then demand exhaustive feedback on whether the artifacts meet particular criteria. And if an instructor’s success is based on the quality of the artifacts, that instructor will comply. Such classrooms are defined by an incessant clamor from learners asking “Is this what you want???”

When portfolios are positioned this way (and they often are), they may or may not represent what students actually learned and are capable of. Positioned this way, the portfolio is more representative of (a) the specificity of the guidelines, (b) students’ ability to follow those guidelines, and (c) the amount of feedback they get from the instructor. Accountability-oriented portfolios position disciplinary knowledge as something to be competitively displayed rather than something to be learned and shared, and they position students as competitors rather than supporters. Perhaps most tragically, attaching consequences to artifacts positions instructors (awkwardly) as both piano tuners and gatekeepers. As many instructors (and ex-instructors) know, doing so generates massive amounts of work. This is why it seems that many portfolio-based teacher education programs rely so heavily on doctoral students and adjuncts who may or may not be qualified to teach the courses. The more knowledgeable faculty members simply don’t have the time to help students with revision after revision of their artifacts as students struggle to create the perfect portfolio. This is the result of positioning portfolios for production.

Productive Positioning Within Portfolios
Portfolios are more useful when they are positioned to support reflection. Instead of grading the actual artifacts that students create, any accountability should be associated with students’ reflection on those artifacts. Rather than giving students guidelines for producing their artifacts, students need guidelines for reflecting on how each artifact illustrates their use of the “big ideas” of the course. We call these relevant big ideas, or RBIs. The rubrics we provide students for their artifacts essentially ask them to explain how their artifact illustrates (a) the concept behind the RBI, (b) the consequences of the RBI for practice, and (c) what critiques others might have of this characterization of the RBI. For example:

  • Students in my classroom assessment course never actually “submit” their wikifolios of example assessments. Rather, three times a semester they submit a reflection that asks them to explain how they applied the RBIs of the corresponding chapter.
  • Students in Taiga world in Quest Atlantis submit their quests for review by the Park Ranger (actually their teacher, but they don’t know that). But the quest instructions (the artifact guidelines) also include a separate reflection section that asks students to reflect on their artifact. The reflection prompts are designed to indirectly cue them to what their quest was supposed to address.
  • Students in Becky Rupert’s English class are provided a rubric for their remixes that asks them to explain how the artifact illustrates how an understanding of genre allows a remix to be more meaningful to particular audiences.

Assessing the resulting reflections positions portfolios, students, and teachers in ways that strongly support participation. For example, if a particular student’s artifact actually does not lend itself to applying the RBIs, my classroom assessment students can simply indicate that in their assignment. This is important for at least three reasons:

  1. it allows full individualization for students and avoids a single ersatz assignment that is only half-meaningful to some students and mostly meaningless to the rest;
  2. understanding if and how ideas from a course do not apply is a crucially important part of expertise;
  3. the reflection itself provides more valid evidence of learning, precisely because it can include very specific guidelines. We give students very specific guidelines asking them to reflect on the RBIs conceptually, consequentially, and critically.

For example, the mathematics teachers in the classroom assessment course are going to discover that it is very difficult to create portfolio assessments for their existing mathematical practices. Rather than forcing them to do so anyway (and giving them a good grade for an absurd example), they can instead reflect on what it is about mathematics that makes portfolio assessment so difficult, and gain some insights into how they might more readily incorporate project-based instruction into their classes. The actual guidelines for creating good portfolios will be there in the book when they need them; reflecting on those guidelines more generally will set them up to use them more effectively and meaningfully in the future.

Another huge advantage of this way of positioning portfolios is that it eliminates a lot of the grading busywork and allows more broadly useful feedback. In the Quest Atlantis example, our research teacher Jake Summers of Binford Elementary discovered that whenever the reflections were well written and complete, the actual quest submission would also be well done. In the inevitable press for time, he just started looking at the reflections. Similarly, in my classroom assessment course, I will only need to go back and look at the actual wikifolio entries when a reflection is incomplete or confusing. Given that the 30 students each have 8 entries, it is impossible to carefully review all 240 entries and provide meaningful feedback. Rather, throughout the semester, each student has been getting feedback from group members and from me (as they specifically request and as time permits). Because the artifacts are not graded, students understand the feedback they get as formative rather than summative, and not as instructions for revision. While some of the groups in class are still getting the hang of it, many of the entries are getting eight or nine comments, along with comments on comments. Because the entries are wikis, it is simple for the originator to go in and revise as appropriate. These students are starting to send me messages that, for me, suggest that the portfolio has indeed been positioned for participation: “Is this what you meant?” (emphasis added). This focus on meaning gets at the essence of participatory culture.

In a subsequent post, I will elaborate on how carefully positioning portfolios relative to (a) the enactment of classroom activities and (b) external accountability can further foster participation.

Friday, July 17, 2009

getting students off of Maggie's farm

I stumbled across an interesting cross-blog conversation about Social Media Classroom and similar Learning Management Systems (LMS's). I have been, and continue to be, a strong and vocal supporter of Social Media Classroom (SMC), Howard Rheingold's Drupal-based, open-source educational technology intended to support participatory practices in formal learning settings.

Most significantly for me, it was participation in SMC that led to my passion for all things open-source. This is not a trivial thing: If participation in an LMS fosters a disposition toward increased openness, collaboration, and sharing, then it's clearly putting its money where its mouth is.

Blogger and computer scientist Andre Malan writes that he recently took SMC for a spin around the block and found it impressive in some ways and lacking in others. He writes:

  1. It seems to be closed off and private by default (although this may have just been the system I used). If outsiders can participate (as has been shown by Jon Beasley-Murray, Jim Groom and D’Arcy Norman) magic can happen. We need to let the world see what students are doing in university.

  2. The “Social Media Classroom” is missing one little word in the title. A game changer would rather be a “Social Network Media Classroom”. Although students can edit their own profiles in the Social Media Classroom, there is no way to form groups or to add people to their network. The network is often the most powerful part of any social media applications and it is a terrible oversight to not include it.


  3. The training wheels don’t come off. This application is great for students who do not know of, or use social media tools. However, it sucks for those that do. They are not able to use their current networks or applications. Most people who have blogs would want to use their own blogs for a class. Or use their own social bookmarking service. These people (the ones who would be very useful in this environment as they could guide their peers and instructors in the use of social media) will feel alienated and resent having to use the Social Media Classroom. If an education-based social media application is ever to be successful it has to provide an easy way for experienced students to show others the tricks of the trade and for novice students to take the wheels off of the bicycle and use real tools when they are ready for it.



D'Arcy Norman, writing from the University of Calgary, responded to the above points first in the comments section and then in a full post on his own blog. Norman doesn't have a problem with fostering student engagement within "walled gardens"--he writes:
The goal isn’t to publish content to the open internet. The goal is to engage students, in creation, discussion, and reflection. If they need a walled garden to do that effectively (and there are several excellent reasons for needing privacy for a community) then so be it. If they’d like to do it in the open, that’s just a checkbox on a settings page.


And, in the most spectacular finish to a post I've so far read anywhere, by anyone, Norman ends with this:
That option isn’t available for users of The Big Commercial LMS Platform. If it’s in an LMS, it’s closed. End of discussion. And people only gain experience in using the LMS, in farming for Maggie.


Norman is right and he's wrong. A closed LMS that lacks the capacity for open participation in a larger community turns learners into day laborers reduced to carting bushels of cognitive work from the fields to the barn and taking home only what they can hide away in their pockets. But in many ways, a "walled garden" isn't much better. Not to overstretch the metaphors here, but legend has it that Prince Siddhartha spent his youth inside of a walled garden. The kind of participation his surroundings supported was absolutely voluntary, and probably felt authentic, in the main. But when he left the garden, everything he knew to be true was true no longer.

One of the big failings of educational institutions is that they too often offer a beautiful walled garden. Inside the garden, food is abundant, and everybody eats equally well. (Well, that depends on the garden you've walked into, how you got there, how long you can stay, and whether you have comparable walled garden experience in your past.)

Sure, participation in a closed system engages students "in creation, discussion, and reflection." This is, I agree, a necessary component of higher education. But I disagree with Norman that this type of participation is sufficient. In fact, creation, discussion and reflection are only useful learning experiences insofar as they support learners' ability and willingness to engage with wider, more public, and less protected communities of practice. This means that publishing content on the open internet should--indeed, must--be a key curricular element. The internet isn't a garden; it's an ecosystem complete with backlots, busted glass, some ragged sunflowers and lots of rich material ripe for harvesting--but only if you've learned what it takes to grow and then harvest that material.

Thursday, July 9, 2009

Participatory Assessment for Bridging the Void between Content and Participation

Here at Re-Mediating Assessment, we share our ideas about educational practices, mostly as they relate to innovative assessment practices, and mostly then as they relate to new media and technology. In this post, I respond to an email from a colleague about developing on-line versions of required graduate-level teacher education courses.

My colleague and I are discussing how we ensure coverage of “content” in proposed courses that focus more directly on “participation” in actual educational practices. This void between participation (in meaningful practices) and content (as represented in textbooks, standards, and exams) is a central motivation behind Re-Mediating Assessment. So it seems worthwhile to expand my explanation of how participatory assessment can bridge this void and post it here.

To give a bit of context, note that the course requirements of teacher education programs are constantly debated and adjusted. From my perspective, it is reasonable to assume that someone with a Master’s degree in Education should have taken a course on educational assessment. But it seems equally reasonable to have had a course on, say, Child Development. It simply may not be possible to require students to take both classes. Because both undergraduate and graduate teacher education majors have numerous required content-area courses (i.e., math, English, etc.), there are few slots left for the other courses that most agree they need. So the departments that offer these other required courses have an obvious obligation to maintain accountability over the courses they offer.

I have resisted teaching online because previous courseware tools were not designed to foster participation in the meaningful discourse that I think is so important to a good course. Without a classroom context for discourse (even conversations around a traditional lecture), students have few cues for what matters. Without those cues, assessment practices become paramount in communicating the instructor’s values. And this is a lot to ask of an assessment.

This is why, in my observation, online instruction heretofore has mostly consisted of two equally problematic alternatives. The first is the familiar model of pushing content out to students: “Here is the text, here are some resources, here is a forum where you can post questions, and here is the exam schedule.” The instructors log on to the forums regularly and answer any questions, students take exams, and that is it. Sometimes these courses are augmented with papers and projects, perhaps even collaborative ones; hopefully students get feedback, and they might even use that feedback to learn more. But many online courses are essentially fancy test prep. My perceptions are certainly biased by my experiences back in the 90s, in the early days of online instruction. The Econ faculty where I was working could not figure out why the students who took the online version of Econ 101 always got higher exam scores than the face-to-face (FTF) students, but almost always did far worse in the FTF Econ 201. This illustrates the problem with instruction that directly prepares students to pass formal exams. Formal exams are just proxies for prior learning, and framing course content entirely around tests (especially multiple-choice ones) is just a terrible idea. Guessing which of four associations is least wrong is still an efficient way of reliably comparing what people know about a curriculum or a topic. But re-mediating course content to fit into this format makes it nearly useless for teaching.

The other extreme of online instruction is “project based” classes that focus almost entirely on developing a portfolio of course-related projects. These approaches seem particularly popular in teacher education programs. The problem with online portfolios is that the lack of FTF contact requires the specifications for the portfolios to be excruciatingly detailed. Much of the learning that occurs tends to be figuring out what the instructor wants in order to get a good grade. The most salient discourse in these classes often surrounds the question “Is this what you want?” These classes are usually extremely time-consuming to teach because the accountability associated with the artifacts leads students to demand, and instructors to provide, tons of detailed feedback on each iteration of the artifacts--so much so that the most qualified faculty can’t really afford to teach many of these courses. As such, these courses are often taught by graduate students and part-time faculty who may not be ideal for communicating the “Relevant Big Ideas” (RBIs, or what a learning scientist might call “formalisms”) behind the assignments, and who instead just focus on helping students create the highest-quality artifacts. This creates a very real risk that students in these classes may not actually learn the underlying concepts, or may learn them in a way that is so bound to the project that they can’t be used in other contexts. In my observation, such classes seldom feature formal examinations. Without careful attention, lots of really good feedback, and student use of feedback, students may come away from the class with a lovely portfolio and little else. Given the massive investment in e-Portfolios in e-learning platforms like Sakai, this issue demands careful attention.
(I will ask my friend Larry Mikulecky in Indiana’s Department of Culture, Communication, and Language Education, who I understand has been teaching non-exam online courses for years and has reportedly developed considerable evidence of students’ enduring understanding.)

A Practical Alternative
I am teaching online for the first time this summer. The course is P540, Cognition and Learning, a required course for many M.Ed. programs. I am working like crazy to take full advantage of the new online resources for social networking that are now available in OnCourse, IU’s version of Sakai (an open-source collaborative learning environment designed for higher education). In doing so I am trying to put into place an online alternative that balances participation and content. I also plan to use some of the lessons I am learning in my Educational Assessment course this fall--which is partly what prompted the aforementioned conversation with my colleague. I want to put some of my ideas out there as they unfold in that class and seek input and feedback, including from my current students, who are (so far) patiently hanging with me as I refine these practices as I go.

In particular I am working hard to incorporate the ideas about participatory culture that I have gained from working with Henry Jenkins and his team at Project New Media Literacies over the last year. Participatory assessment assumes that you can teach more "content," and gather more evidence that students “understand” that content, by focusing more directly on participation and less directly on content. Theoretically, these ideas are framed by situative theories of cognition, which hold that participation in social discourse is the most important thing to think about, and that individual cognition and individual behavior are “secondary” phenomena. These ideas come to me from three Jims: Greeno (whose theorizing has long shaped my work), Gee (who also deeply influences my thinking about cognition and assessment, and whose MacArthur grant funded the aforementioned collaboration and indirectly supports this blog), and Pellegrino (with whom I did my doctoral studies of assessment, transfer, and validity, but who maintains an individual differences approach to cognition).

Per the curriculum committee that mandated a cognition and learning course for most master’s degrees for teachers, my students are just completing ten tough chapters on memory, cognition, motivation, etc. I use Roger Bruning’s text because he makes it quite clear and puts 5-7 “implications for teaching” at the end of each chapter. But it is a LOT of content for these students to learn, especially if I just have them read the chapters.

I break students up into domain groups (math, science, etc.), and in those groups they go through the 5-7 implications for teaching. Each group must use the forum to generate a specific example of each implication, then rank order the implications in terms of relevance, warrant those rankings, and post them to the OnCourse wiki. The level of discourse in the student-generated forums around the content is tremendous. Then the lead group each week synthesizes the postings of all five groups to come up with a single list. I have now also asked them to do the same with “things worth being familiar with” in each chapter (essentially the bolded items and any highlighted research studies). What I particularly like about the discussions is the way that the discourse around agreeing that an implication or topic is less relevant actually leads to a pretty deep understanding of that implication or idea. This builds on ideas I have learned from my colleague Melissa Gresalfi about “consequential engagement.” Struggling to conclude that an implication is least likely to impact practice makes it more likely that students will remember that implication if they find themselves in a situation where it is more relevant.

This participatory approach to content is complemented by four other aspects of my class. Illustrating my commitment to content, I include three formal exams that are timed and use traditional MC and short answer items. But I prioritize the content that the class has deemed most important, and don't even include the content they deem least important.

The second complement is the e-Portfolio entry each student has to post each week in OnCourse. Students have to select the one implication they think is most relevant, warrant the selection, exemplify and critique it, and then seek feedback on that post from their classmates. Again following Melissa’s lead, the e-Portfolio asks students for increasingly sophisticated engagement with the implication relative to their own teaching practice: procedural engagement (basically, explain the implication in your own words), conceptual engagement (give an example that illustrates what this implication means), consequential engagement (what are the consequences of this implication for your teaching practice? what should you do differently now that you understand this aspect of cognition?), and critical engagement (why might someone disagree with you, and what would happen if you took this implication too far?). I require them to request feedback from their classmates. While this aspect of the new OnCourse e-Portfolio tools is still quite buggy, I am persevering, because the mere act of knowing that a peer audience is going to read a post pushes students to engage more deeply. Going back to my earlier point, it is hard for me to find time to review and provide detailed feedback on 220 individual submissions across the semester. When I do review them (students submit them for formal review after five submissions), I can just look at the feedback from other students and the students’ own reflections on what they have learned for pretty clear evidence of consequential and critical engagement.

The third complement is the e-Portfolio that each student completes during the last five weeks of class. While each of the groups leads the class in the chapter associated with their domain (literacy, comprehension, writing, science, and math), students will be building an e-Portfolio in which they critique and refine at least two web-based instructional resources (educational videogames, webquests, the kind of stuff teachers increasingly are searching out and using in their classes). They select two or more of the implications from that chapter to critique the activities and provide suggestions for how each should be used (or whether it should be avoided), along with one of the implications from the chapter on instructional technology and one of the implications from the other chapters on memory and learning. If I have done my job right, I don’t need to prompt them toward consequential and critical engagement at this stage, because they should have developed what Melissa calls a “disposition” toward these important forms of engagement. All I have to do is require that they justify why each implication was selected, include the feedback from their classmates, and reflect on what they learned from that feedback. It turns out that consequential and critical engagement is remarkably easy to recognize in discourse, partly because it is so much more interesting and worthwhile to read than the more typical class discourse that is limited to procedural and conceptual engagement. Ultimately, that is the point.

Tuesday, July 7, 2009

Five tips for seeding and feeding your educational community

Dan Hickey's recent post on seeding, feeding, and weeding educators' networks got me thinking, for lots of reasons--not least of which is that I will most likely be one of the research assistants he explains will “work with lead educators to identify interesting and engaging online activities for their students.”

This got me a-planning. I started thinking about how I would seed, feed, and weed a social network if (when) given the chance to do so. As David Armano, the author of "Debunking Social Media Myths," the article that suggests the seeding, feeding, and weeding metaphor, points out, building a social media network is more difficult than people think—this is not an “if we build it, they will come” sort of thing. Designing, promoting, and growing a community takes a lot of work. People will, given the right motives, participate in the community for love and for free, but you have to start out on the right foot. This means offering them the right motivations for giving up time they would otherwise be spending on something else.

A caveat
First, know that I am a True Believer. I have deep faith in the transformative potential of participatory media, not because I see it as a panacea for all of our problems but because participatory media supports disruption of the status quo. A public that primarily consumes media gets the world that media producers decide to offer. A public that produces and circulates media expressions gets to help decide what world it wants.

Social media, because of its disruptive and transformative potential, is both essential and nigh on impossible to get into the classroom. This is precisely why it needs to happen, and the sooner it happens, the better.

But integrating participatory media and the participatory practices they support into the field of education is not a simple matter. Too often people push for introduction of new technologies or practices (blogging, wikis, chatrooms and forums) without considering the dispositions required to use them in participatory ways. A blog can easily be used as an online paper submission tool; leveraging its neatest affordances--access to a broad, engaged public, joining a web of interconnected arguments and ideas, offering entrance into a community of bloggers--takes more effort and different, often more time-consuming, approaches.

Additionally, while social networks for educators hold a great deal of promise for supporting the spread of educational practices, designing, building, and supporting a vibrant community of educators requires thinking beyond the chosen technology itself.

Five Tips for Seeding and Feeding your Community

With these points in mind, I offer my first shot at strategies for seeding and beginning to feed a participatory educational community. (Weeding, the best part of the endeavor, comes later, once my tactics have proven to work.)

1. Think beyond the classroom setting.
In the recently published National Writing Project book, Teaching the New Writing, the editors point out that for teachers to integrate new media technologies into their classrooms, they "need to be given time to investigate and use technology themselves, personally and professionally, so that they can themselves assess the ways that these tools can enhance a given curricular unit."

The emerging new media landscape offers more than just teaching tools--it offers a new way of thinking about communication, expression, and circulation of ideas. We would do well to remember this as we devise strategies for getting teachers involved in educational communities online. After all, asking a teacher who's never engaged with social media to use it in the classroom is like asking a teacher who's never used the quadratic equation to teach Algebra.

Anyone who knows me knows what a fan of blogging I am. I proselytize, prod, and shame people into blogging--though, again, not because I think blogging is the best new practice or even necessarily the most enjoyable one. Blogging is just one practice among a constellation of tools and practices being adopted by cutting-edge educators, scholars, and Big Thinkers across all disciplines. Blogging was, for me, a way in to these practices and tools, and I do think blogging is one of the most accessible new practices for teacherly / writerly types. The immediacy and publicness of a blogpost is a nice preparation for increased engagement with what Clay Shirky calls the “publish, then filter” model of participatory media. This is a chaotic, disconcerting, and confusing model in comparison to the traditional “filter, then publish” model, but getting in sync with this key element of participatory culture is absolutely essential for engaging with features like hyperlinking, directing traffic, and identifying and writing for a public. In a larger sense, connecting with the publish, then filter approach prepares participants to join the larger social networking community.

2. Cover all your bases--and stop thinking locally
One of the neatest things about an increasingly networked global community is that we're no longer limited to the experts and expertise within our physical reach. Increasingly, we can tap into the knowledge and interests of like-minded folks as we work to seed a new community.

Backing up a step: It helps, in the beginning for sure but even more so as a tiny community grows into a small, then medium-sized, group, to consider all of the knowledge, experience, and expertises you would like to see represented in your educational community. This may include expertise with a variety of social media platforms, experience in subject areas or in fields outside of teaching, and various amounts of experience within the field of education.

3. In covering your bases, make sure there's something for everyone to do.
Especially in the beginning, people participate when they a.) have something they think is worth saying, b.) feel that their contributions matter to others, and c.) can easily see how and where to contribute. I have been a member of forums where everybody has basically the same background and areas of expertise; these forums usually start out vibrant, then descend into one or two heavily populated discussion groups (usually complaining or commiserating about one issue that sticks in everyone's craw) before petering out.

Now imagine you have two teachers who have decided to introduce a Wikipedia-editing exercise into their classrooms by focusing on the Wikipedia entry for Moby-Dick. Imagine you have a couple of Wikipedians in your network who have extensive experience working with the formatting code required for editing; and you have a scholar who has published a book on Moby-Dick. This community has the potential for a rich dialogue that supports increasing the expertise of everybody involved. Everybody feels valued, everybody feels enriched, and everybody feels interested in contributing and learning.

4. Use the tool yourself, and interact with absolutely everybody.
Caterina Fake, the co-founder of Flickr, says that she decided to greet the first ten thousand Flickr users personally. Assuming ten thousand users is several thousand more than you expect in your community, you should have the time to imitate Fake's example. It also helps to join in on forums and other discussions, especially if one emerges from the users themselves. Students are not the only people who respond well to feeling like someone's listening.

Use the tool. Use the tool. Use the tool. I can't emphasize enough how important this is. You should use it for at least one purpose other than seeding and feeding your community. You should be familiar enough with it to be able to answer most questions and do some troubleshooting when necessary. You should be able to integrate new features when they become available and relevant, and you should offer a means for other users to do the same.


5. Pick a tool that supports the needs of your intended community, and then use the technology's features as they were designed to be used.

Though I put this point last, it's the most important of all. You can't--you cannot--build the right community with the wrong tools. Too often, community designers home in on a tool they have some familiarity with or, even worse, a tool that they've heard a lot about. This is the wrong tack.

What you need to do is figure out what you want your community to do first, then seek out a tool that supports those practices. If you want your community to refine an already-established set of definitions, approaches, or pedagogical tenets, then what you're looking for is a wiki. If you want the community to discuss key issues that come up in the classroom, you want a forum or chat function. If you want them to share and comment on lesson plans, you need a blog or similar text editing function.

Once you've decided on the functions you want, you need to stick with using them as god intended. Do not use a wiki to post information that doesn't need community input. Don't use a forum as a calendar. And don't use a blog for forum discussions.

It's not easy to start and build a community, offline or online. It takes time and energy and a high resistance to disappointment and exhaustion. But as anybody who's ever tried and failed (or succeeded) to start up a community knows, we wouldn't bother if we didn't think it was worth the effort.

Sunday, July 5, 2009

On collaborative platforms for sharing educational practices

I've been in conversation with lots of educators recently about strategies for developing and supporting collaborative communities of teachers within various social networks online. Most recently I am talking with IU Mathematics Education Professor Cathy Brown about the lovely site that she has created in Moodle to support the math teachers who are teaching at the New Tech High Schools in Indiana. We are going to meet to see if some of the ideas we have been developing about participatory activities and assessment might help New Tech teachers use the site to do what they are doing--helping integrate mathematics into interesting and engaging projects. Because Indiana is now rolling out End of Course assessments in Algebra (along with English and Biology), I assume that these teachers are under significant pressure to show not only that their students are passing (required to get credit for the course) but excelling. This creates an important tension that gets at the heart of what we care about here at Re-Mediating Assessment.

Though I'd like to say otherwise, there is unfortunately no perfect tool--no single network that magically fosters community, cooperation, and collaboration. Part of this is due to the fact that all platforms are designed to support only certain kinds of engagement and therefore have benefits and drawbacks inherent to them; the other factor is that too often, people try to bend a community to the affordances of the technology instead of finding a tool or set of tools that align most closely with the needs of the community. As for platforms, I have bounced around among several that have distinct advantages and disadvantages. I want to take a minute to share my experiences and then make the point I want to make.

I used SocialMediaClassroom for my graduate classes in Spring 2009, and that was very informative and helpful. One of the great things about using it was that it hooked us up with its sponsor, social networking pioneer Howard Rheingold, and his deep and interesting community who kibitz at his installation of SMC at http://socialmediaclassroom.com/. It also hooks you up with the open-source Drupal community, which also has a lot of potential. It was a bit buggy, which was not surprising as it was an early-stage open source program. Sam Rose did a tremendous job setting it up and was really helpful both in getting it installed and then in working out the many bugs that resulted from my ignorance. MacArthur’s Digital Media and Learning initiative funded the initial development and is using it in the DML hub, which is also important.

This summer I have been using Indiana University's OnCourse CL, an online collaborative learning environment designed through the open-source Sakai Project. OnCourse brings with it the whole Sakai community and is very stable. Now that it has e-Portfolios and wikis, it has a lot of potential for the kinds of participatory activities and assessments that are so important to me. Stacy Morrone has pushed hard on the e-Portfolio features, and they really have tremendous untapped potential. A big personal advantage for me in using OnCourse is the tremendous support that I get from the IU staff, who are quite committed to it. The Learning Sciences graduate program just got a grant to expand our online course offerings; we aim to use this to build a strong community of scholars around these courses, and we will be using OnCourse.
The big drawback with OnCourse is that it is so closed--it only supports participation from IU affiliates and therefore restricts participation across multiple institutions. Case in point: I was planning on having the students in my Cognition and Learning course seek feedback from at least one outside expert or peer on the e-Portfolios that each of them is drafting. The author of our textbook, Roger Bruning, has even agreed to review some. But for non-IU folks to do so, they have to register for guest accounts. I have to do the same all the time so I can view my class as a student (another hassle of OnCourse), and I know it is a huge hassle. I have to get a new password every time. So I really can't include that in the course requirements, as it would cause a revolt and a lot of headaches. Of course, the beauty of the Sakai platform is that I should be able to build and mount my own version for this. I will keep you posted!

For the last year, we have been working with an ELA curriculum designed by Project New Media Literacies, a project headed by media scholar Henry Jenkins and funded by the MacArthur Foundation's Digital Media and Learning Initiative. Our collaboration with Project NML revolved around a site in Ning which, like Moodle, is very popular with teachers. (Ning has dominated the "best educational use of a social networking service" category of the Edublogs Awards for the last two years: In 2008, 9 out of the 10 finalists were Ning-based, and in 2007 all ten finalists were based in Ning.) Our thoughts are influenced as usual by Clay Shirky. In Here Comes Everybody he pointed out that "there are no generically good tools, only tools that are good for certain purposes."

The point I want to make here is that focusing too much on the actual hub ends up as technological determinism--and leads to efforts to squeeze the community into the tool instead of using the tool to support the community. We must be much more focused on the participatory cultures and practices that the networks support. Often, this means supporting layered use of various technologies, according to the interests, needs, and dispositions of community members. In fact, the most important evidence that you have established a participatory culture around a network is that the practices you are fostering in your network spread to other networks. In other words, if you lurk on other networks, you should see references to your network and its practices.