To have a system in which people can take up different collaboration roles (i.e., neither 'everybody is the same' nor 'there are only two kinds of people - teachers and learners'), some kind of identity and reputation management is obviously necessary. So people who over time prove themselves to be good at, for example, copy-editing documents are increasingly given free rein to copy-edit documents, or are asked to do so.
Identities need not be universal or immutable for this to work - one might be known by different names and have a reputation for different things in different communities of practice. Reputations also need not be centrally set - I may be seen as having a different reputation by individuals A and B, because of the different networks of trust that these individuals are embedded in (e.g. several people A trusts can vouch for me, but nobody B trusts even knows me).
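The idea that A and B may see the same person's reputation differently can be sketched concretely. In this minimal, hypothetical example (all names and data are illustrative assumptions, not any real system's API), a viewer's impression of a stranger is simply the number of people the viewer trusts who are willing to vouch for that stranger:

```python
# Hypothetical sketch: reputation as seen from one person's network of trust.
trusts = {                      # who each person trusts directly
    "A": {"p", "q", "r"},
    "B": {"s", "t"},
}
vouches_for = {                 # who is willing to vouch for "me"
    "me": {"p", "q", "u"},
}

def local_reputation(viewer, target):
    """Count the viewer's trusted contacts who vouch for the target."""
    return len(trusts.get(viewer, set()) & vouches_for.get(target, set()))
```

Here A sees two trusted vouchers for "me" while B sees none - the same person, two different reputations, exactly as described above.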
Many community systems (such as Slashdot) have reputation management systems in which people accumulate credits (sometimes called 'karma') based on how others have rated them in the past.
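The accumulation mechanism is very simple at its core. This is a minimal sketch under the assumption of +1/-1 ratings (the names and rating scale are illustrative, not Slashdot's actual scheme):

```python
from collections import defaultdict

# Sketch of karma-style reputation: every rating a user's contributions
# receive accumulates into a single running score per user.
karma = defaultdict(int)

def rate(author, delta):
    """Record one rating (+1 or -1) against an author's karma."""
    karma[author] += delta

# Illustrative ratings accumulating over time.
for author, delta in [("alice", 1), ("alice", 1), ("bob", -1), ("alice", -1)]:
    rate(author, delta)
```

Real systems layer moderation rules, caps, and decay on top of this, but the underlying ledger of accumulated ratings is the same idea.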
Network-fostering systems such as Friendster take a different approach to reputation management, allowing people to indicate who they consider to be friends so that it is possible to decide if one wants to interact with somebody based on whether one has mutual friends.
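The Friendster-style decision reduces to a set intersection over declared friend lists. A minimal sketch with hypothetical data:

```python
# Hypothetical friend lists; the decision "should I interact with this
# stranger?" is answered by checking for mutual friends.
friends = {
    "you":      {"dana", "eli", "femi"},
    "stranger": {"femi", "gus"},
}

def mutual_friends(a, b):
    """Return the set of friends a and b have in common."""
    return friends.get(a, set()) & friends.get(b, set())
```

A non-empty result (here, one shared friend) gives grounds for trusting the introduction; an empty one does not.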
Reputation
Manifesto for the Reputation Society by Hassan Masum and Yi-Cheng Zhang is a fascinating article (published in First Monday) on the wider social implications of reputation systems.
From Martin Terre Blanche's Collaborative Learning Environments blog: Collaborative learning and collaborative work often (although by no means always) depend on people knowing who they're working with - which unfortunately often leads to cumbersome "log in" procedures and complicated permission systems. This is the sort of thing being addressed by people working on Federated Identity Management, which is all about sharing user identity and trust information among systems. Small groups of organisations can relatively easily negotiate policies and set up technologies for sharing user identities, but larger-scale identity sharing requires standard policies and protocols that allow systems to automatically share user information in an efficient and ethical way.
Not surprisingly, there are very different ideas about how to do this, and as usual Microsoft seems determined to play the role of the bad guy. In crude terms, their Passport scheme seems to involve them keeping a database of information about people which they then selectively share with others - maybe in future for a fee. The Liberty Alliance (a seriously creepy name if you ask me) is better in that it includes 160-plus companies and is more committed to open standards (there's even an open source implementation, SourceID, being developed). However, it's still very much oriented towards business concerns, so the focus is really on making things easier for them and not (despite their nice rhetoric) on protecting consumers' privacy.
A similar initiative with more of a higher-education slant is Shibboleth, which is focussed on determining "if a person using a web browser... has the permissions to access a resource" - and this permission info (rather than primarily the user's personal data) is what is passed among organisations. The Friend of a Friend (FOAF) Project takes a very different route.
FOAF is an XML vocabulary, similar to RSS, which allows one to create (and personally control on one's own server) a file of personal details (including an encrypted e-mail address and a list of 'friends') which can then be harvested by anybody, in much the same way as RSS. I don't understand these things well enough to know what FOAF is good at and where it falls down, but I definitely like the idea of having one's identity data radically distributed and under one's direct personal control rather than in central databases. A comment (also from the blog): Have a look at Athens. It's used to provide a good enough identity management system in the UK: http://www.athensams.net
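The 'encrypted e-mail address' mentioned above is, more precisely, a hash: FOAF's foaf:mbox_sha1sum property carries the SHA-1 digest of the "mailto:" URI, so a FOAF file can identify its owner without publishing the raw address. A minimal sketch of computing that digest (the address is a placeholder):

```python
import hashlib

# foaf:mbox_sha1sum is the SHA-1 digest of the full "mailto:" URI,
# letting a FOAF file identify a person without exposing the address.
def mbox_sha1sum(email):
    return hashlib.sha1(f"mailto:{email}".encode("ascii")).hexdigest()

digest = mbox_sha1sum("alice@example.org")  # 40-character hex string
```

Anyone harvesting FOAF files can match two files claiming the same digest without ever learning the underlying address (though a widely known address can still be hashed and looked up, which is one of FOAF's acknowledged privacy limits).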
Critiques of identity and reputation systems
In The Unspoken of Groups, David Weinberger (2003) argues that identity and reputation systems try to make explicit what cannot be made so:
"In general, making explicit does violence to what is being made explicit... Making things explicit isn't like unearthing an archaeological find that's just been sitting there, waiting to be dug up. Making explicit often – usually – means disambiguating and reducing complexity.
The reason is simple. The things of the world exist as they are only within deep, messy, inarticulate, shifting, continuous, fuzzy contexts. This is certainly true of human relationships, although I believe it's also true of all that we find on the earth, waiting in it, or promised above it. The analog world – the real world – is ambiguous. That's a source of its richness. In making a piece of it explicit, we make it less ambiguous and thus lose some of its value and truth."