by Marc Resnick, Professor of Human Factors at Bentley University
As I concluded in Part I, nothing can destroy the value of a peer-to-peer support group faster than harmful information, exploitative behavior, and disrespectful interactions. Once inaccurate information or negative interactions begin to spread across a network, they are difficult to stop. It is critical to prevent them from gaining any traction, usually through the presence of moderators with the authority to intervene when necessary.
Moderators have three primary responsibilities. First is to prevent the development of negative information cascades. Negative information includes both inaccurate medical information and inappropriate therapy recommendations. Because motivated reasoning and emotional information processing are prevalent in these domains, information cascades can develop frequently and spread rapidly if they are not actively countered. The best way to interrupt an information cascade is to post an immediate reply that links to the correct information on a trusted content source, written at a reading level accessible to all members.
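To make this corrective-reply practice concrete, here is a minimal sketch of how a moderation tool might draft such replies. The topic keys, placeholder URLs, and function name are all hypothetical; the article does not specify any particular source list.

```python
# Hypothetical mapping from flagged topics to vetted content sources.
# The URLs are placeholders; a real network would curate its own list.
TRUSTED_SOURCES = {
    "dosage": "https://example.org/vetted/dosage",
    "alternative_therapy": "https://example.org/vetted/alternative-therapy",
}

def cascade_reply(topic: str) -> str:
    """Draft an immediate corrective reply that links to a trusted source,
    interrupting an information cascade before it spreads."""
    source = TRUSTED_SOURCES.get(topic)
    if source is None:
        # No vetted source on file: flag for human moderator review instead.
        return "A moderator will review the claim above shortly."
    return (
        "Please note: accurate, easy-to-read information on this topic "
        f"is available at {source}."
    )
```

The key design point is speed: the reply is posted immediately from a pre-vetted source list, rather than waiting for a moderator to research the claim while the cascade spreads.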
Second, moderators must immediately contradict endorsements of behaviors that conflict with the healthy behaviors the network is trying to promote. Behavioral cascades are also sensitive to motivated reasoning and emotional information processing. Members can be tempted to model the behavior of their contacts, even when it is unhealthy. This is especially true for popular members. Behavioral cascades can be interrupted simply with an immediate reply reminding readers of the counterproductive effects of the suspect behavior. Members who persist should receive a direct warning.
Third, moderators should be quick to identify disrespectful posts on the public forum, delete them, warn the members who posted them, and keep records so that repeat offenders can have their posting privileges limited or their membership revoked. For one-to-one communication channels, the network must rely on individuals to cut off links with unhealthy contacts; moderating one-to-one communication would violate the privacy those channels require.
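The record-keeping and escalation process above can be sketched as a small data structure. This is a minimal illustration, not a prescribed design: the thresholds, class name, and action labels are assumptions for the example.

```python
from dataclasses import dataclass, field

# Illustrative thresholds; a real network would tune these.
WARN_LIMIT = 2    # warnings before posting privileges are limited
REMOVE_LIMIT = 4  # warnings before membership is revoked

@dataclass
class ModerationLog:
    """Tracks warnings per member so repeat offenders escalate
    from a warning, to limited posting, to removal."""
    warnings: dict = field(default_factory=dict)

    def record_violation(self, member_id: str) -> str:
        """Log a disrespectful post and return the action to take."""
        count = self.warnings.get(member_id, 0) + 1
        self.warnings[member_id] = count
        if count >= REMOVE_LIMIT:
            return "remove_membership"
        if count >= WARN_LIMIT:
            return "limit_posting"
        return "warn"
```

The point of keeping the log, rather than handling each post in isolation, is that a single disrespectful post warrants only a warning, while the same post from a repeat offender justifies a harsher response.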
Ideally, a professional moderator would monitor all discussions in real time and intervene immediately when necessary. However, peer-to-peer support groups are often non-profit, underfunded, and a poor fit for revenue-generating advertising. These networks therefore often need to rely on user-based moderation and self-regulation. This is a challenge because members are anonymous when they join, intentionally private, and it can take time to determine which of them can be trusted with moderation responsibilities.
A professional moderator is therefore needed when the support network is first established. As the network's membership grows, the professional moderator can identify knowledgeable, responsible, and willing members to whom these duties can be transferred. A reputation management system can support this transition by allowing members to rate and recommend each other for moderation authority. Design guidelines for effective reputation management are available in the social networking literature.
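One simple form such a reputation management system could take is sketched below, assuming the professional moderator seeds the moderator pool and members are promoted once enough distinct peers endorse them. The class name, threshold, and promotion rule are illustrative assumptions, not guidance from the literature the article cites.

```python
from collections import defaultdict

# Illustrative threshold: distinct endorsements needed for moderation authority.
PROMOTION_THRESHOLD = 3

class ReputationSystem:
    """Members recommend each other for moderation authority; a member
    who gathers enough distinct endorsements becomes a moderator."""

    def __init__(self, seed_moderators):
        # Seeded with the initial professional moderator(s).
        self.moderators = set(seed_moderators)
        # candidate -> set of members who endorsed them
        # (a set, so each endorser counts only once per candidate).
        self.endorsements = defaultdict(set)

    def endorse(self, endorser: str, candidate: str) -> None:
        self.endorsements[candidate].add(endorser)
        if len(self.endorsements[candidate]) >= PROMOTION_THRESHOLD:
            self.moderators.add(candidate)

    def is_moderator(self, member: str) -> bool:
        return member in self.moderators
```

Counting only distinct endorsers is the minimal safeguard here; a fuller design might also weight endorsements by the endorser's own reputation, as the social networking literature suggests.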
With effective moderation, online social networks can be an effective medium for peer-to-peer support in healthcare. Peer-to-peer approaches reduce the expense of having medical professionals answer every patient question, concern, or need for reassurance. Furthermore, peer-to-peer support provides a kind of personal relationship that medical providers cannot offer. This series of articles presents some of the fundamental structural requirements for designing and implementing effective peer-to-peer support groups on a social networking platform.