It seems that work has stopped on the creation of the CoC, so I’m forking the original forum post and cross-posting the GitHub PR here so we can discuss what we want to go into the CoC.
This is not a place to debate the existence of the CoC. This thread exists on the premise “if a CoC were to exist, what would it say?” This thread is only for discussing the language we want and how we plan on enforcing it.
While this CoC is under construction, I suggest we collaboratively edit it in this git repo. Once we have some amount of consensus on the CoC, we can move it to a more official location.
As a note, I’m actively incorporating feedback from the various places we’ve discussed it, so if you look in the next day or two, some things will be missing. This wasn’t an oversight; I just wanted to alert people as soon as possible that I have started doing this so they can participate.
I have been attempting to merge all the comments and points raised in the various threads and public and private chats, but those are a little unstructured. I feel like I’m reverse engineering intent by compiling the various DOs and DON’Ts. I think we should take a step back, start by describing what the CoC needs to address, and then work from there.
The CoCs we like so far and are using as models are (in alphabetical order to avoid assuming preference):
The horizontal SecureDrop Community cannot do that because no one has authority, by definition.
It covers all people who proclaim themselves to be part of the SecureDrop Community. Again, in a horizontal community this is a little tricky. In theory nothing prevents someone from claiming to be a member of the SecureDrop Community while behaving in contradiction to the Code of Conduct. And since they cannot be banned because there is no central authority, the only thing we can do (beyond mediating) is discuss and try to resolve the problem on a case-by-case basis. I don’t think anyone has a generic answer to this problem and I propose we acknowledge it exists. But I don’t see it as a blocker.
@heartsucker Thank you so much for moving the conversation forward. The provisions you identified make sense to me.
Mods will not contact law enforcement or other authorities without the victim’s consent
I would add the same qualifier there as to the previous bullet point (“imminent threat”).
Note that any kind of face-to-face event should have CoC representatives who can be contacted in person by event attendees. Those people may or may not be permanent members of the “mods” group, but they should be familiar with the CoC and the history of its enforcement.
This makes it rather hard to enforce, then. If the community can’t ask someone to leave when they’re being abusive, then I think we wouldn’t be doing enough to protect victims. What would you propose?
I suppose that could be rephrased to say offenders “will be kicked off channels the mods control, like Gitter or GH” or asked to leave SD events if they show up. The community is horizontal enough that someone else could host an SD hackathon and allow them to attend.
Yes yes. I was just getting bullets down for simplicity.
Then would you suggest that we add a note that mods can and should delegate responsibility during IRL events so that there’s a point person who can handle things during events?
Yes. Without a central authority, enforcement is impossible. Which is also why horizontal communities are interesting.
Members of the community, including the point of contact, can certainly ask someone to leave, if and when all other attempts to preserve a civilized conversation and a safe environment failed. And it is possible the person will refuse because there is no central authority. But since horizontal communities do not exist in Free Software, it is quite difficult to guess how this will go down. I propose we take our chances and try our best to be patient, open and inventive when and if that case presents itself.
Alternatively, I think organizers of events should be able to subscribe formally to this CoC, which would mean they are tasked with in-person enforcement and debriefing with a standing group such as a CoC committee (what you call “mods” in your summary).
Regarding @dachary’s point, while it’s true that enforcement can be tricky in a decentralized community, the notion of “subscribing” to a CoC could scale to multiple uses. For example, the operator(s) of this forum could subscribe to the CoC, which would mean that they are tasked with enforcement actions in this context. FPF could be tasked with enforcement actions in the FPF-operated SecureDrop GitHub repository. And so on.
The relationship between folks implementing enforcement actions (orgs/individuals subscribing to the CoC who operate spaces used by the SecureDrop community) vis-a-vis a standing CoC committee would need to be clarified, but I think that’s doable. Basically, the standing committee would deal with any non-emergency situation and sharpen policies and processes over time.
I guess what I’m trying to prevent is parts of the SecureDrop community choosing to not have or not enforce the CoC. If we’re decentralized, we (those of us talking right now) have no way to tell others they must use our CoC, but they might still host SecureDrop events. How do we work around that? Would this forum be a reference point for who has or has not subscribed to the CoC thus making us a de facto central authority on the matter?
Or maybe there’s no clean answer right now and the best we can do is finish up this CoC and push for adoption.
I think the “SecureDrop Community” is developing a robust identity through these forums, the GitLab and Weblate instances, the Liberapay account, and so on. If individuals set up new social infrastructure in support of the community and bring it to our attention, we can request that they also subscribe to the CoC. If they don’t, we can choose not to use said infrastructure.
FPF does have the ability to enforce the trademark if needed to prevent misleading or malicious uses of the term “SecureDrop”, but I think we should only resort to that sort of top-down enforcement if absolutely necessary.
Indeed. CoC are mostly discussed in the context of centralized organizations and enforcement is a byproduct of centralization.
With a decentralized community, enforcement is the sum of what individual members decide to do, either unilaterally or by reaching out for consensus before acting. It already happens and will continue to evolve. As @eloquence mentions, the community develops an identity over time. It has inertia, you can see patterns emerging, etc. It is organic and imperfect but I believe it will be more robust and sustainable than the centralized model.
If that was perceived as a possibility, it would mean there is an ultimate centralized control operated by FPF. And the SecureDrop Community would therefore no longer be decentralized. We should instead operate under the assumption that FPF won’t be able to operate any kind of unilateral control over the community and find creative ways to make it healthy and sustainable at the same time.
@heartsucker Thanks for driving this forward! The preamble you posted is an excellent start. We have much more work to do, and the discussion above provides several strong examples to inform next steps. Let’s continue to poke for frequent review, e.g. reminding others in Gitter about proposed changes, so we can maintain motion on this front.
A note on horizontal organizing and enforcement: enforcement does not need to involve centralization. A horizontally organized community comes to consensus on what behaviors are and are not acceptable, agrees that people with problematic behaviors should be expelled (not all behaviors result in this action, of course; a gentle course correction is likely sufficient for minor issues), and agrees on how this will happen. This can all be done via consensus (it just involves some very long meetings).
If the community can’t ask someone to leave if they’re being abusive, then I think we wouldn’t be doing enough to protect victims.
Members of the community, including the point of contact, can certainly ask someone to leave, if and when all other attempts to preserve a civilized conversation and a safe environment failed. And it is possible the person will refuse because there is no central authority.
For example, in this situation, if as a community we decided that someone’s behavior was so problematic that they should leave, then we would politely ask them to leave, and then kick/ban them from all community spaces if they do not do that. If it sounds harsh to anyone, I would remind you that keeping toxic and very problematic people in one’s community is far, far worse.
A horizontally organized community comes to consensus on what behaviors are and are not acceptable, agrees that people with problematic behaviors should be expelled (not all behaviors result in this action, of course; a gentle course correction is likely sufficient for minor issues), and agrees on how this will happen.
First, I want to clarify that being horizontally organized does not absolve the greater community of its responsibility to enforce the CoC. In fact, I would argue that a horizontally organized community has a greater responsibility to enforce a CoC than a hierarchical organization. If a community says we all wish to be horizontal, then we all have a responsibility to make sure everyone in that community feels welcome. If the community cannot do this, then “horizontalism” just re-creates a situation where a person who is truly toxic is allowed to continue bad behavior without consequences.
Regarding the content of the Code of Conduct, although my preference still is that we reuse the OpenStack Code of Conduct, I’m happy with whatever is agreeable to people actively working on that and I’m grateful that @heartsucker is driving this.
As long as it is short enough so people can actually read it and simple enough so people can actually understand what it stands for, I think it will serve its purpose.
We cannot foresee all use cases, nor can we predict how to push a toxic person out, but I’m sure we will find ways to do that collectively, as @bmeson so eloquently puts it, or by various other means as suggested by @redshiftzero, including lengthy discussions. My hope is that toxic people will keep away from our community because it is abundantly clear they will not prosper.
+1 @redshiftzero. In my opinion, the single most important aspect of a CoC is the ability to ban toxic people from the community.
Tor Project used to be an incredibly toxic, unwelcoming, and, for many women, dangerous community to participate in. A group of whistleblowers put a massive amount of work into banning Jake and his apologists from the community. It resulted in a complete restructuring of the organization, the resignation of the whole board, the formation of a membership policy, a Community Council that meets every week and whose members are voted in, and finally a Code of Conduct and Statement of Values. Years later, the community is still dealing with the aftermath of Jake (not to mention his victims still get online abuse for speaking out against him), but it’s in a much stronger place now, and it’s no longer dangerous for some people. I think the reason this was (and still is) such a painful process for Tor is that they didn’t have a mechanism to ban Jake.
Tor is also a horizontal community, and they solved the problem by creating a membership policy. Once you have membership, the community can vote or come to consensus on revoking someone’s membership. I’m sure there are other ways to achieve the same result that don’t involve formal membership.
I’m hoping that a strong CoC will make sure that, if abusive people become a part of the SecureDrop community, we’ll have a path forward to deal with them without having to go through what Tor went through.