Sunday, May 19, 2024

Can we embed dignity into social media?


I’m working on a philosophy paper with an ethicist named William Cochran. I’ll post a link to it once it’s written, but in the meantime I’ve decided to use this neglected space to think through parts of my work on the paper.

Namely, I’m trying to work out whether it’s possible or practical to embed dignity into social media. That’s a hard question to make precise, so my approach is to make use of Donna Hicks’s amazing work, which grew out of peace treaty negotiations; you can learn about it here or read her book Dignity.

Specifically for our purposes, Donna has a list of required conditions for dignity, which can be found here.

There are ten of them, and I was thinking of taking them on in bite-sized chunks: working with one or two at a time and thinking through how the design of social media’s algorithms, or the space on the platform itself, could be redesigned (if necessary) to confer that particular condition of dignity.

The thing I’ll say before beginning is that, as of now, I don’t think this is being done well, and as a result I consider the human experience on social media to be mostly bad if not toxic. And yes, I do understand that people get a lot out of it too, which is why we should try to make it better rather than abandon it.

Also, even if we do embed dignity into social media through an upheaval of design, which is hard enough to imagine, I do not think that means it will always be a great place to be. We should know by now that it’s a tool, and depending on how it’s used it can be wielded as a weapon, as the Rohingya in Myanmar learned in 2017.

Finally, I fully expect this to be hard, maybe impossible. But I want to try anyway, and I’d love comments from my always thoughtful readers if you think I’ve missed something or I’m being too blithe or optimistic. Thank you in advance.

So, let’s start with the first essential element of dignity:

Acceptance of Identity: Approach people as neither inferior nor superior to you; give others the freedom to express their authentic selves without fear of being negatively judged; interact without prejudice or bias, accepting how race, religion, gender, class, sexual orientation, age, disability, etc. are at the core of their identities. Assume they have integrity.
https://www.ikedacenter.org/thinkers-themes/thinkers/interviews/hicks/elements

The first part of this, the expression part, looks pretty straightforward. On a social media platform, we should be able to self-identify in various ways, and we should be able to control how we are identified. All of that is easy to program. The second part is where it gets tricky, though: how do we do so without fear of being judged? Fully half of the evil shit going on now on social media is related to ridiculous, bigoted attacks on the basis of identity. How do we protect a given person from that? Automated bots looking for hate speech do not and will not work, and having an army of underpaid workers scan for such speech is expensive and deeply awful for them.

It’s possible my experiment is already over, before it’s begun. But I have a couple of ideas nonetheless:

First, make it much harder to broadcast bigoted views. This could be done iteratively: first by hiding identity-related information from people who have not been invited into a particular space on the platform, and then by subjecting general broadcasts (of anything) to much higher scrutiny.
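To make the first step concrete, here is a toy sketch of the visibility idea: identity-related profile fields are hidden from anyone who hasn’t been invited into the space. This is purely illustrative; the function and field names are my own hypothetical choices, not any real platform’s API.

```python
# Toy sketch: identity-related fields are only visible to invited members.
# All names here (visible_profile, the field names) are hypothetical.

def visible_profile(profile, viewer, invited):
    """Return only the profile fields that `viewer` is allowed to see."""
    public_fields = {"handle"}          # everything else is identity-related
    if viewer in invited:
        return dict(profile)            # invited members see the full profile
    return {k: v for k, v in profile.items() if k in public_fields}


profile = {"handle": "knitfan", "religion": "Quaker", "pronouns": "she/her"}
invited = {"bob"}

assert visible_profile(profile, "bob", invited) == profile
assert visible_profile(profile, "stranger", invited) == {"handle": "knitfan"}
```

The point of the sketch is that the gating happens at read time: a stranger simply never receives the identity fields, so there is nothing for them to attack.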

There’s always been a balance struck in social media between making it easy to connect people, for the sake of building enough of a network to keep somebody interested in spending time there, and making sure unwanted people aren’t invading spaces and making them toxic for the group that’s happy to be there. Folks such as Facebook group moderators (and their counterparts on other platforms) do a lot of this work, for example.

So, here’s a model that might do the trick (one of many). Imagine a social media platform formed as a series of hotel rooms set off a main hallway: you don’t know who is inside yet, you have to apply to go in, and there’s a moderation system that will kick you out if you don’t follow the rules. That might be too much of a burden to be instant fun, but it also might lead to better conversations and far less dispersal of hate speech. Does such a platform already exist?
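The hotel-room model above can be sketched in a few lines of code. This is a minimal illustration under my own assumptions (the class name, the rule callable, the kick-out-on-violation behavior), not a description of any existing platform:

```python
# Minimal sketch of the "hotel rooms" model: apply to enter, and the
# room's own rules can kick you out. All names here are hypothetical.

class Room:
    def __init__(self, name, rules):
        self.name = name
        self.rules = rules          # callable: post text -> bool (True = allowed)
        self.members = set()
        self.applicants = set()

    def apply(self, user):
        # You can't see inside or post until a moderator admits you.
        self.applicants.add(user)

    def admit(self, user):
        self.applicants.discard(user)
        self.members.add(user)

    def post(self, user, text):
        if user not in self.members:
            raise PermissionError("apply and be admitted first")
        if not self.rules(text):
            # Breaking the room's rules gets you kicked out.
            self.members.discard(user)
            return False
        return True


# Usage: a room whose rule bans a (toy) word list.
banned = {"slur1", "slur2"}
room = Room("knitting", rules=lambda text: not banned & set(text.lower().split()))
room.apply("alice")
room.admit("alice")
assert room.post("alice", "I love cables")      # allowed
assert room.post("alice", "slur1") is False     # rule broken
assert "alice" not in room.members              # kicked out
```

The design choice worth noticing is that moderation is local to the room: each room carries its own rules, which is exactly what raises the follow-up question about rooms whose rules are themselves hateful.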

On the other hand, there are going to be plenty of folks who actually want to engage in bigotry. They would clearly set up their rooms to be hate-speech- and bigot-friendly. Would that be OK, or would there also need to be super-moderators who kick out entire rooms for violating rules?

Next, is there a third model somewhere in between the one that exists now, where you can pay to broadcast your views practically anywhere, and the much more zipped-up model I outlined above? The critical use case is that someone should be able to identify themselves in all sorts of ways without fear of being yelled at or judged.

I’m also kind of prepared to be told that this is just what we humans do, and there’s no way to build a policy that bypasses it. And when I say kind of, I just want to point out that on Ravelry, which is my knitting and crocheting community website, I don’t see a lot of this. I really don’t. And I think it’s because we’ve already got something to talk about, so we don’t have to name-call, because we’re busy.

