Johannes Castner posted in the group The Community
Here is an abstract I quickly flicked together, for an upcoming talk. I would love some feedback from the geniuses of the collective:
Abstract:
Biases, privacy concerns, and unexplainability have dominated the AI ethics discourse. Concern for them is necessary but insufficient. I arrive at the ethical necessity of participatory approaches to AI from Amartya Sen’s theory of justice, which puts human empowerment at the center. Participatory approaches are just now starting to prove themselves, and they are related to both gamification and collective intelligence. A serious ethical framework can in no way be reduced to avoiding harm from the particular afflictions of bias and privacy questions. It requires a positive ethical view of which AI should be built and what purposes it should serve. It also has to address the question of power: whose objectives are maximized? There are now multiple institutes founded on particular positive ethical theories, particularly human rights, utilitarianism and Rawlsian egalitarianism. Drawing on a story adapted from Amartya Sen’s book The Idea of Justice, I will show that these theories all have merits yet are often at odds with each other, thus making it impossible to be agnostic or neutral. Sen introduces a private domain in order to overcome Arrow’s impossibility theorem, which, among many such impossibility theorems, is relevant to questions of representation. By way of the private domain, we get around the ethical equivalence between various states of the world. Similarly, in a participatory approach, we can give participants authoritative weights according to the degree to which they are affected by an algorithm. This then outlines a practice of building AI systems that put ethics at the center and that deal with its paradoxes via participation.
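(For readers who don’t know Arrow’s theorem, here is a minimal illustration of the kind of impossibility the abstract refers to – a Condorcet cycle, the classic example behind Arrow-style results. The ballots and option names are invented for illustration, not taken from the talk. With three voters, pairwise majorities form a cycle, so there is no neutral "majority winner" to represent.)

```python
from itertools import combinations

# Three hypothetical voters, each ranking options A, B, C (best first).
ballots = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]

def majority_winner(x, y, ballots):
    """Return whichever of x, y a majority of voters ranks above the other."""
    x_wins = sum(b.index(x) < b.index(y) for b in ballots)
    return x if x_wins > len(ballots) / 2 else y

for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: majority prefers {majority_winner(x, y, ballots)}")
# A beats B, B beats C, yet C beats A: the majority preference is cyclic,
# so no option wins every pairwise contest.
```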
16 Comments
-
What you write is very deep from a philosophical perspective Johannes Castner. As you know, I tend towards a simplified mathematical/statistical view of the world, e.g. see attached article. You might find some useful middle ground in the form of Stuart Russell’s musings – https://www.bbc.co.uk/programmes/articles/3pVB9hLv8TdGjSdJv4CmYjC/nine-things-you-should-know-about-ai – which I’m sure you know well, so just a reminder in case you hadn’t considered them recently.
Small world indeed – I’m looking forward to your talk at the Inst. Science & Technology on the 26th October. I’ve asked the IST to put proper details on https://www.eventbrite.co.uk/e/ist-ai-seminar-tickets-426738806257 but people here might wish to register in any case knowing that it’ll be you.
You (and others) might also be interested in https://www.westminsterforumprojects.co.uk/conference/AI-in-the-UK-2022.
-
Thank you very much for the links! Yes, I come from the social sciences, and in my work I have always grappled with the complexity necessary to make sense of things (the famous demand for context by anthropologists) while doing my best not to introduce unnecessary and frivolous complexity of my own. I think the second part of the abstract is still too muddled, for example (for it all to make sense, one shouldn’t have to know about Arrow’s Impossibility Theorem) …so I’m still working on it.
-
Yes, I’d definitely simplify the abstract and aim to hook a suitable audience. You can introduce complexity in your actual talk, but not too much – no one will know your material as well as you, so less is definitely more in true Newspeak terms.
-
I think this is plainer and says exactly the same thing; what do you think, Richard Saldanha? Better?
Abstract:
Biases and privacy concerns, as well as unexplainability, have dominated the AI ethics discourse. Concern for them is necessary but insufficient. I arrive at the ethical necessity of participatory approaches to AI from Amartya Sen’s theory of justice, which puts human empowerment at the center. Participatory approaches are related to both gamification and collective intelligence, and they are just now starting to prove themselves in practical applications. A serious ethical framework can in no way be reduced to avoiding harm from the particular afflictions of bias and privacy questions. As soon as humans are affected in any way, it requires a positive ethical view of which AI should be built and what purposes it should serve. It also has to address the question of power: whose objectives are maximized? Drawing on an illustrative story adapted from Amartya Sen’s book The Idea of Justice, I will show that reasonable ethical demands can be at odds with each other, so that it is impossible to satisfy them all. What is needed then is representation. In questions of representation, unfortunately, a number of mathematical theorems prove optimal representation to be an impossible goal. We can mitigate these apparent impossibilities in a participatory approach if we give participants authoritative weights according to the degree to which they are affected by the algorithmic system we are building. This then outlines a practice of building AI systems which put ethics at the center and deal with its paradoxes via participation.
-
Perhaps a gross simplification of your words Johannes Castner so please feel free to ignore me: ”Privacy concerns, unfair biases and the inability to explain AI methods more generally have dominated the AI ethics discourse. Such concerns are necessary but insufficient to deal with all important ethical considerations. Those of you familiar with Amartya Sen’s Theory of Justice will understand why I regard ethics as paramount in AI design. In this seminar, I outline best practice in the building of any AI system by combining ethical considerations with a participatory approach to the design, development and implementation of AI systems.”
-
So how about this final one:
Privacy concerns, unfair biases and the inability to explain AI methods more generally have dominated the AI ethics discourse. Such concerns are necessary but insufficient to deal with all important ethical considerations. By introducing elements of Amartya Sen’s theory of justice, I will show why I regard forms of democratic representation as ethically paramount in AI design. In this seminar, I outline an ethical approach to building AI systems that is built on human participation, such that the system learns to take actions that represent the full range of the participants’ ethical views and makes fair compromises where required.
-
Let me entertain you with a summary generated by AI based on your abstract:
”There are lots of different ways to think about what’s ethical when it comes to building AI systems. Some people think that we should focus on avoiding harm, while others think that we should think about what would make the world a better place. There are also different theories about what fairness looks like. Some people think that everyone should have the same chance to be successful, while others think that we should focus on making sure that everyone’s basic needs are met.
One way to think about all of these different ideas is to look at a story from Amartya Sen’s book, The Idea of Justice. In the story, there are different characters with different ideas about what justice looks like. Some of the characters think that everyone should be treated the same, while others think that we should focus on making sure that everyone has what they need.
The point of the story is that there are merits to all of these different ideas, but they often conflict with each other. This means that it’s impossible to be neutral when it comes to thinking about what’s ethical.
One way to deal with this problem is to have a participatory approach, where we involve people who are affected by an algorithm in the decision-making process. This way, we can take into account different perspectives and come up with a solution that works for everyone.”
-
It’s interesting, but it made the story part up from whole cloth 🙂 …it’s this story …to give it away. To simplify should not be the same as to make more vague: maybe lose some less important details, but don’t generalise to the point where it could mean almost anything. The ending is completely vague in the AI version, in my view …but the beginning is pretty good actually.
-
While reading your latest abstract, many questions about the implementation, balance of representation and effectiveness of such AI models came to mind, which I take as a sign of its clarity and provocativeness. Looking forward to the talk.
-
It would quickly become too technical if I were to address the implementation, so I probably can’t really get into that. This paper gives a more in-depth sketch, but even that is still a sketch. Complete solutions would depend on the exact use case. But building (essentially co-creating) those sorts of applications is what I’d like to offer!
-
Why not join in with Johannes Castner on the 26th: https://bit.ly/3Ri6Mig @jesus, Ella Bernie et al.
-
Thanks for sharing Johannes, tagging some experts who might be able to offer some feedback – Richard Saldanha, Roland Szabo, Sabina Firtala, Andrea Isoni, DigiZhets, Tarun Rishi, Mike Smales, Chris Bracegirdle