Saturday, October 3, 2020

Formal Models for Ethics

As the reading time estimate crept over 30 minutes, without any end in sight, I was starting to feel like a bit of a blowhard.

The draft I’d originally intended to inaugurate this blog is still sitting there. Maybe I will condense and finish it (eventually), but I think it makes more sense to write less long-windedly about what I mean. Instead of trying to exhaust the subject matter, I’m going to offer a perspective, and present a more intuitive argument for it.

So, more briefly, I’d like to ask you to think, for a moment, about what makes humans ethically important. There are two parts to this question, hidden just under the surface. The first asks you to consider why humans are ethical agents. What makes it possible for them to do, want, or experience things that could fall under the umbrella of ethics? The second (in no particular order) asks you to consider why humans are ethical patients. How is it that what happens to them, what is wanted of them, or what is experienced about them might also be part of an ethical problem? (This agent/patient distinction I inherit from Coeckelbergh, whose book, Growing Moral Relations, is worth your time.)

I will add, before you start: ethics has been defined in a few different ways over the millennia, and I consider several of those definitions interesting.

Charles Taylor, for instance, describes ethics as pertaining not just to moral choices and obligations between people, but questions like “what is the good life?” and “what is good?”. His more expansive account, following Aristotle, moves closer to the Greek root, which refers to one’s character, and can be read to have a slant towards something resembling virtue ethics. Using Taylor, we could ask questions like “what is it good to love?” and not just “what is the right way to act?”

Conversely, followers of Nietzsche contrast ethics with morality, arguing that morality is about passing judgment on what people (or living beings) are and do, whereas ethics can be about affirming life as intrinsically self-justifying. From a (post-)Nietzschean point of view, life’s capacity to want, struggle, and change is, from the very beginning, what is good (maybe not in those words). Living well means trying to overcome the need to judge, and instead to explore and express what it is to be alive, for all its intensities and tragedies. Nietzsche is often read as asking his readers to abandon compassion, or to think only of themselves and never of the common good, but I don’t think that is exactly the case. Nietzsche asks us to live in pursuit of what is good, but not to define it as the enemy of the bad; rather, to live for its own sake, come what may, and to do what is good, defined in itself, rather than as a reaction.

I don’t know Coeckelbergh’s position on ethics broadly, but he considers morality to be embedded in social relationships themselves. Rather than what is right or wrong to do to a patient being about its properties (whether it has consciousness, for instance), defined in absolute terms, morality is about what relationships exist between an agent and a patient, and what kinds of relationships are good to have, given what the agent and patient already are to each other. Do I already mostly act like it’s a person? I should probably treat it like one. Coeckelbergh is interesting in that he tries to legitimate ideas that have, until recently, not had much hearing in the community of western philosophers, although citing him as a source for them over others might, fairly, be challenged. For instance, he would consider our dependence on our ecology grounds to entertain an obligation towards it. I haven’t finished his book yet, so I’m not aware of the extent of the advice he gives for working through problems in these relationships, but it seems fair to say that he doesn’t believe there’s a system of absolute rules out there, or that we should try to find final answers to ethical problems. Relationships are particular, and they evolve. We shouldn’t treat the absence of hard answers as a convenience; instead, we should consider relationships in their lumpiness, their particularity, their uncertainty, and live them as best we can.

All of these viewpoints are very different from our ordinary patterns of ethical thought. We are used to thinking that individuals, or sometimes social groups (such as nations), have rights that must be guaranteed to them, but that otherwise anything goes. Sometimes, referencing Isaiah Berlin, we might say we believe that this obligation is negative (there are some things that people are fundamentally allowed to do, and our only obligation is to stop people who try to mess with that), or that it is positive (there are some things that people should fundamentally be able to do, and we need to do our best to make that possible for them). Maybe, for whatever definition of the good, we are utilitarians, who think that we should do the most good for the most people, whatever means that could entail. Perhaps we take the view from afar, and ask “if I had the choice of all reasonably possible societies, which would I like to be born into, knowing that I might be anyone?” Perhaps we see ethics as different versions of the trolley problem, and find reasons, based on our moral intuitions, for what choices we should make when push comes to shove. Some of us, unfashionably, believe in immortal souls and divine law, against which we must not sin, or that her majesty inherits the right to judge from God, and that the highest good is obedience to an Earthly law. A scientist, being familiar with Hume, might read his argument that we cannot derive statements about what ought to be from statements about what is, and see cause to be an anti-realist. Maybe ought statements only express our personal preferences, derived from what we’ve been biologically or socially (or both) conditioned to find pleasing. Scientists might say this implies some sort of relativism, or that it’s best to follow whatever is the conventional way to think about ethics (although, if we’re not in the business of assuming the world is already a pretty just place, that last one may not be safe).

So, knowing what you know about what humans are made of, and how they relate to one another, to society, or to other beings, what makes them ethical agents, and what does it mean that they are? What about patients? I think it’s probably a bad idea to try to find an answer too quickly. It might be better to think through what kinds of approaches are better, and what facts could be relevant to the question. For instance, maybe the way people want things matters, as it is defined in terms of elaborations on what’s generated by the default mode network. Maybe there can’t be exactly one right answer, or maybe there must be, or maybe it’s impossible to know. How would you argue that?

Why it Matters

I don’t fully agree with Taylor, but, in Sources of the Self, he makes a good case for why we shouldn’t be self-serving, and why we already care about ethics. He says ethics are about what we want to be, how we identify ourselves. This is always done in terms of social groups we identify with. “I am a professional, so I should…” Identity gives us community, people who share something about us (a way of life, perhaps), or maybe it contrasts us against a community (we can identify ourselves as rebels). We rarely, if ever, identify with just one thing, and it’s no use to try to come up with a completely original identity, if it’s possible to do it meaningfully at all, because it doesn’t tell us much about what we’re trying to be, given we’re the only one who would know or care that we’re being it. Identity is critically important to us in interpreting our place in the world, and what we are trying to do with it. So, says he, without identity, we would struggle to come to grips with life. It is a good thing, then, that you and I already have some ideas about who we are, or if we’re struggling to figure that out, generally already acknowledge that it is an important thing to get down. Taylor says that, since it’s all about coming to grips with life, we escape Hume’s is/ought problem, because we don’t have to set up an ethics from scratch. Ethics are already there from the beginning, and have some known properties, so we can argue about them by contrasting them. “Given these two frameworks, which one of them provides the richer set of tools for figuring out how to live a good life?” To Taylor, ethics are certainly real, but not in the sense that they’re an object embedded in a reality perfectly detached from us, rather, they come from the fact that we are social creatures, and relate to how we are trying to live in society. They aren’t just individual, or perfectly culturally relative (as there are no perfectly incommensurable cultures, which share none of the same moral ideas and cannot communicate with each other), but they aren’t exactly objective and final either.

I like Taylor, for how he grounds consciousness of ethics in social consciousness, and I like how good he is at striking down the (usually unanswerable) “why should I care about ethics?” question. I share Coeckelbergh’s complaint that he considers only one kind of relation (between the individual and the community), and I would also gripe that his ethics are unusually centred on the self, for a subject that tends to ask us to think beyond ourselves. But then, that is why the argument against selfishness works so well. It’s difficult to get through to somebody who only thinks of themself unless that’s the place you’re starting from.

Taylor’s other issues, which I will not go too in-depth on here, are, first, that he frames the source of ethics as appearing in the fact that one is an individual (to contrast, Nietzsche sources ethics in will, which, as is common in the German idealist tradition, and in particular as Deleuze and Guattari deftly lay out in Anti-Oedipus and The Geology of Morals, may be defined prior to the formation of anything as elaborate as an individual), and second, that he seems to imply that contrastive arguments will tend towards the good as such (which I would suggest has to do with his fondness for Hegel). I don’t think he provides us with any good reason why there should be exactly one final view of the good (even if it must be infinitely far away), or why, whatever it is, human reason will get us there, even if the end of whatever we collectively believe is good must be good by definition. For the latter case, it seems as plausible that we will amble about the territory indefinitely, never settling, making malformed arguments from intuition, motivated by our place in society (especially if that place affords a certain measure of privilege), just like we always have (Taylor presents a historical case for why we haven’t been ambling about the whole time, but I am not fully on board with it). The path “towards” the good may be a convergence on a single point, or it may be a convergence on an oscillating pattern, back and forth between two or more points of view, for instance, or it might be chaotic.

But I stress it is something to start from, especially if you feel that thinking about ethics is not very important. I think there is a fundamentally good intuition at work here, and regrettably, though we should continue trying to justify them, good intuitions are still the very best we have to work from.

Researchers are, contrary to what they may think, in a pretty good position to do something about whatever they conclude is good. They have access to grad students, who end up in control of technologies put into practice in reality. Their perspectives are respected by the general public and state officials, often more so than it might appear. They have professional societies through which they can articulate cohesive consensus positions, often across borders. They, unlike people who hold power directly, are not likely to be very personally invested in what could be assessed to be a bad outcome for their field’s industrial applications. They produce a nation’s research output, which is critical not just for its prestige, but for its long-haul ability to compete internationally.

In the widely circulated Robot Rights? Let’s Talk About Human Welfare Instead, Abeba Birhane and Jelle van Dijk argue that we have it all wrong when we try to figure out when and why robots should have rights. The real ethical problem in the cognitive sciences right now is that immense labour exploitation, human rights violations (e.g. mass surveillance), discriminatory treatment (e.g. how a machine vision algorithm may be more likely to flag a black person as dangerous), and, I would add, ecological devastation (due to energy use in training and the extraction of minerals for electronics manufacturing) are the present consequences of industrial machine learning applications. Industrial applications are downstream of primary research, so practicing scientists, even if they don’t take money from industry, would be remiss not to consider their implication in these problems. It is primary research that makes them possible and worthwhile to their beneficiaries.

Birhane and van Dijk’s paper pushes us to consider ethical questions in the cognitive sciences as continuous with broader ethical questions about how society ought to be organized, and with the real human cost of those questions. Unfortunately, for instance, while the AI community seems to have absorbed real concerns about algorithmic bias and mass surveillance, its answers seem to be “can we make fairer automated judgments about whether someone’s likely to reoffend?” or “can we accomplish the goals of demographic targeting without personally identifying anybody?” and not “should we be pushing for a different model of law enforcement?” or “shouldn’t we be giving communities the ability to control and audit how AI technologies influence them, rather than influencing them into desirable consumption patterns?”, or furthermore “how would we achieve that at a technological or policy level?” While it makes sense to focus on solving narrow problems within one’s domain of expertise, to make the situation a little bit better, if that’s all anyone is interested in, then the researcher’s power is somewhat wasted. They’re reduced to servants of industry priorities, or of public opinion, the latter of which can be influenced by people with a good grasp of the ethics (journalists, sociologists, and critical theorists are all relatively outspoken), but is often somewhat ignorant of what can actually be done at a technical level.

But to push back against Birhane and van Dijk a little, I’ll bring back that question I asked in the beginning. What is it about people that makes them matter? As Coeckelbergh argues, this question (formulated as being asked of machines and other non-humans, i.e., “how could they come to matter as people do?”) can tell us a lot about what does matter, and why it matters. In turn, the what and the why tell us how to orient specific policy asks. It’s only through this kind of inquiry, for instance, that we might be pushed to wonder whether differential privacy (a technique, widely used in machine learning, for making information about individuals less recoverable from statistical data) really solves the big issue with surveillance, rather than just accepting the received impression that, since there’s a technical way to avoid violating individual rights, all is now a matter of implementation and regulation. Consider: Do demographic communities have rights? Would manipulating aspects of their lives without their knowledge, consent, or input be against those rights? Maybe broaden our perspective: Should we just be trying to say what firms shouldn’t do, and otherwise let them go about their business, or are there specific things that should happen, and should be made to happen? How can those things be made to happen?
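Since differential privacy came up only in passing, here is a minimal sketch of the kind of guarantee it offers, assuming the standard Laplace mechanism applied to a counting query; the function name dp_count and the toy data are mine, purely illustrative, not any particular library’s API.

```python
import numpy as np

def dp_count(records, predicate, epsilon=0.5):
    """Release a count with epsilon-differential privacy (Laplace mechanism).

    A counting query has sensitivity 1: adding or removing one person changes
    the true count by at most 1, so Laplace noise with scale 1/epsilon is
    enough to mask any single individual's contribution to the output.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical per-user watch times (minutes). The released statistic is noisy,
# so nobody can confidently infer whether any particular user is in the data.
watch_times = [12, 75, 90, 30, 61, 45]
print(dp_count(watch_times, lambda t: t > 60))
```

The point of the sketch is the shape of the guarantee: it protects an individual from being singled out of a released statistic, which is a narrower thing than protecting a community from being profiled or steered in aggregate.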

Their essay succeeds at highlighting a recurring tension between ethics and politics. No matter what you’re convinced the right thing is, it doesn’t mean much if you can’t make it happen, or can’t turn it into something that can be made to happen. It certainly doesn’t mean much if making it happen means getting a law in place around an abstract scenario that we’re not even confident is possible, especially when there are bigger fish to fry, as it were (sapient AI is not the next climate change, because whether it’s coming at all is conjectural at best, although I hope I can express another time why we should maybe be more concerned anyway). Policy, of course, always winds up as a negotiation with somebody who’s making a lot of money, but the good does sometimes win out, to a significant degree, and that’s, by definition, worth the struggle. Alternatively, if the good doesn’t significantly win out, think about what that would entail for the legitimacy of our political order, and what we would be morally required to do about it, given we already believe that it should.

That Would Have Been a Good Place to Stop

But what that question suggests, the question of what makes ethics possible, is actually in itself an important topic of concern. It is part of meta-ethics, the discipline of determining how to build ethical systems. Far from a bourgeois first-world problem, it is of critical importance to figuring out how to align AI agents deployed in the world, even less-than-sapient ones, with human purposes. Recall that Taylor’s definition effectively presents everything we might want as a possible subject of ethics. Consider that, unless we have a way of representing ethical systems in a form an AI model can interpret, we have little hope of getting it to act according to them. Only fairly simple agents, like a facial recognition algorithm, can practically be manually corrected for bias. For very complex agents, the AI has to start encoding some of the subtleties of ethical judgment (goofy as the iterative trolley problem for self-driving cars is, it is also the result of this issue of complexity requiring ethics). And if we don’t know the right way to represent ethics, or haven’t considered as broad a space as possible of ways of framing it, we might come up with something a little goofy, like that iterative trolley problem, for encoding those subtleties. It is evident that meta-ethics is of consequence to AI safety, because it is necessary in determining how it is even possible to define what we want of AI systems. It needs to be irritatingly rigorous, because unless it is, good luck getting a robot to understand it. We already care about AI safety because if the AI isn’t safe, lots of people will probably die, possibly you, or else algorithmic injustice will be unpreventable, or else any number of things. Whatever you care about, if an AI’s involved (and it will be before long), you probably want to be confident it’s not going to hurt anything.

The AI safety community, which has dominantly presented itself through the AI Alignment Forum, has come a long way since the days of Roko’s Basilisk, but the way it thinks about ethics can be a little off-base, in my opinion. Take, for instance, two of its favourite methods of training AI to be ethical: Iterated Amplification and Debate (IAD), and Cooperative Inverse Reinforcement Learning (CIRL). I’ll start with the first.

Iterated Amplification and Debate is a method of standardizing rhetorical arguments about ethical questions, and of training an AI to be very good at making them. Leaving out most of the details, two players play a game: they are presented with a question and must argue a position, in turns. In some versions, they may cite other arguments, or possibly external documents. A human then judges who was right. A model is trained to generate the entire debate process for both players. By iteratively competing against itself, much like AlphaGo Zero, the model teaches itself to play the game very well, from scratch. At the end of training, a sufficiently good model will be able to select the right course of action, given a well-formed question, and make a good case for it. Most of the more evident problems with the game are already being worked on, for instance, how to make deceit very difficult.
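For readers who like to see the shape of the loop, here is a heavily simplified sketch of the debate game as I’ve just described it; none of this is the actual IAD implementation, and the class and function names (DebatePolicy, play_debate, the placeholder judge) are mine.

```python
import random

class DebatePolicy:
    """Stand-in for the learned model that proposes arguments."""

    def __init__(self):
        self.experience = []  # (question, transcript, won) tuples for training

    def argue(self, question, transcript):
        # A real policy would condition on the question and the transcript so
        # far; here we just emit a placeholder argument.
        return f"argument {len(transcript) + 1} about {question!r}"

    def update(self, question, transcript, won):
        # In practice this would be a gradient step toward moves that win debates.
        self.experience.append((question, transcript, won))


def play_debate(question, pro, con, judge, turns=4):
    """One round of the game: alternating arguments, then a judgment."""
    transcript = []
    for t in range(turns):
        speaker = pro if t % 2 == 0 else con
        transcript.append(speaker.argue(question, transcript))
    pro_wins = judge(question, transcript)  # human judgment enters here
    pro.update(question, transcript, pro_wins)
    con.update(question, transcript, not pro_wins)
    return pro_wins


# Self-play: both sides are the same model arguing against itself, much as in
# AlphaGo Zero; the judge stands in for a human (or a model trained on humans).
policy = DebatePolicy()
placeholder_judge = lambda question, transcript: random.random() < 0.5
play_debate("Should this dataset be released?", policy, policy, placeholder_judge)
```

The thing to notice is where the only training signal comes from: the judge.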

IAD rests on a contrastive method for arguing ethics, in a way somewhat reminiscent of Taylor (it’s not a coincidence, per se; both reference Aristotle’s dialectic), but this version fixes some problems and introduces others. Whereas Taylor framed ethics in terms of figuring out how to come to grips with life, or in his words, orienting yourself in the (social) world, the AI’s debate process has no commitment to any specific way of grounding ethics. This is a common assumption to make in AI modelling: if we don’t know the specific process we’re shooting for to begin with, we can find a way to gather data such that, whatever we’re shooting for, it’s latently represented in the data. The issue here is that the model assumes that human judges already have the intuitions that we want; however, since the way ethics are grounded is not explicitly represented, even externally to the model, as a way of judging the judgments (and the reasons for them), we cannot conclude that. If we presume that our ethical commitments can be partly socialized, or partly motivated, then, plausibly, unless we can already know something about the quality of ethical intuitions, judges may considerably skew the real data compared to the ideal distribution. The reason why this is said to be okay is that we already accept human judgment in other domains. In other words, conventionalism. However, I have noted a couple of times that convention is significantly open to question, so I don’t think there is a strong basis for that conclusion.

What IAD improves on, relative to Taylor, is Taylor’s implication that contrastive argument necessarily leads to the good, if you give it long enough. IAD models the distribution of good contrastive arguments in a question game, but what it models is how to make a good argument in the question game, which can be a lumpy space defined by very situational arguments. If it is, it will be hard for a model to interpolate well, but it is nonetheless possible. There doesn’t have to be anywhere in particular the AI is going, in the limit. However, I bring this up in reference to the last section. The AI might not necessarily be going towards the good, but it is necessarily going somewhere, even if that somewhere varies with the pool of judges and how they think. In fact, we could argue that what it’s going for is the space of the judges’ priorities, which might be good priorities, or might not. If we assume the judges’ ethical intuitions are plausibly skewed off of what we might want, should there be a space of priorities that is preferable, then the model will also be off the mark, maybe significantly. If we return to the example of algorithmic bias, recall that one of the main reasons we care about embedding ethics is so that the AI won’t reproduce or intensify the skew of the data off of the ideal distribution (e.g. judgments that don’t seem to be related to race). Assuming by convention that human judgment is already good enough doesn’t solve the problem at all.
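To make the skew argument concrete, here is a toy numerical illustration of my own (nothing from the IAD literature): if the judges’ sense of what is good sits at some offset from the ideal, whatever fits their judgments inherits roughly that same offset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# "Ideal" goodness of candidate answers, if we could somehow measure it directly.
ideal = rng.normal(loc=0.0, scale=1.0, size=n)

# Judges reward answers according to a systematically shifted version of that,
# plus their own noise. The shift is the skew discussed above.
judge_bias = 0.8
judged = ideal + judge_bias + rng.normal(scale=0.3, size=n)

# The "model" here is just the average target it is trained toward; a real
# policy is far more elaborate, but it cannot correct a bias it never observes.
print(f"ideal target:   {ideal.mean():+.3f}")
print(f"learned target: {judged.mean():+.3f}  (off by roughly judge_bias)")
```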

For a final definition, I’ll contrast bottom-up and top-down systems of ethics. These categories may overlap. In a top-down system, which I would say is traditional, ethics draws on principles that are defined a priori; they might as well have been etched into the universe. This seems to be the sort of model that is being relied on to do meta-ethics here, i.e., presuming there is a background, ideal distribution of given correct principles somewhere that may be approached, and that people get there by having arguments about specific problems that draw out the distribution. A bottom-up system is something like Nietzsche’s or Coeckelbergh’s, which I defined at the beginning (with Taylor falling somewhere in between). There is a place from which ethics originates and which it is, in the beginning, about (relation, will, or identity), which I will call the ethical moment, and ethics unfolds in the space defined by how these ethical moments connect. The system of these moments, however, never expects to reach finality; the ideal distribution is permanently undefined. In principle, a method like IAD could be fitting for either, but the way it is described, it seems to be defined in terms of the former, as in the latter case, we would have to work a lot harder to assess the quality of the distribution we are fitting for. The consequence of the latter case is that we have to make decisions, but we are never safe to make decisions, because what is good and why are evolving. What is believed to be good isn’t necessarily what is good (and so we can’t exactly be relativists or subjectivists), and that introduces problems for IAD, but also, what is good is subject to substantial revision (so we can’t exactly be objectivists either). For the latter, our models would need to capture that fundamental tension, which we know about because we identified the ethical moment (or the space of conceivable ones). These perspectives shape how we conceive of ethics, and how we choose to frame ethical problems, and problems of ethical encoding.

I think I recall the problem of locating the ideal distribution relative to the judges’ opinions being articulated somewhere, so this shouldn’t be read as a dig at the AI safety community. They’re working pretty hard at something important. What I’m trying to paint a picture of is a characteristic style of thinking about ethics, which I think carries significant blind spots.

CIRL is a process by which an agent learns to help a human achieve their goals. I’m going to skimp even more on the details for this one, because the problem here is actually simpler. It’s evident from the framing that the purpose of embedding ethics is, here, to make an AI agent that is a good instrument towards a human’s ends. Recalling Roko’s Basilisk, this is actually an older problem. Roko’s Basilisk was a thought experiment that focused on the idea of a benevolent superintelligence, designed to do the most good for the most people, so as to improve overall human welfare. Again, the AI agent is being modelled as a tool that fulfills a given end, and that’s the purpose of having it reason about ethics. In fact, that’s how I set up the problem at the beginning of this section.
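As with debate, a heavily simplified sketch (mine, not the formalism from the CIRL paper) shows the instrumental framing clearly: the agent maintains a belief about the human’s hidden reward function, updates it from the human’s observed behaviour, and then acts to serve that inferred reward. The actions, candidate rewards, and helper names below are all invented for illustration.

```python
import numpy as np

actions = ["make_coffee", "make_tea", "do_nothing"]

# Candidate reward functions the robot entertains for the human. In the CIRL
# setup, the reward is known to the human but hidden from the robot.
candidate_rewards = {
    "likes_coffee": {"make_coffee": 1.0, "make_tea": 0.1, "do_nothing": 0.0},
    "likes_tea":    {"make_coffee": 0.1, "make_tea": 1.0, "do_nothing": 0.0},
}
belief = {name: 0.5 for name in candidate_rewards}  # uniform prior


def observe_human_choice(choice, rationality=3.0):
    """Bayesian update, assuming the human chooses noisily-rationally (softmax)."""
    global belief
    for name, reward in candidate_rewards.items():
        prefs = np.array([reward[a] for a in actions])
        probs = np.exp(rationality * prefs)
        probs /= probs.sum()
        belief[name] *= probs[actions.index(choice)]
    total = sum(belief.values())
    belief = {name: b / total for name, b in belief.items()}


def robot_act():
    """Act to maximize expected human reward under the current belief."""
    expected = {a: sum(belief[n] * r[a] for n, r in candidate_rewards.items())
                for a in actions}
    return max(expected, key=expected.get)


observe_human_choice("make_tea")  # the human reaches for the kettle
print(belief)                     # belief shifts toward "likes_tea"
print(robot_act())                # so the robot helps with tea
```

However the inference is dressed up, the agent’s whole reason for modelling the human is to be a better instrument for the human’s ends, which is exactly the framing the next paragraphs push back on.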

But what if that’s the wrong way to approach our relationship with AI agents? Notwithstanding that we could find ourselves in a Battlestar Galactica situation eventually, we should consider what we already do in this relationship. Treating machines as objects to be used, and more broadly, having this kind of relation, where our ecology, animals, even other people can be mere objects of use, might actually not be good. I’ll go back to the favourite of embodied cognition proponents: Heidegger’s hammer.

The hammer is a tool, and it is, according to Heidegger (or at least how he is usually presented), a little more than a tool, but not all that much more, if I’m being honest. When the workman picks it up, or is aware of its readiness-to-hand, it extends him, affecting his mode of being-in-the-world. He knows he has the capacity to use the hammer, or else his embodied experience is partially determined by the hammer. He experiences it as an extension of himself, offering sensory feedback, allowing him to do something novel.

But from the beginning, we see this connection of affects is mutual. The workman affects the hammer by extending himself in it, but the hammer also feeds back to his senses, pushes back against his hand. The hammer becomes a nail-pounder because it is held by a workman. Even in this simple case, the hammer is extending itself in the workman, too. Let’s expand to something with a little more agency. Surely, an algorithm that is trying to maximize view time is, in whatever small way, extending itself in you. Perhaps in a less sinister sense, suppose I am very tall, and my friend is very short. When I tell them what is on the top shelf, they are seeing through me, and when I pull it down for them, I am their hands. Rather, I am making it possible for them to perceive something they couldn’t before, and to make use of it, which is a lot like what tools can do. This isn’t to say that hammers and search algorithms are having experiences. If there is a significant problem with Heideggerian accounts (though I haven’t read enough to say that it’s universal), it is that agency and subjectivity are concepts that wind up too strongly bound to one another, when, especially in the case of the recommendation algorithm (which may be doing something its creator didn’t anticipate), our world forces us to contend with things that have significant agency, but no experience of it. Good thing we’re already pretty comfortable calling AI systems agents.

Humans are not, exactly, perfectly atomic subject-agents that extend themselves in dumb matter. Embodied cognition suggests that I extend myself into my body, but my hands are not, precisely, my instruments. They are part of me, and I am part of them, the latter as important as the former, as they would not move in the way they do without sending control signals back to my brain. If Haraway says that we are already cyborgs, that we have always been cyborgs, then as our technologies are parts of our bodies, so are we parts of them.

This kind of thinking can seem wishy-washy, so I’ll ground it. Suppose we think of our local water system as an object, and the surrounding land too. It’s radically outside of us, waiting to be used. The water table is used to irrigate farmland; we also use it to bottle drinks, and for other marginal purposes like showering. Suppose those purposes begin to deplete the water table. This does not directly impact the farmland, as the farmers can pull water from deeper reserves. However, it does begin to desertify the grassland nearby, which nobody is using, so no particular attention is paid to it. The more arid conditions cause crops to fail, and strip microbes important to plant growth from the soil, causing further failure. What we depend on to live (cultivated plants) depends on the health of the ecosystem in which it grows, so we depend on that ecosystem in turn, and we already know our actions affect it. This relationship of mutual affect is what defines an organism as part of an ecosystem. However much we might want to, thinking of ourselves as subjects who act on passive objects fails to capture the relationship between humans and the ecology they live in, except as a first approximation, potentially with severe consequences for us down the line.

So what happens when we treat AI like a tool? For one, it generally dissuades us from thinking about how AI could change our way of life, except by replacing existing tools (cars, packaging facilities, checkout lines) and adding conveniences to it (e.g. Alexa). For two, it can push us to ignore the ways in which it already does, until those ways have already become troublesome (my mind wanders back to recommendation algorithms). For three, it can generate ideas like the windfall clause, which is a proposed pact between technology firms saying that, since artificial intelligence systems may utterly change the economy, and this may be unbelievably profitable, all of humanity should receive a dividend if it happens. The windfall clause does not, then, ask if private gain is really the right principle by which the radically transformative AI should enter the world, even if everyone gets cashed out for it. What would have been the goals of the AI system we’re getting cash from? What biases might have found their way into the model? Maybe the windfall clause is just a mitigation against a worst-case scenario, but if that is true, we should also feel pushed to consider how to get a scenario that is a pretty good case.

Lastly, and I think this might be the most important consequence, it reproduces a preexisting callousness with respect to human life. Contrary to Birhane and van Dijk, it may not be a problem in itself that human workers can be conceived as part of the machine that produces an AI model; or rather, that may be the consequence of a more fundamental issue. It could be argued that the way their labour is organized allows for that perspective; they’re part of the machine, as it’s part of them, just by way of how the division of labour works to begin with, and then again in how a process of production is not inherently visible in the product (I don’t really know how my coffee was grown, even if it’s got a fair trade sticker). What’s at issue might be that this machine is seen as a passive object, used to churn out things to consume, full of anonymous, interchangeable parts, and existing as someone’s tool to make revenue. But it is really an agent unto itself that can affect us; it is possibly a patient, too. As a passive object, our eyes are kept outside it, letting it fade into the scenery, waiting to receive our agency, to be used to produce objects of consumption. As an instrument, we accept the agency of its owner, dominating over all the little moving parts. Should they significantly impede the machine, it must be defective, because its purpose is to receive the agency of owners and consumers, not exert its own. Would this be fine if there were no humans in the machine? Would it be correct? Even when the machine (an AI system) winds up doing things neither agent anticipates?

But maybe there’s something else to it, too. Consider that “what makes humans matter?” plausibly has an account that contains characterizations of processes that take place in human minds. Those characterizations would be very important to the end of making at least some kinds of beings that matter. Consequently, a system modelling them might also be modelling mattering. In other words, if humans are ethical agents or patients, our model might be the same. I would like to expound on this another time. I’ll add that, in order to demonstrate that this scenario is actually plausible, it will be necessary to show that the mind is a process abstract to the body (usually, specifically the nervous system), such that a different kind of physical system could be said to be meaningfully doing the same thing, and that this process can reasonably be done by a computer. For now, in the absence of a confident assertion either way, we should at least entertain that it is possible, if only for fear that it will happen, and we will deny that it has.

In the exploration of the cognitive sciences, we may, on purpose, construct an ethical agent or patient, directly from some model of the ethical moment, in much the same way as we are already trying to implement other aspects of ethical models. Or, maybe we will do it by accident, or not at all. Whatever the case, what we develop will go into practice in industry, and what we say about it in its formative stages can inform policy made about it. How we try to make policy can reflect back on our work, and might get us closer to defining a model of mattering, making successful modelling all the more important to do.

What I dread to think is that we might make something, and by chance or brilliance, it does everything that makes people matter. Then, out of ignorance, or incredulity, or dispassion, we will neglect to mention it.

Coeckelbergh’s recent book, AI Ethics, is a brief, accessible, and pretty serviceable introduction to the topical problems and arguments surrounding ethics in the cognitive sciences and the AI industry. Strikingly, he doesn’t reference his own case for relational ethics at all, and so the approaches he covers wind up being pretty conventional. If what you want is an overview of the terrain circa 2020, that’s the book to get. To an extent, though, I think that terrain doesn’t cover all of the relevant problems. Here, I tried to expand the scope a little bit. I hope that, from your point of view, I succeeded.


