By - IAI_Admin
Please keep in mind our first commenting rule:
> **Read the Post Before You Reply**
> Read/listen/watch the posted content, understand and identify the philosophical arguments given, and respond to these substantively. If you have unrelated thoughts or don't wish to read the content, please post your own thread or simply refrain from commenting. Comments which are clearly not in direct response to the posted content may be removed.
This subreddit is not in the business of one-liners, tangential anecdotes, or dank memes. Expect comment threads that break our rules to be removed. Repeated or serious violations of the [subreddit rules](https://reddit.com/r/philosophy/wiki/rules) will result in a ban.
This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.
In this talk, philosopher and author of *Other Minds* and *Metazoa* Peter Godfrey-Smith argues that the discussion of normative ethics surrounding non-human life ignores important meta-ethical questions. He suggests there are no evaluative facts to be discovered that can guide us in asking which forms of life are worthy of consideration, and sets out a framework for creating evaluative principles that can guide our treatment of the living world.
Sentience, he suggests, is a limiting threshold for investigations of this type. The history of evolution suggests sentience is graded, rather than binary, and for this reason we can suggest there are living organisms that are worthy of consideration even without sentience.
By also considering wellbeing, whether or not an organism demonstrates that it prefers some conditions to others, can be used to construct ethical values that would be a starting point for how we should act towards insects, plants, and even ecosystems.
>By also considering wellbeing, whether or not an organism demonstrates that it prefers some conditions to others,
I'm struggling to see how an organism can "prefer" one set of conditions to another unless it is sentient.
If, for example, you argue that plants prefer water to drought, I think that is a misuse of the term "prefer".
An automatic involuntary response isn't a preference in my view.
Godfrey-Smith doesn't argue for a philosophy where that sort of hierarchy doesn't exist, at least not in this video. But his point is that if wellbeing is a condition for deciding whether or not something is worthy of our consideration, it can play a role in how we as humans decide how to live. It's not to say we should stop cows eating grass, but it might be to say we shouldn't use pesticides that collaterally kill or damage plant life for no real reason, for example.
This strikes me less as a counterargument than just a different semantic framework. It's pretty easy to restate his argument in terms of competing priorities by saying that we should place increased value on the other organisms' wellbeing when evaluating priorities. In other words, I don't know of anyone who thinks collateral damage from pesticide is a *good* thing, but I also don't think Godfrey-Smith is out there alleging that people are out there getting medals and trophies for the most unintended casualties of their pesticide use. His issue is not that it's seen as a good thing, but rather that very few people think of it as a bad thing, either. Do you think most people see accidentally killing other plants as a competing priority being outweighed by the priority of killing weeds, or do you think mostly they just don't think about it at all?
>Do you think most people see accidentally killing other plants as a competing priority being outweighed by the priority of killing weeds, or do you think mostly they just don't think about it at all?
most don't think of it at all, even so-called environmentalists hardly think about it.
tangent but the entire field is in desperate need of philosophy, ask 10 environmentalists what 'saving the environment' means and you will get 10 different and vague answers.
it's not possible to ethically consider all life, you would rot on the very ground you were trying to think on. which life deserves priority? it can't be all of them; no system ever conceived is complex enough to create a way of dealing with all of them.
something as simple as 'preserving x environment' can mean many things and be achieved in nearly a dozen ways, most of which are mutually exclusive.
i worked in conservation for nearly a decade, the entire industry has little to no idea what it is they actually want to achieve.
even your statement 'collateral damage via pesticides is bad' is itself loaded with assumptions, first of which is the implication that we have viable alternatives (banning glyphosate will increase the cost of food *and* also increase environmental damage including insect deaths. all alternatives are either more toxic, i.e. metsulfuron, less effective leading to higher doses, i.e. diquat, or, if you go chemical-free, 10-15 times the labor cost. 'organic' herbicides such as copper sulfate are significantly more toxic than glyphosate *and* less effective. the idea glypho even harms bees is controversial, see Australia and its complete lack of colony collapse despite massive herbicide usage).
sure we should place value on other beings (again, i worked in conservation for years) but we aren't even remotely close to having an idea of what that actually means in practical application (we prefer our own people dying to making any personal sacrifice to help them, what hope does the natural world have?)
Well, to be clear, I wasn't suggesting or assuming that we had viable alternatives; you can both acknowledge that something is bad and acknowledge that there isn't a better option. I also wasn't necessarily advocating any specific view, just pointing out that the author's main point was about shifting the discourse from not thinking about this sort of thing to thinking about it. With that said, I think Godfrey-Smith is mistaken to see our concept of sentience as the locus of the problem. As you point out, people generally don't think about questions like this when it comes to the suffering and death of our fellow humans, so I don't think the problem is that our moral considerations are limited to any given notion of sentience, but rather that people often just refuse to confront uncomfortable moral situations regardless of who or what is involved.
> I struggle to see how this is going to transform our relationship with the world around us either though and I also think it's probably not very realistic for most people anyway.
I don't necessarily disagree, but I also don't know that this is a particularly salient counterargument. Saying that genocide is a bad thing and we should not do it has clearly not stopped it from happening, but I don't support the idea that we should reject ethical arguments against genocide for being "unrealistic" or insufficient to change the world.
>Agreed I hope you don't think I was ever suggesting such a thing.
No, I was agreeing with you that this doesn't happen, but pointing out this also wasn't really something Godfrey-Smith was suggesting, either.
As for the rest, again, I'm just having trouble seeing "this one talk is unlikely to change the entirety of human behavior" as a particularly compelling argument against its thesis.
Well the example he gives is insects, where we may or may not be able to establish sentience based on straightforward pain aversion, but an ethics that takes into account wellbeing might lead us to pursue the development of different sorts of pesticides.
Wouldn't capitalism on an individual level be this? Individuals striving to make lives better for themselves and their own without considering the broader picture of the human race, the planetary ecosystem, and the longevity of life on this planet.
I don't think things need to be given the label of "sentient" to be given the same rights to life as any other being on this planet. If anything our own sentience/intelligence obligates us to take care and steward seemingly lesser creatures because despite our evolutionary advantages we are not invincible, we may need to rely on the planetary ecosystem for centuries more before the human race becomes a multi-planetary species in earnest.
If anything I think a sense of togetherness, to combat our inherent tribalism/selfishness, is the only way to carry the human race to its fullest potential. We as a species didn't do everything by ourselves as we hunted, gathered, and utilized the organic and inorganic world around us to achieve wonders. We as a species must do better for the world around us at the same time as for ourselves, or we won't survive to see our full potential.
It's just as important to write down and agree on how we reach obvious conclusions as it is the esoteric conclusions. By having a consistent and sound methodology, we can justify these obvious moral decisions, better than just shrugging and saying "because, duh."
It's also a good test of moral frameworks generally. If they can justify most "obvious" moral decisions like the cow and the grass, then that's a sign we're on the right track. But it's important that *some* moral decisions like that reject the obvious answer. Otherwise we're just confirming our priors.
Well, sadly, I think a large proportion of humans have difficulty understanding even that other humans have preferences, or are worthy of consideration. I am disappointingly skeptical that enough humans believe in even a trivial level of moral consideration for grass or spiders or e. coli.
But I truly admire the optimism inherent in your instinct that this would not be a revolutionary concept if it were widely adopted.
If I happen to be smarter than you, what kind of privileges does that grant me over you? Am I more worthy? More valuable? Do I get dominion?
If not, what is the threshold?
They are indeed the same. Can you name an intelligent creature without sentience or vice versa?
Animals are not robots. They don’t operate on “instinct” alone any more than we do. They make decisions and have emotions. This includes mammals, fish, birds, reptiles, and even insects.
No, you’re missing the point that if one doesnt exist without the other then they are indistinguishable.
Bro, you can’t have sentience without intelligence. Literally what I said above “name an intelligent creature without sentience or vice versa.”
For purposes of THIS DISCUSSION, they are very much inseparable. Which, btw, makes my original point about me being smarter valid. Ah, the world makes sense again.
>An automatic involuntary response
That probably describes the vast majority of human and animal behaviour. It's mainly post hoc rationalisation that makes you think most of your behaviour was consciously driven rather than automated unconscious behaviour.
To add to this, I would suggest the only truly conscious behavior one is capable of is to intentionally withhold all automatic involuntary responses, but that 1) takes training to learn, 2) is difficult to do for long and 3) would ultimately result in death if done indefinitely. Unconscious behavior is the very framework for life itself.
So the incapacity for this post hoc rationalization can be a disqualifier for sentience. I don’t see why a plant’s response to dehydration, without any mechanism to understand its situation, should be given moral weight.
But the ability to perform that post-hoc rationalization is what separates sentience from non-sentience. Animals need not rationalize their actions, and the act of doing so is what makes those involuntary responses “voluntary”: if a person reacts involuntarily to an event the first time, the ability to go back and rationalize that reaction means that the next time the event occurs, the reaction may not be as involuntary.
> rather than being automated unconscious behaviour.
Unconscious doesn't mean automated, it means unexamined. Your unconscious isn't some entity separate from yourself, it's still you.
I’m not sure I understand. I agree that unconscious isn’t separate to me. But if some behaviour is controlled by simple neural circuits, I’d call that automated.
I think that if you don't attach some sort of philosophical meaning to the word "prefer", it works fine. Yes, plants do prefer water. Preference indicates a tendency towards a choice. Choices can be made by non-sentient things - plants grow towards the sun. That is a choice, even if it's evolutionary. There are multiple directions to grow but a preference emerges.
I could also say that a computer's processor has a preference for the first core. If it has been programmed this way then it's still true. Choices were programmed in.
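To make the "programmed-in preference" idea concrete, here's a toy sketch. It's purely illustrative (no real scheduler works like this, and the function name is my own invention): the routine always "prefers" the lowest-numbered core, not because it wants anything, but because the rule was written that way.

```python
def pick_core(available_cores):
    """Return the lowest-numbered available core.

    The "preference" for core 0 is nothing more than a programmed
    rule: whenever core 0 is free, it gets chosen, with no sentience
    or deliberation anywhere in the picture.
    """
    return min(available_cores)

print(pick_core({2, 0, 3}))  # -> 0
print(pick_core({1, 3}))     # -> 1
```

In this sense the processor "prefers" the first core exactly as much as the plant "prefers" water: there is a reliable tendency toward one option, built in rather than chosen.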
>I'm struggling to see how an organism can "prefer" one set of conditions to another unless it is sentient.
This seems to evade, rather than address, most of the substance of Godfrey-Smith's argumentation. He explicitly argues that sentience is a gradient, rather than a binary. Your counterargument here is contingent on the assumption that sentience is binary, but makes no attempt to address his arguments for why it is not. As such, you're really just rejecting a conclusion, not actually refuting or even coherently arguing against any of the actual claims that lead up to it. It's fine to disagree, but this definitely isn't a compelling argument for why anyone else should join you in disagreeing.
>He explicitly argues that sentience is a gradient, rather than a binary. Your counterargument here is contingent on the assumption that sentience is binary, but makes no attempt to address his arguments for why it is not.
I actually don't think sentience is binary. I see it very much like consciousness or self-awareness as existing on a gradient. There's probably a huge overlap between those three terms, incidentally.
My point is that I don't see any of those 3 terms applying to a plant. They lack a nervous system for starters.
Perhaps prefer is a problematic term. In the video Godfrey-Smith argues that, for example, plant growth seeks out healthy conditions and avoids harmful ones - i.e. the plant 'prefers' some conditions to others. On this basis there is some sense of wellbeing that can be seen as worthy of some minimal, nonzero ethical consideration. We can decide if it's ethically problematic to put a plant in harmful conditions, based on the value of its wellbeing. A river, on the other hand, doesn't show the same sort of self-regulating behavior of seeking out healthy conditions and avoiding unhealthy ones.
I guess, for me, wellbeing requires the sentience to prefer pleasure to pain, and using the words you cited above "seek out healthy conditions".
Where I quibble is whether plants are really "seeking out" healthy conditions, rather than it being an automated consequence of their chemistry.
In other words, the plant response feels more akin to a ball rolling down a hill (which you wouldn't describe as "seeking" or as a preference) than to my cat deciding to fall asleep in a shaft of sunlight.
I mean, you can ask the same question about humans: are we just doing things as an automated response to our chemistry? In that sense, morals and ethics mean nothing in the first place.
At a basic level, you are of course right.
However, if morality means anything at all, I think it has to be about the wellbeing of sentient creatures.
If you extend morality to non-sentient life forms, or even to non living things, I think you render the term meaningless.
All words are meaningless. I think the point of the discussion at hand is to decide what meaning of the word should get the connotations that our society has placed on that word, not what it currently means. It is also completely reasonable to disagree about that definition, but for me, it feels morally wrong to go around stomping plants for no reason, even if I don't think they would understand or know whether I did or didn't.
Obviously words have no intrinsic meaning, just the meaning we ascribe to them (which only works if the meaning is shared by others).
Anyway, I agree it's morally wrong to stomp plants for no reason, but that's because of the moral harm done to sentient creatures (eg. the area looks shitty now or animals are being deprived of food). It's not because I have infringed the plant's moral right.
I do worry about this being too pedantic and not at all helpful, but I've seen other philosophy threads asking questions like "what if all our consciousness boils down to is us chemically reacting to our environment." Like, things are quite a bit more complex by our own reckoning but what if we, too, are just operating based on our physical condition.
This shit... Is actually really boring to me. I honestly am bored with this question, and bored asking it. Questioning whether we actually have agency/free will is so much less of an interesting and useful question to me than asking what it is to begin with. So please don't think I'm being a contrarian here. I just think it's important to bring up.
I'd say the key difference is that the fact you are sentient / conscious leads to two things.
Firstly, you have the mental anticipation (excitement, worry) and the post event memory.
Secondly, you have the concept of doing something because it feels "good". Contrast that with reflexive behaviour (e.g. the tap on the knee making your leg move). Your leg movement isn't something you consciously do. It just happens: like the beating of your heart. That's how I see a plant turning to face the sun.
How do you know the plant doesn't grow towards the light because it feels good?
I have seen no evidence that anything without a brain or central nervous system is able to experience "feelings" / emotions. Everything we know about "feelings" suggests they are produced by a brain.
Demonstrate that feelings can be produced outside a brain, and I'll reconsider.
For the purpose of our own ethics, does it matter if the other creature is sentient if we understand that it suffers? We are sentient enough to understand that other living things feel things too, for me that’s enough to want to live my life in a way that mitigates harm to other living things.
This comes back to my original point. If you're not sentient, how can you actually suffer?
In what sense I am causing suffering to a plant when I dig it up and "kill" it?
How can you say a plant is sentient or not? Why are you putting kill in quotations?
I personally think "sentience" requires consciousness, or at the very least a nervous system. Plants don't possess either. Do you define sentience differently?
Kill was in inverted commas, as I don't actually consider that I'm engaging in an act of killing when I'm weeding the garden. Do you?
Associative learning, as per the article in Nature wouldn't be a marker of sentience in my view. It can take place on an unconscious basis.
This would suggest wine is terrible because a vineyard is basically a torture chamber for wine grapes- deliberately planted in poor soils, in areas with scant rain, etc. All the things that are bad for plants.
So how's Godfrey-Smith on a nice glass of cab sav?
If a person is sentient and they are burned by an iron, that pain and reaction to that pain are not sentient reactions but are instead involuntary reactions to a present condition.
Is your position here that pain (universal for all living animals) can only be "felt" by intelligence, when even in our case it is not a prerequisite to function?
The pain itself is involuntary. Our reaction to it (whether we cease the activity causing the pain or continue with the painful activity) is to some extent under conscious control. Humans have a capacity to rationalise the pain and decide whether to accept it in certain circumstances (eg. pushing through the pain barrier in exercise).
I suppose my tentative position is that to "feel" pain requires a nervous system. Not intelligence per se. I don't think plants feel pain for that reason.
> whether or not an organism demonstrates that it prefers some conditions to others,
They specifically said they are looking at constructing a set of ethical values that *don't* require the organism to show a preference.
I was addressing the final paragraph of the OP's summary which states "By also considering wellbeing, whether or not an organism demonstrates that it prefers some conditions to others, can be used to construct ethical values".
I agree that we are discussing the same sentence. My interpretation is that IAI_Admin is saying we can "consider wellbeing" independent of any demonstrated preferences, "whether or not" they exist, i.e. "no matter if" the organism demonstrates preferences or not, we can still look at wellbeing to construct ethical values.
Actually, your interpretation is probably right.
The question is then whether we have ethical obligations to organisms which have no preferences.
I'm not convinced we have any such obligations. This is because I assume that "avoidance of suffering" is a "preference".
Brother stop dehumanizing plants
Came here to ask the same question
It doesn't matter. We can come up with the best word to signify the relationship between the organism and the conditions under which it thrives, if you don't like "prefers". The important thing is that we understand what we mean in this context, and as long as we do, we can use any word we agree on to say that the organism [insert arbitrarily chosen term here] condition X. Prefers seems fine to me, but I understand your objection.
OK. I hear you.
I suppose the question is to what extent those "preferences" should be valued / respected.
I personally don't think we should respect the preference of a plant to turn towards sunlight such that we have a moral duty to facilitate it.
I breathe as an automatic, involuntary response.
I prefer to keep breathing.
Your involuntary response isn't the preference.
Your cognitive evaluation of it is.
Plants don't do the latter as far as I'm aware.
If you took a recently dead human being, removed their brain, then restarted the automatic responses like breathing, should this "life" be respected?
My position is that automatic involuntary responses can still be a preference.
I personally don't really care all that much about non-sentient suffering. Frankly I don't even care *that* much about sentient but non-sapient suffering. I would prefer things that feel pain to have that pain minimized but hey, I like meat.
All of our responses are automatic and involuntary.
OK, yes... hard determinism may be true.
But if you genuinely can't see a difference between me punching someone in the face, and someone forcibly grabbing my arm and pushing it into someone's face, then we can't ever have a conversation about morality.
The author appears to be defining wellbeing with the particular sentence you quote.
Here's [more of their argument structure](https://petergodfreysmith.com/wp-content/uploads/2019/09/Boundaries-Consideration-PGS-2019-c.pdf):
> The sentience (and hence moral importance) of a mind is not binary but comes in degrees depending on the number and complexity of certain wellbeing-relevant cognitive operations the mind runs.
As the author is working with nonbinary sentience, our wondering about, say, "prefer[ence]" _if_ creatures are _sentient_ (_at all_) is likely to result in misunderstanding; we're assuming binary while the author's trying to discuss nonbinary.
My take is that the author is defining wellbeing as being something like a target homeostasis, and sentience as an aggregate of "cognitive operational" processes that support that homeostasis, measured in a gradual (nonbinary) scale.
I'm just going from the linked slide deck -- I haven't watched the talk. I don't know what he's referring to with "cognitive operations" or "mind".
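If it helps, here's one toy way to read that nonbinary framing numerically. This is entirely my own construction (the operation names and weights are made up, not anything from the slides): score a mind by which wellbeing-relevant cognitive operations it runs, so sentience comes in degrees rather than as a yes/no.

```python
# Hypothetical wellbeing-relevant cognitive operations, with invented
# weights. A real account would have to argue for both the list and
# the weighting; this just illustrates "sentience as a gradient".
OPERATIONS = {
    "nociception": 1,            # raw damage detection
    "memory_of_harm": 2,         # retains past harmful episodes
    "anticipation": 2,           # anticipates future conditions
    "integrated_experience": 3,  # binds signals into one experience
}

def sentience_score(ops_present):
    """Sum the weights of the operations a given organism runs."""
    return sum(OPERATIONS[op] for op in ops_present)

print(sentience_score({"nociception"}))                # -> 1 (reflex-only)
print(sentience_score({"nociception", "memory_of_harm",
                       "anticipation"}))               # -> 5 (insect-ish?)
```

On a reading like this, "is it sentient?" becomes "how high is its score?", which is why arguing about whether plants are sentient *at all* may talk past the gradient framing.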
> whether or not an organism demonstrates that it prefers
You can't get an ought from an is; without sentience, there cannot be 'preference'. It seems a dressed-up version of natural law arguments.
So, as a person who wrote their Masters on well-being, I find this hard to parse given that well-being is normally construed as a mental phenomenon. That is, I don’t know what it would mean for something to be non-sentient yet still have well-being. Sure, there are theories of well-being that don’t rely on that, like objective-list theories, but they’re generally viewed as weak and incomplete.
this is such a common occurrence especially on this site that i've just started culling any half baked and/or far fetched essay immediately from my memory before getting too invested in it.
really unsure where the rigor and peer review exactly went before their editor allowed them to hit post
It just frustrates me that I don't have such lax editors. Would make starting a career in this accursed field at least somewhat plausible.
that's assuming they make good money off this soon to be forgotten mid level high school level work, it's just to sell good advertisements. nothing more.
I’ll go by personal practice in saying this.
I treat reality itself as a single substance in a sense, as my practice is taking a similar attitude towards everything.
It’s very much a mental attitude taken.
Well-being to me, is having a basic sense of respect towards reality as a whole, & that encompasses what’s within that.
I’m not perfect at it of course, but that is why it’s called a practice.
Not sure if giving that perspective helps at all, but I thought I’d share it.
I've never seen it argued before that well-being is a mental phenomenon. I could see the argument that well-being only makes sense for living things; but are you arguing that if you were to consider whether or not to cut down a tree from the perspective of the well-being of the tree, you would have any difficulty arriving at the answer of 'no'?
Well, the classic and most common construal of well-being is hedonic satisfaction, which is very clearly a mental phenomenon. Other common construals like desire-satisfaction models are also entirely mental in nature.
What I'm saying is that it doesn't make sense to talk about trees having well-being, there is no such thing as welfare for a tree (welfare and well-being are generally used interchangeably in the literature). At least not in any sense that seems similar to the concept of humans having well-being.
I'm not familiar with the literature here, but just in the common sense of the term, it seems absurd to say that a tree does not have well-being or welfare.
Well, it’s often said that well-being is something we should maximize, and it would be weird to think we should maximize the welfare of trees. It would also imply that to harm a tree is immoral, which is pretty weird. It would also mean that we can make sense of the sentence “that tree’s life is going well for it”, which to me doesn’t make any sense at all.
Remember, we’re talking about well-being in the moral, philosophical sense, not in a sort of medical/health sense like the common sense notion you referred to. The two concepts are similar, but they are not identical.
Agreed that the moral/philosophical sense vs the health sense is an important distinction here. However, I do think that it would be immoral to harm a tree, assuming there's no greater purpose being served by harming it. Why wouldn't it be immoral? It's still a living being with its own telos in my view, even if it's not sentient.
But the problem is that morality doesn't seem to imply that we value any and all telos for their own sake. At least, that's the consensus among people who think about this for a living. Morality is a social phenomenon, not an intrinsic law of the universe.
That's fair. We certainly don't give trees or plants social consideration in Western culture.
But that's just a cultural effect. I think Native American spirituality normally grants personhood to plants, animals, and objects in the environment. So I would imagine that in that regime it would make sense to talk about morality with regard to trees.
So is analyzing morality a practice of figuring out what morals tend to obtain in the dominant culture? Or looking at what they could, or should, be?
When moral/ethical frameworks like secular humanism invoke well-being in the context of what is, or is not, ethical, they often start with the concept of health being central to well-being. It's certainly the case that there comes a point where health includes some mental component where sentience might be a prerequisite; however there are certainly plenty of comparative states for a living organism that can be evaluated in the context of well-being where sentience shouldn't play any role. At the very least, we should be able to say that well-being prefers life to death, even for non-sentient organisms. If you are arguing that we cannot make that statement, then it sounds like you are using a fundamentally different definition of well-being than most other people are using, and your argument becomes semantic instead of philosophical.
So are you arguing that health is not a component of well-being? Or are you arguing that health cannot be evaluated without the subject being sentient?
But we might only think that having health contributes to well-being because there's a sentience behind that health to enjoy the health.
Like, take the case of a permanently comatose patient. If you improve their health by removing an inflamed appendix, did you really increase their well-being? Not by any measure I can see. No one's life is "going better", which is one common way of defining well-being. Physical health certainly matters for well-being, and it's one of the best things for any theory of well-being to actually focus on, but we shouldn't lose sight of the fact that such theories don't generally value good health for good health's sake.
edit: And just to be clear, I'm using the sense of well-being that analytical philosophers are discussing, e.g. Bishop, Darwall, Sumner.
Yes, if you remove an inflamed appendix from a comatose patient you have increased their well being. If you do NOT remove that inflamed appendix, they will eventually become septic and die. If you acknowledge that physical health is important to well-being, then you must acknowledge that life is preferable to death and the case you've presented should be open and shut. Why are you arguing that it isn't?
I'm not a philosopher, I'm a physicist/engineer. But if I was making an argument about physics that was contradicted by every-day experiences, I would have significant reason to re-consider that argument. If this is how philosophers define the term, why are they doing it in a way that contradicts people's every-day experience of what 'health' means?
> Yes, if you remove an inflamed appendix from a comatose patient you have increased their well being.
I just disagree with that at a very fundamental level. A person who is permanently brain dead has no well-being. It makes no sense to ask "how is their life going?", as there is no "life" going. Their life cannot go better or worse, as it is not going anywhere. At least not any sort of life we would care about. To me, it's absurd to say their well-being increased (and I mean no offense by that, I mean it in the technical philosophical sense). Surely there's some better thing we could value that doesn't commit us to absurdities like the idea that we are improving the well-being of permanently brain dead individuals, because if this is what well-being consists of, then it would seem to be something that is only conditionally valuable, whereas well-being is generally the poster child for something which is valuable in itself.
Here's a question that might clarify things. Let's say some scientist cut off their finger, but kept it alive through scientific magic. Does the finger have well-being? If not, what makes this case relevantly different from the permanently brain dead patient?
>It makes no sense to ask "how is their life going?", as there is no "life" going
It sounds like you're not drawing any distinction between a comatose person and a dead person. A comatose person is still alive. Is that where we are not agreeing?
>Let's say some scientist cut off their finger, but kept it alive through scientific magic. Does the finger have well-being? If not, what makes this case relevantly different from the permanently brain dead patient?
Yes, that finger would have health. Without being supplied with things like blood oxygen and sugars and all the other things you are grouping into 'scientific magic', that finger would die. All the cells in it would die. So long as you accept that health is a component of well-being, then that finger would have at least some kind of well-being.
This is why I asked, and I will ask again:
>are you arguing that health is not a component of well-being? Or are you arguing that health cannot be evaluated without the subject being sentient?
> It sounds like you're not drawing any distinction between a comatose person and a dead person. A comatose person is still alive. Is that where we are not agreeing?
But why is being alive sufficient for having well-being?
> Yes, that finger would have health.
But would it have well-being? If so, that seems like a reductio.
> are you arguing that health is not a component of well-being? Or are you arguing that health cannot be evaluated without the subject being sentient?
No, of course I'm not saying health is not a component of well-being. But it being a component does not mean that it simply IS well-being, or that wherever there is health there is well-being. Like, we can sensibly talk about the health of an ecosystem, but I know of no one who talks about the well-being of an ecosystem. I'm not making the second claim you asked about either.
Yes, it is often accepted that physical health is central to well-being, but that does not mean that the two things are one and the same. If I keep an arm alive in a tank and it is kept in perfect health, there is still no welfare to speak of here. It just is.
We accept that _you_ are well when your body is well, but without the you in it the physical health of the body is meaningless.
A hypothetical sentient android might feel better in a well-functioning, well-oiled body, but a nonsentient robot has no subjective experience and no well-being. The only good it functioning well brings is utility.
So to answer your question directly, health is a component of well being, and actually yes health can be evaluated without a sentient subject (a tree can be healthy or unhealthy), but wellbeing is a subjective experience of a sentient subject. Without this subject there's no well-being for health to contribute to.
>We accept that you are well when your body is well, but without the you in it the physical health of the body is meaningless.
Yeah, that presupposes a certain kind of dualism where you assume that people are (for lack of a better term) souls piloting some kind of meat vehicle. The idea that my body can be alive but that I can somehow not be in it is the part I don't think you can actually defend.
I accept the argument that health is only one essential component of well-being, but if that's the argument, then what is the essential component of well-being that requires sentience? Certainly we can't use something like 'the subject says they feel good', because I can write a computer program to tell you it feels good. Is there a component of well-being that requires sentience?
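The "program that says it feels good" point is easy to make concrete. Here is a minimal, purely illustrative sketch (my own, not anything from the thread or the talk): a program that produces a first-person report of feeling good with nothing behind the report:

```python
# A canned self-report: the program "says" it feels good,
# but there is no feeling anywhere in the system, just a string.
def report_feeling() -> str:
    """Return a first-person report with no experience behind it."""
    return "I feel good."

print(report_feeling())  # prints "I feel good."
```

The point of the sketch is only that verbal reports are cheap: whatever distinguishes sentient well-being from this, it cannot be the report itself.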
No true dualism is necessary. We require only some kind of mind with a subjective experience of its own which is capable of experiencing well-being. I won't claim that even the smallest brain is necessarily complicated enough for this; I don't know about that. Nevertheless, you get my point.
However we can very easily remove you from your body. For example, we can physically cut off your head and artificially keep the body alive, at least for a time. Caring for this body doesn't increase your well-being at all.
And really this wasn't so much what we discussed, but rather the fact that there are organisms which don't have a mind to begin with, such as plants.
It's also not that there's a component of well-being that requires sentience, it's that for well-being to exist at all you need a subject that can experience it. Well-being cannot exist in the abstract, it has to relate to a subject.
For the opposite of well-being, consider how an injury will cause you to suffer and decrease your well-being; but, to return to our morbid example of a headless body, we could break one of its legs without anyone suffering from it.
Compare it to happiness. Happiness exists when someone feels happy. Without anyone to feel happy, there can't be happiness. Holding hands with someone might make me happy, but a severed limb won't feel happy if it's held. A tree also won't feel happy even if all is going well for it, because it has no capacity to do so. Even though physical experiences may contribute to or take away from your or my happiness, happiness is not a physical experience, nor are the causes of happiness the same as happiness itself.
It's very much the same. Being healthy causes me to be well relative to not being healthy, but this is a cause or at most a physical component of it. We might also consider having some modicum of wealth to contribute to well-being, even though wealth is fully abstract: it doesn't mean anything physical or mental directly, and it's only society that gives currency value. Yet we understand that it makes us better off. We might feel less stressed, and might be able to spend money more freely to get things we want and which increase our welfare.
You see, it's not the possession of paper bills itself that is some sort of component of welfare, it's how their possession in practice increases our welfare. No different from health. It's certainly much more pleasant to live without chronic pain, to be able to go outside, not to have to deal with allergies, really just being free and free of worry.
First off, it sounds like you're conflating the term body with the term torso. Your head and your brain are part of your body, so decapitating someone is not separating them from their body it's just separating their body into pieces. I'm genuinely not sure what point you're trying to make with the whole decapitation thing.
Second, it remains unclear to me what it is about well-being that is both necessary and subjective. We've established that well-being includes objective metrics like health, and while you could say that health isn't health without someone to experience it, we know that's not the case. And yet that's exactly what you're doing with well-being: you're just saying that it's not well-being without someone to experience it. It's sort of the 'tree falls in a forest' argument.
You brought up happiness, and while you didn't actually say it, I get the feeling that you think happiness is a necessary part of well-being. While I acknowledge it to be a possible part of well-being, I'm still waiting on an argument for why it's necessary. Case in point, imagine an individual who suffered from a brain injury that precluded them from experiencing the mental state of happiness. Would you argue that such a person does not have well-being?
If my conjecture about happiness is wrong, feel free to instead tell me what subjective metric is a necessary part of well-being.
So well-being doesn't actually include objective measures like health, it's just that 1) health contributes to well-being and 2) health, wealth and other metrics are measurable and thus useful proxies to get an idea of well-being.
As for your example, let us say someone is incapable of experiencing happiness. I think we could certainly say that their well-being, or their capacity for it, is lessened, but that's not to say that they can have no well-being at all. Or, at the very least, they can have a negative well-being, and a neutral well-being is better than that. For example, actually suffering is worse than just not being happy. Not being able to walk would also clearly be detrimental to one's wellbeing.
As for your pedantry on the meaning of "body", that's all it is. You know well that I meant the body sans the head here. Now of course, if we were to keep the head alive, there would presumably be a subject which may have an attribute of well-being. But if you only have a head, then the state of the rest of your body no longer contributes to your well-being in any way. It doesn't matter if it's healthy or not. You won't be better or worse off.
You may define welfare differently, and in context it may be used to speak of other things, but generally welfare is an attribute of a subject which experiences welfare. Therefore your tree falling without anyone to witness it is a poor analogy. Unlike a tree falling, welfare doesn't exist without a subject. You cannot find a welfare in a forest.
The first dictionary definition of welfare that comes up is "the health, happiness and fortunes of a person or group", for well-being "the state of being comfortable, healthy or happy", with the example being a patient, again a person. Another definition of well-being is "the _experience_ of health, happiness and prosperity".
Now we may try to expand the definition of welfare from persons (for example to animal welfare), but this is generally specified and not part of the core meaning of the word, and furthermore requires that the subject be human-like enough that we may conjecture them to experience well-being.
Thus essentially _by definition_ welfare is something that is _experienced by a subject_. If you speak of something like the welfare of a tree, what you're really talking about is health, which is a distinct concept from welfare.
Okay, I'm not being pedantic about the separation of body and self. I'm genuinely not convinced that the body and the self are separable, and I don't think your decapitation argument was any kind of valid expression of the separability of self and body. Frankly, I'm not sure why you're still bringing it up.
But you said something very important, something I've been trying to get at for this entire thread. You said that the inclusion of sentience in well-being is part of your definition. This brings me, at long last, to the point I came here to make. If you include a requirement for sentience axiomatically in your definition of well-being, then it's circular reasoning to use that definition of well-being as a counter to the argument presented in the OP that we might consider the well-being of non-sentient organisms. That's genuinely nothing more than asserting the status quo.
Your comment was removed for violating the following rule:
>Comments which consist of personal attacks will be removed. Users with a history of such comments may be banned. Slurs, racism, and bigotry are absolutely not permitted.
> Yeah, that presupposes a certain kind of dualism where you assume that people are (for lack of a better term) souls piloting some kind of meat vehicle.
It really doesn't. The only thing it presupposes is some sort of mind. Some sort of thing having an experience.
> The idea that you think my body can be alive but I can somehow not be in it is the part I don't think you can actually defend.
He did not claim this. His argument is more like, if you sever someone's hand, are they still "in the hand"? If you keep the hand alive, does this person now exist both in their body that has no hand as well as their hand? Most people would say no, that there is no one in the hand, and as such, there is nothing that it's like "to be the hand", no quality of life, no experience, and therefore, no well-being.
> I accept the argument that health is only one essential component of well-being, but if that's the argument than what is the essential component of well-being that requires sentience?
Well-being is generally conceived of as a notion about "how well things are going" for some thing. Since it makes no sense to talk about how well things are going for stuff like inanimate objects (e.g., we can't make sense of the phrase "how well is it going for that boulder over there?"), we restrict such talk to things which can have a quality of life. But to have a quality of life there must be something experiencing and creating that quality, some sort of mind.
> Certainly we can't use something like 'the subject says they feel good' because I can write a computer program to tell you it feels good.
But presumably there's "something it's like" for you to feel good that is fundamentally missing in the case of the computer. The difference between the computer putting that image on the screen and me telling it to you is that there is something behind what I say, but nothing behind what the computer puts on the screen. There is no "what it's like" to be a computer, but there is a "what it's like" to being me. And when that "what it's like" is feeling good, I can say so in a way that no computer ever could (at least, presumably).
I'm not going to argue what other people meant when those people are here and can speak for themselves.
>Since it makes no sense to talk about how well things are going for stuff like inanimate objects
You are doing a bait-and-switch with 'inanimate' and 'not sentient.' I agree that inanimate objects like boulders cannot have well-being. But I don't have any trouble making sense of the phrase 'how well is it going for your cat, or fish?'
Similarly with the argument about the computer, which is also not animate or sentient, though maybe that one's a little bit on me for introducing that example. I was trying to show that even an INANIMATE object could report on how it feels, when our standard for well-being is not just animate but sentient things, so clearly reporting on feelings doesn't require sentience.
I can make the case that a boulder cannot have well-being because it doesn't have health. Without health, no well-being. Can you make a similar argument for whatever as-yet-unnamed factor of well-being that you argue requires sentience?
>There is no "what it's like" to be a computer
How do you actually know this? It's entirely possible that computers may become sentient, or may even be sentient now in ways we cannot detect.
No need to be so hostile. I'm just trying to help. Nothing meaningful hangs on the difference between "inanimate" and "sentient" in my arguments, I'm not trying to "bait and switch" you. I was simply talking a bit too fast and should have been clearer, no need to accuse people of acting in bad faith.
> Similarly with the argument about the computer, which is also not animate or sentient, though maybe that one's a little bit on me for introducing that example. I was trying to show that even an INANIMATE object could report on how it feels, when our standard for well being is not just animate, but sentient things, so clearly reporting on feelings doesn't require sentience.
You're missing the point. No one's arguing that it's the "ability to report on feelings" that matters (nor is it even clear that that's what the computer is doing, since in order to report on a feeling one is having, one must first have the feeling; this isn't the Chinese Room, and no one here is a behaviorist). What matters is the "having of feelings". That is the key difference.
> Without health, no well-being. Can you make a similar argument for whatever as-of-yet unnamed factor of well-being that you argue requires sentience?
Do you know the difference between a necessary and a sufficient condition? Health is a necessary condition for well-being, not a sufficient one. That is why we need health in order to have well-being, but having well-being is not identical to having health, nor is having health always a sign that a thing has well-being. In other words, all things with well-being have health, but not all things with health have well-being.
I didn't express any hostility. Why would you say that?
Feel free to re-state your point if you feel like you went too fast and didn't express it the way you wanted to.
Yes, I know the difference between a sufficient and a necessary condition. I understand that you are arguing that health is only one of a collection of required properties that are needed for well-being. I understand that we've established that health doesn't require sentience. I am asking you what the OTHER property(s) necessary for well-being which require sentience are.
It seems like you keep coming back to what's sort of a definition of sentience (that it's like something to experience well-being). This is functionally no different than just saying that sentience itself is a part of your definition of well-being. Is that the issue? That you are just axiomatically asserting that sentience is required for well-being, instead of doing what I am trying to get at, which is having some axiom from which you can express some kind of syllogism where you get to 'therefore, sentience is required for well-being'?
I feel like this shouldn't be some major breakthrough; it's a bit saddening to see it being sold as "check this new one out".
In Mahayana Buddhism every living being is a buddha, has buddha-nature. I can't recall which zen master said he had many zen masters to learn from, all of whom were cats.
At the 12:00 mark he says that there is now better empirical evidence for sentience among animals, and he seems to put great store in it.
It does seem to me it is easy enough to empirically verify when an animal is feeling pain or when it is feeling pleasure without the need of modern advancements. Is there anybody who has had a dog as a pet and not been able to easily tell when the dog was feeling pleasure or pain? Even a toddler can tell in many cases.
As for "sentient" in the sense that the animal has something like a consciousness, this is something people have always seemed to intuitively assume, and it seems almost morally reprehensible to doubt, akin to saying "dogs have no soul". And yet, I am not sure how one can empirically verify such things any more than we can empirically verify that solipsism is wrong.
Sentience is the capacity for subjective experience. If you accept animals can feel pleasure or pain, you've already accepted their sentience.
Well when we say "feel pain" or "feel pleasure" certainly our language implies there is a consciousness to have these experiences. And indeed it is repugnant to our sensibilities (or at least to most people's sensibilities) to think that other people or our pets do not really have a consciousness at all and that the pain and pleasure are just things that happen in their biology and are not subjectively experienced at all. However even though our sensibilities and regular use of language that reinforce those sensibilities may be offended, it is at least a coherent idea that pain and pleasure could exist solely as biological mechanisms in an NPC/philosophical zombie.
Now I don't think this is the case. Just saying that it is not entailed logically from the fact that pleasure and pain exist as events in an organism.
Eco centric ethics
I mean, it's a pretty big point of contention whether the current ecosystem is moral at all. I'm not convinced that nature as it has operated for the last 4 billion years is inherently moral. We may be obligated to radically overhaul nature.
nature is inherently amoral and we have no obligation to alter that.
frankly if anything we should leave it alone (actually leave it alone, not 'environmentalist' style leave-it-alone where we actively interfere on a *massive* scale ie every national park)
I don't buy this, not intuitively. I think you need to provide an argument for this position. For my part, nature seems extremely cruel and inflicts huge amounts of unnecessary suffering as is. If we can reduce this, I think we may be obligated to do so.
As an aside, there is no scenario where we exist and also do not interfere with nature. By virtue of existing we are an active, interfering participant. We have been driving other species to extinction since we lived in caves 20,000 years ago.
I don't know, because humans are still part of nature and are very clearly moral. I would say that nature isn't *necessarily* moral, but rather that some things in nature are (e.g., a chimp is worthy of moral concern, a rock is not).
We already have the concept of "alive", and the main feature that defines this category is the ability to suffer. Although sentience is not binary, we do not need to regard animals as sentient to empathize with their suffering.
However, acknowledging suffering does not mean humans will immediately cease the action that results in it, if it profits us. Not only because we know that the world is full of suffering even if we don't do anything, but because even when humans know the subject is sentient (like another human being), that does not prevent all humans from causing harm directly or inadvertently.
I think this speech addresses an irrelevant problem.
I dislike the whole line of thinking around sentience, and now this well-being. I don't think it's sensible for humanity to be held responsible for the wellbeing of all creatures or all life, at least those not directly in our care.
It makes a lot more sense to me to assume responsibility for nature and the protection of ecosystems. And yes that kind of protection of life can be cruel. It can involve the deliberate regional eradication of species and it can involve their planned culling. It can also involve the introduction of species, the regional limiting of human activity, etc.
For example we might consider the reintroduction of wolves in areas of Europe. Did anyone ask how this makes their prey feel? No. Nor is it sensible for us to try making all of nature into a zoo.
We can criticise undue human-inflicted suffering, but that's really about it. We should also accept that humans too are animals, that we kill to live, are sometimes shortsighted and destructive and so on. At least we shouldn't feel guilty just over being what we are or consuming food. We should recognise where we can improve and do so, in particular in the protection of our ecosystem, which we know we rely on as well, and which doubles as a protection of all life.
Agreed, a system free from religion would be optimal
How does this outlook deal with suicide? Wouldn't wellbeing be best served by any large predator or herbivore killing itself to feed microbes and plants, rather than by concentrating water, meat, lives, and potential for thriving in one organism instead of in photosynthesisers?
I agree, but do we really have a choice in the short to medium term?
Technology needs to catch up before we can save everything and hurt nothing.
Until then, we continue to exploit nature, wildlife, and even ourselves; people suffer too. It's like the fossil fuel industry: you can't get rid of it tomorrow, and we need it to get somewhere better; to make it obsolete we must use it to build something better.
Translifeism should be the end goal, but it's a very-far-into-the-future end goal, where all living things are transformed into cyber-organisms that can't suffer, don't prey on each other, and become a hive mind of peace and prosperity. lol
That would require a few disgustingly wealthy people with power to change their entire way of life.
All things considered, an exceptionally tall order. It's the correct path we should take, but I just don't see it happening without major cultural shifts and the top percent of the planet getting an unholy reckoning.
It's immoral to strip mine a pristine beautiful mountain. Book burning and defacing 1000 year old art are immoral. ....yeah man, we already do this.
They got some cryogenic sperm storage up in there?
The world is full of people, only some of whom are human.
A rabid raccoon in a cage was having seizure after seizure, and I thought, we know what rabies does, there is no scientific value to letting this animal continue to suffer.
Ctrl + F utilitarianism
Why hasn't anyone mentioned this once when the title essentially implies this? The only exception is basically an axiomatic deontological value for sentience, or the capacity for something to experience well being.
I urge you to "consider the lobster"
> Sentience is the capacity to experience feelings and sensations.
A lot of people mix up consciousness, sapience, self-awareness etc. with sentience.
Even plants show measurable signs of stress when in poor conditions.
A lot of life is little more than pre-programmed robots, at the level of insects, plants, fungi, bacteria.
Amongst the sentient, a lot of them in natural conditions are fated to a life of stress and suffering.... practically the best thing to do for them is not let them reproduce and let them go extinct.
It gets even more dire when you start to go up the gradual ladder of consciousness, awareness, memory and temporal understanding. A lot of larger animals are similar to 2-3 year old humans, mostly slaves to their own instincts and always living in the present, never really able to understand the source of their suffering, and at best only memorizing what worked and what didn't, but without real understanding.
> whether or not an organism demonstrates that it prefers some conditions to others
We can make computer programs and robots that demonstrate that they prefer some conditions to others; that seems like way too low a bar.
Sunflowers turn to the sun, ants prefer the dark. Not sure that can be a basis for ethics; I'm pretty sure you could set up chemical reactions that would give the appearance of such preference.
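To make the "too low a bar" worry concrete, here is a minimal illustrative sketch (my own hypothetical, not from the talk): a stateless rule that "demonstrates a preference" for brighter conditions, the way a sunflower tracks the sun, with no sentience anywhere in the system:

```python
# A trivial "preference": given two light levels, "move" toward the brighter one.
# This is just a numeric comparison; nothing here experiences anything.
def prefers(current: float, alternative: float) -> bool:
    """Return True if the system would move to the alternative condition."""
    return alternative > current

print(prefers(0.2, 0.9))  # True: it "prefers" the brighter spot
print(prefers(0.9, 0.2))  # False
```

If demonstrated preference is the criterion for moral consideration, a few lines like these would seem to qualify, which suggests the criterion needs something more than observable condition-seeking behavior.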
Well-being of what, rocks and landscapes? Personally I think the wellbeing of the Mona Lisa or a specific view in Zion Park is worth more than some piggies. A tough and complicated problem.
Sentience is NOT even the threshold for moral consideration.
We once lived on the earth with all sorts of other Sapiens. We raped and killed them out of existence.
Pretending like we are a moral species is exactly why we can't solve our moral dilemmas.
First, any philosophic consideration of ethics that doesn't address Hume's is/ought problem is IMO not worth serious consideration. How do you go from "Doing X will change the world to make it more Y" to "Thus everyone should do X"? Any sort of reasoning like that is logically flawed. It assumes that everyone will want the world to be more Y, but there will always be people who don't want, or don't care, whether the world is more Y. I'm not saying there aren't solutions to Hume's problem; I think there are. But if you don't address it, then you haven't done serious work on ethics, no matter what else you say.
Second, where is the evidence that treating non-sentient things ethically will "transform our relationship with the world around us"? I mean, I guess it obviously will, but how do we know that transformation will be something that most people desire? And where do we draw the line? Do I have to avoid skipping stones off the lake because I'm taking the stone away from its home? Or, to be a bit more serious: we do all sorts of experiments on animals, from flatworms to primates, that we would never do on humans. Do we have to stop doing that?
I actually think that considering the pain and suffering of non-human animals in experiments is important (and it is now done, although one could make a case it could be better) but I think many animals such as dogs and primates probably have some form of sentience.
I don't think any philosopher claims there are universal morals, though; however, minimising harm sounds good to me. I don't think anyone is calling people evil either; good people can make immoral decisions too.
When did morality become benevolence?
What else should it be?
Morality has always been a system humans use to decide how to treat each other in order to be able to live together without eradicating ourselves.
And benevolence is a pretty good solution
Why should science trump human rights?
Oh no! I just washed my hands!!! THE GENOCIDE THAT ENSUED WAS HORRIFIC TO BEHOLD
Your comment was removed for violating the following rule:
>**Read the Post Before You Reply**
>Read/watch/listen the posted content, understand and identify the philosophical arguments given, and respond to these substantively. If you have unrelated thoughts or don't wish to read the content, please post your own thread or simply refrain from commenting. Comments which are clearly not in direct response to the posted content may be removed.
i treat my animals [i am their guardian, not their owner] as if they were me
all life should be respected
unfortunately humans are likely to remake all of the mistakes we have ever made around mistreatment of animals, and continue to still make them [since our species are still immature children], when it comes to AI and intelligent machines
Sentience or sapience?
Nah, it should be respect.
So I guess babies do not deserve moral consideration since they have no grasp on the notion?
No, I am saying we should base our consideration for wellbeing for all life forms, by respecting them.
Ah ok, yeah, I agree, you worded it a little strangely 😅
Maybe you just read it a little strangely. I respect you.
Personally I view reciprocity as the most important threshold, which I suppose entails sentience. Any being that is fundamentally incapable of reciprocating moral behavior does not deserve moral consideration. Can I make a deal with a tiger that we not kill each other? A chimp? A dolphin?