Next #SoCIA18 talk: "On aliens and robots: moral status, epistemological and (meta-)ethical considerations" by Keith Abney!
(These are 20min talks, so they'll come fast & furious.)
How do robot ethics inform alien ethics? Robot ethics is definitely a big question, what with lethal autonomous weapons systems being developed. (Abney is from a group with a neutral position - he finds some anti-roboweapon arguments compelling, others not.)
Robots aren't responsible for their actions, so giving them autonomy is a big problem. But the issues change (in the future) if the robots themselves become full moral agents. How will we test that?
Is there a "moral Turing test?" Would it be the same for aliens, humans, and robots? One proposal is the "Turing triage test:" is there any point where you might choose to save a robot instead of a human (if you can only save one)?
Our speaker does NOT like that test, mind you.
It mixes up intrinsic vs extrinsic value, and instrumental vs final value. A bad question, like "which would you save, my life or all the gold in Fort Knox?"
I mean, it's not an incoherent question (especially in a terrible action movie), but it's not asking about moral agency/personhood/etc. More of "wrong question" than "bad question."
If rights are derivative (social contract based), then neither robots nor aliens have any. Maybe better to think about basic rights (natural/universal; part of personhood). Those rights can't be taken away by social contract - inalienable! Pun intended.
But, correlativity theses: no rights without responsibilities. Being killed by a natural disaster isn't violation of your rights. (I think I missed why that last line was relevant.)
Whence does intrinsic value arise? Life (biocentrism)? Everything (cosmocentrism)? Rationality (logocentrism)? Capacity for pleasure/pain/consciousness (sentientism)?
Lots of people think it's sentience, but our speaker is logocentric instead: beings need the capacity for moral reasoning. Ex: trees might well experience pleasure/pain, but they aren't moral beings.
Various arguments for/against sentientism, though I won't recount most of the things he sets up & knocks down.
Philosophical Hedonism (pleasure = good, pain = bad) is trouble because it sends you to wirehead-ism. (The good life is electrical stimulation of brain pleasure centers!)
Confusion between intrinsic, final, and extrinsic/instrumental value. Are the values inherent? Do some things have value just as they are, as ends in themselves? Etc. Some of these entail the others, but not always.
E.g. Saturn's rings, as objects of beauty, have final value (they're not valuable toward some other purpose) and maybe instrumental value (I want them to exist), but there's nothing intrinsic to them - it's all about observer aesthetics.
Terraforming (destroying alien biospheres) has no *intrinsic* problems, he says, but can still have instrumental moral problems.
Precautionary principle: if you're not sure of implications, take it slow! Destroying alien bacteria might be fine (no intrinsic value to them), but we don't know their instrumental value, so hold on there terraformbuddies.
Not everyone in the audience believes that intrinsic value arises from the capacity for self-value. If I'm the only conscious being in the universe, but I don't value my life, do I have intrinsic value?
Audience members note that this line of reasoning gives no intrinsic moral value to a 6mo baby. Others say the intrinsic/instrumental distinction may just be causing more harm and confusion than good. Contention! Excellent.
Handedness comes in two groups, "right handed" and "not right handed." Most people use their right hands for almost all precision movement, but the other group is a broad spectrum from weakly-right to strongly-left. baen.com/handedness
The way we describe and define handedness creates the effect @CStuartHardwick rightly notices. Culture defines how we talk about it - but the behavior is mostly genetic. The % of righties has remained constant across continents and millennia.
Hand dominance is a more squirrelly thing than most people realize. For example, righties are better at *some* things with their left hand... and *some* of these asymmetries flip in lefties. Take a few minutes on #LeftHandersDay to learn more!
But you should read and learn from the #BlackSpecFic report anyway! The missing data is due to idiosyncrasies of the @EAPodcasts model, and has no impact on any other magazine's numbers.
Long story short, we treat reprints very differently than other magazines do. For @escapepodcast specifically, they were ~45% of our 2017 stories, and our editorial process has one unified pipeline for originals + reprints together.
Regretting organizing my two Worldcon panels this year. It means I'm not free to throw up my hands in frustration and give up on programming. The last 24hrs have been the latest, worst icing on a bad cake that's long been baking.
I mean, my panels will be awesome. But if you're skipping programming because you don't trust the con, you've made a sensible choice.
There are always more people who want to be on programming than can fit. There's no way to make everyone happy. I get that. But this weekend's screwups come in the context of a long chain of trust-erosion.
So glad this one came out! "After Midnight at the Zap Stop" by @ouranosaurus is an awesome story - full of late-night grease, and the luckless & the worthy. But also because it's a #neuroscience teaching opportunity. Might even be a #NeuroThursday!
One offhand line explains a technology as "stimulating a particular set of mirror neurons." Which works as a story element just fine. It sounds plausible and authoritative! But as a neuroscientist, I have strong opinions about #mirrorneurons. I don't think they're real.
To be clear, mine is a controversial opinion. Many neuroscientists would disagree. But it's a hill I'm willing to fight on, especially given how often "mirror neurons" crop up in popular science.
This phenomenon - when you look away from a moving thing and briefly see illusory motion in the other direction - is the "Motion Aftereffect," and it comes from some very basic brain maneuvers. Who wants to join me in going full #NeuroThursday here? en.wikipedia.org/wiki/Motion_af…
Most neurons in the brain (and elsewhere) do this thing called "adaptation," where they accept whatever's going on as the new normal. For example, if you sit down with your laptop on your lap, you'll soon stop noticing the weight.
This can arise at the crudest single-cell level: some ion channels in the cell membrane have negative feedback loops that self-dampen.
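Not from the thread, but since we're going full #NeuroThursday: here's a tiny toy simulation of that self-dampening idea (Python; the single feedback variable and all the numbers are illustrative assumptions, not a model of real ion channels). A constant input fades toward the new normal, and switching it off leaves a brief opposite-sign rebound - the same basic flavor as the motion aftereffect.

```python
import numpy as np

# Toy sketch of single-cell adaptation via a negative feedback term:
# the response is the stimulus minus a slowly accumulating feedback
# variable, so a steady input fades and its removal causes a rebound.

dt = 1.0                    # ms per time step (made-up value)
steps = 3000
stimulus = np.zeros(steps)
stimulus[500:2000] = 1.0    # constant drive switched on, then off

tau = 300.0                 # slow adaptation time constant in ms (made-up value)
response = np.zeros(steps)
feedback = 0.0

for t in range(1, steps):
    response[t] = stimulus[t] - feedback   # output = drive minus adaptation
    feedback += (dt / tau) * response[t]   # feedback builds while the cell is active

print(round(response[510], 2), round(response[1990], 2), round(response[2010], 2))
# roughly: ~0.97 just after onset, ~0.01 once adapted, ~-0.96 rebound after offset
```

The print line is just a quick sanity check: a big response at stimulus onset, almost none once the cell has adapted, and a negative "aftereffect" right after the stimulus disappears.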