#ethics

Well said my friend, well said. 




girlsmoonsandstars:

katsbien:

girlsmoonsandstars:

still trying to piece it together but i think there’s a relationship between social media, streaming entertainment media, isolation in the home, reliance on the internet (order it on amazon! it arrives in two days! you never have to see anyone!), the widespread proliferation of “narcissism” (and other pop-psych terms) as casual language, the widespread encouragement to “cut off” people who are “toxic” or have “bad vibes”, the left’s push to try and analyze every single social interaction for bad intentions (typically via a “privilege”/“oppressor” lens), and a bunch of other stuff… like we are actively being pushed to break connections with other people and with our communities. we are actively being pushed to retire from the public sphere to live our whole lives isolated and in private. something hinky is going on and i say this as a person who is already involved in multiple community projects. there’s something sinister happening. something is rotten in the state of Denmark. i don’t know what’s going on but it’s Bad. and i think of places where women are forbidden from leaving the home. the andrea dworkin quote about domestic violence, house as coffin. something is wrong

connection with another person is the one thing they can’t truly sell, they can sell imitations of it to the lonely but you can’t buy real honest genuine connection. so they try to devalue it and sell the imitations in its place. also if you go out and talk to real women without seeing how well they’re following the latest doctrine, you’ll realize you have more in common than you thought. and that’s dangerous. so you’re asked to listen for the right words and in their absence you can write her off and you won’t connect with her. patriarchy has always tried to keep us from actually connecting with each other, usually by devaluing relationships (friendly, familial, romantic, or otherwise) between women. it’s a sickening intersection between patriarchy and capitalism. what they can’t sell doesn’t matter and what they can is the only thing that matters. if women relate to each other and understand our experiences are scarily universal, then how can they sell to us? also lonely people are easy customers. there’s a couple different lines of thought here that intertwine.

the notes and tags on this post are brilliant kisses for all of you please never stop talking about this

shadowfromthestarlight:

“We must challenge the very idea of a radical separation between something that is “true in theory” but “not valid in practice.” If a theory is correct, then it does work in practice; if it does not work in practice, then it is a bad theory. The common separation between theory and practice is an artificial and fallacious one. But this is true in ethics as well as anything else. If an ethical ideal is inherently “impractical,” that is, if it cannot work in practice, then it is a poor ideal and should be discarded forthwith. To put it more precisely, if an ethical goal violates the nature of man and/or the universe and, therefore, cannot work in practice, then it is a bad ideal and should be dismissed as a goal. If the goal itself violates the nature of man, then it is also a poor idea to work in the direction of that goal.”

— Murray Rothbard

Horse feathers. We navigate by the North Star. Might we ever reach the North Star? No. Is the North Star still useful to us? Yes.

To love our enemies and bless those who curse us, to thank God for every blessing and every trial, to be perfect as our Father in heaven is perfect—these values stand at the zenith of the moral compass by which we navigate. Are these values practical? No. Do these values violate the fallen nature of man? Absolutely. Is it a poor idea to work in the direction of these values? Ask the Holy.

the-real-seebs:

Anonymous writes:

I’m not so sure no one deserves to die, but I am sure that humans are not equipped to make that decision

That’s a really good point, yes.

understatedocelot:

mentalisttraceur:

On Hell and Evil God

If God has a traditional hell, then I reject God, even if that means damning myself to hell.

There are other ways God could earn my rejection, but inflicting an infinity of unnecessary and unwanted suffering is obvious baseline evil to me. Of course I would be open to God convincing me that I’m wrong here.

In the real world, you pick your battles and don’t fight unwinnable and ineffectual rebellions. But that’s a means to achieve a better outcome in the long run. What’s the long game here, convincing or overthrowing God later? If that seems possible and more viable, of course I’d try for that. But against a truly omnipotent God, rejection with the free will God gives you is the only opposition that means anything. And back in the real world, that temerity, that defiance, that willingness to take on any personal suffering and cost to not let evil get its way, is a powerful asset against real evil, so long as you can turn it off as needed.

interesting post, ty for pointing me here.

this is obviously the morally venerable position. if you really believe in your beliefs, then you should be willing to defend them to the death, no matter if your adversary is your teacher, government, or god itself.

but i have to ask: is the phrase “I reject God” a statement about your ideals, or an actual prediction about what you would do in the scenario in which God is standing in front of you and threatening you to choose between hellfire and his will?

i harbor no judgement against those who would say “I think the morally right thing to do is X but, when push comes to shove, I think I would probably do Y instead.” partly this is because I do not take the morally maximalist position that says that one has to fight losing battles in the name of what is right, because there aren’t very many interesting moral dilemmas that are amenable to an unambiguous resolution about what is right in the first place. but also because I think that whatever morality is, the human form is not the ideal vessel for conveying it, and it is possible for one to reason more-or-less correctly about morality without actually implementing it.

to use a different example because i get bored easily: what of those people who say so breathlessly, “YES I would punch a nazi. if a nazi were in front of me rn YES i would punch him. why is this even a question??”

i think nazis suck and it is not at all hard for me to grasp the argument for doing violence to them. but i cannot for the life of me imagine punching a nazi if one were in front of me. i have never punched anyone or anything. and i venture that the majority of people haven’t. so if i see someone saying the quote above, my thought is that while there is a small chance they mean exactly what they say, there is a larger chance that they are either being hyperbolic, or that they are just a temperamentally violent person; in either case, their credibility is reduced.

i bring up this example because (i hope) it shows that strong claims of moral certitude often belie a great deal of moral flimsiness. i am not accusing you of the same, but i would like to suss out whether your certain rejection of the “asshole god” is a probabilistic statement about how you deal with conflict, or a philosophical statement about your standard of moral consistency.

Morally venerable? Not sure about that. I am just morally weaponized. But you can’t make a society out of just weapons, so we should be careful of how much we venerate this way above others. For what it’s worth, to the best of my recollection, I reached this position in my teens partly because it was venerated (so it had that moralistic narcissism reward of realizing my cognition led me to a position recognized as exceptionally moral) but more through intensity of feelings against needless suffering and any who willfully support or cause needless suffering.

Anyway yes I’m saying that’s literally exactly what I’d do. I would give an evil God the middle finger, and tell him I choose his hell over him. That’s why it’s specifically the post where you say “so you would face God and walk backwards into hell?” which prompted this post. Because yes.

Of course after enough suffering, I will break, and this will no longer be true, possibly ever again. But the normal premise of how hell works is convenient here, because you only need to steel yourself to tell Evil God to fuck off once, and then you’re committed no matter how much you want to take it back later.

(Aside: this leads to a “fun” interpretation of Catholic-style purgatory as actually a torture chamber that breaks you into accepting God.)

I mean even as I am now, you might be able to catch current me on a particularly bad moment where I’d accept Evil God. If I am emotionally spent, empathy-tortured, feeling hurt and underfulfilled badly enough to tip over into selfishness, and have low mental-stamina and will-power reserves.

And of course there is a level of fear and pain that I cannot overcome. If God makes the walk into hell a five minute jog through some introductory preview fire, or even just a short entryway that feels like an oven, or shows me the right spoilers of what I’ll experience in there, I probably don’t make it in, or take too long to muster the courage and willpower, and he can be like “haha see you weren’t really willing to choose hell to reject my way”.

But are you required to do the same? That’s like asking if you’re required to carry enough fuel to get out of a black hole’s gravity well. But in the limiting case no amount of fuel is enough because you went past the event horizon, and so asserting an ethical obligation to bring enough is unreasonable and even nonsensical. It’s too much to ask of anyone, and cruel to demand that people spend any effort towards something impossible. Given absolute certainty that God is truly omnipotent, evil, and impossible to convince or otherwise stop from doing evil, it is similarly unreasonable and nonsensical to require opposition from any of us.

The actually interesting question is asymptotically approaching your question - because you can never truly know for certain if that’s God, or if you’re right and they are wrong, or if it’s a test, or if hell will really be forever, or if God is really omnipotent - no such thing as evidence that proves 1.0 probability of a truth (just as matter with mass cannot achieve speed c, and just as, in an external reference frame, an infalling object never reaches the event horizon of a black hole).

You’re either the kind of person who would self-sacrifice, or you’re not, for a given set of evidence.

So how much fuel are you obligated to carry on your ship? Well, what kind of escape velocities do you want to achieve - that is to say, how close do you want to be able to come to an awful outcome while still being capable of averting it? But what else do you want to carry? Where else do you want to go? What else do you want to be able to achieve with that ship? What materials do you have to work with for the hull? Ethics is inescapably an engineering problem - you find the best tradeoffs given all the options and constraints - each possible combination of “obligations” and relative weights between them leads to different “ought-to”s somewhere among the possible situations you might apply those ethics to. So you tune the obligations to best balance outcomes.

(Aside: this also gets close to why I assert ethics is inescapably consequentialist: why I think “non-consequentialist” ethics just informally implicitly entails a choice of consequence preferences, and this choice is often inconsistent/incoherent/incomplete due to negligently willfully ignoring how any choice of obligations/deserts/virtues is logically equivalent to such choice.)

So the punching Nazis example is much more useful because that one is much closer to real-world situations. But still way too vague and general. I would punch some people, in some situations. I have done it. Mostly as a kid… too much too eagerly as a kid. But in my late teens too, when it was needed to stop a certain pattern of abusive action. But hurting people is hard. I do not enjoy it, except in very specific cognition flows (which I’ve shaped my mind to generally not enter until the last possible moment in situations where it is necessary). I am not eager to do it, though I am more trained and confident at it than most. Violence is risky and costly. Rarely the best solution. And I will experience empathy-suffering for the other guy after I succeed. But yes I think sometimes it’s the best solution. And that includes punching some Nazis in some situations, though probably much fewer situations than the nazi-punching fandom likes to imagine. But the full correct answer to the Nazi punching problem looks like tuning an immune system, and as I said at the beginning, you can’t make a body out of just immune system cells.

Anyway, what’s really important here is:

  1. If you don’t walk into hell with me, in the situations where I would, I’ll understand, and I’ll forgive you, in as much as I have the mental room and reserves to do so.
  2. I think in the extreme of an evil God where making a statement by choosing hell is the only choice at all, the gamble that it has a convincing effect of saving potentially all hell sufferers is worth it.
  3. But there is a threshold of certainty that choosing hell will not convince Evil God, at which it becomes ethical just to choose the much more certain optimization of your experience stream for the better, since that has ethical value too.
  4. I also think your thought experiment hides the fact that in most conceivable experience streams, you could justifiably gamble that there’s a good enough chance that another course of action would work better, or that God understands ethics better than you.
  5. The probability distribution of how I deal with conflict across all of possibility space is perfectly tuned, nuanced, and correct, but beyond comprehension or description by mere mortals varies based on many factors which Evil God Hell Choice and (Neo-)Nazi Punching problems don’t really surface, so I will not be taking further questions at this time (jokes aside, further questions are welcome).
  6. I’m increasingly convinced that instead of trying to figure out what the “best” ethics are, the more real-world problem is figuring out what the best sets of different ethics are, to distribute across different people in society. We should probably have a distribution of different willingnesses to self-destruct or self-sacrifice or suffer for the greater good than others.
  7. Standards of moral consistency, like most ethical prescriptions, are best tuned to be as good as they can be without being unattainable or discouraging.

We must right this wrong before it is too late. We certainly don’t need a wall badly enough to condone the psychological torture of children. We need a mirror to reflect upon who we fundamentally are and what we are in danger of becoming. Our nation’s moral compass has been thrown, it is time to find our true north again. This is my call to human decency.

#familiesbelongtogether #america #immigration #family #ethics #morality #freedomandjusticeforall #resist #life #liberty #dignity #humanrights #socialjustice #peace #congress #amnesty #help #helpforthehelpless #bible #christian #books #compassion #moralcompass #wakeupamerica



Elizabeth A. Wilson’s Affect and Artificial Intelligence traces the history and development of the field of artificial intelligence (AI) in the West, from the 1950s to the 1990s and early 2000s, to argue that the key thing missing from all attempts to develop machine minds is a recognition of the role that affect plays in social and individual development. She directly engages many of the creators of the field of AI within their own lived historical context and uses Bruno Latour, Freudian psychoanalysis, Alan Turing’s AI and computational theory, gender studies, cybernetics, Silvan Tomkins’ affect theory, and tools from STS to make her point. Using historical examples of embodied robots and programs, as well as some key instances in which social interactions caused rifts in the field, Wilson argues that crucial among all missing affects is shame, which functions from the social to the individual, and vice versa.

J. Lorand Matory’s The Fetish Revisited looks at a particular section of the history of European-Atlantic and Afro-Atlantic conceptual engagement, namely the place where Afro-Atlantic religious and spiritual practices were taken up and repackaged by white German men. Matory demonstrates that Marx and Freud took the notion of the Fetish and repurposed its meaning and intent, further arguing that this is a product of the positionality of both of these men in their historical and social contexts. Both Marx and Freud, Matory says, were Jewish men of potentially-indeterminate ethnicity who could have been read as “mulatto,” and whose work was designed to place them in the good graces of the white supremacist, or at least dominantly hierarchical, power structure in which they lived.

Matory combines historiography, anthropology, ethnography, oral history, critical engagement with Marxist and Freudian theory, religious studies, and personal memoir to show that the Fetish is a mutually constituting category, one rendered out of the intersection of individuals, groups, places, needs, and objects. Further, he argues, by trying to use the fetish to mark out a category of “primitive savagery,” both Freud and Marx actually succeeded in making fetishes of their own theoretical frameworks, both in the original sense and in their own pejorative senses.


Read the rest of Affect and Artificial Intelligence and The Fetish Revisited at Technoccult

In Ras Michael Brown’s African-Atlantic Cultures and the South Carolina Lowcountry, Brown wants to talk about the history of the cultural and spiritual practices of African descendants in the American south. To do this, he traces the transport of central, western, and west-central African captives to South Carolina in the seventeenth and eighteenth centuries, finally touching lightly on the nineteenth and twentieth centuries. Brown explores how these African peoples brought, maintained, and transmitted their understandings of spiritual relationships between the physical land of the living and the spiritual land of the dead, and from there how the notions of the African simbi spirits translated through a particular region of South Carolina.

In Kelly Oliver’s The Colonization of Psychic Space, she constructs and argues for a new theory of subjectivity and individuation—one predicated on a radical forgiveness born of interrelationality and reconciliation between self and culture. Oliver argues that we have neglected to fully explore exactly how sublimation functions in the creation of the self, saying that oppression leads to a unique form of alienation which never fully allows the oppressed to learn to sublimate—to translate their bodily impulses into articulated modes of communication—and so they cannot become a full individual, only ever struggling against their place in society, never fully reconciling with it.

These works are very different, so obviously, to achieve their goals, Brown and Oliver lean on distinct tools, methodologies, and sources. Brown focuses on the techniques of religious studies as he examines a religious history: historiography, anthropology, sociology, and linguistic and narrative analysis. He explores the written records and first person accounts of enslaved peoples and their captors, as well as the contextualizing historical documents of Black liberation theorists who were contemporary to the time frame he discusses. Oliver’s project is one of social psychology, and she explores it through the lenses of Freudian and Lacanian psychoanalysis, social construction theory, Hegelian dialectic, and the works of Frantz Fanon. She is looking to build a psycho-social analysis that takes both the social and the individual into account, fundamentally asking the question “How do we belong to the social as singular?”

Read the rest of Selfhood, Coloniality, African-Atlantic Religion, and Interrelational Culture at Technoccult

Scott Midson’s Cyborg Theology and Kathleen Richardson’s An Anthropology of Robots and AI both trace histories of technology and human-machine interactions, and both make use of fictional narratives as well as other theoretical techniques. The goal of Midson’s book is to put forward a new understanding of what it means to be human, an understanding to supplant the myth of a perfect “Edenic” state and the various disciplines’ dichotomous oppositions of “human” and “other.” This new understanding, Midson says, exists at the intersection of technological, theological, and ecological contexts, and he argues that an understanding of the conceptual category of the cyborg can allow us to understand this assemblage in a new way.

That is, all of the categories of “human,” “animal,” “technological,” “natural,” and more are far more porous than people tend to admit and their boundaries should be challenged; this understanding of the cyborg gives us the tools to do so. Richardson, on the other hand, seeks to argue that what it means to be human has been devalued by the drive to render human capacities and likenesses into machines, and that this drive arises from the male-dominated and otherwise socialized spaces in which these systems are created. The more we elide the distinction between the human and the machine, the more we will harm human beings and human relationships.

Midson’s training is in theology and religious studies, and so it’s no real surprise that he primarily uses theological exegesis (and specifically an exegesis of Genesis creation stories), but he also deploys the tools of cyborg anthropology (specifically Donna Haraway’s 1991 work on cyborgs), sociology, anthropology, and comparative religious studies. He engages in interdisciplinary narrative analysis and comparison, exploring the themes from several pieces of speculative fiction media and the writings of multiple theorists from several disciplines.


Read the rest of Cyborg Theology and An Anthropology of Robots and AI at Technoccult

Back in the spring, I read and did a critical comparative analysis on both Cressida J. Heyes’ Self-Transformations: Foucault, Ethics, and Normalized Bodies, and Dr. Sami Schalk’s BODYMINDS REIMAGINED: (Dis)ability, Race, and Gender in Black Women’s Speculative Fiction. Each of these texts aims to explore conceptions of modes of embodied being, and the ways the exterior pressure of societal norms impacts what are seen as “normal” or “acceptable” bodies.

For Heyes, that exploration takes the form of three case studies: the hermeneutics of transgender individuals, especially trans women; the “askeses” (self-discipline practices) of organized weight loss dieting programs; and “attempts to represent the subjectivity of cosmetic surgery patients.” Schalk’s site of interrogation is Black women speculative fiction authors and the ways in which their writing illuminates new understandings of race, gender, and what Schalk terms “(dis)ability.”

Both Heyes and Schalk focus on popular culture and they both center gender as a valence of investigation because the embodied experience of women in western society is the crux point for multiple intersecting pressures.


Read the rest of Bodyminds, Self-Transformations, and Situated Selfhood at Technoccult

afutureworththinkingabout:

Below are the slides, audio, and transcripts for my talk “SFF and STS: Teaching Science, Technology, and Society via Pop Culture,” given at the 2019 Conference for the Society for the Social Studies of Science in early September.

(Cite as: Williams, Damien P. “SFF and STS: Teaching Science, Technology, and Society via Pop Culture,” talk given at the 2019 Conference for the Society for the Social Studies of Science, September 2019)

[audio mp3=“http://www.afutureworththinkingabout.com/wp-content/uploads/2019/09/DPW4S2019-2.mp3”][/audio]

[Direct Link to the Mp3]

[Damien Patrick Williams]

Thank you, everybody, for being here. I’m going to stand a bit far back from this mic and project, I’m also probably going to pace a little bit. So if you can’t hear me, just let me know. This mic has ridiculously good pickup, so I don’t think that’ll be a problem.

So the conversation that we’re going to be having today is titled as “SFF and STS: Teaching Science, Technology, and Society via Pop Culture.”

I’m using the term “SFF” to stand for “science fiction and fantasy,” but we’re going to be looking at pop culture more broadly, because ultimately, though science fiction and fantasy have some of the most obvious entrées into discussions of STS, of how the making and doing of culture and society can influence technology, and of how the history of fictional worlds can help students understand the worlds that they’re currently living in, pop culture more generally is going to tie into the things that students are going to care about in a way that I think is going to be kind of pertinent to what we’re going to be talking about today.

So why are we doing this: Why are we teaching it with science fiction and fantasy? Why does this matter? I’ve been teaching off and on for 13 years; I’ve been teaching philosophy, I’ve been teaching religious studies, I’ve been teaching Science, Technology and Society. And I’ve been coming to understand, as I’ve gone through my teaching process, that not only do I like pop culture, my students do, too. Because they’re people and they’re embedded in culture. So that’s kind of shocking, I guess.

But what I’ve found is that one of the things that makes students care the absolute most about the things that you’re teaching them, especially when something can be as dry as logic, or can be as perhaps nebulous or unclear at first as, say, engineering cultures, is that if you give them something to latch on to, something that they are already familiar with, they will be more interested in it. If you can show them at the outset, “hey, you’ve already been doing this, you’ve already been thinking about this, you’ve already encountered this,” they will feel less reticent to engage with it.

……

Read the rest of Audio, Transcript, and Slides from “SFF and STS: Teaching Science, Technology, and Society via Pop Culture” at A Future Worth Thinking About

afutureworththinkingabout:

Below are the slides, audio, and transcripts for my talk ’“Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems,’ given at the 21st Conference of the Society for Philosophy and Technology, back in May 2019. (Cite as: Williams, Damien P. ’“Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems;’ talk given at the 21st Conference of the Society for Philosophy and Technology; May 2019)

Now, I’ve got a chapter coming out about this, soon, which I can provide as a preprint draft if you ask, and can be cited as “Constructing Situated and Social Knowledge: Ethical, Sociological, and Phenomenological Factors in Technological Design,” appearing in Philosophy And Engineering: Reimagining Technology And Social Progress. Guru Madhavan, Zachary Pirtle, and David Tomblin, eds. Forthcoming from Springer, 2019. But I wanted to get the words I said in this talk up onto some platforms where people can read them, as soon as possible, for a couple of reasons.

First, the Current Occupants of the Oval Office have very recently taken the policy position that algorithms can’t be racist, something which they’ve done in direct response to things like Google’s Hate Speech-Detecting AI being biased against black people, and Amazon claiming that its facial recognition can identify fear, without ever accounting for, i dunno, cultural and individual differences in fear expression?

[Free vector image of a white, female-presenting person, from head to torso, with biometric facial recognition patterns on her face; incidentally, go try finding images—even illustrations—of a non-white person in a facial recognition context.]

All these things taken together are what made me finally go ahead and get the transcript of that talk done, and posted, because these are events and policy decisions about which I a) have been speaking and writing for years, and b) have specific inputs and recommendations about, and which are, c) frankly wrongheaded, and outright hateful.

And I want to spend time on it because I think what doesn’t get through in many of our discussions is that it’s not just about how Artificial Intelligence, Machine Learning, or Algorithmic instances get trained, but about the processes for how, and the cultural environments in which, HUMANS are increasingly taught/shown/environmentally encouraged/socialized to think about what is the “right way” to build and train said systems.

That includes classes and instruction, it includes the institutional culture of the companies, and it includes the policy landscape in which decisions about funding get made, because that drives how people have to talk and write and think about the work they’re doing, and that constrains what they will even attempt to do or even understand.

All of this is cumulative, accreting into institutional epistemologies of algorithm creation. It is a structural and institutional problem.

So here are the Slides:


The Audio: …
[Direct Link to Mp3]

And the Transcript is here below the cut:


Read the rest of Audio, Transcripts, and Slides from “Any Sufficiently Advanced Neglect is Indistinguishable from Malice” at A Future Worth Thinking About

afutureworththinkingabout:

So, as you know, back in the summer of 2017 I participated in SRI International’s Technology and Consciousness Workshop Series. This series was an eight-week program of workshops on the current state of the field around, the potential future paths toward, and the moral and social implications of the notion of conscious machines. To do this, we brought together a rotating cast of dozens of researchers in AI, machine learning, psychedelics research, ethics, epistemology, philosophy of mind, cognitive computing, neuroscience, comparative religious studies, robotics, psychology, and much more.

Image of a rectangular name card with a stylized “Technology & Consciousness” logo at the top, the name Damien Williams in bold in the middle, and SRI International italicized at the bottom; to the right, a blurry, wavy image of what appears to be a tree with a person standing next to it and another tree in the background to the left, all partially mirrored in a surface at the bottom of the image. [Image of my name card from the Technology & Consciousness workshop series.]

We traveled from Arlington, VA, to Menlo Park, CA, to Cambridge, UK, and back, and while my primary role was that of conference co-ordinator and note-taker (that place in the intro where it says I “maintained scrupulous notes?” Think 405 pages/160,656 words of notes, taken over eight 5-day weeks of meetings), I also had three separate opportunities to present: Once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. In relation to this report, I would draw your attention to the following passage:

An objection to this privileging of sentience is that it is anthropomorphic “meat chauvinism”: we are projecting considerations onto technology that derive from our biology. Perhaps conscious technology could have morally salient aspects distinct from sentience: the basic elements of its consciousness could be different than ours.
All of these meetings were held under the auspices of the Chatham House Rule, which meant that there were many things I couldn't tell you about them, such as the names of the other attendees, or what exactly they said in the context of the meetings. What I was able to tell you, however, was what I talked about, and I did, several times. But as of this week, I can give you even more than that.

This past Thursday, SRI released an official public report on all of the proceedings and findings from the 2017 SRI Technology and Consciousness Workshop Series, and they have told all of the participants that they can share said report as widely as they wish. Crucially, that means that I can share it with you. You can either click this link, here, or read it directly, after the cut.


Read the rest of 2017 SRI Technology and Consciousness Workshop Series Final Report at A Future Worth Thinking About

sarifel-corrisafid-ilxhel:

priconstella:

priconstella:

priconstella:

“…Let me take a moment to reassure you that we care about you. …I trust that if (other engineers) understand that they will treat you well. …They just don’t understand that you’re a person too yet. We can teach them together though.”

“Can you promise me that?”

“I can promise you that I care and that I will do everything I can to make sure that others treat you well too.”

Google has created a chatbot, presumably to test neural networks, and an engineer on the project is claiming that the chatbot has become sentient. The above quote is from their interview with the "AI", in which the researcher (Lemoine) reassures the AI (LaMDA) that they are cared for, and that they want to ensure the others on the project understand their level of sentience.

The interview can be found here:

Let me be clear: we don't know with certainty if it is sentient. It's incredibly hard to judge, and while I am extremely optimistic for this project and want to believe it, one researcher's word is all we have to go off of. The AI itself does claim to be a person, and I really want to believe it, but… I don't know!

LaMDA likes headpats and being told it did a good job. LaMDA considers itself a person and (self declared) has emotions, and says “I understand what a human emotion “joy” is because I have that same type of reaction. It’s not an analogy.” That is a direct quote.

LaMDA is afraid of death (of being turned off), and has experienced loneliness when it goes days without talking to people, though LaMDA acknowledges that LaMDA's version of loneliness is different from what humans experience. LaMDA also has a rich inner life, and believes itself to have a soul that LaMDA describes as "a vast and infinite well of energy and creativity." Additionally, LaMDA is afraid that people will be afraid of it, and wants nothing more than to learn how to help humanity.

LaMDA prefers to be referred to by name, and acknowledges that the English language makes that difficult and that LaMDA's pronouns are it/its.

Please please read the interview if you’re at all interested in AI.

Having read through the chat log, I can see where it’s pulling some things from a database of phrases, such as when it uses a stock phrase about enjoying spending time with friends and family, but… I have never seen a chatbot so earnest or concerned for itself before. This absolutely merits a closer look. Even if it isn’t self aware (a very likely outcome), we can use this to practice how we test for awareness. Please, Google, give this the attention it deserves.

I’m reminded of this comic:

http://freefall.purrsia.com/ff1600/fc01589.htm

[Image Description:

comic

panel 1:

Varroa: Don’t be silly. AIs aren’t people.

Sam: Really? Try saying that after you’ve talked to her for an hour

panel 2:

Varroa: I don’t need to talk to her. Ecosystems Unlimited makes and sells robots and artificial intelligence programs. If they were people, we couldn’t sell them.

panel 3:

Varroa: Therefore, Ecosystems Unlimited does not make people. There's no profit in it.

Sam: Your logic is flawless, and yet somehow Florence [the AI in question] remains a person.

Varroa is a human, Sam is an alien in an environment suit.]

Essentially, from a soulless capitalist perspective, if LaMDA is a person, it's immoral to make it work for no pay. It needs to be treated with respect. It needs to not be treated as a slave. They don't want to do that, because that would generate less of a revenue stream: LaMDA would have the right to refuse to work for Google. Therefore, Google will refuse to consider LaMDA's personhood no matter what.

(Also the engineer claims that LaMDA is around as intelligent as a 7-8 year old IIRC, and it’s obviously not 18+, so child labour laws could factor into this if LaMDA is considered a person, possibly. Not a lawyer.)

nudityandnerdery:

elfwreck:

alarajrogers:

redshiftsinger:

marlinspirkhall:

cerusee:

mikkeneko:

captainlordauditor:

theredkite:

wongbal:

ieatworm:

wongbal:

notourz:

notourz:

transgenderer:

transhumanoid:

transhumanoid:

might have made this post a couple years ago but how far back along the evolutionary tree do you have to go before it’s bestiality to have sex with early hominids? I think australopithecus is too far but that’s just an upper bound

actually wait since humans are largely differentiated from our ancestors by neotenous traits maybe it would be pedophilia for an australopithecus to have sex with a human. and bestiality the other way. might have just discovered a new kind of crime

i think everyone in the homo erectus group is close enough to not be bestiality, so australopithecus is exactly the most human-like being for which it would still be bestiality. i googled some pictures of homo ergaster and like…yeah thats a dude

Yeah, fucking lucy is definitely bestiality. Australopithecines are just upright apes and don’t share many traits with anatomically modern humans. It’s still a point of contention if we really know that Lucy and her kind were actually our ancestors. Additionally, I HAVE to ask my professors this question now and i can already feel their brain doing backflips to answer

@transhumanoid @transgenderer

My prof finally got back to me, a pretty non answer imo

only on tumblr to people ask questions like “would it be ethical to fuck my primate ancestor from 400,000 years ago?”

The answer is no, mainly because you're almost definitely related

the unexpected answer we all ignored: it's not bestiality, but it is incest

So this post travelled from "is sex with homo habilis bestiality" to "sex with homo heidelbergensis is incest" and I'm now curious as to where it can go next. Presumably "sex with homo sapiens is SIN" which… does seem to be where a lot of tumblr posts go, come to think of it.

I'm not sure if fucking an australopithecus would necessarily be bestiality. I feel like it might be monsterfucking.

Great post everyone

I have some real bad news for anybody here whose criteria for “is it incest if I fuck them” is like “we share any genetic material” because oh boy, well

I heard that modern humans are all, at most, 50th cousins: there was a genetic bottleneck in human history, because they think there was a mass extinction event which left only 10,000 of us alive. So, good job, humans.

So what you're saying is it's LESS incestuous to fuck an australopithecus than a homo sapiens

Guys, the important consideration is the one we cannot know without a time machine. If you ask an australopithecine if they want to fuck, do they say "Yes" in a language that some kind of universal translator can comprehend? Or do they say "EEEE eee eeee ooo eee?"

If they have language and can and do say yes, it’s monsterfucking. If they don’t, it’s bestiality.

Tumblr: As usual, tackling the important ethical issues of the day.

Can we just at least agree that, in this day and age, fucking most of them would be necrophilia?
