#embodied cognition

afutureworththinkingabout:

I recently watched all of Star Trek: Picard, and while I was definitely on board with the vast majority of it, and extremely pleased with certain elements of it, some things kind of bothered me.

And so, as with much of the pop culture I love, I want to spend some time with the more critical perspective, in hopes that it’ll be taken as an opportunity to make it even better.

[Promotional image for Star Trek: Picard, featuring all of the series’ main cast.]

This will be filled with spoilers, so. Heads up.

Read the rest of ‘Star Trek: Picard’ and The Admonition: Misapprehensions Through Time at A Future Worth Thinking About

afutureworththinkingabout:

Below are the slides, audio, and transcripts for my talk ’“Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems,’ given at the 21st Conference of the Society for Philosophy and Technology, back in May 2019. (Cite as: Williams, Damien P. ’“Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems;’ talk given at the 21st Conference of the Society for Philosophy and Technology; May 2019)

Now, I’ve got a chapter coming out about this, soon, which I can provide as a preprint draft if you ask, and can be cited as “Constructing Situated and Social Knowledge: Ethical, Sociological, and Phenomenological Factors in Technological Design,” appearing in Philosophy And Engineering: Reimagining Technology And Social Progress. Guru Madhavan, Zachary Pirtle, and David Tomblin, eds. Forthcoming from Springer, 2019. But I wanted to get the words I said in this talk up onto some platforms where people can read them, as soon as possible, for a couple of reasons.

First, the Current Occupants of the Oval Office have very recently taken the policy position that algorithms can’t be racist, something they’ve done in direct response to things like Google’s Hate Speech-Detecting AI being biased against black people, and Amazon claiming that its facial recognition can identify fear, without ever accounting for, I dunno, cultural and individual differences in fear expression?

[Free vector image of a white, female-presenting person, from head to torso, with biometric facial recognition patterns on her face; incidentally, go try finding images—even illustrations—of a non-white person in a facial recognition context.]

All these things taken together are what made me finally go ahead and get the transcript of that talk done and posted, because these are events and policy decisions about which I have a) been speaking and writing for years, and b) specific inputs and recommendations, and which are c) frankly wrongheaded and outright hateful.

And I want to spend time on it because I think what doesn’t get through in many of our discussions is that it’s not just about how Artificial Intelligence, Machine Learning, or Algorithmic instances get trained, but also about the processes and cultural environments through which HUMANS are increasingly taught/shown/environmentally encouraged/socialized into what counts as the “right way” to build and train said systems.

That includes classes and instruction, it includes the institutional culture of the companies, and it includes the policy landscape in which decisions about funding get made, because that drives how people have to talk and write and think about the work they’re doing, and that constrains what they will even attempt to do or understand.

All of this is cumulative, accreting into institutional epistemologies of algorithm creation. It is a structural and institutional problem.

So here are the Slides:


The Audio: …
[Direct Link to Mp3]

And the Transcript is here below the cut:


Read the rest of Audio, Transcripts, and Slides from “Any Sufficiently Advanced Neglect is Indistinguishable from Malice” at A Future Worth Thinking About

afutureworththinkingabout:

[This is an in-process pre-print of an as-yet-unpublished paper, a version of which was presented at the Gender, Bodies, and Technology 2019 Conference.]

INTRODUCTION

The history of biotechnological intervention on the human body has always been tied to conceptual frameworks of disability and mental health, but certain biases and assumptions have forcibly altered and erased the public awareness of that understanding. As humans move into a future of climate catastrophe, space travel, and constantly shifting understandings of our place in the world, we will be increasingly confronted with concerns over who will be used as research subjects, concerns over whose stakeholder positions will be acknowledged and preferenced, and concerns over the kinds of changes that human bodies will necessarily undergo as they adapt to their changing environments, be they terrestrial or interstellar. Who will be tested, and how, so that we can better understand what kinds of bodyminds will be “suitable” for our future modes of existence?[1] How will we test the effects of conditions like pregnancy and hormone replacement therapy (HRT) in space, and what will happen to our bodies and minds after extended exposure to low light, zero gravity, high-radiation environments, or the increasing warmth and wetness of our home planet?

During the June 2018 “Decolonizing Mars” event at the Library of Congress in Washington, DC, several attendees discussed the fact that the bodyminds of disabled folx might be better suited to space life, already being oriented to pushing off of surfaces and orienting themselves to the world in different ways, and that the integration of body and technology wouldn’t be anything new for many people with disabilities. In that context, I submit that cyborgs and space travel are, always have been, and will continue to be about disability and marginalization, but that Western society’s relationship to disabled people has created a situation in which many people do everything they can to conceal that fact from the popular historical narratives about what it means for humans to live and explore. In order to survive and thrive, into the future, humanity will have to carefully and intentionally take this history up, again, and consider the present-day lived experience of those beings—human and otherwise—whose lives are and have been most impacted by the socioethical contexts in which we talk about technology and space.

This paper explores some history and theories about cyborgs—humans with biotechnological interventions which allow them to regulate their own internal bodily processes—and how those compare to the realities of how we treat and consider currently-living people who are physically enmeshed with technology. I’ll explore several ways in which the above-listed considerations have been alternately overlooked and taken up by various theorists, and some of the many different strategies and formulations for integrating these theories into what will likely become everyday concerns in the future. In fact, by exploring responses from disability studies scholars and artists who have interrogated and problematized the popular vision of cyborgs, the future, and life in space, I will demonstrate that our clearest path toward the future of living with biotechnologies is a reengagement with the everyday lives of disabled and other marginalized persons, today.


Read the rest of Heavenly Bodies: Why It Matters That Cyborgs Have Always Been About Disability, Mental Health, and Marginalization at A Future Worth Thinking About

afutureworththinkingabout:

So, as you know, back in the summer of 2017 I participated in SRI International’s Technology and Consciousness Workshop Series. This series was an eight-week program of workshops on the current state of the field around, the potential future paths toward, and the moral and social implications of the notion of conscious machines. To do this, we brought together a rotating cast of dozens of researchers in AI, machine learning, psychedelics research, ethics, epistemology, philosophy of mind, cognitive computing, neuroscience, comparative religious studies, robotics, psychology, and much more.

Image of a rectangular name card with a stylized "Technology & Consciousness" logo at the top, the name Damien Williams in bold in the middle, and SRI International italicized at the bottom; to the right, a blurry, wavy image of what appears to be a tree with a person standing next to it and another tree in the background to the left, all partially mirrored in a surface at the bottom of the image. [Image of my name card from the Technology & Consciousness workshop series.]

We traveled from Arlington, VA, to Menlo Park, CA, to Cambridge, UK, and back, and while my primary role was that of conference co-ordinator and note-taker (that place in the intro where it says I “maintained scrupulous notes?” Think 405 pages/160,656 words of notes, taken over eight 5-day weeks of meetings), I also had three separate opportunities to present: Once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. In relation to this report, I would draw your attention to the following passage:

An objection to this privileging of sentience is that it is anthropomorphic “meat chauvinism”: we are projecting considerations onto technology that derive from our biology. Perhaps conscious technology could have morally salient aspects distinct from sentience: the basic elements of its consciousness could be different than ours.

All of these meetings were held under the auspices of the Chatham House Rule, which meant that there were many things I couldn’t tell you about them, such as the names of the other attendees, or what exactly they said in the context of the meetings. What I was able to tell you, however, was what I talked about, and I did, several times. But as of this week, I can give you even more than that.

This past Thursday, SRI released an official public report on all of the proceedings and findings from the 2017 SRI Technology and Consciousness Workshop Series, and they have told all of the participants that they can share said report as widely as they wish. Crucially, that means that I can share it with you. You can either click this link, here, or read it directly, after the cut.


Read the rest of 2017 SRI Technology and Consciousness Workshop Series Final Report at A Future Worth Thinking About
