afutureworththinkingabout:

I’m Not Afraid of AI Overlords— I’m Afraid of Whoever’s Training Them To Think That Way

by Damien P. Williams

I want to let you in on a secret: According to Silicon Valley’s AIs, I’m not human.

Well, maybe they think I’m human, but they don’t think I’m me. Or, if they think I’m me and that I’m human, they think I don’t deserve expensive medical care. Or that I pose a higher risk of criminal recidivism. Or that my fidgeting behaviours or culturally-perpetuated shame about my living situation or my race mean I’m more likely to be cheating on a test. Or that I want to see morally repugnant posts that my friends have commented on to call morally repugnant. Or that I shouldn’t be given a home loan or a job interview or the benefits I need to stay alive.

Now, to be clear, “AI” is a misnomer, for several reasons, but we don’t have time, here, to really dig into all the thorny discussion of values and beliefs about what it means to think, or to be a mind— especially because we need to take our time talking about why values and beliefs matter to conversations about “AI,” at all. So instead of “AI,” let’s talk specifically about algorithms, and machine learning.

Machine Learning (ML) is the name for a set of techniques for systematically reinforcing patterns, expectations, and desired outcomes in various computer systems. These techniques allow those systems to make sought-after predictions based on the datasets they’re trained on. ML systems learn the patterns in these datasets and then extrapolate them to model a range of statistical likelihoods of future outcomes.
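To make the "learning the patterns in a dataset and extrapolating them" point concrete, here is a deliberately minimal sketch, not any real production system: a model that just counts how often feature values co-occurred with past outcomes, then predicts whichever outcome those counts favour. The dataset and feature names are invented for illustration.

```python
# Minimal sketch of statistical "learning": the model reproduces whatever
# patterns -- fair or not -- are present in its training data.
from collections import Counter, defaultdict

def train(dataset):
    """Count how often each feature value co-occurred with each outcome."""
    counts = defaultdict(Counter)
    for features, outcome in dataset:
        for f in features:
            counts[f][outcome] += 1
    return counts

def predict(model, features):
    """Return the outcome most associated with the given features."""
    tally = Counter()
    for f in features:
        tally.update(model[f])
    outcome, _ = tally.most_common(1)[0]
    return outcome

# Hypothetical historical decisions: whatever pattern is encoded here is
# exactly what the model will extrapolate onto new cases.
history = [
    (("zip_a", "renter"), "deny"),
    (("zip_a", "owner"), "deny"),
    (("zip_b", "owner"), "approve"),
    (("zip_b", "renter"), "approve"),
]

model = train(history)
print(predict(model, ("zip_a", "owner")))  # prints "deny"
```

Note that nothing in the code "decides" anything about zip_a; the model simply echoes the past decisions it was shown, which is the essay's point about trained values.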

Algorithms are sets of instructions which, when run, perform functions such as searching, matching, sorting, and feeding the outputs of any of those processes back in on themselves, so that a system can learn from and refine itself. This feedback loop is what allows algorithmic machine learning systems to provide carefully curated search responses or newsfeed arrangements or facial recognition results to consumers like me and you and your friends and family and the police and the military. And while there are many different types of algorithms which can be used for the above purposes, they all remain sets of encoded instructions to perform a function.
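The feedback loop described above can be sketched in a few lines. This is a toy model, not any real platform's ranking code: items the system surfaces get "engagement," engagement raises their score, and the raised score makes them surface again.

```python
# Toy feedback loop: the system's own outputs are fed back in as inputs,
# so whatever it initially favours pulls further ahead on each pass.

def rank(scores):
    """Sort item ids by current score, highest first."""
    return sorted(scores, key=scores.get, reverse=True)

def step(scores, shown=2):
    """Surface the top items, then feed the 'engagement' back into the scores."""
    for item in rank(scores)[:shown]:
        scores[item] += 1  # being shown earns engagement, which boosts the score
    return scores

scores = {"post_a": 3, "post_b": 2, "post_c": 2, "post_d": 1}
for _ in range(5):
    scores = step(scores)

print(rank(scores))  # the initially favoured posts have pulled further ahead
```

After five iterations the two posts that started on top have widened their lead, while the others never get surfaced at all; the curation refines itself, exactly as instructed.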

And so, in these systems’ defense, it’s no surprise that they think the way they do: That’s exactly how we’ve told them to think.

[Image of Michael Emerson as Harold Finch, in season 2, episode 1 of the show Person of Interest, “The Contingency.” His face is framed by a box of dashed yellow lines, the words “Admin” to the top right, and “Day 1” in the lower right corner.]


Read the rest of I’m Not Afraid of AI Overlords— I’m Afraid of Whoever’s Training Them To Think That Way at A Future Worth Thinking About


afutureworththinkingabout:

[This is an in-process pre-print of an as-yet-unpublished paper, a version of which was presented at the Gender, Bodies, and Technology 2019 Conference.]

INTRODUCTION

The history of biotechnological intervention on the human body has always been tied to conceptual frameworks of disability and mental health, but certain biases and assumptions have forcibly altered and erased the public awareness of that understanding. As humans move into a future of climate catastrophe, space travel, and constantly shifting understandings of our place in the world, we will be increasingly confronted with concerns over who will be used as research subjects, concerns over whose stakeholder positions will be acknowledged and preferenced, and concerns over the kinds of changes that human bodies will necessarily undergo as they adapt to their changing environments, be they terrestrial or interstellar. Who will be tested, and how, so that we can better understand what kinds of bodyminds will be “suitable” for our future modes of existence?[1] How will we test the effects of conditions like pregnancy and hormone replacement therapy (HRT) in space, and what will happen to our bodies and minds after extended exposure to low light, zero gravity, high-radiation environments, or the increasing warmth and wetness of our home planet?

During the June 2018 “Decolonizing Mars” event at the Library of Congress in Washington, DC, several attendees discussed the fact that the bodyminds of disabled folx might be better suited to space life, already being oriented to pushing off of surfaces and orienting themselves to the world in different ways, and that the integration of body and technology wouldn’t be anything new for many people with disabilities. In that context, I submit that cyborgs and space travel are, always have been, and will continue to be about disability and marginalization, but that Western society’s relationship to disabled people has created a situation in which many people do everything they can to conceal that fact from the popular historical narratives about what it means for humans to live and explore. In order to survive and thrive, into the future, humanity will have to carefully and intentionally take this history up, again, and consider the present-day lived experience of those beings—human and otherwise—whose lives are and have been most impacted by the socioethical contexts in which we talk about technology and space.

This paper explores some history and theories about cyborgs—humans with biotechnological interventions which allow them to regulate their own internal bodily processes—and how those compare to the realities of how we treat and consider currently-living people who are physically enmeshed with technology. I’ll explore several ways in which the above-listed considerations have been alternately overlooked and taken up by various theorists, and some of the many different strategies and formulations for integrating these theories into what will likely become everyday concerns in the future. In fact, by exploring responses from disability studies scholars and artists who have interrogated and problematized the popular vision of cyborgs, the future, and life in space, I will demonstrate that our clearest path toward the future of living with biotechnologies is a reengagement with the everyday lives of disabled and other marginalized persons, today.


Read the rest of Heavenly Bodies: Why It Matters That Cyborgs Have Always Been About Disability, Mental Health, and Marginalization at A Future Worth Thinking About
