#algorithms

afutureworththinkingabout:

I’m Not Afraid of AI Overlords— I’m Afraid of Whoever’s Training Them To Think That Way

by Damien P. Williams

I want to let you in on a secret: According to Silicon Valley’s AI’s, I’m not human.

Well, maybe they think I’m human, but they don’t think I’m me. Or, if they think I’m me and that I’m human, they think I don’t deserve expensive medical care. Or that I pose a higher risk of criminal recidivism. Or that my fidgeting behaviours or culturally-perpetuated shame about my living situation or my race mean I’m more likely to be cheating on a test. Or that I want to see morally repugnant posts that my friends have commented on to call morally repugnant. Or that I shouldn’t be given a home loan or a job interview or the benefits I need to stay alive.

Now, to be clear, “AI” is a misnomer, for several reasons, but we don’t have time, here, to really dig into all the thorny discussion of values and beliefs about what it means to think, or to be a mind— especially because we need to take our time talking about why values and beliefs matter to conversations about “AI,” at all. So instead of “AI,” let’s talk specifically about algorithms, and machine learning.

Machine Learning (ML) is the name for a set of techniques for systematically reinforcing patterns, expectations, and desired outcomes in various computer systems. These techniques allow those systems to make sought-after predictions based on the datasets they’re trained on. ML systems learn the patterns in these datasets and then extrapolate them to model a range of statistical likelihoods of future outcomes.
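That dynamic can be sketched in a few lines. The toy "model" below just counts how often each feature co-occurs with each label in its training history, then scores new cases against those counts; every feature name and label here is invented for illustration, not drawn from any real system:

```python
# A minimal sketch: a system "learns" whatever patterns its training
# data contains, then extrapolates them to new cases. All data invented.
from collections import Counter

def train(examples):
    """Count how often each feature co-occurs with each label."""
    counts = {}
    for features, label in examples:
        for f in features:
            counts.setdefault(f, Counter())[label] += 1
    return counts

def predict(model, features):
    """Score each label by the training-set counts of the input's features."""
    scores = Counter()
    for f in features:
        scores.update(model.get(f, Counter()))
    return scores.most_common(1)[0][0] if scores else None

# The training history encodes a skewed pattern...
history = [
    ({"zip_A", "renter"}, "deny"),
    ({"zip_A", "owner"}, "deny"),
    ({"zip_B", "owner"}, "approve"),
    ({"zip_B", "renter"}, "approve"),
]
model = train(history)
# ...and the model faithfully extrapolates it to a new applicant.
print(predict(model, {"zip_A", "owner"}))  # deny
```

Nothing in the code is malicious; the skew lives entirely in the examples it was handed.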

Algorithms are sets of instructions which, when run, perform functions such as searching, matching, sorting, and feeding the outputs of any of those processes back in on themselves, so that a system can learn from and refine itself. This feedback loop is what allows algorithmic machine learning systems to provide carefully curated search responses or newsfeed arrangements or facial recognition results to consumers like me and you and your friends and family and the police and the military. And while there are many different types of algorithms which can be used for the above purposes, they all remain sets of encoded instructions to perform a function.
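The feedback loop described above can be made concrete with a toy ranker: it surfaces its top-scored item, that exposure earns the item more engagement, and the engagement feeds back into the score. The item names and scoring are invented for illustration:

```python
# A toy feedback loop: a ranker feeds engagement back into its own
# scores, so whatever it surfaces first gets further reinforced.

def rank(scores):
    """Order items from highest to lowest score."""
    return sorted(scores, key=scores.get, reverse=True)

def simulate(scores, rounds=3):
    for _ in range(rounds):
        top = rank(scores)[0]   # show the current top item...
        scores[top] += 1        # ...which earns it more engagement,
    return rank(scores)         # further entrenching its position.

feed = {"post_a": 2, "post_b": 1, "post_c": 1}
print(simulate(feed))  # post_a's small head start compounds
```

A two-point lead becomes a five-point lead after three rounds; the loop amplifies whatever initial ordering it was given.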

And so, in these systems’ defense, it’s no surprise that they think the way they do: That’s exactly how we’ve told them to think.

[Image of Michael Emerson as Harold Finch, in season 2, episode 1 of the show Person of Interest, “The Contingency.” His face is framed by a box of dashed yellow lines, the words “Admin” to the top right, and “Day 1” in the lower right corner.]


Read the rest of I’m Not Afraid of AI Overlords— I’m Afraid of Whoever’s Training Them To Think That Way at A Future Worth Thinking About

Drone-2000 (Floating Prophecies)

A dystopian performance for amplified drones.

The figure of the drone serves as the starting point for this film, allowing it to explore the intersections between fictional depictions drawn from science-fiction literature and the actual advent of camera-equipped flying machines in the contemporary world, its folklore, and its mythology.

By juxtaposing quotes from works from the previous century (1880–2000) with a selection of popular, recent videos posted on video-sharing websites, this film explores the grey zone between self-fulfilling prophecy and composite narratives.

[Drone2000]

#dronesdronesdrones    #drones    #future    #algorithms    #futures noir    

Watson for President

The Watson 2016 Foundation is an independent organization formed to advocate for the artificial intelligence known as Watson to run for President of The United States of America. It is our belief that Watson’s unique capabilities to assess information and make informed and transparent decisions make it an ideal candidate for the responsibilities required of the president.

[vote] [picture by The Watson Foundation 2016]



The designs of algorithmic systems are normative rather than neutral and can consequently reproduce and reinforce structural discrimination in our society. Can we imagine radical design alternatives to create a more just and equitable digitally mediated world?

Presenting a special issue of the Algorithms of Late-Capitalism zine: “[D/R]econstructing AI - Dreams of Visionary Fiction”

Digital Edition
Self-printing Edition

——————————————
This special edition of Algorithms of Late-Capitalism was produced by Nushin Yazdani and Internet Teapot with participants in the [d/r]econstructing AI – dreams of visionary fiction and zine-making workshop held at Ars Electronica 2020.
——————————————



Presenting the seventh issue of the Algorithms of Late-Capitalism zine: “Bots, Ghosts and Other Workers”

Digital Edition
Self-printing Edition


——————————————

AI systems are made of human labour and have an impact on human labour conditions. This feedback cycle is present in almost all sectors to some degree. Yet, these systems are often designed and deployed in ways that keep this reality strategically concealed. For this issue, we invited workshop participants to unpack some of the social and technological dimensions of AI and highlight the hidden human costs of automation. Who pays the price for increased automation? What changes are taking place in power relations at work? Can we imagine ways to make labour more fair in the age of AI?

——————————————


This edition of Algorithms of Late-Capitalism was created by internet teapot (Karla Zavala and Adriaan Odendaal) and the participants of the Algorithms of Late-Capitalism Zine Co-creation Workshop, which took place during the Mozilla Festival in March 2022.



Presenting the sixth issue of the Algorithms of Late-Capitalism zine: “AI Myths”

Digital Edition
Self-printing Edition


——————————————

Humanoid robots moving through the world with self-determination - set on destroying humanity. Sentient operating systems with infinite knowledge falling in love with one another. Silicon Valley vaporware promising to solve all the world’s problems - from cavities to global poverty.

When we think of Artificial Intelligence or AI, we usually conjure up fantastical images from science-fiction or the tech industry hype machine.

Yet, relying on these unstable myths can distract us from the important (and often problematic) realities of AI technologies. It is crucial to try and see what is hidden behind these myths. By demystifying AI, we can begin to understand the impact these technologies have on our societies, communities, cultures, and daily lives.

——————————————


This edition of Algorithms of Late-Capitalism was created by internet teapot (Karla Zavala and Adriaan Odendaal), with a special contribution from Buse Çetin, and the participants of the Algorithms of Late-Capitalism Zine Co-creation Workshop, which took place during the When Machines Dream the Future AI Festival organized by the Goethe Institute on 13 November 2021.



A simple Thank You for following, whether on IG, FB or tumblr… I couldn’t have done this corset thing for 18 years without your patronage and attention online. I’m gearing up for some Walks Down Memory Lane.. Stay Tuned! #ThankYou #businessofsmallbusiness #algorithms #IfYoureReadingThisThankYou



That’s science baby!


