#facial recognition


Neural Network Model Shows Why People with Autism Read Facial Expressions Differently

People with autism spectrum disorder often have difficulty interpreting facial expressions.

Using a neural network model that reproduces the brain on a computer, a group of researchers based at Tohoku University has unraveled how this comes about.

The journal Scientific Reports published the results.

“Humans recognize different emotions, such as sadness and anger, by looking at facial expressions. Yet little is known about how we come to recognize different emotions based on the visual information of facial expressions,” said paper coauthor Yuta Takahashi.

“It is also not clear what changes occur in this process that lead people with autism spectrum disorder to struggle to read facial expressions.”

The research group employed predictive processing theory to better understand this process. According to this theory, the brain constantly predicts the next sensory stimulus and adapts when its prediction is wrong. Sensory information, such as facial expressions, helps reduce prediction error.
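To make the prediction-and-update idea concrete, here is a minimal sketch in Python. It is purely illustrative and not the Tohoku group's model: the two-dimensional belief state, the fixed random mapping, and the random stand-in for facial-movement input are all assumptions made for this example.

```python
# Illustrative sketch of predictive processing (not the study's model): an internal
# belief predicts the next sensory input, and the mismatch (prediction error) is
# used to update the belief so that future predictions fit the input better.
import numpy as np

rng = np.random.default_rng(0)
belief = np.zeros(2)                 # hypothetical internal state about the face
mapping = rng.normal(size=(2, 2))    # hypothetical fixed mapping from belief to sensation
learning_rate = 0.1

for sensation in rng.normal(size=(100, 2)):      # stand-in for incoming facial-movement data
    prediction = mapping @ belief                # predict the next sensory stimulus
    error = sensation - prediction               # prediction error
    belief += learning_rate * mapping.T @ error  # adapt the belief to reduce the error
```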

The artificial neural network model incorporated predictive processing theory and reproduced the developmental process by learning to predict how parts of the face would move in videos of facial expressions. Through this training, clusters of emotions self-organized in the model’s higher-level neuron space, without the model being told which emotion each facial expression in the videos corresponded to.

The model could generalize to unknown facial expressions not included in the training data, reproducing the movements of facial parts while minimizing prediction errors.

Following this, the researchers conducted experiments in which they induced abnormalities in the neurons’ activities to investigate the effects on learning development and cognitive characteristics. When the heterogeneity of activity in the neural population was reduced, generalization ability decreased and the formation of emotion clusters in the higher-level neurons was inhibited. The model then tended to fail at identifying the emotions of unknown facial expressions, a symptom resembling that seen in autism spectrum disorder.
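As a rough way to picture the reported relationship (and only that; this is not the study's actual manipulation), the toy sketch below simulates higher-level neuron activity for a few emotion categories, shrinks the diversity of the emotion tuning across the simulated population while the noise stays fixed, and measures how the cluster separation collapses. The three emotions, ten neurons, and the separation measure are all assumptions made for illustration.

```python
# Toy illustration (not the study's code): when the diversity of emotion tuning
# across the simulated neural population is reduced, the self-organized emotion
# clusters in "higher-level neuron" space become harder to separate.
import numpy as np

rng = np.random.default_rng(1)
labels = np.repeat(np.arange(3), 50)              # three hypothetical emotion categories
tuning = rng.normal(scale=3.0, size=(3, 10))      # each emotion's mean activity over 10 neurons

def cluster_separation(x, labels):
    """Mean distance between cluster centres divided by mean within-cluster spread."""
    ks = np.unique(labels)
    centres = np.stack([x[labels == k].mean(axis=0) for k in ks])
    within = np.mean([np.linalg.norm(x[labels == k] - centres[i], axis=1).mean()
                      for i, k in enumerate(ks)])
    between = np.mean([np.linalg.norm(centres[i] - centres[j])
                       for i in range(len(ks)) for j in range(i + 1, len(ks))])
    return between / within

for shrink in (0.0, 0.5, 0.9):
    # Reduce heterogeneity: pull every emotion's tuning toward the population average.
    flattened = (1 - shrink) * tuning + shrink * tuning.mean(axis=0)
    activity = flattened[labels] + rng.normal(size=(150, 10))   # tuning plus fixed noise
    print(f"tuning shrunk by {shrink:.0%} -> cluster separation {cluster_separation(activity, labels):.2f}")
```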

According to Takahashi, the study clarified that predictive processing theory can explain emotion recognition from facial expressions using a neural network model.

“We hope to further our understanding of the process by which humans learn to recognize emotions and the cognitive characteristics of people with autism spectrum disorder,” added Takahashi. “The study will help advance developing appropriate intervention methods for people who find it difficult to identify emotions.”

emo-church:

If you’ve seen this going around on social media PLEASE do not follow this advice.

  1. If you’re going to a protest, you shouldn’t wear much makeup, especially around your eyes. Tear gas can cling to makeup, and the same goes for lotion, sunscreen, and face paint, which makes it much harder to wash off.
  2. This is outdated. Most facial recognition systems will no longer be tricked by it. The studies supporting this technique are not recent, and even back then it wasn’t entirely reliable.
  3. Having a strange design on your face can draw more attention to you. Remember, there are other ways to figure out a person’s identity aside from facial recognition. Looking distinctive enough can make you easier to track. You likely want to look as boring as possible, to blend in with the crowd.

You’re probably better off wearing a face mask, sunglasses or goggles, and a hat or hood. Dress casually, don’t wear bright colors, and try to avoid looking too distinctive.

Most importantly, stay with large groups of people and avoid being alone. Not only will this make it harder to identify you, it will also keep you safer physically. People are stronger as a group and if god forbid someone gets hurt then there will be other people around to help out. We need to protect each other.

Also please do not attack the people who posted this. They likely had good intentions. Just let them know that this doesn’t work anymore and it isn’t safe.

afutureworththinkingabout:

I’m Not Afraid of AI Overlords— I’m Afraid of Whoever’s Training Them To Think That Way

by Damien P. Williams

I want to let you in on a secret: According to Silicon Valley’s AIs, I’m not human.

Well, maybe they think I’m human, but they don’t think I’m me. Or, if they think I’m me and that I’m human, they think I don’t deserve expensive medical care. Or that I pose a higher risk of criminal recidivism. Or that my fidgeting behaviours or culturally-perpetuated shame about my living situation or my race mean I’m more likely to be cheating on a test. Or that I want to see morally repugnant posts that my friends have commented on to call morally repugnant. Or that I shouldn’t be given a home loan or a job interview or the benefits I need to stay alive.

Now, to be clear, “AI” is a misnomer, for several reasons, but we don’t have time, here, to really dig into all the thorny discussion of values and beliefs about what it means to think, or to be a mind— especially because we need to take our time talking about why values and beliefs matter to conversations about “AI,” at all. So instead of “AI,” let’s talk specifically about algorithms, and machine learning.

Machine Learning (ML) is the name for a set of techniques for systematically reinforcing patterns, expectations, and desired outcomes in various computer systems. These techniques allow those systems to make sought after predictions based on the datasets they’re trained on. ML systems learn the patterns in these datasets and then extrapolate them to model a range of statistical likelihoods of future outcomes.

Algorithms are sets of instructions which, when run, perform functions such as searching, matching, sorting, and feeding the outputs of any of those processes back in on themselves, so that a system can learn from and refine itself. This feedback loop is what allows algorithmic machine learning systems to provide carefully curated search responses or newsfeed arrangements or facial recognition results to consumers like me and you and your friends and family and the police and the military. And while there are many different types of algorithms which can be used for the above purposes, they all remain sets of encoded instructions to perform a function.
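As a hedged, self-contained sketch of that feedback loop, the toy below fits a model to a history of past decisions, predicts on new cases, and then appends its own outputs to the training data as if they were ground truth. The least-squares "model", the synthetic features, and the biased rule baked into the history are assumptions made for illustration, not a description of any real system.

```python
# Hypothetical sketch of an algorithmic feedback loop: a system trained on past
# decisions predicts outcomes for new cases, and those predictions are appended to
# the training data, reinforcing whatever pattern the history already contained.
import numpy as np

rng = np.random.default_rng(0)
cases = rng.normal(size=(200, 3))               # stand-in features for past cases
decisions = (cases[:, 0] > 0).astype(float)     # a simple (biased) rule baked into past decisions

for round_number in range(5):
    # "Learn" the historical pattern with a least-squares fit.
    weights, *_ = np.linalg.lstsq(cases, decisions, rcond=None)
    new_cases = rng.normal(size=(50, 3))
    predictions = (new_cases @ weights > 0.5).astype(float)
    # Feed the system's own outputs back in as though they were ground truth.
    cases = np.vstack([cases, new_cases])
    decisions = np.concatenate([decisions, predictions])
```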

And so, in these systems’ defense, it’s no surprise that they think the way they do: That’s exactly how we’ve told them to think.

[Image of Michael Emerson as Harold Finch, in season 2, episode 1 of the show Person of Interest, “The Contingency.” His face is framed by a box of dashed yellow lines, the words “Admin” to the top right, and “Day 1” in the lower right corner.]


Read the rest of I’m Not Afraid of AI Overlords— I’m Afraid of Whoever’s Training Them To Think That Way at A Future Worth Thinking About

Painting of a bear I saw on google images.

Dear self

and aligned mutuals,

Am I cringe? Sure, no more than anyone else. Should I post my cringe? I have no idea. Is it on-brand cringe? Yes.

The emotions I have pertaining to this video are immaculate and perhaps ineffable…

(I was playing peekaboo with a filter on tiktok).

Flesh masks to avoid facial recog




more on my art books instagram page

Adam Broomberg & Oliver Chanarin - Spirit is a Bone

2015, MACK, first edition (signed). Book came wrapped in tissue paper.

The portraits in this book were produced using advanced facial recognition technology that’s being used in most cities around the world, developed by engineers in Moscow from existing systems built to recognize car plates. These so-called ‘non-collaborative portraits’ are more three-dimensional data maps than photographs, in which no human contact is registered: there’s a total negation of humanity; they’re essentially digital death masks. Broomberg & Chanarin have constructed their own taxonomy of portraits in contemporary Russia, including Pussy Riot members and other Moscow citizens.




aurora1040:

dailytechnologynews:

Clearview AI ordered to delete facial recognition data belonging to UK residents https://ift.tt/RqD6ya9

just a casual reminder that i have never, not once, liked biometric or facial recognition software. idc how ‘secure’ it is. i will not ever enable it on any device.



Q&A How do cats see themselves and us?

cat-human-mirror
Q: Do Cats Think They’re Humans? A: I’m not exactly sure where you got this idea, but it’s a rather simple answer: No. So why bring it up? Because there’s this ridiculous related notion that cats think that we are strange-looking cats. By that logic, they would have to think that their dog friends, bunny friends, and other animal friends are also strange-looking cats, but we know they don’t. Cats…

View On WordPress

mostlysignssomeportents:

Privacy advocate Allie Funk was surprised to learn that her Delta flight out of Detroit airport would use facial recognition scans for boarding; Funk knew that these systems were supposed to be “opt in” but no one announced that you could choose not to use them while boarding, so Funk set out to learn how she could choose not to have her face ingested into a leaky, creepy, public-private biometric database.

It turns out that all of Funk’s suspicions were misplaced! Opting out of airport facial recognition is as easy as pie: all you need to do is:

* Independently learn that you are allowed to opt out;

* Leave the boarding queue and join a different queue at a distant information desk;

* Return to your gate and rejoin the boarding queue; and, finally

* Show your passport to the gate agent.

Simplicity itself!

https://boingboing.net/2019/07/02/beware-of-the-leopard-2.html


afutureworththinkingabout:

Below are the slides, audio, and transcripts for my talk ’“Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems,’ given at the 21st Conference of the Society for Philosophy and Technology, back in May 2019. (Cite as: Williams, Damien P. ’“Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems;’ talk given at the 21st Conference of the Society for Philosophy and Technology; May 2019)

Now, I’ve got a chapter coming out about this, soon, which I can provide as a preprint draft if you ask, and can be cited as “Constructing Situated and Social Knowledge: Ethical, Sociological, and Phenomenological Factors in Technological Design,” appearing in Philosophy And Engineering: Reimagining Technology And Social Progress. Guru Madhavan, Zachary Pirtle, and David Tomblin, eds. Forthcoming from Springer, 2019. But I wanted to get the words I said in this talk up onto some platforms where people can read them, as soon as possible, for a couple of reasons.

First, the Current Occupants of the Oval Office have very recently taken the policy position that algorithms can’t be racist, something which they’ve done in direct response to things like Google’s Hate Speech-Detecting AI being biased against black people, and Amazon claiming that its facial recognition can identify fear, without ever accounting for, i dunno, cultural and individual differences in fear expression?

[Free vector image of a white, female-presenting person, from head to torso, with biometric facial recognition patterns on her face; incidentally, go try finding images—even illustrations—of a non-white person in a facial recognition context.]

All these things taken together are what made me finally go ahead and get the transcript of that talk done, and posted, because these are events and policy decisions about which I a) have been speaking and writing for years, and b) have specific inputs and recommendations about, and which are, c) frankly wrongheaded, and outright hateful.

And I want to spend time on it because I think what doesn’t get through in many of our discussions is that it’s not just about how Artificial Intelligence, Machine Learning, or Algorithmic instances get trained, but the processes for how and the cultural environments in which HUMANS are increasingly taught/shown/environmentally encouraged/socialized to think is the “right way” to build and train said systems.

That includes classes and instruction, it includes the institutional culture of the companies, it includes the policy landscape in which decisions about funding get made, because that drives how people have to talk and write and think about the work they’re doing, and that constrains what they will even attempt to do or even understand.

All of this is cumulative, accreting into institutional epistemologies of algorithm creation. It is a structural and institutional problem.

So here are the Slides:


The Audio: …
[Direct Link to Mp3]

And the Transcript is here below the cut:


Read the rest of Audio, Transcripts, and Slides from “Any Sufficiently Advanced Neglect is Indistinguishable from Malice” at A Future Worth Thinking About

It is time to ban certain aspects of technological progress, to ban them plainly and simply, because they are socially harmful.

In May, Zerohedge reported that a man in the UK was fined £90 for covering his face so that an automatic facial recognition camera could not recognize him. Police officers seized him and, against his will…

View On WordPress

Currently doing all the homework I procrastinated on until the last minute so I could watch The Originals. I’m fine…
