We are surrounded by surveillance cameras that record us at every turn. But for the most part, while those cameras are watching us, no one is watching what those cameras observe or record because no one will pay for the armies of security guards that would be required for such a time-consuming and monotonous task.
But imagine that all that video were being watched -- that millions of security guards were monitoring every feed 24/7. Imagine this army is made up of guards who don't need to be paid, who never get bored, who never sleep, who never miss a detail, and who have total recall for everything they've seen. Such an army of watchers could scrutinize every person they see for signs of "suspicious" behavior. With unlimited time and attention, they could also record details about all of the people they see -- their clothing, their expressions and emotions, their body language, the people they are with and how they relate to them, and their every activity and motion.
That scenario may seem far-fetched, but it's a world that may soon be arriving. The guards won't be human, of course -- they'll be AI agents.
Last week, the ACLU published a report on a $3.2 billion industry building a technology known as "video analytics," which is starting to augment surveillance cameras around the world and has the potential to turn them into just that kind of nightmarish army of unblinking watchers.
Driven by cutting-edge, deep learning-based AI, the field is moving so fast that early versions of this technology are already starting to enter our lives. Some of our cars now come equipped with dashboard cameras that can sound alarms when a driver starts to look drowsy. Doorbell cameras today can alert us when a person appears on our doorstep. Cashier-less stores use AI-enabled cameras that monitor customers and automatically charge them when they pick items off the shelf.
In the report, we looked at where this technology has been deployed, and what capabilities companies are claiming they can offer. We also reviewed scores of papers by computer vision scientists and other researchers to see what kinds of capabilities are being envisioned and developed. What we found is that the capabilities that computer scientists are pursuing, if applied to surveillance and marketing, would create a world of frighteningly perceptive and insightful computer watchers monitoring our lives.
Cameras that collect and store video just in case it is needed are being transformed into devices that can actively watch us, often in real time. It is as if a great surveillance machine has been growing up around us -- one that has until now been largely dumb and inert -- and is, in a meaningful sense, "waking up."
Computers are getting better and better, for example, at what is called simply "human action recognition." AI training datasets include thousands of actions that computers are being taught to recognize -- things such as putting a hat on, taking glasses off, reaching into a pocket, and drinking beer.
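To give a sense of how off-the-shelf this capability already is, the sketch below classifies a short video clip against the 400 everyday action labels of the public Kinetics-400 dataset using a pretrained model from the torchvision library. It is purely illustrative -- the model choice and the placeholder file name "clip.mp4" are assumptions for the example, not systems examined in the report.

import torch
from torchvision.io import read_video
from torchvision.models.video import r3d_18, R3D_18_Weights

# Load a model pretrained on the Kinetics-400 action-recognition dataset.
weights = R3D_18_Weights.KINETICS400_V1
model = r3d_18(weights=weights).eval()
preprocess = weights.transforms()

# "clip.mp4" is a placeholder for a few seconds of footage from any camera.
frames, _, _ = read_video("clip.mp4", pts_unit="sec", output_format="TCHW")
batch = preprocess(frames).unsqueeze(0)  # shape (1, C, T, H, W)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

idx = int(probs.argmax())
print(f"predicted action: {weights.meta['categories'][idx]} ({probs[idx].item():.0%})")

Pointed at a live camera feed instead of a file, a loop around a few lines like these is roughly the kind of building block that video analytics products wrap in alerting and search features.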
Researchers are also pushing to create AI technologies that are ever-better at "anomaly detection" (sounding alarms at people who are "unusual," "abnormal," "deviant," or "atypical"), emotion recognition, the perception of our attributes, the understanding of the physical and social contexts of our behaviors, and wide-area tracking of the patterns of our movements.
Think about some of the implications of such techniques, especially when combined with other technologies like face recognition. For example, it's not hard to imagine some future corrupt mayor saying to an aide, "Here's a list of enemies of my administration. Have the cameras send us all instances of these people kissing another person, and the IDs of who they're kissing." Governments and companies could use AI agents to track who is "suspicious" based on such things as clothing, posture, unusual characteristics or behavior, and emotions. People who stand out in some way and attract the attention of such ever-vigilant cameras could find themselves hassled, interrogated, expelled from stores, or worse.
Many or most of these technologies will be somewhere between unreliable and utterly bogus. Based on experience, however, that often won't stop them from being deployed -- and from hurting innocent people. And, like so many technologies, the weight of these new surveillance powers will inevitably fall hardest on the shoulders of those who are already disadvantaged: people of color, the poor, and those with unpopular political views.
We are still in the early days of a revolution in computer vision, and we don't know how AI will progress, but we need to keep in mind that its progress may end up being extremely rapid. We could, in the not-so-distant future, end up living under armies of computerized watchers with intelligence at or near human levels.
These AI watchers, if unchecked, are likely to proliferate in American life until they number in the billions, extending the tendrils of corporate and bureaucratic power into our lives, watching over each of us and constantly shaping our behavior. In some cases, they will prove beneficial, but there is also a serious risk that they will chill the freedom of American life, create oppressively extreme enforcement of petty rules, amplify existing power disparities, disproportionately increase the monitoring of disadvantaged groups and political protesters, and open up new forms of abuse.
Policymakers must contend with this technology's enormous power. They should prohibit its use for mass surveillance, narrow its deployments, and create rules to minimize abuse.
Read the full report here.