Many of those weighing in on AI regulation own stock in Big Tech companies and have a powerful incentive not to rein the technology in.
“Everyone Wants to Regulate AI. No One Can Agree How,” Wired (5/26/23) proclaimed earlier this year. The headline resembled one from The New Yorker (5/20/23) published just days prior, reading “Congress Really Wants to Regulate AI, but No One Seems to Know How.” Each reflected an increasingly common thesis within the corporate press: Policymakers would like to place guardrails on so-called artificial intelligence systems, but, given the technology’s novel and evolving nature, they’ll need time before they can take action—if they ever can at all.
This narrative contains some kernels of truth; artificial intelligence can be complex and dynamic, and thus not always easily comprehensible to the layperson. But the suggestion of congressional helplessness minimizes the responsibility of lawmakers—ultimately excusing, rather than interrogating, regulatory inertia.
Amid a piecemeal, noncommittal legislative climate, media insist that policymakers are unable to keep pace with AI development, inevitably resulting in regulatory delays. NPR (5/15/23) exemplified this with the claim that Congress had “a lot of catching up to do” on AI and the later question (5/17/23) “Can politicians catch up with AI?” Months earlier, The New York Times (3/3/23) reported that “lawmakers have long struggled to understand new innovations,” with Washington consequently taking “a hands-off stance.”
The Times noted that the European Union had proposed a law that would curtail some potentially harmful AI applications, including those made by U.S. companies, and that U.S. lawmakers had expressed intentions to review the legislation. (The E.U.’s AI Act, as it’s known, may become law by the end of 2023.) Yet the paper didn’t feel compelled to ask why the E.U.—whose leadership isn’t exactly dominated by computer scientists—could forge ahead with restrictions on the U.S. AI industry, but the U.S. couldn’t.
These outlets frame AI rulemaking as a matter of technical knowledge, when it would be more accurate, and constructive, to frame it as one of moral consideration. One might argue that, in order to regulate a form of technology that affects the public—say, via “predictive policing” algorithms, or automated social-services software—it’s more important to grasp its societal impact than its operational minutiae. (Congressional staffer Anna Lenhart told The Washington Post (6/17/23) as much, but this notion seems to be far from mainstream.)
This certainly isn’t the prevailing view of The New York Times (8/24/23), which argued that legislators’ lag continues a pattern of slow congressional responses to new technologies, repeating the refrain that policymakers “have struggled” to enact major technology laws. The Times cited the 19th-century advent of steam-powered trains as an example of a daunting legislative subject, emphasizing that Congress took more than 50 years to institute railroad price controls.
Yet the process of setting pricing rules has little, if anything, to do with the mechanical specifics of a train. Could it be that delays on price controls were caused more by pro-corporate policy choices than by a lack of technological expertise? For the Times, such a question, which might begin to expose some of the ugly underpinnings of U.S. governance, didn’t merit attention.
The New York Times need look no further than its own archives to find some more illuminating context for U.S. lawmakers’ approach to AI regulation. Last year, the paper (9/13/22) reported that 97 members of Congress owned stock in companies that would be influenced by those members’ regulatory committees. Indeed, many of those weighing in on AI regulation have a powerful incentive not to rein the technology in.
One of those 97 was Rep. Donald S. Beyer, Jr. (D–Va.), who “bought and sold [shares in Google parent company] Alphabet and Microsoft while he was on the House Science, Space, and Technology Subcommittee on Investigations and Oversight.” Beyer, who serves as vice chair of the House AI Caucus, has been featured in multiple articles (Washington Post, 12/28/22; ABC News, 3/17/23) as a model AI legislator. The New York Times (3/3/23) itself lauded Beyer’s enrollment in evening classes on AI, sharing his alert that regulation would “take time.”
Curiously, the coverage commending Beyer’s regulatory initiative has omitted his record of investing in the two companies—which happen to rank among the U.S.’s most prominent purveyors of AI software—while he was authorized to police them.
Elsewhere in its congressional stock-trading report, The New York Times called Rep. Michael McCaul (R–Texas) “one of Congress’s most active filers,” noting his investments in a whopping 342 companies, including Microsoft, Alphabet, and Meta, formerly known as Facebook, which also has a tremendous financial stake in AI. McCaul, like Beyer, boasts a top-brass post on the House AI Caucus.
McCaul’s trades were dwarfed by those of fellow AI Caucus member Rep. Ro Khanna (D–Calif.), who, according to the Times, has owned stock in nearly 900 companies. Among them: leading AI-chip manufacturer Nvidia (as of 2021), Alphabet, and Microsoft. (Khanna has nominally endorsed proposals to curb congressional stock-trading, a stance contradicted by his vast portfolio.) Save for the Times exposé, none of the above pieces addressed Khanna’s, or McCaul’s, ethical breaches; in fact, Khanna is a recurring media source on AI legislation (Semafor, 4/26/23; San Francisco Chronicle, 7/20/23).
Congressmembers, dozens of whom have historically owned stock in AI companies, surely must be capable of learning about AI—and doing so swiftly—if they’ve been choosing to reap its monetary rewards for years. Why that knowledge can’t be applied to regulating the technology seems to be yet another question media are uninterested in asking.
In omitting this critical information, news sources are effectively giving Congress an undeserved redemption arc. Following years of legislative apathy to the surveillance, monopolization, labor abuses, and countless other iniquities of Big Tech, media declare that legislators are trying to right their wrongs by targeting an ascendant AI industry (Yahoo! Finance, 5/17/23; GovTech, 6/21/23).
Accordingly, media have embraced policymakers’ efforts, no matter how feeble they may be. Throughout the year, politicians have hosted chummy hearings and meetings, as well as private dinners, with the chiefs of major AI companies to discuss regulatory frameworks. Yet, rather than impugning the influence legislators have awarded these executives, outlets present these gatherings as testaments to lawmakers’ dedication.
CBS Austin (8/29/23) justified congressional reliance on executives, whom it called “industry experts,” trumpeting that corporations like Microsoft, OpenAI, Anthropic, Google, and Meta were helping policymakers “chart a path forward.” The broadcaster went on to establish a pretext for business-friendly lawmaking:
Congress is trying to find a delicate balance of safeguarding the public while allowing the promising aspects of the technology to flourish and propel the economy and country into the future.
The New York Times (8/28/23), meanwhile, stated that Congress and the Biden administration have “leaned on” industry heads for “guidance on regulation,” a clever euphemism for lobbying. The Times reported that Congress would hold a forthcoming “closed-door listening session” with executives in order to “educate” its members, evincing no skepticism over what that education might involve. (At the session, Congress will also host civil rights and labor groups, who are theoretically much more qualified than C-suiters to determine the moral content of AI policymaking, but received much less fanfare from the Times.)
The guests of the “listening session,” per the Times, will include Twitter.com’s Elon Musk, Google’s Sundar Pichai, OpenAI’s Sam Altman, and Microsoft’s Satya Nadella. Might the fact that each of them has fought tech-industry constraints have some bearing on the future? Reading the Times story, which didn’t deem this worth a mention, one wouldn’t know.