"It is long overdue that Microsoft and other Big Tech monopolies are broken up—for good," said one expert.
Digital rights advocates responded to Friday's havoc-wreaking global technology outage by sounding the alarm about Big Tech monopolies.
The outage—attributed to a software update by the U.S.-based cybersecurity firm CrowdStrike—sparked worldwide chaos on Friday, causing so-called "blue screens of death" on computers running Microsoft Windows. It grounded commercial flights and seriously disrupted transportation, financial, and healthcare systems.
"Today's massive global Microsoft outage is the result of a software monopoly that has become a single point of failure for too much of the global economy," George Rakis, executive director of the advocacy group NextGen Competition, said in a statement.
"For decades, Microsoft's pursuit of a vendor lock-in strategy has prevented the public and private sectors from diversifying their IT capabilities," he continued. "From airports to hospitals to 911 call centers to financial systems, millions today are feeling the consequences of the greed and ego of one of the most egregious offenders in Big Tech."
Emily Peterson-Cassin, who heads Demand Progress' corporate power program, said that "today's outage shows how one software issue stemming from only one or two companies can ground flights, take down hospital systems, stop 911 calls, and cut off access to the internet in one fell swoop."
"Economy-wide reliance on a few giant companies is a serious fundamental risk to Americans," she asserted. "No one regulatory or legislative intervention will prevent this kind of situation, but there are plenty of policies that can reduce the danger. Efforts to empower regulators' ability to tackle the risks posed by concentrated corporate actors are critical to protecting Americans from these kinds of failures."
Bloomberg columnist Parmy Olson—who focuses on tech issues—said that Friday's outage "should spur Microsoft and other IT firms to do more than simply administer a Band-Aid."
"The bigger problem is the supply chain itself for cloud computing and, by extension, cybersecurity services, which has left too many organizations vulnerable to a single point of failure," she noted. "When just three companies—Microsoft, Amazon, and Google—dominate the market for cloud computing, one minor incident can have global ramifications."
European Union nations "are furthest ahead in addressing the market stranglehold that these so-called hyperscalers have with the new E.U. Data Act, which aims to lower the cost of switching between cloud providers and improve interoperability," Olson noted.
"U.S. legislators should get in the game too," she argued. "One idea might be to force companies in critical sectors like healthcare, finance, transportation, and energy to use more than just one cloud provider for their core infrastructure, which tends to be the status quo."
"Instead, a new regulation could force them to use at least two independent providers for their core operations, or at least ensure that no single provider accounts for more than about two-thirds of their critical IT infrastructure," Olson added. "If one provider has a catastrophic failure, the other can keep things running."
However, most congressional efforts to rein in Big Tech monopoly power and encourage competition have failed or languished amid opposition and obstruction from lobbyists and corporate-backed lawmakers.
Ultimately, Rakis stressed, "it is long overdue that Microsoft and other Big Tech monopolies are broken up—for good."
"Microsoft has turned a blind eye to cybersecurity vulnerabilities for years and enough is enough," Rakis said. "Not only are these monopolies too big to care, they're too big to manage. And despite being too big to fail, they have failed us. Time and time again. Now, it's time for a reckoning. We can't continue to let Microsoft's executives downplay their role in making all of us more vulnerable."
"This is basically what we were all worried about with Y2K, except it's actually happened this time."
A global technology outage attributed to a software update by the U.S.-based cybersecurity firm CrowdStrike sparked chaos around the world Friday as flights were grounded and healthcare, banking, and ground transportation systems experienced major disruptions.
George Kurtz, the president and CEO of CrowdStrike, said in a statement Friday morning that the company is "actively working with customers impacted by a defect found in a single content update for Windows hosts"—a glitch that affected Microsoft users around the world.
"This is not a security incident or cyberattack," Kurtz added. "The issue has been identified, isolated, and a fix has been deployed. We refer customers to the support portal for the latest updates and will continue to provide complete and continuous updates on our website. We further recommend organizations ensure they're communicating with CrowdStrike representatives through official channels. Our team is fully mobilized to ensure the security and stability of CrowdStrike customers."
The Financial Times explained that CrowdStrike is "one of the world's largest providers of 'endpoint' security software, used by companies to monitor for security problems across a huge range of devices, from desktop PCs to checkout payment terminals."
Troy Hunt, a security consultant, wrote on social media that "this will be the largest IT outage in history."
"This is basically what we were all worried about with Y2K, except it's actually happened this time," Hunt added.
The impacts of the outage cascaded rapidly. Wired noted that "in the early hours of Friday, companies in Australia running Microsoft's Windows operating system started reporting devices showing Blue Screens of Death (BSODs)."
"Shortly after," the outlet continued, "reports of disruptions started flooding in from around the world, including from the U.K., India, Germany, the Netherlands, and the U.S.: TV station Sky News went offline, and U.S. airlines United, Delta, and American Airlines issued a 'global ground stop' on all flights."
As The New York Times observed, the National Health Service in the United Kingdom "was crippled throughout the morning on Friday, as a number of hospitals and doctors' offices lost access to their computer systems."
The agreement "is a step in the right direction for security," said one observer, "but that's not the only area where AI can cause harm."
Like an executive order introduced by U.S. President Joe Biden last month, a global agreement on artificial intelligence released Sunday was seen by experts as a positive step forward—but one that would require more action from policymakers to ensure AI isn't harmful to workers, democratic systems, and the privacy of people around the world.
The 20-page agreement, first reported Monday, was reached by 18 countries including the U.S., U.K., Germany, Israel, and Nigeria, and was billed as a deal that would push companies to keep AI systems "secure by design."
The agreement is nonbinding and covers four main areas: secure design, secure development, secure deployment, and secure operation and maintenance.
Policymakers including the director of the U.S. Cybersecurity and Infrastructure Security Agency, Jen Easterly, forged the agreement with a heavy focus on keeping AI technology safe from hackers and security breaches.
The document includes recommendations such as implementing standard cybersecurity best practices, monitoring the security of an AI supply chain across the system's life cycle, and releasing models "only after subjecting them to appropriate and effective security evaluation."
"This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," Easterly toldReuters. The document, she said, represents an "agreement that the most important thing that needs to be done at the design phase is security."
Norm Eisen, senior fellow at the think tank Brookings Institution, said the deal "is a step in the right direction for security" in a field that U.K. experts recently warned is vulnerable to hackers who could launch "prompt injection" attacks, causing an AI model to behave in a way that the designer didn't intend or reveal private information.
"But that's not the only area where AI can cause harm," Eisen said on social media.
Eisen pointed to a recent Brookings analysis of how AI could "weaken" democracy in the U.S. and other countries, worsening the "flood of misinformation" with deepfakes and other AI-generated images.
"Advocacy groups or individuals looking to misrepresent public opinion may find an ally in AI," wrote Eisen, along with Nicol Turner Lee, Colby Galliher, and Jonathan Katz last week. "AI-fueled programs, like ChatGPT, can fabricate letters to elected officials, public comments, and other written endorsements of specific bills or positions that are often difficult to distinguish from those written by actual constituents... Much worse, voice and image replicas harnessed from generative AI tools can also mimic candidates and elected officials. These tactics could give rise to voter confusion and degrade confidence in the electoral process if voters become aware of such scams."
At AppleInsider, tech writer Malcolm Owen denounced Sunday's agreement as "toothless and weak," since it does not require policymakers or companies to adhere to its guidelines.
Owen noted that tech firms including Google, Amazon, and Palantir consulted with global government agencies in developing the guidelines.
"These are all guidelines, not rules that must be obeyed," wrote Owen. "There are no penalties for not following what is outlined, and no introduction of laws. The document is just a wish list of things that governments want AI makers to really think about... And, it's not clear when or if legislation will arrive mandating what's in the document."
European Union member countries passed a draft of what the European Parliament called "the world's first comprehensive AI law" earlier this year with the AI Act. The law would require makers of AI systems to publish summaries of the training material they use and demonstrate that their systems will not generate illegal content. It would also bar companies from scraping biometric data from social media, a practice a U.S. AI company was found to be engaging in last year.
"AI tools are evolving rapidly," said Eisen on Monday, "and policymakers need to keep up."