TESTIMONY: Senate Standing Committee on Internet and Technology

To discuss risks, solutions, and best practices with respect to the use of artificial intelligence in consequential or high-risk contexts, and related issues, such as classification of the types and risk levels of AI uses

Testimony of Communications Workers of America District 1

Thursday January 15, 2026

Communications Workers of America District 1 represents 145,000 workers in 200 CWA local unions in New York, New Jersey, New England, and eastern Canada. CWA members work in telecommunications, health care, higher education, manufacturing, broadcast and cable television, commercial printing and newspapers, and state, local, and county government. District 1 represents 65,000 members in New York State.

CWA is concerned about the impact of digital technology and AI on workers and the workplace, and we thank the committee for hosting this hearing today and for recognizing the importance of hearing directly from labor organizations on this issue. It is our strong belief that government policy should support collective bargaining and worker consultation in the adoption of AI and other emerging technologies, and should aim to strengthen and complement workers’ bargaining power with baseline policy protections for all workers. Workers are the experts on their jobs and workplaces, and they are best positioned to identify the risks and the guardrails needed.

There are a few key areas where we are concerned about the increased use of AI in the workplace: monitoring (e.g., audio and video surveillance, tracking keystrokes and mouse movements), managing (e.g., coaching, job assessments, scheduling), hiring (e.g., reviewing resumes, administering applications), and performing complex tasks (e.g., customer interactions, content creation) with little to no human oversight, which can create stressful conditions and eliminate certain tasks or entire jobs.

Members across CWA District 1 have shared with us anecdotal evidence of these risks: in call centers, AI is used to monitor and give feedback on call flow and tone, transcribe and summarize calls, operate chatbots, and recruit and hire. Our outside technicians are under surveillance through GPS monitoring and metric tracking. In hospitals, members have raised concerns about automated diagnosis decision-making and insurance coverage determinations, uses that could have dire impacts on both healthcare workers and patients. Our NewsGuild members are increasingly concerned about their job tasks being replaced by AI. Notably, after mass layoffs, Business Insider announced it was fully embracing AI and launched an AI byline with the goal of drafting news stories to “bring readers more information, more quickly.”

Legislation is critical to ensuring there are baseline protections against abusive uses of digital technologies. In particular, we are focused on protections against harmful surveillance and automated management, which should include policies that:

  1. Protect workers from abuses under AEDS (Automated Employment Decision Systems). AEDS primarily make or assist in employment decisions, and automate decision-making processes;
  2. Protect workers from abuses under ESAM (Electronic Surveillance and Automated Management). ESAM primarily monitors employees and collects data, which can be used later for decision making, including by AEDS;
  3. Include strong transparency requirements; in any comprehensive technology legislation, transparency is often a good starting point for accountability;
  4. Include job security for workers in specific sectors, protecting against harmful uses of AI to displace workers; 
  5. Require notice and retraining for displaced workers, often building on existing programs like the WARN Act and the Trade Adjustment Assistance Program; and
  6. Protect against bias and discrimination tied to AI across a range of areas beyond the workplace, including the criminal legal system. 

Examples of existing legislation that address these concerns are the BOT Act (S185), which would establish comprehensive guidelines and guardrails for the use of AI, and the FAIR News Act (S8451 Fahy), which would establish critical AI protections for both journalists and the broader public as consumers of media.

While the President’s recent Executive Order, titled “Ensuring a National Policy Framework for Artificial Intelligence,” threatens preemption and retaliation against states that implement AI policy, we believe it is critical to continue fighting for protections at the state level.

As this technology is rapidly evolving, we are continuing to have conversations with our members, who are experts in their fields and should be on the frontlines of developing these protections. We also know there are many additional important bills which address the use of AI in different industries, and we look forward to working with the Senate in fighting for meaningful solutions this upcoming legislative session.