Canadian privacy regulators say early versions of ChatGPT violated federal and provincial privacy laws, raising serious concerns about how personal data was collected and used in the rush to deploy artificial intelligence tools.

A joint investigation led by the Office of the Information and Privacy Commissioner of Alberta, alongside counterparts in British Columbia and Quebec and the federal Office of the Privacy Commissioner of Canada, concluded that OpenAI failed to adequately address privacy obligations during the development and rollout of its chatbot.

The probe, launched in 2023 following a complaint about the unauthorized collection and use of personal information, focused on earlier models, including GPT-3.5 and GPT-4. Investigators found the systems relied on data scraped from publicly accessible websites without obtaining meaningful consent, a key requirement under Canadian privacy laws.

Alberta’s Information and Privacy Commissioner Diane McLeod said the findings highlight a broader problem in the tech sector, where innovation has outpaced compliance.

“It is unfortunate and disappointing that technology companies have moved ahead so quickly… without first ensuring that they are adhering to privacy legislation,” McLeod said, adding the investigation found OpenAI did not sufficiently consider legal obligations tied to personal data.

While regulators acknowledged OpenAI has taken steps to address some of the concerns, they said the measures still fall short of meeting foundational consent requirements under laws in Alberta and British Columbia.

The report notes that differences in provincial and federal statutes led to varying legal conclusions, but all participating agencies agreed on a set of recommendations and will continue monitoring OpenAI’s compliance efforts.

Officials also warned that existing privacy laws were drafted long before the rise of modern AI systems, leaving governments scrambling to catch up.

“The privacy laws we currently have in Canada were enacted during a time when today’s advancements in technologies… would have strained believability,” McLeod said, calling on lawmakers to modernize legislation while ensuring safeguards for citizens.

The joint investigation reflects growing concern among regulators about the impact of artificial intelligence on personal privacy, with authorities pledging closer cooperation to enforce rules and avoid regulatory gaps as the technology evolves.