By Moira Warburton
(Reuters) – The Canadian federal police force broke the law when it used facial recognition software, the country’s top privacy regulator found in a report released on Thursday.
The Royal Canadian Mounted Police (RCMP) said in February 2020 that for four months it had been using Clearview AI, a U.S.-based facial recognition company whose software cross-references photos against a database compiled from images posted to social media. It is not clear when the RCMP stopped using the software.
Clearview AI has been barred from operating in Canada since July 2020.
Canada’s federal Office of the Privacy Commissioner (OPC) said the onus was on the RCMP to ensure that the services it used were lawful.
“A government institution cannot collect personal information from a third party agent if that third party agent collected the information unlawfully,” Commissioner Daniel Therrien said in a statement.
The RCMP disagreed that the onus was on it to ensure the services it used did not violate privacy laws, the statement said, arguing that this created an “unreasonable obligation.”
However, the police ultimately agreed to implement the OPC’s recommendations, including creating a new oversight function intended to ensure that new data tools comply with privacy laws.
The RCMP did not immediately comment on the report.
In light of the report’s findings, the OPC also said it was issuing draft guidance for police on using facial recognition technology.
(Reporting by Moira Warburton in Vancouver; Editing by Steve Orlofsky)