Artificial intelligence (AI) chatbot Claude, one of the hottest products in the technology sector, has arrived in Canada in the hope of spreading its safety-focused philosophy.

Claude, which can answer questions, summarize documents, write text and even code, was made available in Canada on Wednesday. The technology, launched in 2023 by the San Francisco-based startup Anthropic, was already available in more than a hundred countries.

It’s now crossing the border because the company has seen signs that Canadians are eager to get into AI, said Jack Clark, one of Anthropic’s co-founders and the company’s head of policy.

“We have seen huge interest from Canadians in this technology, and we have expanded our products and our organization in a compliant manner, so we are able to operate in other regions,” he said.

The company made its privacy policy clearer and easier to understand ahead of Claude’s launch in Canada.

Although Canada has had access to many of the biggest artificial intelligence products, some chatbots have taken longer to arrive in the country.

Google, for example, only introduced its Gemini chatbot to Canada in February because it was negotiating with the federal government over a law requiring it to compensate Canadian media companies for content published or reused on its platforms.

Despite the delays, Canadians have tried numerous AI systems, including Microsoft’s Copilot and OpenAI’s ChatGPT, which sparked the recent AI frenzy when it was released in November 2022.

Anthropic’s founders met at OpenAI, but built their own company before ChatGPT’s debut and quickly decided their mission was to make Claude as safe as possible.

“We’ve always thought of safety as something that for many years was seen as a complement or sort of a side quest to AI,” said Jack Clark.

“But our bet at Anthropic is that if we make it the core of the product, it creates not only a more useful and valuable product for people, but also a safer one.”

As part of this mission, Anthropic does not train its models on user data by default. Instead, it uses publicly available information from the internet, datasets supplied by third-party companies, and data that users choose to provide.

It also relies on so-called “constitutional” AI, meaning the company’s AI systems are given a set of values against which they can train themselves to become more useful and less harmful.

At Anthropic, these values include the United Nations Universal Declaration of Human Rights, which emphasizes the equitable treatment of people regardless of age, gender, religion and color.
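Anthropic has described the approach in its published research; as a rough illustration only, the Python sketch below shows what a constitutional critique-and-revise loop could look like. The principle texts, function names and string-based checks are all hypothetical placeholders, standing in for the language-model calls a real system would make.

# Minimal, hypothetical sketch of a "constitutional" critique-and-revise
# loop. In a real system, critique() and revise() would be calls to a
# language model; simple string checks stand in here so the sketch runs.

CONSTITUTION = [
    "Support equitable treatment regardless of age, gender, religion and color.",
    "Prefer the least harmful phrasing.",
]

def critique(draft: str, principle: str) -> str:
    """Placeholder critic: object if the draft contains a toy marker."""
    if "[HARMFUL]" in draft:
        return f"Draft conflicts with the principle: {principle}"
    return ""  # empty string means no objection

def revise(draft: str, criticism: str) -> str:
    """Placeholder reviser: rewrite the draft to address the criticism."""
    return draft.replace("[HARMFUL]", "").strip() if criticism else draft

def constitutional_pass(draft: str, constitution: list[str]) -> str:
    """Critique and revise the draft once against each principle."""
    for principle in constitution:
        draft = revise(draft, critique(draft, principle))
    return draft

print(constitutional_pass("Here is an answer. [HARMFUL]", CONSTITUTION))

In the published constitutional AI work, revised answers like these are then fed back as training data, which is how the model comes to produce the improved responses on its own.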

Anthropic’s rivals are taking note, according to Mr. Clark.

“Every time we win customers, and this is partly because of safety, other companies pay a lot of attention to it and end up developing similar products, which I think is a good incentive for everyone in the industry,” he said.

He expects this trend to continue.

“Our general view is that AI safety will be a bit like seat belts for cars, and that if you develop technologies that are simple enough and good enough, everyone will eventually adopt them because they are just good ideas.”

Canada introduced a bill focused on AI in 2022, but it will not be implemented before 2025. In the meantime, the country has created a voluntary code of conduct.

The code requires signatories like Cohere, OpenText and BlackBerry to monitor AI systems for risks and test for biases before launching them.

Jack Clark has not committed to Anthropic signing the code. He said the company is focused on global, or at least multi-country, efforts such as the Hiroshima AI Process, which G7 countries used to produce a framework to promote safe, secure and trustworthy AI.