Emerging Trends: Artificial Intelligence

Artificial intelligence, or AI, is the use of computers or machines to simulate human cognitive activities such as problem-solving, decision-making, and analysis. The idea of having machines work for the benefit of humankind has existed for centuries, although the term “artificial intelligence” was not coined until the 1950s. Even then, the concept was largely relegated to science fiction and fantasy. Recently, however, there has been discussion about the practical application of artificial intelligence for everyday use in many professions, including the drafting of legal documents and the analysis of insurance data.

The possible uses of artificial intelligence are, ironically, limited only by the human imagination, and its benefits are already widely visible. For example, AI can analyze patterns and make predictions about such things as weather or consumer spending. It can detect anomalies in those patterns, which may reveal fraudulent activity. Within an organization, it can schedule meetings or filter out spam. AI technology is not, however, without its ethical concerns, which include, but are not limited to, the following:

  • Privacy issues emerging from the gathering of personal information and the use of that data;
  • Lack of transparency as to how AI reaches its conclusions or makes its determinations, especially because AI is the intellectual property of organizations that are most likely unwilling to share how their AI was created; and
  • Bias of AI, because while AI lacks the emotional component of human decision-making, AI decisions can still be inaccurate, skewed, or discriminatory depending on the information or data they rely upon.

To help balance these concerns against the benefits of AI, UNESCO adopted the UNESCO Recommendation on the Ethics of Artificial Intelligence in 2021. This Recommendation set the “first-ever global standard on AI ethics” and has been adopted by all 193 UNESCO member states, including Canada. A standard for AI has limited value, however, if there is no way to enforce it. To enforce this standard, Canada has tabled legislation to create a legal framework for AI.

The Artificial Intelligence and Data Act (AIDA) was introduced as part of the Digital Charter Implementation Act, 2022. AIDA is drafted to “ensure that AI systems deployed in Canada are safe and non-discriminatory and would hold businesses accountable for how they develop and use these technologies.”

Once legislation regarding AI comes into force and effect, it will help establish the legal standard and duty of care to which AI owners, developers, and users are held. In theory, this will limit the legal anarchy that sometimes surrounds an emerging technology when it is misused and no one knows how to address that misuse.