Are You Safe? The Ethical Challenges of AI in Contracting

As I savoured my morning coffee, I eagerly delved into the latest report, “AI in Contracting: Untapped Revolution to Emerging Evolution,” released by World Commerce & Contracting in collaboration with Icertis. The report not only highlights the complexities arising from the increased adoption of AI in the procurement world, but also underscores the immense potential of AI to transform the contracting landscape.

Interestingly, the report shows where the procurement community sees AI being utilised in their roles:

  1. Analyse risk and compliance in contracts
  2. Assist in contract negotiations
  3. Generate contracts
  4. Free up time for strategic work
  5. Scale as the business grows
  6. Accurately summarise contracts
  7. Support environmental, social and governance (ESG) goals

As you can see, AI is already deeply involved in contract-related activities. This raises several ethical considerations about AI’s role in contract creation and management, which we explore in turn below.

Ethical Dilemma #1: Accountability and Transparency

As AI systems take on more decision-making roles, questions about accountability naturally follow. Who is responsible when an AI system errs or makes a judgement call that impacts people or businesses? On the surface, it might seem simple to pass the blame onto the automated system, but that approach doesn’t hold water in real-world scenarios where accountability is critical.

Ensuring transparency in AI’s decision-making processes is vital to maintaining trust and to preserving the integrity and fairness of the systems in use. Transparency means making the inner workings of AI algorithms understandable and accessible. When you can see how an AI system arrives at a decision, it demystifies the ‘black box’ nature of these technologies.

This is especially important for fostering trust and ensuring that those affected by AI-driven decisions understand the rationale behind them. For example, a team using AI to prepare for a negotiation needs to understand why the system has recommended particular points, because the AI’s interpretation of the contract drives the positions it suggests.

Solution

It’s crucial that accountability is not diffused. A designated human must take ultimate responsibility for an AI system’s actions. This person, well-versed in AI’s capabilities and limitations, ensures that human judgement is balanced with AI efficiency, creating a more robust and trustworthy decision-making framework.
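To make this concrete, here is a minimal sketch in Python of what named human accountability over AI recommendations can look like. It is illustrative only: the names (AIRecommendation, ApprovalRecord, approve) and the example content are invented for this sketch rather than taken from any particular product, and a real system would persist these records in an auditable store.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AIRecommendation:
    """A single AI-suggested negotiation point, with its rationale recorded."""
    clause: str        # the contract clause the suggestion relates to
    suggestion: str    # what the AI proposes
    rationale: str     # why it proposes it, kept for transparency
    confidence: float  # model confidence between 0.0 and 1.0


@dataclass
class ApprovalRecord:
    """Ties every applied recommendation to an accountable, named human."""
    recommendation: AIRecommendation
    approved_by: str   # a named individual, never "the system"
    approved_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def approve(recommendation: AIRecommendation, reviewer: str) -> ApprovalRecord:
    """A recommendation only takes effect once a named human signs it off."""
    if not reviewer:
        raise ValueError("Every AI recommendation needs a named accountable reviewer.")
    return ApprovalRecord(recommendation=recommendation, approved_by=reviewer)


# Example: the AI suggests a position, but a person remains accountable for it.
suggestion = AIRecommendation(
    clause="Limitation of liability",
    suggestion="Cap liability at 12 months of fees",
    rationale="Counterparty draft leaves the indemnity uncapped; a cap matches internal policy.",
    confidence=0.82,
)
record = approve(suggestion, reviewer="j.smith@example.com")
print(record.approved_by, "-", record.recommendation.rationale)
```

The key design choice is that a recommendation never applies itself: it only takes effect through an approval record tied to a named person, which keeps accountability with a human and makes the AI’s rationale part of the audit trail.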

Ethical Dilemma #2: Data Privacy and Security

When applying an AI system to the creation or assessment of contract terms, safeguarding the data involved is critically important: not only the other parties’ data, but your own. In today’s digital landscape, ensuring the confidentiality, integrity, and availability of contractual data is a technical challenge as well as an ethical one.

Robust encryption methods, access controls, and regular security audits are essential components in establishing a secure environment for AI systems handling sensitive data. Additionally, organisations must implement comprehensive data governance policies that include data storage, transmission, and destruction guidelines to prevent unauthorised access and data breaches.
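As a simple illustration of encryption at rest, the sketch below uses the widely available Python `cryptography` package (an assumption on my part; any equivalent library or platform service would do). In practice the key would come from a key management service, and the ability to decrypt would itself be access-controlled and logged.

```python
# A minimal sketch of encrypting contract text at rest, assuming the
# third-party `cryptography` package (pip install cryptography). In
# production the key would come from a key management service and would
# never be generated or stored alongside the data it protects.
from cryptography.fernet import Fernet


def encrypt_contract(plaintext: str, key: bytes) -> bytes:
    """Encrypt contract text before it is stored or passed to an AI pipeline."""
    return Fernet(key).encrypt(plaintext.encode("utf-8"))


def decrypt_contract(ciphertext: bytes, key: bytes) -> str:
    """Decrypt contract text for an authorised, audited consumer only."""
    return Fernet(key).decrypt(ciphertext).decode("utf-8")


key = Fernet.generate_key()  # illustrative only; use a managed key in practice
token = encrypt_contract("Clause 7.1: payment terms are 30 days from invoice.", key)
print(decrypt_contract(token, key))
```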

Solution

Conducting risk assessments to identify potential vulnerabilities and issues in the AI system is crucial. By understanding the specific threats and weaknesses, you can develop targeted strategies to mitigate them. This also means staying up to date with the latest regulatory developments and with industry and ethical standards to ensure compliance.

Failing to secure data adequately undermines trust and could lead to significant legal and financial repercussions. A proactive approach to data security is essential for AI’s deployment and operation in contracting.

Ethical Dilemma #3: Quality and Bias in Data

Algorithmic fairness is the field of research aimed at understanding and correcting bias in models trained on data.

The quality of AI outputs depends heavily on the data used to train these systems: AI is only as fair as the data on which it is trained. If that data is biased or flawed, the AI’s decisions can perpetuate those biases, leading to unfair or discriminatory outcomes.

For example, if the input data reflects historical biases or is unrepresentative, the AI can replicate and even amplify them. This raises significant ethical dilemmas, such as the risk of propagating systemic discrimination through seemingly objective algorithms.

Solution

To address these challenges, you must rigorously evaluate the training data for completeness, representativeness, and relevance. Concrete steps to detect potential biases in that data, and strategies to mitigate them, are crucial, and maintaining transparency about these measures helps build trust and demonstrate ethical adherence. It also means continuously monitoring and updating AI systems so they stay aligned with evolving ethical standards and social values.
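A basic representativeness check can be surprisingly simple. The sketch below uses hypothetical historical contract decisions to compare outcome rates across a grouping (here, supplier region, chosen purely for illustration) and flags a gap that should be reviewed before the data is used for training. Real bias audits go much further, but even this level of inspection catches obvious skew.

```python
from collections import defaultdict


def outcome_rates_by_group(records, group_key, outcome_key):
    """Rate of a positive outcome per group, e.g. approval rate per supplier region."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += 1 if record[outcome_key] else 0
    return {group: positives[group] / totals[group] for group in totals}


# Hypothetical historical contract decisions that might be used as training data.
history = [
    {"supplier_region": "EU", "approved": True},
    {"supplier_region": "EU", "approved": True},
    {"supplier_region": "EU", "approved": False},
    {"supplier_region": "APAC", "approved": True},
    {"supplier_region": "APAC", "approved": False},
    {"supplier_region": "APAC", "approved": False},
]

rates = outcome_rates_by_group(history, "supplier_region", "approved")
print(rates)  # EU ≈ 0.67, APAC ≈ 0.33: a gap worth investigating before training

# A crude disparity flag: if the gap between the best- and worst-treated groups
# exceeds a chosen threshold, the data needs review before it trains a model.
if max(rates.values()) - min(rates.values()) > 0.2:
    print("Representation gap detected: review the training data before use.")
```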

Ethical Dilemma #4: Regulatory and Compliance Issues

As AI systems become more integrated into contracting processes, navigating the maze of regulatory and compliance requirements is essential. The evolving nature of this landscape requires continuous monitoring and adaptation by organisations. Compliance is not just about adhering to the letter of the law but also understanding the spirit behind these regulations to foster trust and ethical integrity.

Solution

While AI can generate an excellent “first cut” of terms and conditions, it cannot be fully trusted to handle every nuance, since different countries and jurisdictions have different laws and interpret clauses differently. Appropriate legal experts should always review contracts to ensure accuracy and compliance with applicable laws.

Although AI systems can be trained on specific contract models and terms, nothing replaces the final review and sign-off by a human.

Be Safe

Addressing these ethical implications requires a comprehensive approach that includes robust data management practices, transparent decision-making processes, and ongoing human oversight. By tackling these challenges, organisations can harness the benefits of AI in contracting while mitigating potential ethical risks.