
Artificial Intelligence: Ethics

This guide provides a resource for legal educators and students on the ethical and responsible use of AI in legal education.

Starting Point

No AI tool should serve as the sole basis for a legal decision; significant human oversight is always required. Attorneys have an ethical obligation to be competent, which now includes understanding the capabilities and limitations of AI. Attorneys must integrate their own expertise with AI-generated insights rather than surrender critical judgment to AI systems. This combination ensures that attorneys uphold their duty to provide the best possible representation while navigating the complexities of modern technology.

ABA Formal Opinion 512

ABA Formal Opinion 512 provides important ethical guidance for attorneys using GAI tools in their practices. It highlights the necessity for lawyers to maintain traditional ethical obligations, such as competence, client confidentiality, and truthful communication, even amid rapid technological change. The opinion notes GAI's growing role in areas like electronic discovery, contract analysis, and legal research, and it emphasizes that lawyers must have a reasonable understanding of a GAI tool's capabilities and limitations, though it does not require them to become experts in the technology. Ultimately, the opinion offers a framework for leveraging GAI's benefits while preserving the core principles of the legal profession.

Federal Rule of Civil Procedure 11

Rule 11 establishes the responsibilities of parties and attorneys when filing documents in court. Key points include the following:

  • Every document must be signed by an attorney of record or, if the party is unrepresented, by the party personally, confirming that the submission is not presented for an improper purpose (e.g., harassment or unnecessary delay).
  • Legal claims must be based on existing law or present a valid argument for changing the law.
  • Factual claims must have evidentiary support or be likely to have such support after further investigation.
  • Violations of Rule 11 can result in court-imposed sanctions, including fines or orders to pay part or all of reasonable attorney fees incurred due to the violation.

Many state courts have rules analogous to Federal Rule of Civil Procedure 11. In Illinois, Supreme Court Rule 137 is virtually identical to Federal Rule 11, and Illinois courts may seek guidance from federal courts' interpretation of Rule 11 when imposing sanctions under Rule 137.

General Points

Ethical Overview

The use of Generative Artificial Intelligence (GAI) in legal research and court filings raises important ethical concerns for attorneys, particularly regarding accuracy and reliability. GAI has the potential to generate inaccurate information, which increases the burden on attorneys to verify these outputs to avoid the risk of malpractice. 

Moreover, inputting sensitive client data into GAI tools presents significant risks to confidentiality and data security, especially with cloud-based systems. There are also challenges related to attribution and originality, as it can be difficult to identify the sources of generated content and to assess whether that content is original.

Additionally, attorneys must ensure proper oversight of AI-assisted work and maintain transparency with both the court and their clients about the use of GAI in legal processes.

Court Guidelines

Many state courts are currently exploring the development of formal AI policies, reflecting a growing interest in the technology's impact on the judicial system. These policies primarily focus on ethical considerations, ensuring that AI systems used in courts are fair, transparent, and accountable. Additionally, data privacy and security measures are being emphasized to protect sensitive information managed by AI tools. It's also vital that human oversight and judicial discretion are maintained in AI-assisted decision-making processes. Ultimately, building and sustaining public trust in the use of AI in the justice system remains a key priority.

With its formal policy on AI, the Illinois Supreme Court affirmed its commitment to maintaining high ethical standards while embracing advancements in AI. The Court recognized that AI offers opportunities to streamline processes and improve access to justice, but it remains vigilant about concerns related to accuracy, fairness, and the integrity of legal documents. In Illinois, all parties involved in the court process may use AI tools provided they adhere to legal and ethical guidelines and review any AI-generated content for accuracy. The Court will also prioritize privacy and public trust while encouraging the development of technologies that enhance service and promote equitable access to justice.

More on AI Court Rules 

Biased Outputs

ABA Formal Op. 512 observes: "If the quality, breadth, and sources of the underlying data on which a GAI tool is trained are limited or outdated or reflect biased content, the tool might produce unreliable, incomplete, or discriminatory results."

When engaging in AI literacy analysis, one of the key inquiries is whether the information is presented objectively, without bias or agenda. An effective method for evaluating sources of information is to assess their currency, relevance, authority, accuracy, and purpose. For more general information on AI literacy, check out Cultivating Critical Thinking in a Janky AI Era.

Interacting with AI Apps

Some GAI services may use your input data to train their models, which could mean your personal information or creative content is being used in ways you did not intend. For example, ChatGPT and Bard may use your prompts and the subsequent responses to refine their understanding of language and improve their ability to generate coherent and relevant text. Copilot purportedly does not use your individual code or prompts to further train its models. Gemini learns from a vast codebase; however, your individual interactions purportedly are not used for training purposes. 

It is always wise to exercise caution and be aware of the data you share with any AI service.

DeepSeek

DeepSeek's handling of user data raises several concerns. Its privacy policy states that it collects personal information such as usernames, email addresses, and phone numbers. Importantly, it also gathers all user inputs: everything typed or uploaded, including chat logs, prompts, and shared files. Additionally, DeepSeek collects device and network information such as IP addresses, device models, and operating systems, as well as usage data detailing the features used and actions taken. Like many online services, it employs cookies and trackers and also collects third-party data from linked accounts and advertising partners. While DeepSeek's privacy policy thoroughly outlines the types of data collected, it does not clearly explain how that data is used for model training.

 DeepSeek Privacy Policy