
Artificial Intelligence: Ethics

This guide provides a resource for legal educators and students on the ethical and responsible use of AI in legal education.

Ethical Overview

Starting Point

No AI tool should be used as the sole basis for legal decisions. Attorneys have a professional and ethical duty to exercise independent judgment and ensure competence, which now includes understanding the capabilities and limitations of AI technologies.

Generative AI (GAI) can support legal work, but it cannot replace the critical thinking, contextual analysis, and ethical reasoning that attorneys bring to their practice. Relying solely on AI is not an option. Instead, legal professionals must integrate AI insights with their own expertise to deliver informed, responsible, and client-centered representation.

This balanced approach ensures that attorneys uphold their duty of care while navigating the evolving landscape of legal technology.

Ethical Approach

The use of GAI in legal research, writing, and court filings introduces critical ethical considerations for legal professionals. While these tools offer efficiency and innovation, they also present risks that require careful oversight.

Accuracy and Reliability: GAI can produce outputs that appear authoritative but may contain factual or legal inaccuracies. Attorneys must rigorously verify all AI-generated content to avoid errors that could lead to malpractice or misrepresentation in legal proceedings.

Confidentiality and Data Security: Inputting sensitive client information into cloud-based AI systems raises serious concerns about confidentiality and data protection. Legal professionals must ensure compliance with privacy laws and ethical obligations when using these tools.

Attribution and Originality: Determining the source and originality of AI-generated content can be challenging. Attorneys must be cautious about plagiarism, proper citation, and the authenticity of legal arguments derived from AI tools.

Oversight and Transparency: Lawyers are ethically obligated to supervise AI-assisted work and remain transparent with clients and courts about the use of such technologies. This includes disclosing when AI has been used in drafting or research and ensuring that final outputs reflect professional judgment.

Addressing Bias and Accountability: AI systems may reflect or amplify biases present in training data. Legal professionals must be vigilant in identifying and mitigating algorithmic bias, and remain accountable for decisions informed by AI.

Points to Consider

AI Learns from Data: Many AI systems are trained on data gathered through scraping, the automated collection of large amounts of publicly available information from the web. Because that collection is largely indiscriminate, AI may not always distinguish between reliable and unreliable sources, as the sketch below illustrates.
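To make this concrete, here is a minimal, hypothetical sketch (in Python, using only the standard library) of the text-extraction step at the heart of scraping pipelines. The pages and their contents are invented for illustration; real pipelines crawl millions of pages, but the key point is the same: every page is reduced to undifferentiated training text.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the visible text of a page and discards the markup."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

def to_training_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

# Hypothetical pages: one authoritative source, one unreliable one.
pages = {
    "court-opinion": "<p>The motion is denied. See Fed. R. Civ. P. 11.</p>",
    "anonymous-blog": "<p>Rule 11 never applies if you file on a Friday.</p>",
}

# Both pages are ingested identically; the corpus keeps no record of
# which source was reliable, so a model learns from both without distinction.
corpus = [to_training_text(html) for html in pages.values()]
print(corpus)
```

Nothing in this process records authority or accuracy, which is one reason AI outputs must always be verified against authoritative sources.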

User Inputs May Be Stored: Some AI tools retain and learn from user inputs. This raises concerns about data privacy, especially when using cloud-based or publicly accessible platforms.

Confidentiality Is Paramount: The attorney-client relationship is protected by strict confidentiality and privilege. Never input privileged or sensitive client information into an AI tool, especially those not designed for secure legal use.

Treat AI Like a Public Search Engine: If you wouldn’t type confidential information into a search bar, you shouldn’t share it with an open-access chatbot. Always assume that anything entered into a public AI tool could be stored or accessed.
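One practical precaution that follows from these points is to screen prompts before they ever reach a chatbot. Below is a minimal, hypothetical Python sketch of pattern-based redaction. It is illustrative only: simple patterns catch formatted identifiers like Social Security numbers, but they miss names, facts, and context, so redaction of this kind is no substitute for keeping privileged material out of public tools entirely.

```python
import re

# A few patterns for obviously sensitive strings. Real redaction tools
# use far broader detection; these are illustrative placeholders.
REDACTIONS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    for label, pattern in REDACTIONS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

# The client's name still slips through, showing why pattern matching
# alone cannot protect privileged information.
print(redact("Client John Doe, SSN 123-45-6789, is at john@example.com."))
```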

What Should Lawyers Do about AI Right Now?
Check out this article for practical steps attorneys can take today.

Federal Rule of Civil Procedure 11

Rule 11 establishes the responsibilities of parties and attorneys when filing documents in court. Key points include the following:

  • Every document must be signed by an attorney or the party themselves, confirming the submission is not for improper purposes (e.g., intimidation or unnecessary delays).
  • Legal claims must be based on existing law or present a valid argument for changing the law.
  • Factual claims must have evidentiary support or be likely to have such support after further investigation.
  • Violations of Rule 11 can result in court-imposed sanctions, including fines or orders to pay part or all of reasonable attorney fees incurred due to the violation.

Many state courts have rules analogous to Federal Rule of Civil Procedure 11. In Illinois, Supreme Court Rule 137 is virtually identical to Federal Rule 11, and Illinois courts may seek guidance from federal courts' interpretation of Rule 11 when imposing sanctions under Rule 137.

ABA Formal Opinion 512

ABA Formal Opinion 512 provides important ethical guidance for attorneys using GAI tools in their practices. It highlights the necessity for lawyers to maintain traditional ethical obligations, such as competence, client confidentiality, and truthful communication, amid technological advancements. The opinion notes GAI's rising role in areas like electronic discovery, contract analysis, and legal research, and it emphasizes that lawyers must reasonably understand the capabilities and limitations of the GAI tools they use, though they need not become experts in the technology. Ultimately, the opinion offers a framework for leveraging GAI's benefits while preserving the core principles of the legal profession.

[Infographic] Two columns connected by a shared theme of ethical responsibility: 'Federal Rule of Civil Procedure 11' (attorney certification, reasonable inquiry, no frivolous claims, accountability for court filings) and 'ABA Formal Opinion 512' (competence with AI tools, client confidentiality, supervision of AI outputs, transparency in use).

Many state courts are currently exploring the development of formal AI policies, reflecting a growing interest in the technology's impact on the judicial system. These policies primarily focus on ethical considerations, ensuring that AI systems used in courts are fair, transparent, and accountable. Additionally, data privacy and security measures are being emphasized to protect sensitive information managed by AI tools. It's also vital that human oversight and judicial discretion are maintained in AI-assisted decision-making processes. Ultimately, building and sustaining public trust in the use of AI in the justice system remains a key priority.

With its formal policy on AI, the Illinois Supreme Court affirmed its commitment to maintaining high ethical standards while embracing advancements in AI. The policy recognizes opportunities to streamline processes and improve access to justice, while remaining vigilant about accuracy, fairness, and the integrity of legal documents. In Illinois, all parties involved in the court process may use AI tools, provided they adhere to legal and ethical guidelines and review any AI-generated content for accuracy. The Court will also prioritize privacy and public trust while encouraging the development of technologies that enhance service and promote equitable access to justice.

More on AI Court Rules 

Biased Outputs

ABA Formal Op. 512 observes: "If the quality, breadth, and sources of the underlying data on which a GAI tool is trained are limited or outdated or reflect biased content, the tool might produce unreliable, incomplete, or discriminatory results."

As AI becomes more integrated into legal research and decision-making, it is important to recognize that AI systems can produce biased outputs. These biases often stem from the data used to train the models. If the training data reflects historical inequalities, stereotypes, or imbalances, the AI may unintentionally replicate or even amplify those patterns in its responses.

Bias can also arise from the algorithms themselves. Even when trained on seemingly neutral data, the design of an AI system—such as how it prioritizes certain types of information—can introduce unintended skew. Additionally, the way users frame their prompts can influence the AI’s output, reinforcing existing assumptions or perspectives.
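As a toy illustration of design-driven skew (not a depiction of any real system), consider a retrieval ranker that weights engagement over authority. The documents and weights below are invented; the point is that the ranking outcome follows from a design choice, not from anything in the data itself.

```python
# Hypothetical documents with made-up authority and engagement scores.
documents = [
    {"title": "Peer-reviewed analysis of sanctions law", "authority": 0.9, "engagement": 0.2},
    {"title": "Viral hot take on sanctions law", "authority": 0.2, "engagement": 0.9},
]

def score(doc, w_engagement=0.8, w_authority=0.2):
    # The weights are a design decision; changing them reorders results
    # even though the underlying documents are unchanged.
    return w_engagement * doc["engagement"] + w_authority * doc["authority"]

for doc in sorted(documents, key=score, reverse=True):
    print(f"{score(doc):.2f}  {doc['title']}")
```

With these weights, the viral piece outranks the peer-reviewed analysis; flipping the weights reverses the order. Neither ranking is "neutral."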

In the legal field, biased AI output can have serious consequences. It may lead to unequal treatment of individuals or groups, undermine due process, or result in flawed legal reasoning. Attorneys must remain vigilant, critically evaluating AI-generated content and ensuring it aligns with legal standards and ethical obligations.

To mitigate these risks, legal professionals should verify AI outputs against authoritative sources, use tools that are transparent about their data and design, and stay informed about best practices for identifying and addressing algorithmic bias. A key inquiry in any AI literacy analysis is whether information is presented objectively, without bias or agenda. One effective method for evaluating sources is to assess their currency, relevance, authority, accuracy, and purpose (the CRAAP test). For more general information on AI literacy, check out Cultivating Critical Thinking in a Janky AI Era.

Ultimately, the responsibility for fair and accurate legal work rests with the human user—not the machine.

Interacting with AI Apps

When using Generative AI (GAI) tools, it’s essential to understand how your data may be handled—and the ethical and professional implications of sharing information with these systems.

Data Usage and Training: Some GAI platforms may use user inputs to improve their models. For example, ChatGPT (by OpenAI) and Gemini (by Google) may use prompts and responses from free-tier users to refine their systems. Enterprise and paid versions typically provide data privacy controls that keep user content from being used for model training.

  • Copilot (Microsoft): As of 2025, Copilot—integrated into Microsoft 365—does not use individual user prompts or content to train its models. It is designed with enterprise-grade privacy and compliance in mind. Privacy at Microsoft.
  • Gemini (Google): While Gemini is trained on a vast dataset, Google states that individual interactions—especially in enterprise settings—are not used for training unless explicitly permitted by the user. Gemini Apps Privacy Hub.
  • Grok (xAI): Grok is trained using large-scale reinforcement learning and real-time search integration. While xAI has significantly expanded its training data sources, there is no public indication that individual user prompts from Grok are used for training without consent. xAI Privacy Policy.
  • DeepSeek (High-Flyer): DeepSeek uses a novel training approach that relies heavily on synthetic data and self-supervised learning, minimizing the need for human-labeled data. While it emphasizes efficiency and cost-effectiveness, there is no clear public indication that individual user inputs are used for training without consent. As with other tools, users should avoid sharing sensitive or privileged information. DeepSeek Privacy Policy.

Confidentiality Reminder: Never input sensitive or privileged client information into open-access AI tools. Treat these platforms like public search engines—if you wouldn’t type it into a search bar, don’t share it with an AI chatbot.

Use Secure, Legal-Specific Tools: When working with confidential or case-related material, use AI tools specifically designed for legal practice that comply with professional standards for privacy and data security.