Artificial Intelligence: Best Practices

This guide provides a resource for legal educators and students on the ethical and responsible use of AI in legal education.

Best Practices

Understand Capabilities & Limitations

Capabilities

  • AI tools can rapidly analyze vast volumes of legal data, identify relevant case law, streamline contract review, and support predictive analytics.
  • They are increasingly used for automating routine tasks such as e-discovery, document drafting, and legal research—freeing attorneys to focus on higher-order reasoning and client strategy.

Limitations

  • Despite their power, AI tools can produce inaccurate or misleading results—commonly referred to as “hallucinations.”
  • They may also reflect biases present in their training data. These limitations underscore the need for human oversight and critical evaluation of AI-generated content.

Ensure Data Quality

Accuracy & Relevance

  • The effectiveness of AI depends on the quality of the data it processes.
  • Legal professionals must ensure that the information used is accurate, current, and contextually appropriate.

Bias Mitigation

  • Bias in AI outputs can arise from skewed or incomplete training data.
  • To reduce this risk, use diverse data sources, conduct regular audits, and implement bias detection protocols.

Maintain Ethical Standards

Confidentiality

  • Never input privileged or sensitive client information into open-access AI tools.
  • Use only platforms that comply with legal confidentiality standards and offer secure data handling.

Transparency

  • Be open with clients and courts about the use of AI in legal work.
  • Clearly explain how AI tools are used, their benefits, and their limitations.

Integrate with Human Expertise

Human Oversight

  • AI should support—not replace—legal judgment.
  • Attorneys must review AI outputs for accuracy, relevance, and legal soundness, ensuring that final decisions reflect professional standards.

Continuous Learning

  • Stay informed about evolving AI capabilities, risks, and regulations.
  • Provide ongoing training for legal professionals to use AI tools effectively and ethically.

Evaluate & Select Effective Tools

  • Choose AI platforms specifically designed for legal use, with a proven track record of accuracy, transparency, and ethical compliance.
  • Evaluate vendors based on their legal expertise, support services, and data governance practices.

Monitor & Audit Performance

  • Regularly assess AI tools to ensure they perform as expected and do not introduce errors or bias.
  • Implement feedback loops to refine tool performance and adapt to changing legal needs.

[Infographic: a small robot figure atop a stack of books, titled "AI Best Practices," summarizing the six practices above: Understand Capabilities & Limitations; Ensure Data Quality; Maintain Ethical Standards; Integrate with Human Expertise; Evaluate & Select Effective Tools; and Monitor & Audit Performance.]

AI Literacy

AI literacy is a subset of information literacy, which is the ability to effectively find, evaluate, and use information for personal and professional purposes.

AI literacy has been defined as the ability to recognize, grasp, use, and critically assess artificial intelligence technologies and their impacts.

The "critically assess" element is key to identifying biases and errors, understanding limitations, evaluating ethical implications, and promoting transparency and accountability.  

Human-Centered Approach

The United Nations Educational, Scientific and Cultural Organization (UNESCO) recommends a human-centered approach to AI, emphasizing the importance of using AI to develop human capabilities for an inclusive, just, and sustainable future. This approach is grounded in human rights principles, aiming to protect human dignity and cultural diversity. It calls for regulations that ensure human agency, transparency, and public accountability.

The Beijing Consensus on AI and Education specifies that AI in education should boost human capacities for sustainable development and promote effective human-machine collaboration. It advocates for equitable AI access to support marginalized communities and uphold linguistic and cultural diversity, recommending comprehensive policy planning involving various stakeholders.

UNESCO provides further guidance for policymakers to detail the human-centered approach in education, suggesting policies for inclusive learning access, personalized learning, improved data management, monitoring of learning processes, and fostering ethical AI use skills.

Guidance for Generative AI in Education and Research, UNESCO 18 (2023).


Regulatory Initiatives

UNESCO recommends that all countries establish effective regulations for generative AI (GAI) to ensure its positive impact on the development of education and other areas. Specific actions should be taken by (1) governmental regulatory agencies, (2) providers of AI-enabled tools, (3) institutional users, and (4) individual users. While many elements in the framework have a global scope, they should also be adapted to the local context, considering each country's educational systems and existing regulatory frameworks.

See Guidance for Generative AI in Education and Research, UNESCO 20–23 (2023).

Suggested Usage for AI Tools

Generative AI can be powerful, effective, and ethical if used properly. An attorney always retains ethical obligations that cannot be circumvented. The following non-exhaustive list highlights productive uses for AI tools:

  • Editing
  • Organizing
  • Brainstorming
  • Email drafting
  • Scheduling
  • Note-taking
  • Summarizing

Attorneys must always review AI-generated outputs for currency, relevance, authority, accuracy, and purpose.

Always Verify

Attorneys who depend on AI products for tasks such as research, drafting, communication, and client intake face risks similar to those posed by inexperienced or overconfident non-attorney assistants. Just as an attorney would treat an initial draft from an assistant, paralegal, or law student as the starting point for their own final product, they should review and revise the output generated by AI. It is essential that attorneys verify the accuracy and completeness of AI work. Relying on an AI tool for legal research without proper fact-checking would violate ethical standards and Rule 11. See Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023).

[Infographic: "Best Practices for Using AI in Law," presenting five key guidelines: (1) Understand Capabilities & Limitations; (2) Ensure Data Quality; (3) Integrate with Human Expertise; (4) Maintain Ethical Standards; (5) Evaluate & Select Effective Tools.]