Artificial Intelligence: Best Practices

This guide provides a resource for legal educators and students on the ethical and responsible use of AI in legal education.

AI Literacy

AI literacy is a subset of information literacy, which is the ability to effectively find, evaluate, and use information for personal and professional purposes.

AI literacy has been defined as the ability to recognize, grasp, use, and critically assess artificial intelligence technologies and their impacts.

The "critically assess" element is key to identifying biases and errors, understanding limitations, evaluating ethical implications, and promoting transparency and accountability.  

Human-Centered Approach

The United Nations Educational, Scientific and Cultural Organization (UNESCO) recommends a human-centered approach to AI, emphasizing the importance of using AI to develop human capabilities for an inclusive, just, and sustainable future. This approach is grounded in human rights principles, aiming to protect human dignity and cultural diversity, and it calls for regulations that ensure human agency, transparency, and public accountability.

The Beijing Consensus on AI and Education specifies that AI in education should boost human capacities for sustainable development and promote effective human-machine collaboration. It advocates for equitable AI access to support marginalized communities and uphold linguistic and cultural diversity, recommending comprehensive policy planning involving various stakeholders. UNESCO provides further guidance for policymakers to detail the human-centered approach in education, suggesting policies for inclusive learning access, personalized learning, improved data management, monitoring of learning processes, and fostering the skills needed for ethical AI use.

See Guidance for Generative AI in Education and Research, UNESCO 18 (2023).

Key Considerations

Understand the Capabilities & Limitations

  • Capabilities
    • AI can efficiently analyze large volumes of data, pinpoint pertinent case law, and forecast legal outcomes by drawing on historical data.
    • It can also automate repetitive tasks like contract analysis and e-discovery.
  • Limitations
    • AI tools can produce inaccurate results or "hallucinate" information.
    • AI tools may also be biased if trained on biased data.
    • These limitations make it essential to carefully evaluate and verify AI-generated outputs before relying on them.

Ensure Data Quality

  • Accurate Data
    • The effectiveness of AI tools is directly influenced by the quality of the data on which they are trained.
    • Ensure the data is accurate, current, and complete.
  • Bias Mitigation
    • Be mindful of possible biases in the data and take steps to mitigate their impact.
    • This includes using diverse data sets and conducting regular audits of AI outputs to identify and address bias.

Maintain Ethical Standards

  • Confidentiality
    • Ensure that AI tools comply with legal confidentiality requirements.
    • Avoid using AI tools that store or process sensitive client information in ways that could compromise confidentiality.
    • Always confirm that the AI tool does not receive firm data, client data, or personally identifiable information (PII).
  • Transparency
    • Prioritize transparency with clients about the use of AI in their cases.
    • Clearly explain how AI tools are used and the benefits they provide.

Integrate with Human Expertise

  • Human Oversight
    • AI should augment human judgment, not replace it.
    • Attorneys must always review AI-generated outputs for currency, relevance, authority, accuracy, and purpose.
  • Continuous Learning
    • Stay updated on the latest AI developments.
    • Continuously train legal staff on how to effectively use AI tools.

Evaluate & Select Effective Tools

  • Choose AI tools that are specifically designed for legal research and have a proven track record of reliability and accuracy.
  • Evaluate AI vendors based on their expertise, support services, and commitment to ethical AI practices.

Monitor & Audit Performance

  • Regularly audit AI tools to ensure they are performing as expected and not introducing errors or biases.
  • Implement feedback mechanisms to continuously improve AI tools based on user experiences and outcomes.

Regulatory Initiatives

UNESCO recommends that all countries establish effective regulations for generative AI (GAI) to ensure its positive impact on the development of education and other areas. Specific actions should be taken by (1) governmental regulatory agencies, (2) providers of AI-enabled tools, (3) institutional users, and (4) individual users. While many elements of the framework have a global scope, they should also be adapted to the local context, taking into account each country's educational systems and existing regulatory frameworks.

See Guidance for Generative AI in Education and Research, UNESCO 20–23 (2023).

Suggested Uses for AI Tools

In particular, GAI can be powerful, effective, and ethical if used properly. An attorney's ethical obligations, however, can never be circumvented. The following is a non-exhaustive list of productive uses for AI tools:

  • Editing
  • Organizing
  • Brainstorming
  • Email drafting
  • Scheduling
  • Note-taking
  • Summarizing

Attorneys must always review AI-generated outputs for currency, relevance, authority, accuracy, and purpose.

Always Verify

Attorneys who depend on AI products for tasks such as research, drafting, communication, and client intake face risks similar to those posed by relying on inexperienced or overly confident non-attorney assistants. Just as an attorney would review an initial draft from an assistant, paralegal, or law student before producing a final product, the attorney must review the output generated by AI. It is essential for attorneys to verify the accuracy and completeness of the AI's work. Relying on an AI tool for legal research without proper fact-checking can violate ethical standards and Rule 11. See Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023).