AI literacy is a subset of information literacy, which is the ability to effectively find, evaluate, and use information for personal and professional purposes.
AI literacy has been defined as the ability to recognize, understand, use, and critically assess artificial intelligence technologies and their impacts.
The "critically assess" element is key to identifying biases and errors, understanding limitations, evaluating ethical implications, and promoting transparency and accountability.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) recommends a human-centered approach to AI, emphasizing the importance of using AI to develop human capabilities for an inclusive, just, and sustainable future. This approach is grounded in human rights principles, aiming to protect human dignity and cultural diversity, and it calls for regulations that ensure human agency, transparency, and public accountability.

The Beijing Consensus on AI and Education specifies that AI in education should boost human capacities for sustainable development and promote effective human-machine collaboration. It advocates for equitable access to AI to support marginalized communities and uphold linguistic and cultural diversity, recommending comprehensive policy planning that involves a range of stakeholders.

UNESCO offers further guidance to help policymakers implement the human-centered approach in education, suggesting policies for inclusive access to learning, personalized learning, improved data management, monitoring of learning processes, and fostering skills for the ethical use of AI.
Guidance for Generative AI in Education and Research, UNESCO 18 (2023).
Understand the Capabilities & Limitations
Ensure Data Quality
Maintain Ethical Standards
Integrate with Human Expertise
Evaluate & Select Effective Tools
Monitor & Audit Performance
UNESCO recommends that all countries establish effective regulations for GAI to ensure its positive impact on the development of education and other areas. Specific actions should be taken by (1) governmental regulatory agencies, (2) providers of AI-enabled tools, (3) institutional users, and (4) individual users. While many elements in the framework have a global scope, they should also be adapted to the local context, considering each country's educational systems and existing regulatory frameworks.
See Guidance for Generative AI in Education and Research, UNESCO 20–23 (2023).
In particular, GAI can be powerful, effective, and ethical if used properly. Attorneys will always have ethical obligations they cannot circumvent. Below is a non-exhaustive list of productive uses for AI tools:
Attorneys must always review AI-generated outputs for currency, relevance, authority, accuracy, and purpose.
Attorneys who depend on AI products for tasks such as research, drafting, communication, and client intake face risks similar to those faced by attorneys who have relied on inexperienced or overconfident non-attorney assistants. Just as an attorney would review and revise an initial draft from an assistant, paralegal, or law student before producing a final product, they must review the output generated by AI. It is essential for attorneys to verify the accuracy and completeness of AI-generated work. Relying on an AI tool for legal research without proper fact-checking would violate ethical standards and Federal Rule of Civil Procedure 11. See Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023).