Artificial Intelligence: AI Policy

This guide provides a resource for legal educators and students on the ethical and responsible use of AI in legal education.

AI in Action

Defining Artificial Intelligence

The concept of AI lacks a universally accepted definition, although several key definitions have been created through a provision in the U.S. Code (U.S.C.), ABA Formal Opinion 512, three Presidential Executive Orders, and a publication by the Department of Commerce, as well as guidelines promulgated by global multilateral organizations and foreign jurisdictions.

Relevance of Defining AI for Legal Education

The definition of AI is increasingly relevant in legal education due to rapid advancements in AI technology and its growing influence on law and society. Understanding how AI is defined is essential for legal professionals navigating the complex legal landscape surrounding the technology. With a firm grasp of these definitions, educators, law students, and legal professionals can contribute effectively to developing legal frameworks, addressing the ethical implications of AI-driven decisions, and leveraging AI to improve legal practice.

National Artificial Intelligence Initiative

The National Artificial Intelligence Initiative, 15 U.S.C. § 9401(3), defines AI as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments." The statute further provides that AI systems use machine- and human-based inputs to (A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action.

ABA Formal Opinion 512

According to ABA Standing Comm. on Ethics & Pro. Resp., Formal Op. 512 (2024), AI involves computer technology, software, and systems that can perform tasks that traditionally require human intelligence. This refers to the ability to perform tasks commonly associated with intelligent beings. The term is often used to describe the development of systems that seem to use or imitate human-like intellectual processes, such as reasoning, understanding meanings, generalizing, and learning from past experiences.

The Role of Executive Orders

Recent Executive Orders reflect a broader legal and policy debate over how best to balance innovation with accountability. For legal professionals and scholars, understanding how AI is defined and regulated at the federal level is essential for navigating emerging issues in compliance, liability, intellectual property, and civil liberties.

Two major executive orders—Executive Order 14110 (October 2023) and Executive Order 14179 (January 2025)—represent contrasting approaches to AI governance.

Executive Order 14110, issued by the Biden administration, emphasized safety, equity, civil rights, and consumer protection. It directed over 50 federal entities to undertake more than 100 specific actions, including developing standards for AI safety testing, mitigating algorithmic bias, and safeguarding privacy. It also acknowledged AI’s potential to disrupt labor markets and called for workforce support and training.

Executive Order 14179, issued by the Trump administration, revoked Executive Order 14110 and shifted the focus toward removing regulatory barriers to AI innovation. It framed AI development as a matter of national competitiveness and economic strength, prioritizing policies that promote American leadership in AI infrastructure and reduce bureaucratic constraints.

In July 2025, the Trump administration introduced the AI Action Plan, a comprehensive strategy aimed at reasserting American leadership in artificial intelligence. Framed as a response to global competition—particularly with China—the plan emphasizes deregulation, infrastructure expansion, and a commitment to ideological neutrality in AI systems.

The AI Action Plan is organized around three pillars: accelerating AI innovation, building American AI infrastructure, and leading in international AI diplomacy and security.

The plan advocates for the removal of federal regulations that may hinder innovation and encourages the development and use of open-source AI models to foster transparency and accessibility. A central tenet is the promotion of "Unbiased AI Principles," which call for truth-seeking and the elimination of politically motivated content filtering in federally procured AI systems. These principles reflect a broader push to ensure that AI technologies remain free from what the administration describes as engineered social agendas.

To support the physical and technical demands of AI development, the plan proposes streamlining the permitting process for critical infrastructure, including data centers, semiconductor fabrication facilities, and energy systems. It also includes workforce development initiatives focused on skilled trades essential to building and maintaining AI-related infrastructure. Enhancing cybersecurity and establishing an AI incident response framework are also key components, aimed at bolstering national resilience against emerging threats.

Internationally, the plan positions AI as a strategic tool of diplomacy and security. It outlines efforts to export American AI technologies to allied nations, strengthen export controls to limit adversarial access, and promote global standards that align with U.S. values and interests.

Although the AI Action Plan outlines more than 90 federal actions, many of its proposals are advisory in nature and lack detailed implementation mechanisms. Nevertheless, it marks a significant shift in federal AI policy, prioritizing economic competitiveness and national security over regulatory oversight and social equity.

The proposal is filled with bold statements and patriotic themes, but on closer examination it resembles a familiar approach: deregulate, privatize, and trust the market to resolve everything. There is much talk of "winning the AI race," but the plan offers little clarity on how to achieve that, given its dearth of timelines, funding, and accountability measures. It feels reminiscent of a tech startup pitch from the '90s: ambitious but short on execution.

Graphic: "Key Points of the AI Action Plan," summarizing the Trump administration's 2025 strategy under the headings "Accelerating Innovation," "Building AI Infrastructure," and "Leading in Global AI Diplomacy," with brief descriptions of the related policy goals of deregulation, infrastructure expansion, and ideological neutrality.

The European Union's AI Act

The European Union's AI Act represents the first comprehensive legal framework for regulating artificial intelligence globally. Its primary goal is to ensure that AI systems used within the EU are safe, transparent, and respectful of fundamental rights, while also promoting innovation and competitiveness across member states.

At the core of the Act is a risk-based approach to regulation. AI systems are categorized based on the level of risk they pose to individuals and society. Systems that are considered to present an unacceptable risk—such as government-controlled social scoring and biometric surveillance in public spaces without oversight—are banned outright. High-risk systems, which include applications in healthcare, education, employment, and law enforcement, must adhere to strict requirements. These requirements include conducting conformity assessments, registering in an EU database, and implementing robust documentation and oversight mechanisms. Limited-risk systems, such as chatbots or deepfake generators, are required to meet transparency obligations to ensure users are aware they are interacting with AI. Finally, minimal-risk systems, like spam filters or algorithms used in video games, are largely exempt from regulation.
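The four risk tiers described above can be sketched as a simple lookup table. This is a toy illustration of the Act's structure drawn from this summary only, not a compliance tool or legal advice; the tier names, example systems, and obligation summaries are paraphrased:

```python
# Toy sketch of the EU AI Act's four risk tiers, as summarized above.
# Tier names, examples, and obligations are paraphrased for illustration.

RISK_TIERS = {
    "unacceptable": {
        "examples": ["government-controlled social scoring",
                     "unsupervised public biometric surveillance"],
        "obligation": "banned outright",
    },
    "high": {
        "examples": ["healthcare", "education", "employment", "law enforcement"],
        "obligation": ("conformity assessment, EU database registration, "
                       "documentation and oversight mechanisms"),
    },
    "limited": {
        "examples": ["chatbots", "deepfake generators"],
        "obligation": "transparency: users must know they are interacting with AI",
    },
    "minimal": {
        "examples": ["spam filters", "video-game algorithms"],
        "obligation": "largely exempt from regulation",
    },
}

def obligation_for(tier: str) -> str:
    """Look up the regulatory obligation for a given risk tier."""
    return RISK_TIERS[tier]["obligation"]

print(obligation_for("unacceptable"))  # banned outright
```

The point of the sketch is the Act's design choice: obligations attach to the risk category of the deployment context, not to the underlying technology itself.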

The Act also addresses general-purpose AI models, including large language models. Developers of these systems are required to publish summaries of training data, comply with copyright laws, and implement safeguards if their models present systemic risks. These provisions reflect the EU’s intent to regulate not just specific applications, but also the foundational technologies that power them.

Importantly, the EU AI Act applies extraterritorially. Any provider or deployer whose AI system is used within the EU must comply with its provisions, regardless of where they are based. Enforcement is phased, beginning with bans on unacceptable-risk systems and general obligations in early 2025, followed by requirements for general-purpose AI and high-risk systems through 2026 and 2027.

Violations of the Act can result in substantial penalties, with fines reaching up to €35 million or 7% of global turnover, depending on the severity of the breach. As such, the EU AI Act sets a global benchmark for AI governance, balancing regulatory oversight with a commitment to technological progress.

Other International Resources

In its Guidelines and Regulations to Provide Insights on Public Policies to Ensure AI’s Beneficial Use as a Professional Tool, the International Bar Association (IBA) provides detailed information on the use of AI (and varying AI definitions) in the main multilateral organizations and several jurisdictions worldwide.

The Law Library of Congress has published a comprehensive report (Innovative Technology in Legislatures in Selected Countries) that examines how legislative bodies are adopting and utilizing innovative technological infrastructures, including AI tools, to enhance their parliamentary processes, services, and functions. Notably, the report reveals that few of the surveyed bodies reference a clear definition of AI.