
Artificial Intelligence: Introduction

This guide provides a resource for legal educators and students on the ethical and responsible use of AI in legal education.

Introduction to Artificial Intelligence

AI is reshaping the legal profession and legal education by transforming traditional practices, enhancing efficiency, and introducing new modes of collaboration. As AI becomes more embedded in legal workflows and classrooms, it is also prompting critical reflection on ethics, equity, and the future of legal practice.

AI tools such as ChatGPT, Claude, Copilot, Gemini, and Harvey, alongside established platforms like Bloomberg, Lexis, and Westlaw, are increasingly integral to legal work. These tools use natural language processing to assist with drafting legal documents, generating briefs, summarizing case law, and conducting legal research. By automating routine and time-consuming tasks, AI allows legal professionals to focus on higher-order reasoning, strategic analysis, and client advocacy.
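For readers curious what such a summarization task looks like programmatically, here is a minimal sketch using the OpenAI Python SDK. The model name, input file, and prompt wording are illustrative assumptions rather than a recommendation of any particular tool, and any output would still require attorney verification against the underlying opinion.

    # A minimal sketch of case-law summarization with a general-purpose
    # model via the OpenAI Python SDK (pip install openai). The model name,
    # input file, and prompt wording are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    with open("opinion.txt") as f:  # hypothetical opinion text
        opinion_text = f.read()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system", "content": "Summarize this judicial opinion: "
             "state the issue, holding, and key reasoning."},
            {"role": "user", "content": opinion_text},
        ],
    )
    print(response.choices[0].message.content)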

In practice, AI enhances collaboration by integrating with productivity software, enabling real-time editing, citation suggestions, and formatting assistance. These capabilities streamline workflows and improve the accuracy and speed of legal writing. AI also strengthens research by rapidly analyzing large volumes of legal texts, identifying relevant precedents, and supporting predictive analytics for litigation and case strategy.

In legal education, AI technologies are transforming how students learn and engage with the law. Interactive tools can simulate courtroom scenarios, client interviews, and legal negotiations, offering students experiential learning opportunities in a low-risk environment. These simulations provide immediate feedback and help students develop practical skills alongside doctrinal knowledge.

As AI becomes more prevalent, legal educators and practitioners must also grapple with its ethical implications. Issues such as algorithmic bias, data privacy, transparency, and accountability are central to responsible AI use. Law schools are increasingly incorporating these topics into their curricula to prepare students to critically assess and ethically deploy AI in their future careers.

Ultimately, the integration of AI into the legal field is not just about efficiency—it’s about shaping a more responsive, equitable, and forward-thinking legal system, where human oversight remains essential to ensure ethical integrity, accountability, and sound judgment.

[Infographic: AI in the Legal Profession (Document Drafting, Legal Research, Predictive Analytics, Workflow Automation) and in Legal Education (Simulated Client Interactions, Interactive Case Studies, Real-Time Feedback, Ethics & AI Literacy).]

Goals

  • Promote Ethical and Responsible Use: Equip legal educators and students with guidance on the ethical, transparent, and responsible use of AI in legal education and practice.
  • Clarify Capabilities and Limitations: Help users understand both the potential benefits and the inherent limitations of AI tools in legal contexts.
  • Support Informed Adoption: Provide practical guidance on selecting appropriate AI tools, training faculty and students, and developing institutional policies and ethical frameworks.
  • Highlight Real-World Applications: Showcase successful implementations of AI in legal education while also learning from challenges, missteps, and evolving best practices.
  • Curate Learning Resources: Offer a curated list of articles, tools, case studies, and other resources for deeper exploration and continued learning.
  • Encourage Ongoing Engagement: Inspire continued exploration, critical thinking, and responsible innovation in the use of AI across legal education.

Privacy

AI tools may store or process the information you provide, so it is important to familiarize yourself with the data security and privacy policies of the specific tool you are using. While using an AI tool, refrain from sharing confidential or proprietary information, personally identifiable information, or any communications protected by attorney-client privilege.

When interacting with AI apps, it is always wise to exercise caution and remain mindful of the data you share with any AI service. For more information, see the Ethics page.
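To make that caution concrete, the following is a minimal sketch of one safeguard: masking obvious identifiers in a prompt before it leaves your machine. The patterns and names here are hypothetical and deliberately simple; pattern matching catches formats, not context, so it supplements rather than replaces professional judgment about what a prompt may contain.

    import re

    # Hypothetical pre-submission scrubber: masks common identifiers before
    # a prompt is sent to any third-party AI service. These patterns are
    # illustrative, not exhaustive; they will not catch names, facts, or
    # privileged content, which still require human review.
    PATTERNS = {
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "PHONE": re.compile(r"\b\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace each matched identifier with a labeled placeholder."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    prompt = "Client Jane Roe (SSN 123-45-6789, jroe@example.com) asks about..."
    print(redact(prompt))
    # Client Jane Roe (SSN [SSN REDACTED], [EMAIL REDACTED]) asks about...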

Digital Access

The UIC Law Library provides digital access to numerous resources through our subscription databases. Current UIC Law students, staff, and faculty can access these resources with their UIC credentials, and can also use those credentials to access the New York Times and the Wall Street Journal.


In "Using ChatGPT or other AI tools? Here’s who can see your chat history," Jared Newman cautions that many AI assistants maintain a comprehensive record of your conversations, accessible to anyone with access to your devices. These dialogues are frequently stored online, sometimes indefinitely, raising the risk of exposure through bugs or security breaches. In some instances, AI providers may also share your chats with human reviewers. Newman evaluated the privacy settings and policies of nine AI tools across six categories: default settings, ability to disable AI training, availability of a private chat mode, ability to share chats, use of chats for targeted ads, and duration of data retention.

Most AI assistants—such as ChatGPT, Gemini, and Copilot—store conversations, sometimes indefinitely. These stored chats may be reviewed by humans to enhance system performance, which raises concerns about confidentiality and data security. While some platforms provide privacy controls, like temporary or incognito modes, others do not offer strong options for limiting data retention or human access.

ChatGPT provides users with the option to disable training on their chats and offers a temporary chat mode. However, deleted conversations may still be retained for up to 30 days. Gemini stores data indefinitely unless users enable auto-deletion, and human reviewers may keep the content for up to three years. Copilot utilizes chat data for both training purposes and targeted advertising, with a retention period of 18 months unless the data is manually deleted.

Different platforms have varying practices regarding user data. Claude only trains on conversations if users choose to opt in, and it retains flagged content for up to seven years. Grok offers a private mode and limits the retention of private chats to 30 days. In contrast, Meta uses chat data for training and advertising purposes, does not offer a private mode, and retains data indefinitely.

Privacy-focused tools like Duck.AI and Proton Lumo stand out by avoiding training and advertising use altogether, offering minimal data retention and no human review.

[Table: privacy settings and policies of nine AI tools (ChatGPT, Gemini, Claude, Copilot, Grok, Meta, Perplexity, Duck.AI, and Proton Lumo) across the six categories above: default data usage, ability to disable AI training, availability of a private chat mode, ability to share chats, use of chats for targeted advertising, and data retention duration.]

As AI becomes more integrated into legal practice and education, the role of human oversight remains essential. In Co-Intelligence: Living and Working with AI, Ethan Mollick emphasizes the importance of the “human in the loop” principle—highlighting that AI should augment, not replace, human expertise.

Attorneys bring critical qualities that AI cannot replicate: nuanced legal reasoning, ethical judgment, and empathy. These human attributes are especially vital in sensitive areas like family law, immigration, or criminal defense, where emotional intelligence and contextual understanding can significantly influence outcomes.

Human oversight also ensures that AI-generated outputs meet legal standards and are interpreted appropriately. Lawyers must critically evaluate AI recommendations, verify their accuracy, and apply them within the broader legal and factual context of each case. This helps prevent errors, misuse, or overreliance on automated tools.

Moreover, ethical concerns such as algorithmic bias, data privacy, and transparency demand human accountability. Legal professionals are responsible for identifying and addressing these issues—tasks that require moral reasoning and professional responsibility beyond the scope of AI.

In short, while AI enhances efficiency and supports legal professionals, it cannot replace the human touch. Maintaining a thoughtful balance between automation and human judgment is key to ensuring that justice is delivered ethically, equitably, and effectively in an AI-augmented legal system.

[Diagram: two columns, 'AI Capabilities' (Legal Research, Document Drafting, Case Summarization) and 'Human Judgment' (Ethical Oversight, Critical Thinking, Empathy, Accountability), connected to emphasize collaboration.]

Disclaimer

UIC Law Library LibGuides are created to assist patrons and researchers. These research guides are not intended as legal advice.

Contact Information

The UIC Law Library encourages you to contact us with AI questions and concerns.
(312) 427-2737, ext. 729 (Reference) ● (312) 427-2737, ext. 710 (Circulation) ● Email: law-library@uic.edu