Guarini School Generative Artificial Intelligence (AI) Policy

Summary of Policy

The Guarini School of Graduate and Advanced Studies recognizes Generative Artificial Intelligence (GenAI) as a transformative tool for research, teaching, and learning. While these technologies offer significant potential to enhance graduate education, their use raises important considerations regarding academic integrity, ethics, intellectual property, and discipline-specific applications. This policy aligns with Dartmouth's undergraduate guidelines on using artificial intelligence, with additional guidelines specifically crafted to support the advanced, often exploratory use of AI by graduate students. Graduate students are expected to use GenAI responsibly, adhering to Dartmouth's Academic Honor Principle and professional ethical standards.

Instructors and advisors have the authority to set expectations regarding the use of GenAI tools in coursework, research, and professional activities. Students must acknowledge or cite the use of GenAI in their work to maintain transparency and integrity.

Affected Parties

Students enrolled in Guarini School programs

Policy Statement

Generative AI refers to technologies that create content, such as text, images, videos, code, or data analyses, using complex models. While these tools can enhance academic and research outcomes, their application in graduate studies requires careful consideration of ethics, intellectual property, and academic objectives.

Use of GenAI

A. GenAI Use in Coursework & Research

Graduate students may use GenAI tools in their coursework or research only if permitted by their program, advisor, or course instructor. Programs, advisors, and course instructors have the authority to allow or disallow GenAI use for specific research projects, courses, or assignments, depending on the academic objectives. If they are unsure whether GenAI tools are permitted for a particular task, graduate students are responsible for discussing AI tool permissions with their advisors or course instructors to ensure compliance with both program and course-specific guidelines.

B. GenAI Use in Research Writing

When writing for publication, graduate students should follow the standards of their disciplines and the specific policies of funding agencies, societies, and journals, some of which prohibit certain uses of AI-assisted technologies for text, image, or video generation, or other forms of expression.

C. GenAI Use in Thesis & Dissertation Writing

When writing their thesis or dissertation for approval by the Guarini School, graduate students are responsible for the intellectual work involved in demonstrating their knowledge as well as their ability to clearly communicate that knowledge. Therefore, GenAI tools should be used sparingly and only for isolated tasks such as proofreading language in select paragraphs, wordsmithing phrases, or ensuring text clarity.

GenAI Acknowledgement

Clear acknowledgment or citation of GenAI contributions is required whenever GenAI is used. For coursework, graduate students are responsible for consulting with their advisors and instructors to determine appropriate citation methods when GenAI tools are used, in accordance with Dartmouth's Academic Honor Principle. Any contributions of GenAI to research outputs, publications, or presentations must be fully disclosed and attributed according to discipline-specific or journal-specific standards. Where GenAI has been used collaboratively within a research group or lab, graduate students must ensure that all parties are credited appropriately and that IP contributions are documented to reflect the use of AI tools. Graduate students are responsible for ensuring any AI-generated or AI-edited material is not plagiarized.

Responsible Use of GenAI in Research

Graduate students with permission to use GenAI in research must ensure its use aligns with discipline-specific research ethics and data integrity standards. For example, when analyzing proprietary datasets, students must not use AI tools in ways that could compromise confidentiality or intellectual property (IP) rights. Graduate students are also responsible for the accuracy of their work. Because Large Language Models (LLMs) are trained on unverified, crowd-sourced information that contains bias, GenAI output may include biased information. GenAI may also generate false information (i.e., "hallucinations"), plagiarized sentences, or material copyrighted by others.

Protection of IP and Research Confidentiality

Graduate students are required to protect their research IP and confidential data when using GenAI tools, particularly when dealing with sensitive or proprietary information. Importantly, original content uploaded to a GenAI tool can become part of the tool's database and be used by others. Therefore, cloud AI tools (e.g., OpenAI) should not be used to process or share unpublished research, confidential project details, or intellectual property. Unauthorized disclosure of sensitive information to cloud AI platforms may compromise future publication and patent rights.


Graduate students should strive to ensure transparency, honesty, and academic rigor in all uses of AI in their work and are responsible for clarifying any ambiguous guidelines with their advisors, instructors, or program directors.

Effective Date

August 14, 2025

Office of Primary Responsibility


Last Reviewed Date

September 3, 2025