Artificial Intelligence (AI)
Generative AI has the potential to facilitate research, revolutionize teaching, and function as a co-intelligence in academic work.
Although generative AI technologies are still new and are rapidly evolving, improving, and adding capabilities, their potential benefits are considerable.
The integration of generative AI into the teaching, research, and service activities of the modern university is also rapidly evolving, as we work to answer a variety of questions central to our mission:
- What impact will generative AI have on our ability to measure student learning and maintain research security and integrity?
- What challenges will generative AI bring to the university in terms of data security and issues of data privacy and data leaks?
The impacts of generative AI on higher education are layered and continually changing as the technology advances. These changes raise questions about student learning and assessment, ethics and bias, business processes and efficiencies, and the effects these new technologies will have on a changing workforce and economic landscape, while also inspiring us with new opportunities and possibilities.
General Guidelines for the Use of Generative AI Tools
- Understand Your Responsibility – Content created by generative AI tools can include factual errors or inaccuracies, fabrications, bias, or other unreliable information. It is your responsibility to ensure the accuracy of what is reported in your work. Review all material produced for accuracy, violations of copyright protections, and plagiarism. Document and be transparent about all uses of Generative AI—such clarity in citation and attribution is a critical aspect of any research product that uses Generative AI. Be sure to comply with academic and research integrity policies: review the Code of Student Life and Faculty Handbook. In the classroom, be clear about expectations for student use of AI in their coursework.
- Guard Confidential Data – Data classified as private or restricted should not be entered into generative AI tools, including non-public research data, per SBHE 1202.3 Data Privacy Policy. When using these tools, do not disclose confidential, sensitive, or personally identifiable information. Do not disclose intellectual property that is not safeguarded. This caution extends not only to sensitive data generated and used as part of a research project but also to protected student data and student information.
- Personal Security Starts with You – Be extra vigilant about potential phishing attacks. Generative AI is rapidly changing the phishing landscape, and AI has made it possible for bad actors to deploy newer, more sophisticated phishing attacks and other attacks on your personal data and identifying information. Report any questionable emails to University Information Technology (UIT) by clicking the Phish Notify button in Outlook/O365 or submitting a helpdesk ticket. Familiarize yourself with UND’s cybersecurity tools and services.
- Reach out to UIT Before Procuring Generative AI Tools – As with any software purchase, submit procurement requests via Jaggaer regardless of the dollar value. If you have already purchased a generative AI tool, please let UIT know. Help us help you to protect yourself and the University. Safeguarding UND is everyone’s responsibility!
As generative AI tools evolve, we expect these existing policies and guidelines to be modified, or new policies to be created, to align with the use of the technology.
More information on generative AI and guidance can be found below.
What is Artificial Intelligence (AI)?
Artificial Intelligence (AI) refers to computer systems that mimic or exceed aspects of human thinking by learning from experience. AI systems rely on algorithms and models that analyze large data sets to identify patterns for prediction or decision-making.
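At a small scale, this idea can be illustrated in code. The sketch below is a minimal, hypothetical example (in Python, assuming the scikit-learn library is installed; the data and variable names are invented for illustration) in which a simple model is fit to labeled examples and the learned pattern is then used to make predictions about new inputs.

```python
# A minimal illustration of "learning patterns for prediction":
# a classifier is fit to labeled examples and then predicts labels
# for inputs it has not seen. The data here is made up for illustration.
from sklearn.linear_model import LogisticRegression

# Toy training data: [hours studied, hours slept] -> passed exam (1) or not (0)
X_train = [[1, 4], [2, 5], [8, 7], [9, 8], [3, 6], [10, 7]]
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)             # "learn" the pattern from the examples

print(model.predict([[7, 8], [1, 5]]))  # predict outcomes for new, unseen inputs
```

Real AI systems apply the same learn-from-examples idea to far larger data sets and far more complex models.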
What is Generative AI?
Generative AI is a form of AI used to create new content (images, text, video, music, etc.) based on user inputs. Generative AI uses large data sets and algorithms to “learn” the patterns, behavior, and characteristics of its training data, and it uses that “learned” information to generate original content.
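As a very rough analogy, the sketch below is a toy Python example (the "training data" here is just two invented sentences) of the same learn-then-generate loop at miniature scale: it records which words tend to follow which in its training text, then produces a new word sequence by sampling from those learned patterns. Production generative AI models are vastly larger and more sophisticated, but the principle of pattern-based generation is similar.

```python
# Toy "learn patterns, then generate" example: a word-level Markov chain.
import random
from collections import defaultdict

training_text = (
    "generative ai learns patterns from training data and "
    "generative ai uses those patterns to generate new content"
)

# "Learn": record which words follow each word in the training data.
words = training_text.split()
follows = defaultdict(list)
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# "Generate": start from a word and repeatedly sample a plausible next word.
word = "generative"
output = [word]
for _ in range(10):
    if word not in follows:
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))
```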
What is ChatGPT?
ChatGPT is one of many natural language AI programs that allow you to ask questions conversationally. Programs such as ChatGPT can help compose emails, essays, and even programming code. Other systems include Microsoft Copilot, Anthropic’s Claude 3, and Google Gemini.
Before using natural language AI programs for academic, research, or administrative purposes, it is important to understand your responsibilities and the related guidelines for ethical use, both in terms of policy and in terms of disciplinary standards and expectations.
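Beyond the chat interfaces most people use, these systems can also be reached programmatically. The sketch below is a minimal, hypothetical example that assumes the OpenAI Python SDK (openai >= 1.0), an API key stored in an environment variable, and an example model name that may differ from what is available to your account; as noted elsewhere on this page, never send confidential or restricted data to such a tool.

```python
# Hypothetical sketch of calling a conversational model programmatically,
# assuming the OpenAI Python SDK (openai >= 1.0) and an API key stored in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; substitute one available to your account
    messages=[
        {"role": "user", "content": "Draft a two-sentence reminder email about a team meeting."}
    ],
)

print(response.choices[0].message.content)
```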
What are AI Hallucinations?
Within the context of generative AI, hallucinations occur when an AI tool generates false information, presenting as fact something that is not actually present or accurate. Because AI is trained on large data sets to learn patterns, the tool may create new text that sounds plausible but is not based on facts.
What Information Should I Not Feed a Generative AI Tool?
Data classified as private or restricted should not be entered into generative AI tools, including non-public research data, per SBHE 1202.3 Data Privacy Policy. Do not disclose confidential, sensitive, or personally identifiable information when using these tools. Do not disclose intellectual property that is not safeguarded. This requirement extends not only to sensitive data generated and used as part of a research project but also to protected UND community data, including data and research belonging to students, faculty, and staff.
What are AI Notetakers?
AI notetakers can create text transcriptions of in-person or virtual meetings or class sessions. While AI notetakers can save time and effort by transcribing and summarizing the main points of a meeting or class lecture, they may not be accurate or reliable. Further, these tools may raise privacy and security concerns, because the data may be stored or processed by a third-party service you do not control, and the transcript may create a record that is subject to an open records request.
If you are using an AI notetaker, it is important to inform the other participants in the meeting. Students should make sure use of an AI notetaker in class is permitted by the instructor.
How Can I Subscribe to a Generative AI Tool?
As with any software purchase, submit a non-standard procurement request via Jaggaer, regardless of the dollar value, when the tool will be used with sensitive data. Please let UIT know if you have already purchased a generative AI tool for academic or research purposes. Help us help you to protect yourself and the University. Safeguarding UND is everyone’s responsibility.
AI Frequently Asked Questions
Faculty and Staff
What can I, as a faculty member, do to support academic integrity in relation to AI?
- Provide clear expectations regarding academic integrity and AI. Clarify for students your expectations for the use of any generative AI tools or applications, state those expectations clearly on your course syllabus and in any assignment prompts, and explain the consequences for students if they are not met.
- Discuss your expectations at the beginning of the course and frequently thereafter.
- Place clear statements on the course syllabus and on Blackboard Ultra.
- Be clear about whether using an automated tool such as ChatGPT is considered academic dishonesty in your particular course or program. Also indicate that student use of AI may violate the Code of Student Life if your policy is to restrict or ban the use of such AI tools.
- Note that websites that purport to detect the use of AI are flawed and have high rates of false positives and false negatives. Use caution and be transparent with your students if you do decide to use these tools.
- Discuss with your students the challenges and opportunities that AI and automation present within your academic discipline and the subject of your courses.
- Acknowledge that other disciplines, courses, and faculty may have different expectations and understandings of appropriate use.
- Report academic integrity concerns by submitting an academic integrity concern report.
Can I use Generative AI for course delivery and assessment?
- AI tools might be useful in teaching and assessment and could drive new course delivery methods. Be open to this possibility. TTaDA will provide programming and conversations that offer opportunities to learn more about AI and teaching. We encourage you to have conversations within your department and college about appropriate use in teaching and learning.
- Be wary of claims by third-party vendors. Familiarize yourself with the practices of these vendors before adopting such tools in your courses.
- TTaDA also offers resources for faculty as they consider ChatGPT and other generative AI tools in their teaching.
Can I use generative AI in my research and scholarship?
- Your work is considered to be your own. Document any use of generative AI in constructing your work, research, and manuscripts. Similarly, if quantitative or computing tools are used in your research, document those uses.
- Be sure to check your disciplinary organizations and the publishing guidelines of your target journals to fully understand their position on the use of generative AI and large language models in the construction of manuscripts.
- Similarly, be aware of current guidelines by granting agencies on using generative AI and large language models.
- Generative AI and large language models are not designed to establish proof or provide accurate facts. It is your responsibility as the human author to ensure the accuracy of what is reported in your work and to disclose the role of AI in your work.
- Submitting your work to an AI tool may make your work widely available in that tool’s training set or database. This might put your results into the public domain before you are ready to share them and ahead of peer review, a critical component of the research process.
As a staff member, can I use AI for my job?
- AI tools such as ChatGPT can be useful for putting together agendas, drafting technical writing that explains a concept, or completing other administrative tasks, but always ensure the accuracy of the information you receive when using generative AI.
- It is important to have conversations on the use of AI with your office/unit/supervisor to determine how and where it can be used ethically and effectively in your unit.
- Be transparent about your use of AI. Cite and attribute work that is generated by AI.
- It is your responsibility to ensure the accuracy of what is reported in your work. Review all material produced for accuracy, violations of copyright protections, and plagiarism.
Students
Can I use AI for my courses?
- Different fields, courses, and instructors will have different policies and guidelines for how AI can or cannot be used. It is important not to make assumptions about what is allowed and to ask your professor/instructor for clarification when needed.
- When you submit work for credit, it is assumed to be your original work. The use of other resources, including generative AI models like ChatGPT or Microsoft Copilot, should be documented and should follow the guidelines set by your faculty.
- Generative AI and large language models are not designed to establish proof or provide accurate facts. It is your responsibility to ensure the accuracy of what you submit.
- When in doubt about what’s allowed in a given course or assignment, clarify it with your professor/instructor. It is your responsibility to do so prior to utilizing AI for coursework.
- Failing to properly cite the use of AI or utilizing AI in a manner inconsistent with the expectations of your professor/instructor may be considered academic dishonesty and may result in academic and/or behavior consequences.
AI Resources
UND Resources
- For discipline-specific information on artificial intelligence developments across academia, and guidance on publisher and journal policies, please consult the Artificial Intelligence Resources - Library Research Guide compiled by librarians at the Chester Fritz Library.
- UND faculty have been meeting the challenges and possibilities of generative AI in the classroom by creatively developing new assignments. After taking a workshop hosted by TTaDA, several have submitted their assignments to the CFL Scholarly Commons AI Assignment Library. Please explore their work for ideas for developing your own assignments.
External Resources
- Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations, U.S. Department of Education, May 2023
- ChatGPT and Artificial Intelligence in Higher Education, UNESCO, 2023
- Artificial Intelligence Risk Management Framework (AI RMF 1.0), National Institute of Standards and Technology, U.S. Department of Commerce, January 2023
- Generative AI in Higher Education: From Fear to Experimentation, Embracing AI’s Potential, Tyton Partners, May 2023
- Student Perceptions of Generative AI, Jisc report, May 2024
- What is ChatGPT Doing... and Why Does It Work? Stephen Wolfram, February 2023
AI Tutorials
- University of Maryland “AI and Information Literacy Tutorial”
- Wharton Interactive Crash Course: “Practical AI for Instructors and Students”
Resources on AI Detectors
- The Use of AI-Detection Tools in the Assessment of Student Work by Sarah Eaton
- Detecting AI May be Impossible by Geoffrey Fowler (Washington Post article, use UND login to access)
- AI Detection Tools Falsely Accuse International Students of Cheating: Stanford study found AI detectors are biased against non-native English speakers by Tara García Mathewson
- GPT detectors are biased against non-native English writers by Weixin Liang et al.