AI Guidance

“The difficulty lies not so much in developing new ideas as in escaping from old ones.”
John Maynard Keynes

The College of Professional Studies’ goal is to ethically and responsibly embed Generative AI and Large Language Models (LLMs) in our work: New Product Development, Curriculum Development, Teaching, Assessments, and Workplace Tasks. Whether you are beginning to integrate AI into teaching, curriculum development, assessments, or the development of CPS administrative and learning products, or are simply applying it to work-related tasks, this document supports you with guiding principles and standards (note that University policies supersede this document).

This document was developed by Joan Giblin, Uwe Hohgrawe, Ilka Kostka, Yvonne Leung, Prashant Mittal, Allison Ruda, Balazs Szelenyi, John Wilder, and Shachi Winton. Please feel free to contact these colleagues as you explore your use of AI in the areas outlined here; they will be able to provide guidance and/or point you in the right direction for a successful implementation of AI at the College of Professional Studies.

Our Purpose

We are teaching and working in an era of significant disruption due to rapid developments in AI. With this disruption, however, come tremendous opportunities to enhance teaching, research, collaboration, and work, allowing the College of Professional Studies (CPS) to serve as an innovative thought leader for the University. The guidance in this document aims to support faculty, administrators, and staff as they integrate AI into educational endeavors in the College. This living document serves as a reference and recommendation (i.e., a guideline), not as a policy or a set of rules to be implemented.

Who Should Use These Guidelines?

As Ethan Mollick aptly noted, “The only bad way to react to AI is to pretend it doesn’t change anything.” Whether you are a power user of AI, an early experimenter, a curious beginner, or anywhere in between, you are taking important steps to understand the impact of AI on teaching, learning, and work. This living document offers guidance and support to ensure that CPS colleagues with a range of AI experience have the knowledge they need on their journey into AI integration. Each use case and question is built around creating an AI-supported environment that promotes efficiency, innovation, and accountability. You are encouraged to refer to these guidelines as you teach and work with AI and to regularly share insights, questions, and concerns with colleagues.

Playground or Sandbox Approach. Embracing a playground or sandbox approach fosters a culture of innovation in which experimentation with new technologies is encouraged and valued. Creating an environment that welcomes trying out new AI applications through small-scale pilot programs and projects allows us to test viability and impact without significant risk.

Safety First/Human in the Loop. This principle emphasizes the ethical and responsible use of AI, ensuring that AI systems are designed to mitigate bias, protect privacy, and promote transparency. We all maintain oversight and accountability by keeping humans involved in critical decision-making processes such as admissions, grading, and credentialing. Regular monitoring of AI outputs safeguards against errors and upholds quality standards.

Smart Resource Allocation. Adopting a rational and practical stance on the economics of AI use is crucial. Conducting thorough cost-benefit analyses, both for CPS at large and for individual AI projects, allows us to make informed decisions about investing in AI technologies that align with institutional goals. Resources should be allocated to AI solutions that offer clear benefits, considering factors such as scalability and integration with existing systems to maximize utility. CPS’ commitment to ongoing professional training also enhances our ability to use and manage AI tools effectively, facilitating smoother transitions to AI-enhanced workflows and supporting effective applications of AI across professional endeavors at CPS.

Continuous Improvement and Measurement. Quality assessment and performance metrics are vital to the long-term success of all AI implementations. This involves establishing clear benchmarks and key performance indicators (KPIs) to measure the performance and effectiveness of AI systems. By continuously monitoring metrics such as accuracy, efficiency, and user satisfaction, we can assess whether AI applications meet desired outcomes and adhere to institutional standards. Ongoing evaluation also helps identify areas for improvement and optimize AI performance, ensuring that we not only meet immediate needs but also anticipate future demands. Quality assessment ensures that AI tools provide consistent value, both operationally and academically, over the long term.
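
As a purely illustrative sketch of what such monitoring could look like in practice, the short Python example below averages a few pilot metrics (accuracy, minutes saved, and user satisfaction) across AI-assisted task records and flags any metric that falls below an agreed benchmark. The record fields, benchmark values, and function name are hypothetical assumptions for illustration, not an existing CPS tool or a required approach.

from statistics import mean

# Hypothetical benchmarks agreed on before an AI pilot begins.
BENCHMARKS = {"accuracy": 0.90, "minutes_saved": 10.0, "satisfaction": 4.0}

def summarize_pilot(records):
    # Average each metric across task records and list any shortfalls.
    summary = {metric: mean(r[metric] for r in records) for metric in BENCHMARKS}
    shortfalls = [m for m, value in summary.items() if value < BENCHMARKS[m]]
    return summary, shortfalls

# Example: three human-reviewed tasks from a small-scale pilot.
records = [
    {"accuracy": 0.93, "minutes_saved": 12.5, "satisfaction": 4.2},
    {"accuracy": 0.88, "minutes_saved": 9.0, "satisfaction": 3.9},
    {"accuracy": 0.95, "minutes_saved": 15.0, "satisfaction": 4.5},
]
summary, shortfalls = summarize_pilot(records)
print(summary)      # average value per metric
print(shortfalls)   # metrics below their benchmark, if any

A review like this only surfaces numbers; interpreting them, and deciding whether a pilot continues, remains a human judgment.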

Learning About and Engaging with AI

AI for Curriculum and Product Development

AI for Teaching

CPS encourages the integration and use of AI, and of all technologies, in teaching and learning. Below are considerations for students and faculty.

General Considerations – Students

General Considerations – Faculty

Suggestions for Assessment

Automated Grading with Human Oversight

Use AI for efficient grading and assessment while maintaining quality and ensuring equity.
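
As one possible illustration of pairing automated scoring with human oversight, the short Python sketch below routes AI-suggested grades for review: low-confidence or high-stakes submissions require a full human read before any grade is released, while the rest receive a human spot check. The class, threshold, field names, and routing labels are hypothetical assumptions for illustration, and the AI scoring step itself is deliberately left out; the point is the workflow, in which the AI proposes and a person decides.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # hypothetical cutoff for requiring full human review

@dataclass
class Suggestion:
    score: float        # AI-suggested score on the rubric scale
    confidence: float   # model-reported confidence between 0 and 1
    rationale: str      # explanation the human reviewer can inspect

def route_submission(suggestion: Suggestion, high_stakes: bool) -> str:
    # Every grade is still confirmed by a person; this only decides
    # which submissions need a close read before release.
    if high_stakes or suggestion.confidence < CONFIDENCE_THRESHOLD:
        return "human_review_required"
    return "human_spot_check"

# Example: a low-confidence suggestion is routed for full human review.
s = Suggestion(score=82.0, confidence=0.72, rationale="Partial rubric match on criterion 3")
print(route_submission(s, high_stakes=False))  # prints human_review_required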

Academic Integrity


Please visit the Academic Integrity resource pages on the CPS SharePoint site for ideas and guidance on educating learners about academic integrity and AI, and to learn about tools and strategies for investigating and responding to potential lapses.

AI for Research, Experimentation, and Continuous Feedback Loops

AI for (Daily) Work and Collaboration

Appendix I: AI Training and Infrastructure at CPS

This section provides strategic framework considerations for building a fully integrated and sustainable AI workplace, ensuring that our practices align with the College’s goals and mission and support the ongoing development of faculty, staff, and students.

Appendix II: Examples