2/13/2026

Summary

The uncritical adoption of generative artificial intelligence across higher education represents a fundamental threat to academic freedom, shared governance, and the educational mission of universities. Driven by corporate marketing and administrative efficiency claims rather than educational evidence, the values of learning, or the needs of students, institutions are rushing to implement AI systems that undermine faculty authority, compromise student learning, exploit intellectual property, and exacerbate existing inequities. This statement examines these concerning trends through the lens of Indiana University’s own recent and ongoing investments in GenAI, which exemplify how universities are prioritizing technological adoption over educational values and democratic governance. We argue that the widespread promotion of AI tools that cannot understand, reason, or create knowledge fundamentally contradicts the core mission of higher education: developing critical thinking, human understanding, and informed citizenship.

Beyond immediate educational concerns, GenAI adoption contributes to environmental destruction, labor exploitation, and the concentration of power in technology corporations at the expense of public education. Rather than accepting claims about AI’s inevitability, we call for faculty-led resistance to technology adoption undertaken without thoughtful deliberation and reflection, the strengthening of shared governance over education technology decisions, and a recommitment to student-centered approaches to teaching and learning that prioritize understanding over efficiency.

As the AAUP Chapter at Indiana University Bloomington, we affirm our commitment to academic freedom, shared governance, the integrity of scholarship, and the welfare of faculty and students. This statement represents our collective assessment that IU’s GenAI initiatives, far from advancing the values of higher education, represent a fundamental departure from the principles that should guide a public research university committed to democratic education and the pursuit of inquiry and knowledge.


Background & Context

The rapid proliferation of generative artificial intelligence (GenAI) in higher education represents a concerning trend toward the uncritical adoption of transformative technologies without adequate faculty oversight, shared governance, or consideration of their broader implications. As the AAUP’s 2025 report on Artificial Intelligence and Academic Professions clearly states, “AI integration initiatives are spearheaded by administrations with little input from faculty members and other campus community members, including staff and students,” with survey findings showing that “71 percent of respondents said decision-making and AI initiatives are overwhelmingly led by college or university administrations.”

This pattern echoes previous waves of educational technology hype that have swept through and attempted to reshape higher education. As Jonathan Rees notes in his recent analysis, we have witnessed similar promises before: “Do you remember MOOCs? I realize that that question is itself cliche now, but if you do remember massive open online courses you almost certainly remember the quote about how in the future there were only going to be ten universities and that ‘There’s a tsunami coming.’ Needless to say, there are still no signs of either of those things actually happening.”

Yet unlike previous educational technology trends, the current GenAI push represents a more comprehensive assault on academic labor, intellectual property rights, and the fundamental mission of higher education. The technology’s unprecedented data requirements, computational costs, and potential for surveillance and control make it qualitatively different from earlier technologies.

At Indiana University, this troubling pattern is exemplified by an aggressive and intensely celebratory institutional push to integrate GenAI across all aspects of university life. The result is one of the most comprehensive GenAI adoption programs in higher education, and a cautionary example of how universities are prioritizing technological solutions over educational values and faculty governance.

At IU Bloomington, this acceleration is especially pronounced. In just the past year, IU has:

  • Launched a university-wide GenAI 101 course, marketed aggressively to students, staff, and faculty as a way to “stay ahead of the curve” and “streamline daily tasks”;
  • Built an “AI at IU” service catalog that lists vendor-approved AI tools, from Microsoft Copilot to Google Gemini, as official resources for university use;
  • Conducted Next.IU pilots embedding AI into classrooms through platforms like Canvas AI and Microsoft Copilot;
  • Rolled out institutional access to ChatGPT Edu for all faculty, staff, and students—one of the largest such deployments nationwide;
  • Provided extensive resources through CITL and Teaching.IU, encouraging faculty to redesign syllabi and assignments to integrate AI.

This institutional posture—heavily promotional, vendor-aligned, and efficiency-focused—requires a thorough examination from the standpoint of academic freedom, shared governance, and professional integrity. The AAUP’s Artificial Intelligence and Academic Professions report reminds us that AI is not simply a neutral “tool,” but a complex system with profound implications for labor, pedagogy, equity, and governance.

Promotion & Deployment of GenAI at Indiana University

Indiana University’s approach to generative AI implementation provides a particularly troubling example of how institutions are prioritizing technological adoption over faculty governance, student privacy, and educational values. The university’s comprehensive GenAI initiative demonstrates the scope and intensity of current institutional pressures to adopt these technologies.

Institutional GenAI Promotion and Implementation

An official IU AI webpage promotes generative AI as opening “exciting possibilities for the IU community” with the “ability to create content, code, analyze data, and more” that “can help you uncover discoveries and solve problems with speed and elegance.” This promotional language exemplifies the uncritical technology adoption that the AAUP’s AI report identifies as problematic.

IU’s approach is not merely neutral facilitation; it is active marketing of AI adoption, in ways that reflect a particular set of institutional values. The university’s promotional materials reveal a fundamentally concerning perspective on the role of technology in education. A recent university email blast to faculty and staff declared:

“AI isn’t just coming – it’s here, transforming how we teach, innovate, and work at IU. The university is making big investments in generative AI across instruction and operations, and GenAI 101 is your chance to put it to work for you.” (“Unlock the Power of AI: Take GenAI 101,” IU email communication to faculty and staff, 8/25/2025)

This language reveals several concerning assumptions: that technological adoption is inevitable (“AI isn’t just coming – it’s here”); that faculty must adapt to serve workforce demands rather than educational principles; and that the primary goal is to “put it to work” rather than critically evaluate its appropriateness or effects. The email also emphasized that the GenAI 101 course will “help you build practical skills to streamline daily tasks, spark fresh ideas, and optimize how you do your job today.” Participants are promised an official IU badge after only eight modules, framed as a credential to “showcase your new expertise.”

Faculty were further urged to recruit students into the course:

“For those of you who work with students, we hope you will encourage them to complete GenAI 101 this semester. We’ve prepared resources to set faculty up for success, like PowerPoint slides and a syllabus insert describing the course.”

Other marketing lines underscored IU’s framing of AI:

  • “Stay ahead of the curve… this course equips you with foundational skills to adapt and lead in an academic environment that prepares students for a workforce that expects them to have generative AI skills.”
  • “Have you ever felt like you needed an assistant? Learn how to use GenAI to brainstorm ideas, help with repetitive tasks, and solve problems.”
  • “IU is investing in your future too. GenAI 101 isn’t just for students. IU is committed to helping every employee stay relevant, efficient, and future-ready.”

Taken together, this language reveals IU’s approach: positioning generative AI as inevitable, central to employability, universally applicable, and essential for individual relevance in the institution. This framing does not invite debate about whether or how AI should be used in academia; it assumes adoption and focuses on speed, efficiency, and workforce alignment.

More recently, in August 2025, IU announced it would provide ChatGPT Edu access to all 120,000 students, faculty, and staff, making it “the second largest ChatGPT Edu rollout of all time for OpenAI.” While the university secured contractual protections ensuring that user interactions with ChatGPT Edu are not used to train OpenAI’s models, this massive deployment occurred with minimal faculty consultation or shared governance input, representing exactly the kind of administrative overreach the AAUP report critiques.

Comprehensive Integration Across University Functions

IU’s GenAI initiative extends far beyond optional tools for interested faculty. The university has created:

  • GenAI 101 Course: An effectively mandatory course for students, who were automatically enrolled without their approval or consent, and one that faculty and staff are strongly encouraged to take, designed to build “practical skills to streamline daily tasks, spark fresh ideas, and optimize how you do your job today.”
  • Administrative Integration: The university explicitly promotes GenAI for “operations” and administrative functions, expanding surveillance and data collection capabilities across university functions.
  • Faculty Compliance Expectations: The administration has prepared “PowerPoint slides and a syllabus insert” for faculty to promote the GenAI course to students, effectively requiring faculty to become promoters of the technology regardless of their professional judgment about its appropriateness.

University leadership justified this massive deployment by citing that “80% of participants reported that ChatGPT did the best job supporting their teaching, research and service responsibilities” among 200 faculty in a pilot program, and that “over 30,000 members of the IU community were already using the free version of ChatGPT with IU email addresses.” However, these justifications fail to address whether such usage is educationally sound or whether popular adoption should drive institutional policy.

Core Areas of Concern

Academic Freedom and Shared Governance

The AAUP’s AI report identifies that “AI integration initiatives are spearheaded by administrations with little input from faculty members and other campus community members” and that “many respondents described administrators exerting great effort to introduce AI into research, teaching, policy, and professional development with little meaningful input from—let alone oversight by—faculty members, staff, or students.” IU’s approach exemplifies this pattern.

The university’s decision to automatically enroll students in GenAI 101, provide institution-wide access to ChatGPT Edu, and promote faculty adoption of these tools represents a fundamental violation of faculty primacy in curricular matters. The shared governance violation is compounded by the speed of implementation. Complex educational technologies that fundamentally alter teaching and learning are being deployed faster than traditional academic review processes can accommodate. This creates a fait accompli where faculty are presented with already-implemented systems and asked to adapt rather than evaluate. As the AAUP’s 1966 Statement on Government of Colleges and Universities establishes, it is “the responsibility primarily of the faculty to determine the appropriate curriculum and procedures of student instruction.”

Student Learning and Educational Integrity

The AAUP report notes that “respondents were overwhelmingly concerned with student plagiarism made possible by generative AI,” with one respondent noting: “I am less concerned about the ‘honesty’ part than the ‘failure to learn’ part… It is now more difficult for [students] to develop their thoughts on a topic because they don’t have to spend time with it while they work through writing about it.”

GenAI systems fundamentally undermine the educational process by providing seemingly authoritative answers without understanding, encouraging superficial engagement with complex topics, and creating dependencies that inhibit the development of critical thinking skills. As one faculty member quoted in the AAUP report observed, “Large language models like ChatGPT produce shallow, unoriginal ‘predictive text-y ideas’ and I worry that my students and others will increasingly believe that that’s okay—that there’s nothing better than that to aspire to.”

Intellectual Property and Data Rights

Intellectual property concerns around GenAI systems operate on multiple levels. All current GenAI systems, including ChatGPT, were initially trained on vast datasets that included copyrighted materials scraped from the internet without permission from original creators. This foundational appropriation affects every GenAI system regardless of subsequent contractual protections.

For ongoing data use, IU has secured contractual agreements with vendors ensuring that ChatGPT Edu and other GenAI tools’ user interactions are not collected or used for further model training. This is important. Nevertheless, IU’s systematic promotion of AI tools—regardless of specific contractual protections—normalizes dependence on systems built through unauthorized appropriation of creative and scholarly work.

The broader concern is that even with contractual protections for some tools, the university’s approach creates faculty and student dependency on technologies whose core functionality was developed through intellectual property appropriation and that, outside the university’s infrastructural and contractual environment, are ultimately technologies of surveillance and extraction. This institutionalizes the legitimacy of such appropriation while making the campus community dependent on corporate platforms for essential academic functions.

Labor Conditions and Work Intensification

The AAUP report found that “preexisting work intensification and devaluation are the main reasons respondents give for using AI to assist with academic tasks” and that “implementing AI in higher education adds to faculty and staff workloads and exacerbates long-standing inequities.” Rather than addressing underlying problems of overwork and under-resourcing, GenAI adoption promises technological solutions that actually increase faculty workload through required training, system management, and the need to detect and address AI-assisted student work.

The survey found that AI has generally led to worse outcomes for “the teaching environment (according to 62 percent of respondents), pay equity (30 percent), job enthusiasm (76 percent), academic freedom (40 percent), and student success (69 percent).”

Privacy and Surveillance

Technologies like GenAI also inherently involve extensive data collection and monitoring. Every interaction with these systems generates data that can potentially be analyzed for patterns, preferences, and behaviors—even when that data is protected from training models. While IU has secured contractual protections with vendors like OpenAI that prevent user data from being used for model training, significant privacy and surveillance concerns remain around institutional oversight and system integration. ChatGPT Edu includes what OpenAI describes as “administrative controls” and “usage insights,” though the specific details of what administrators can monitor are not publicly documented. Other institutions have been explicit about institutional oversight capabilities: Columbia University notes that conversations are “securely stored and never deleted” and can be accessed “by legal request for eDiscovery purposes, whereby OpenAI will contact our administrators,” while Harvard explicitly states that their “Policy on Access to Electronic Information applies to ChatGPT” just as it does to other university IT resources like Zoom and Outlook.

The integration of AI tools with university authentication systems (SSO) and directory services (SCIM) means that AI usage is necessarily tied to university identity systems, creating data flows that connect AI usage to individual university accounts even when conversation content is protected. As with all university IT resources, AI tool usage falls under institutional technology policies that can evolve over time, and the infrastructure established for any level of administrative oversight creates potential for expansion of monitoring capabilities. Moreover, contractual protections with vendors can change, and users who integrate AI tools into their academic work create dependencies that extend beyond their control.
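To make the identity linkage concrete, the sketch below shows roughly what SCIM provisioning looks like in practice. The vendor endpoint, token, and user details are hypothetical, and real deployments use identity-management platforms rather than hand-written scripts, but the essential point holds: the vendor receives campus identity attributes at account creation, so subsequent use of the tool is attributable to a named university account by design.

```python
# Minimal sketch of SCIM 2.0 user provisioning (RFC 7643/7644).
# All endpoint URLs, tokens, and user details here are hypothetical;
# real deployments use identity-management platforms, not scripts.
import requests

SCIM_ENDPOINT = "https://ai-vendor.example.com/scim/v2/Users"  # hypothetical
PROVISIONING_TOKEN = "..."  # credential held by the institution

# The vendor receives the campus username and email when the account
# is created, so every later interaction with the AI tool is tied to
# a specific, named university identity.
payload = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jdoe@example.edu",  # hypothetical campus account
    "name": {"givenName": "J.", "familyName": "Doe"},
    "emails": [{"value": "jdoe@example.edu", "primary": True}],
    "active": True,
}

response = requests.post(
    SCIM_ENDPOINT,
    json=payload,
    headers={
        "Authorization": f"Bearer {PROVISIONING_TOKEN}",
        "Content-Type": "application/scim+json",
    },
    timeout=30,
)
response.raise_for_status()
print("Vendor-side account created:", response.json().get("id"))
```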

The broader concern is the institutional promotion of systems that inherently require some level of administrative oversight, normalizing the presence of monitoring infrastructure in academic work. As the AAUP report notes, “data-intensive technologies have a high likelihood of making recommendations, predictions, and analyses that are biased against historically marginalized people,” with one respondent charging that AI technology “has become a tool of surveillance by administration.” The fundamental issue is not necessarily current monitoring practices, but the establishment of technological infrastructure that makes previously private intellectual activities subject to potential institutional oversight, even when specific content protections exist.

Hype, Capture, and Mission Drift

Perhaps most concerning is IU’s adoption of the hype language of the tech industry. Phrases like “AI isn’t just coming—it’s here,” “stay ahead of the curve,” and “IU is investing in your future too” frame AI adoption as inevitable, desirable, and central to the university’s mission. GenAI 101’s pitch—“no technical background required”—suggests that AI is universally applicable, reducing all disciplines to a set of productivity tasks.

This rhetoric suggests mission drift. Instead of cultivating critical inquiry, creativity, and scholarship, IU risks reframing itself as a workforce-training center for corporate technologies. Faculty are invited to become recruiters, embedding AI into syllabi and assignments not as a matter of scholarly judgment but as institutional policy driven by marketing.

Our position: Higher education’s mission is not to market or normalize vendor technologies, but to critically evaluate them. IU should commit to resisting hype cycles, centering its educational mission in faculty governance, and protecting academic freedom against corporate capture.

Broader Social and Environmental Implications

Environmental Destruction

The environmental costs of GenAI are staggering and largely hidden from users. Training large language models requires enormous computational resources, consuming energy equivalent to the annual electricity usage of thousands of homes. Ongoing inference (generating responses) demands substantial energy for data centers, cooling systems, and network infrastructure. As universities deploy these systems at scale, they become complicit in significant environmental destruction at a time when institutions should be modeling environmental responsibility.

The water usage for cooling GenAI data centers is also massive, with estimates suggesting that a single conversation with ChatGPT may require the equivalent of a bottle of water for cooling. Universities adopting these technologies at scale are contributing to water scarcity and environmental stress, particularly in regions already facing water challenges.
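To give a rough sense of scale, consider a back-of-envelope calculation. The per-query water figure below is drawn from published estimates (Li et al., 2023, put GPT-3’s water cost at roughly 500 mL per 20 to 50 queries); the number of daily queries per user is purely an illustrative assumption, not a measured figure:

```latex
% Illustrative estimate only; every input is an estimate or an assumption.
\[
  120{,}000 \ \text{users}
  \times 5 \ \frac{\text{queries}}{\text{user} \cdot \text{day}}
  \times \frac{0.5 \ \text{L}}{25 \ \text{queries}}
  = 12{,}000 \ \frac{\text{L}}{\text{day}}
\]
```

Even under these deliberately modest assumptions, a deployment on the scale of IU’s ChatGPT Edu rollout implies thousands of liters of water per day, before accounting for the far larger one-time cost of model training.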

Much of IU’s AI use relies on cloud-based computing rented from Microsoft, Amazon, or Google. These arrangements externalize environmental and labor harms to distant communities, while keeping costs and impacts invisible to faculty and students.

Resource Extraction and Labor Exploitation

The computational infrastructure underlying GenAI depends on extensive mineral extraction for semiconductors, rare earth elements, and other components. This extraction often occurs in Global South countries under exploitative conditions, creating environmental degradation and human rights violations far from university campuses.

Additionally, the training of GenAI systems relies on vast amounts of human labor for data annotation, content moderation, and system refinement, often performed by workers in precarious conditions with inadequate compensation. Universities adopting these technologies become part of global supply chains that depend on exploited labor.

Digital Dispossession and Corporate Control

As this recent MIT report outlines, GenAI systems participate in larger systems of digital colonialism, extracting value from human knowledge and creativity while concentrating benefits in the hands of a few technology corporations. The problematic nature of these systems becomes particularly evident in the ways algorithm creators can manipulate them to reflect and propagate specific cultural and political perspectives. A clear example emerged earlier this year when Elon Musk’s AI chatbot, Grok, underwent a series of updates that led it to generate responses reflecting particular political biases.

University adoption of these systems legitimizes and strengthens corporate control over knowledge production and access. The concentration of AI development in a few corporations means that changes in political leadership, corporate ownership, or business strategy can rapidly alter the ideological orientation of tools that educational institutions have integrated into their core functions. Universities become dependent not just on corporate platforms, but also on the political and cultural perspectives of their creators, surrendering institutional autonomy to the changing whims of tech billionaires.

Moreover, universities providing these tools to students may be creating dependencies that students must then pay to maintain after graduation, representing a form of predatory technology adoption.

Our position: IU’s marketing frames GenAI as clean, efficient, and personal, but it obscures the enormous material consequences of these technologies. The university should disclose the full environmental and labor footprint of its AI adoption, including vendor energy sourcing, water usage, and labor practices. Adoption should be tied to sustainability commitments, not hidden outsourcing.

Recommendations

  • Faculty governance first: Faculty bodies must have input on and consultative authority over AI policy and curriculum, and procurement decisions should be driven by faculty expertise and student needs.
  • Respect faculty expertise and competence: Decisions about AI procurement and deployment should be driven by the expertise and input of faculty and staff with deep knowledge in AI technologies. Integration into curricula and the life of the IU community should be led by IU faculty and staff who best understand these tools.
  • Transparency and audits: Require vendors to disclose training data, labor practices, and environmental impacts; conduct independent audits.
  • Labor protections: Ban substitution of faculty and staff work with AI without consent and compensation.
  • Pedagogical integrity: Develop consistent, faculty-authored policies on AI use and authorship.
  • Privacy and IP: Guarantee ownership of faculty and student work; prohibit unauthorized training on IU outputs.
  • Environmental and equity commitments: Publish environmental and labor disclosures; set sustainability and fair-labor standards.
  • Pluralism in tools: Support IU-hosted and open-source models (like REALLMs) alongside corporate platforms to avoid lock-in.
  • Continuous review: Establish a standing, faculty-majority AI Oversight Committee to evaluate and guide adoption.

Conclusion

As the AAUP’s AI report emphasizes, “technological interventions, especially those offered as one-size-fits-all solutions for educational problems, do not improve student, faculty, institutional, or research outcomes. In many instances, their use harms students as well as faculty members and staff.”

The case of Indiana University demonstrates how universities are being captured by the hype surrounding generative AI, prioritizing technological adoption over educational values, faculty governance, and student welfare. This represents not progress but regression—a movement away from the critical thinking, human understanding, and democratic participation that should define higher education. IU Bloomington is not simply experimenting with generative AI—it is aggressively promoting it, positioning adoption as inevitable and central to the university’s future. Through courses like GenAI 101, vendor partnerships, and promotional campaigns, IU is framing AI as a workforce necessity and productivity booster, while downplaying its risks and externalities.

From the perspective of the IUB-AAUP, this approach threatens academic freedom, faculty labor, intellectual integrity, and the broader mission of higher education. We call for a deliberate, transparent, and faculty-led approach to AI that resists hype, protects labor and integrity, and confronts the real social, environmental, and ethical costs of these technologies. The future of higher education should be shaped by educational principles and democratic participation, not by the profit motives of technology corporations or the efficiency obsessions of administrators. Faculty, staff, and students must assert their collective power to ensure that universities serve human flourishing rather than technological imperatives.

IUB-AAUP Executive Committee
iubaaup.org


Further Resources

To learn more about GenAI, we suggest the following resources:

