Times, They Are A(i) Changin’: Generative AI and Stanford’s Honor Code

If you’ve spent any time on the internet in the past year, you’re probably at least a little aware of the recent advances in generative artificial intelligence. Programs like ChatGPT are telling jokes, writing songs, drafting code, and more. Midjourney and DALL·E can generate any image you can imagine, so long as you can describe exactly what you want to see.

With all of their creativity and efficiency upsides, there are plenty of legitimate concerns surrounding the use of generative AI tools. In the wake of some of the longest strikes in the history of the entertainment industry, writers and actors have sought protections against AI becoming their replacements. Large language models are trained on massive amounts of pre-existing data with little regard for copyright protections. And across the country, professors and higher education administrators alike are watching a new era of education unfold—one where the use of AI in classrooms is half-welcome and half-rebuffed.

Stanford’s campus is one of the many arenas where programs like ChatGPT have become major players, whether or not they’re entirely welcome. Stanford’s Office of Community Standards has drafted measures against unpermitted AI use, but how much stock do Stanford students put in their institution’s Honor Code to begin with? And what weight does it hold now that generative AI programs have entered the proverbial ring?

Let’s Talk Honor Code

For reference, here’s the Stanford Honor Code. You’ve got your usual suspects here: expectations for academic integrity, examples of Honor Code violations across different academic disciplines, the works.

In February of 2023, Stanford produced an addendum to its anti-plagiarism rules that lays out guidelines surrounding the use of generative AI in the classroom for students and faculty alike. Individual course instructors are encouraged to set their own policies regarding the use of AI in their classrooms. Whether these tools are allowed or not is up to them, so long as they communicate these expectations clearly to students. 

For students, the guidelines are significantly more rigid. Here’s the exact wording:

Absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person. In particular, using generative AI tools to substantially complete an assignment or exam (e.g. by entering exam or assignment questions) is not permitted. Students should acknowledge the use of generative AI (other than incidental use) and default to disclosing such assistance when in doubt.

It’s pretty cut and dried. Unless you have an instructor’s express permission to incorporate content from any form of AI into an assignment or test, asking ChatGPT for a bit of homework help is a no-no.

How much weight do students place on the Honor Code? Well, the average student response to “I take the Stanford Honor Code seriously,” according to the 2021 Marriage Pact, is pretty middling.

[Chart: average student agreement with “I take the Stanford Honor Code seriously”]

Freshmen are the most solemn about the university’s academic integrity guidelines. That reverence fades as students progress through their programs, with seniors seeming to care the least about the rules.

[Chart: Honor Code seriousness by class year]

Male students also take the Stanford Honor Code more seriously than their peers.

[Chart: Honor Code seriousness by gender]

When it comes to political affiliation, independent students lead the pack, while their less traditional peers, like socialists and communists, respect the Honor Code the least. Interested in how else a student’s political affiliation influences their values?

[Chart: Honor Code seriousness by political affiliation]

To get a sense of how current students are using AI in their daily lives, I spoke with two Stanford students. Because these conversations involve discussions of ethics and academic integrity, their names have been replaced with pseudonyms to preserve their anonymity.

Filling the Blanks: Emily

Emily is a junior design major on the human behavior track at Stanford, and generative AI has become a big part of both her personal and academic life. She uses ChatGPT to write Instagram captions and draft Hinge prompts, and she uses DALL·E to generate cover art for the songs she releases on Spotify, sparing herself the expense of commissioning artists.

Beyond her personal and creative pursuits, programs like ChatGPT have aided Emily’s academic career when used carefully.

For example: in a Social Entrepreneurship class, Emily had to identify the stakeholders of a given social issue and develop a visual map of the information. To save time, she asked ChatGPT for information on the stakeholders and their interests in her assigned issue, then rewrote the content it produced in her own voice for the final product.


ChatGPT provides me with the content that I'm seeking, which saves me the sheer amount of time I would otherwise spend doing research, but there’s still a middle ground between what it provides me and what I end up handing in.

Emily isn’t alone in this generate-and-reword approach to using ChatGPT for homework and assignments. When ChatGPT was brand new to the academic scene, it may have been enough to ask the chatbot to write your essay for you and turn it in sight unseen. Now, though, the consensus among Emily and her friends is clear: for their grades’ sake, it’s safer (and less obvious to eagle-eyed TAs) to use AI in smaller chunks: to generate essay ideas, surface information and academic sources, reword specific language, and even proofread finished essays and suggest improvements.

Though academic integrity has been a sticking point in the conversation around generative AI, Stanford’s Honor Code itself hasn’t been an omnipresent force in Emily’s academic career. In her experience, it comes up once in a while at the beginning of the quarter and is typically only impressed upon students who are new to campus.

Peers in Emily’s circle are also pretty lax about academic integrity: it’s common for students to talk openly about “collaborating” on assignments and take-home exams. It depends on the student, she says, but most don’t put much stock in being the world’s most academically pious learners.


Even right when ChatGPT first became a thing, I knew people who almost instantly seized the opportunity to run their essays or code through it. I think we didn’t quite yet have the notion that using ChatGPT was cheating—we saw it more as a cool new tool. As teachers continue to address AI in their classrooms more and begin referring to the use of it explicitly as cheating, that consensus may change.

For all its time-saving perks, a few elements of AI’s growing use in creative industries concern Emily. As a songwriter, seeing AI programs compose their own music worries her; it makes a craft she has spent years honing feel less meaningful. And while some musicians have wholeheartedly embraced AI as a new element of the music industry, Emily is bothered by the use of artists’ work to train AI or fuel AI-related projects without permission or compensation.

If the convenience of AI has done anything for Emily, it’s created one interesting consequence. She explains it like this: when she’s working on an assignment, she knows she can use AI to generate the information she needs without spending the time and effort to research the topic herself. As a result, the information she gets from ChatGPT won’t stick in her brain as robustly, or for as long, as it would have if she’d had to dig it up through traditional, more time-consuming methods.

But at the same time, she attests that the final product is more advanced than anything she feels she could produce through her own laborious research. The shift in how she collects information may make her mental storage of it less profound, but using ChatGPT elevates her work, increases her productivity, and pushes her final product to a higher caliber, which, to her, justifies the shortcut. Ends, meet means.

Fearing the Unknown: Jess

Jess is a junior design major on the digital UX and AI track. While she’s used generative AI tools in much the same way as Emily, her concerns about data privacy and the rapid advancement of AI programs are growing, especially as these tools develop and change the way students approach their educations and future careers.

Though questions about the ethics of generative AI have been on her mind, Jess finds that tools like ChatGPT efficiently surface information that’s challenging to find through casual Google searches. She also finds that these tools are becoming more commonplace in academic settings.


On every single syllabus, there’s now a section that’s about the use of generative AI and what it means in terms of our assignments. I have used ChatGPT for some assignments when I wasn’t supposed to, either to find information or to double-check that I’ve done a math problem correctly. I don’t know if that’s what the majority of users are doing, but I like to use ChatGPT to confirm that my answers are correct. 

Like Emily, Jess acknowledges that ChatGPT isn’t perfect, and you often can’t (or just shouldn’t) copy the answers it generates outright. Jess once used ChatGPT on an open-book midterm and found that some of its answers conflicted with her notes. Imperfections aside, Jess is impressed that someone can get nearly all the way through an assignment using ChatGPT without any prior knowledge of the course material.

To her peers, ChatGPT has become as synonymous with finding information as Google, whether it’s a permitted tool in a given course or not. Jess thinks students hold themselves to a higher standard of integrity on exams, especially in-person ones, but when it comes to homework and other smaller assignments, using ChatGPT against the terms of the Honor Code is generally accepted among students.

As students reap the benefits of the ever-evolving tools at their fingertips, Jess thinks professors are being forced to assess the impact generative AI will have on the future of education and to figure out how to rise to meet it.


It feels like they know this is something that’s coming at them fast and they’re trying to prepare the education system for it, but it’s impossible to keep up with the rate that AI is adapting and changing. It feels like some professors are cautious and wary and others are very intimidated by AI. I think that it comes from a very warm place of wanting their students to know the content they’re teaching and to succeed, so they want to dissuade us from using AI if it’s not relevant to the assignment or if it’s not allowed. But I also think that for those who have been doing this for 10 years, 20 years, or they’re tenured, now they’re going to have to remodel their entire way of how they teach and work.

Jess’s opinions on the Honor Code differ a bit from Emily’s: in her experience, the university’s anti-plagiarism rules are reiterated frequently, since courses across disciplines are taught differently, and she thinks about the Honor Code most often in her computer science classes. Because CS is such a prominent major at Stanford, and coding is a space ChatGPT has infiltrated in full force, Jess feels significant effort goes into the search for Honor Code violations, which fuels discussion of the Honor Code and its rules in the first place.

As generative AI becomes more prevalent in the tech industry and countless other professional disciplines, Jess wonders whether professors, and the Honor Code itself, will shift to view AI use in a more favorable light in order to prepare students for their careers.


It’s something that Stanford maybe isn’t completely preparing for, but students certainly are, whether that’s through clubs, attending speaker events, or just engaging with AI in and outside of class. I’m definitely thinking about how AI can accompany education and how assignments may be altered in the future alongside it.

One concern Jess does have stems from the lack of government oversight of AI development. She thinks more legislation and government intervention are needed to protect individual data privacy and to ensure that companies developing generative AI tools follow strict safety standards.*


As someone who will inevitably use AI in my career and my education, I'm definitely thinking about the ethical standards that we need to enact so that AI will serve people and communities instead of harming them. It feels like AI is a Pandora's box. We’ve opened it, and the rate at which AI has been improving is alarming because we have not implemented measures to make sure that it’s serving our needs and not harming folks who can be left vulnerable by AI. I think that can mean a lot of things, but most importantly, I think the security and privacy of our individual data is so immensely important. 

*In the time between my conversation with Jess and the publication of this article, the White House issued an executive order outlining standards for, among many things, AI safety and security and the protection of workers in industries susceptible to job replacement and discrimination. It also calls on Congress to pass bipartisan data privacy legislation.

With all that in mind, it’s difficult to say whether the ever-increasing prevalence of generative AI tools on college campuses is a net positive or negative for students, faculty, and administrators. One thing’s for certain: as these programs evolve into the information powerhouses they seem to want to be, schools and their academic integrity policies will have to change to keep up with them.

Whatever happens with AI development standards, data privacy legislation, or the ways schools decide to manage AI use in classrooms, this isn’t a bell anyone’s going to be able to unring.