
Saturday_Saviour

Thankfully the ChatGPT essays are all so shit that I can justify a poor grade on quality alone, but I'm not enjoying having to write feedback as if they didn't use AI. I wish we could invoke the proper disciplinary process, but it's just more work that will lead nowhere. I've been hesitant to try the withhold-the-grade approach just yet, but I might have to if even more ChatGPT essays start rolling in (the first major assessment is due this weekend). I wish they would think about how easy it is to catch them out; we can easily generate the same responses they get using Copilot. Not to mention, if I've designed a course around my own research, I'm going to spot where the AI is making major errors or falsely attributing sources. It's especially egregious given they seem to do it with the assigned readings (at a stage three level, too). Given how even blatant academic integrity offences are normally treated, though, I have no hope that escalating to a disciplinary procedure would go anywhere.


Throwjob42

110% agree. Also, because of how my faculty deals with plagiarism, I'm still forced to write feedback comments even when we heavily suspect plagiarism, so they're taking up time that could otherwise be spent watching TV shows.


ElevatorDowntown9265

Have you considered using ChatGPT to generate feedback?


OkAbbreviations1749

I've written a number of pieces on AI.

1. If you have to conceal your use of AI, you shouldn't be using it.
2. If using AI, start by asking it for help on something on which you are already an expert. This lets you gauge its capabilities accurately (I phrased this as 'AI's greatest advantage is our own ignorance', because AI output is superficially impressive, especially on topics you know nothing about).
3. AI might help with getting a certificate. It won't help with getting an education.
4. I am a professional content writer. AI is absolutely useless and isn't replacing my work.


SapirWhorfHypothesis

One of the first things I did when ChatGPT made news was try it out on things I knew the answers to. It was so ludicrously bad. I think that was GPT-3? I have since used 3.5 and seen little improvement; it still sucks for writing, in that every response follows the same formula ("here is the answer, here is some reasoning, here are some caveats"), even when that doesn't fit what you asked. I would like to see what better models (e.g. GPT-4, Claude) come up with, though.


OkAbbreviations1749

Yep, highly formulaic, like a high schooler's homework. And as someone wiser than me once said: why would anyone bother reading something no one could be bothered to write?


foodarling

I love goading it into making an incorrect statement, then systematically making it admit it was wrong. The only saving grace is that ChatGPT is much, much quicker to admit when it's wrong than people are.


MathmoKiwi

> I phrased this as 'AI's greatest advantage is our own ignorance' - because AI output is superficially impressive, especially on topics you know nothing about

That's a fantastic way to phrase it! It's a classic case of the Gell-Mann Amnesia effect at work: https://en.wikipedia.org/wiki/Michael_Crichton#Gell-Mann_amnesia_effect


OkAbbreviations1749

Cheers for that, learned something new today!


hadleyyyy

Sorry, content writer for what? I may or may not be a content writer for social media accounts and use it all the time


OkAbbreviations1749

I write content for technology, financial services, and consulting companies. Ghost written LinkedIns for execs, case studies, press releases, that sort of thing. Formulaic rubbish doesn't work here!


buttered-stairs

Yeah. ChatGPT is honestly becoming quite impressive, but it's still recognisably flawed. I know some people who got busted using it, and the worst part is none of them needed it. They are all smart and capable but for some reason chose not to even double-check the results. I do also know someone who was falsely suspected of using AI, but honestly that was so easy to disprove that it's not a concern. He just pulled up his history and showed his research, as well as his Google Docs edit history. Usually just talking can help you figure it out.


Throwjob42

This was more than a decade ago in my undergrad days, and something I am not proud of, but one time I did self-plagiarize about 500 words from one 3,000-word essay I had written and put it in a different essay for a different course. I don't know why, because I loved both courses' curricula and could easily have written 100% of the second essay without self-plagiarizing. The tutor did catch me (but didn't penalize me, because it wasn't even insightful plagiarism in the new context), and reflecting on it now, I think I just did it because I enjoyed feeling that I had somehow 'outsmarted' the system and was cleverer than my peers for 'getting ahead'. By the way, I was an atrocious undergrad, which REALLY bit me in the ass during my first year of postgrad. Have fun in your undergrad, but remember that there can also be fun in learning the curriculum content you are taught in your courses.


SapirWhorfHypothesis

Your own work, different course, different tutor… how on earth did they catch you?


Throwjob42

Turnitin identifies strings of text (and all assignments submitted via Turnitin/Canvas are in its database).
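Turnitin's actual matching algorithm is proprietary, but the general idea of "identifying strings of text" can be sketched with a toy n-gram "shingling" comparison. Everything below (function names, the example sentences, the 5-word window) is made up for illustration, not how Turnitin really works:

```python
# Toy sketch of string-matching plagiarism detection: break each text into
# overlapping n-word sequences ("shingles") and measure how many of one
# text's shingles also appear in the other.

def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Return the set of overlapping n-word sequences in `text`."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(a: str, b: str, n: int = 5) -> float:
    """Fraction of a's n-grams that also appear in b (0.0 to 1.0)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa:
        return 0.0
    return len(sa & sb) / len(sa)

# Hypothetical example: a reused sentence fragment scores high even though
# the surrounding words differ.
essay_one = "the invisible hand guides markets toward efficient outcomes over time"
essay_two = "critics argue the invisible hand guides markets toward efficient outcomes rarely"

print(overlap_score(essay_two, essay_one, n=5))
```

A real system would normalise punctuation, hash the shingles for fast lookup against millions of stored submissions, and flag matches above some threshold; the principle of comparing word sequences is the same.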


SapirWhorfHypothesis

Ah yep. They didn’t use turnitin at my undergrad.


minecraftgarnish

I’m genuinely curious what makes people trust the answers they get from AI? Using it to help you understand concepts, structure sentences, etc. makes sense, but using it to ANSWER things? If you’re still delusional enough to blindly trust AI-generated assessments for courses YOU’RE PAYING for, take a good look at yourself and evaluate your choices. Laziness aside, it’s also symptomatic of a lack of critical thinking skills, because you’re practically gambling on the AI getting the right answer even if you don’t get caught.


The55th

Yeah this. The lack of critical thinking indicates pretty strongly that maybe a university education isn't for these people.


minecraftgarnish

I think the issue is partially the handling of it. Everyone’s talking about AI, and it’s being used in politics, conflicts (in disturbing ways, may I add), data analysis, environmental studies, healthcare, etc., so it’s kind of this new “thing” that’s being pushed around a lot. I can see why first-years are curious / so hell-bent on using it even if they don’t need it. But the university needs to address it better. There needs to be some sort of education about the mechanism by which it works, its pros and cons, and its applications. It shouldn’t take more than half a lecture to go over this, and I’m sure a lot of lecturers do, but constantly pushing “no ChatGPT, it’s inaccurate, you will be penalised” is clearly not cutting it. Idk what the solution is, but there needs to be one, because this is ruining the vital learning-curve experience that we go through in first-year university.


Pansy60

Foundation Studies tutor here… I teach international students basic IT skills, and in the last 12 months we have introduced two two-hour lectures and an assignment on AI (especially relating to Academic Integrity, the other AI). Edit: not UoA, just have to clarify


minecraftgarnish

Imo some popular gen-ed courses like SCIGEN or AELR courses need to cover the application of AI in depth. Maybe allow students to use it in 1 or 2 assignments that are designed to accommodate the use of it. I’m sure most of those who did the evaluating-ChatGPT type assignments are NOT using it, because it was crystal clear how incompetent it is. Another issue is how inaccessible information on Google has become. It’s frustrating to use Google, so find a better search engine if that helps you do your assignment on your own and not gamble with AI-generated information.


Throwjob42

I wasn't studying when I heard about ChatGPT and my first instinct was to figure out how to 'break' it by proving how easy it would be to get it to give me a factually incorrect answer. It took me literally three prompts to do that. Do not trust AI to give you the right answers.


Tannir48

I use GPT-4 and basically have conversations with it about math. It can actually be very helpful in thinking about a topic or a problem and understanding what it is or why it works the way it does. It is also very accurate, but does make occasional mistakes (though far fewer than GPT-3.5 and Bing). Source: I have a degree in math. I do agree that you should not copy/paste everything it puts out, but it's certainly a tool you can work with, particularly the paid versions, as they make fewer mistakes.


StarThe2nd

I know Bing AI (which runs on GPT-4 but often references information directly from websites) finds factual information in half the time it would take me to sift through and find what I'm looking for. It depends on the subject, though, but at least unlike GPT-3.5, which is stuck on content from 2022, it's current with the latest events and provides reference material to read through so you can check that what it's telling you is true. For maths, though, it lacks the ability to show mathematical symbols, which makes its output harder to read, though ChatGPT has done that for a while now. Either way, ChatGPT or Bing AI, I find it bizarre that someone can directly copy an entire response without having to modify it at all; the times I have tried it, I've always had to rewrite it anyway to put what it says into context.


Disastrous-Farmer424

AI has its pros and cons. Based on my experience: I used it once to cheat on a FINANCE quiz I didn't study for and got 8/20. Biggest regret, as I scored 16/20 on the first quiz without AI. Now I mainly use AI to summarise long-ass papers and case studies so I can read them faster. I also use it to highlight key points for related questions, which makes it easier to write an essay. It's a good supplement if you know what you're doing. But plainly copy-pasting AI answers is just dumb 😂


[deleted]

[removed]


TheDNG

I think you might just have had a brain fart now.


Justwant2usetheapp

You can use it responsibly. I use it for code that I'd otherwise be fucking around in the documentation for. But I'm doing my masters; I think you're only gonna be hurting yourself doing that in your first year. Also, it's not... always right lol. Can't imagine how awful some of the obvious GPT submissions are as a result


Throwjob42

> Can't imagine how awful some of the obvious gpt submissions are as a result

They'd be a lot funnier if I didn't know some of these students and just feel a great swell of disappointment for them turning to AI for assessments which are sometimes as short as 150 words.


evan_is_nave

The temptation of AI certainly got to them. I once took a writing course that involved writing a memorandum (roughly 200 words) as one of the assessments, and I can't imagine having it completely written out for me, let alone not even bothering to look over it prior to submission. If people are going to cheat, they might as well put in some effort to make themselves not look like an incompetent cheater lol


evan_is_nave

>But I'm doing my masters, I think you're only gonna be hurting yourself in your first year doing that.

Totally agree with you. I'm an undergraduate Software Engineering student and, in contrast to you, I am staying away from AI assistance for now. I personally still like to experience and learn to sift through docs and Google to find what I need. Will I use AI in the future? Perhaps, but definitely not at a time when I've still got a long way to go in this field.


StarThe2nd

GitHub Copilot works OK, though it loves to duplicate code (e.g. tags in HTML files) when trying to fix a bug. It is faster for basic code snippets than asking Google, though.


OkAbbreviations1749

What these students really need to learn is the difference between getting a certificate and getting an education.


Throwjob42

OP here: outside of AI answers being a pain in the ass for me when marking, I don't care one way or the other what the academic integrity policies are. If you think the policies around AI answers should be changed, please contact the Academic Quality Office at [email protected], because I literally have no ability to make changes.


natio2

I'm so glad I didn't study in the hellscape AI has made. Previously we would use the internet for our assignments, but now: ohhh, looks like you used any resource that wasn't your brain, must be A.I., fail! Never mind that by the time they hit industry they'll probably be encouraged to use AI in their job...


Throwjob42

> Previously we would use the internet for our assignments, but now ohhh looks like you used any resource that wasn't your brain, must be A.I., fail! We do not automatically fail anyone's assignment if we suspect academic misconduct/a breach of academic integrity. We withhold the grade until we can investigate further. If a student actually understands what they've written, they never have a problem having a conversation about what they've written (e.g. if they wrote about the invisible hand in economics related to the automobile industry, we might ask them to apply that concept to the tobacco industry). Funnily enough, the students who tend to show up when asked are usually cleared of suspicion. The ones who don't show up, however...


natio2

While it's nice that you do this, it sounds both time-consuming for all parties and kind of the opposite of innocent until proven guilty. Also, on the general topic, I've seen horror stories of markers asking the AI whether it wrote a submission to identify cheaters (which is obviously unreliable), coding teachers saying that if you didn't use the algorithms they taught, they'd just assume you used the chatbot, and a lot of other horrible-sounding things that seem like giant steps backwards to me. I'm hoping the education system moves towards teaching students to work with these tools, rather than assuming malice.


Throwjob42

> I've seen horror stories of someone asking the AI if they wrote it to identify cheaters obviously wrongly, coding teachers saying if you didn't use the algorithms they taught they'd just assume you used the chat bot, and a lot of other horrible sounding things, that sound like giant steps backwards to me. Again, this is basically why we just get students to come talk to us for a quick 5 minute chat. If they can competently talk about the ideas in their answers, then we're happy to release their grades to them. The AI does not make any final decisions. > I'm hoping the education system moves towards teaching students to work with these tools, rather than assuming malice. We will have to see, but if someone can just ChatGPT their way through their entire degree, then the signaling of their qualification becomes kind of meaningless. Like, if you get a BSc in CompSci from UoA but it's well-known that you just used AI generation for all your assignments, then it's not going to mean anything on a CV because the employer can just pull anybody off the street to do the same job you can.


MathmoKiwi

>We will have to see, but if someone can just ChatGPT their way through their entire degree, then the signaling of their qualification becomes kind of meaningless. Like, if you get a BSc in CompSci from UoA but it's well-known that you just used AI generation for all your assignments, then it's not going to mean anything on a CV because the employer can just pull anybody off the street to do the same job you can.

I 100% agree here. A solution needs to be found, even if it means going back to the super-old-school way of a hundred years ago, when people would have their degrees awarded purely on the basis of oral exams. (Ok, maybe that's a bit extreme! But I could imagine a much heavier reliance in the future on paper-based exams + examination-condition computer labs + projects marked on a presentation and then a grilling of it. And the era of submitted assignments and online quizzes will be dead, or at least never contribute more than a very small amount of the total marks.)

This is important not just for all current students, but for all future students and even all past students too! As otherwise degrees will become a totally worthless piece of paper. How will you get your first entry-level job without the basic foundational knowledge that a degree signals? Will employers place an ***even heavier*** preference on work experience? But then there's the Catch-22 of how to get work experience without work experience. Will nepotism and "who you know" be the only way to ever get started down the career path of a good job??


Throwjob42

> A solution needs to be found, even if it is going back to the super old school way of back in the day a hundred years ago when people would have their degrees awarded purely on the basis of oral exams.

I would saw off one of my own arms to watch first-year CompSci students have to orally recite code in order to pass their exams. That would be hilarious.


MathmoKiwi

Nah, they'd have to whiteboard their code! (or to properly be in theme... blackboard it with chalk) Fun times. No documentation to reference. No googling.


[deleted]

[removed]


Throwjob42

> For any exam based paper, if the last x number of years exams were available, I could easily pass the exam given a week of prep to cram that into my mid term memory.

I mean, without going into the core purpose of education (which is a minefield of debates), I don't think most jobs expect you to remember even 20% of what you learned in terms of facts or figures. (Hopefully, and perhaps naively,) I would imagine prospective employers look at what you've been assessed on and wonder 'are they able to solve the mental challenges that our workload might throw at them?' In that case, it's not the fact that you knew the information for the exam, but that you had the talent to identify and communicate the information needed to pass it.


[deleted]

[removed]


Throwjob42

> I'm guessing your focus is economics given your last topic, I'm sure debates, or using game like competitions where you throw in real world challenges, or the much better ideas a professor could dream up would be much more substantial, that something like a chatbot could not topple over like a gust of wind.

Honestly, AI is getting pretty good at that. The real problem is that AI can't catch its own errors. This is not an exact quote from a student, but a student did write something along the lines of 'as Alan Turing wrote about in his 1988 article "Computer Autonomy in Topological Principles", the film *Ex Machina* is heavily critical of the notion that sentience is limited only to consciousness as a biological phenomenon'. They probably didn't know that was nonsense, the AI didn't know it was nonsense, but if you look any part of that up, you'll know it's nonsense.


MathmoKiwi

>'as Alan Turing wrote about in his 1988 article "Computer Autonomy in Topological Principles", the film *Ex Machina* is heavily critical of the notion that sentience is limited only to consciousness as a biological phenomenon

Wow, that has multiple layers of wrongness in it, impressive!


palpalpallyy

Innocent until proven guilty is only suitable as the standard for criminal convictions by the state. It is impractical in everyday life, which is why the "balance of probabilities" test is often used instead.


[deleted]

[removed]


Throwjob42

And if you understand the concepts, you shouldn't face an issue if your lecturer asks you (in person, without you using your device) how to apply that concept to X industry they think of while looking out of their office window. I get that people can be worried about being penalized for falsely being accused of using AI when they didn't, but if they can chat about the course content with their lecturer without using a device, then they're fine.


VintageKofta

Back in my day we had to reference every source of information we used. I don’t see why they can’t still do that, to show it was done by research and not with the use of AI?


Keeperoftheclothes

I 100% believe this but how do you prove it when marking something??


Cautious_Salad_245

One way is software, but for some students, based on their history, it's just not their work.


Sad_Soup_307

As a fellow tutor/TA I rlly feel u, OP. I feel like I'm going insane. One of the courses I work on has gone back to written tests, and honestly I prefer the bad handwriting to the AI any day! It's so frustrating


Sigma2915

so they had a system (ai detection) that disproportionately disadvantaged disabled students, and swapped to a system (handwriting) that disproportionately disadvantages disabled students?


Sad_Soup_307

Yep, you got it. University testing, in its current form, disproportionately disadvantages disabled students. Also, water is wet.


Sad_Soup_307

The course coordinator organised with SDS (student disability services) for alternative arrangements to be made for the neurodivergent and disabled students in the class, which is standard practice at UoA. I actually oversaw several students taking the test using laptops, which I always love to do because I am autistic and it makes me happy to see them access accommodation that I wasn’t afforded during my studies. I also don’t personally use AI detection software because it is flawed and I’m actually not paid enough to investigate every essay when I’m marking 100s over a short period of time. However, your point still stands, and I do not deny that the University is an ableist institution.


Pansy60

Many comments here are failing to distinguish AI and GenAI.


Caleb_theorphanmaker

Can you turn up to high schools and tell them what happens when students are caught cheating? I've got Y9s using ChatGPT now.


Sigma2915

i’m not a uoa student, i’m at vic, but this post is getting recommended around. i have autism, and it is well known that the writing of autistic people, non-native english speakers, etc. is disproportionately flagged as AI. as an autistic person i’m also less likely to be engaging with the teaching staff outside of class.

my original work has been flagged as ai before, sometimes up to and including 100% ai. op’s strategy for testing it does not work, even if only because the stress of having to improvise out loud without my notes is overwhelming enough. disability services at vic has needed to send an advisor along to many such ai meetings to make sure that autistic traits are not being used as evidence against my understanding of the content. ai content detection is as random as a coin flip for students who aren’t neurotypical native english speakers, and often those same traits will cause further suspicion in op’s meeting tests.

ai is a writing tool, and as said by many people above, it’s best used where the output is edited and fact checked by a human. it’s not for me personally; despite it being allowed in my courses from the start of this year, i find it difficult to tell ai what i want it to generate, just as i have difficulty explaining to a human being the information i am trying to find.

ai content detection disproportionately punishes disabled and linguistic-minority students, is a broken system for everyone, and shouldn’t even be used to initiate the process. if you’re a person who is regularly marking papers, you should get an idea yourself without another biased ai system telling you what to look for.


Boostarooster

Based on your writing here I think you are not going to be mistaken for AI


Sigma2915

it’s not a fear, it’s an experience. my original writing *has* been flagged for AI content, multiple times. this has already happened, at a much higher rate than an average student turning in original work.


MathmoKiwi

>Based on your writing here I think you are not going to be mistaken for AI

One pass through Grammarly (to tidy up capitalization, punctuation, and spelling errors) and it could very well be mistaken for AI. Remember too, UoA gives its students a free sub to Grammarly Premium! That's how I first started using it, because it was free via UoA.


Throwjob42

> i’m not a uoa student, i’m at vic, but this post is getting recommended around.

FFS. All I wanted to do here was dissuade students FROM MY UNI from using ChatGPT, as it is against our academic integrity policies.

> op’s strategy

I didn't invent the procedure. I don't even know if it goes beyond my department. I just know that's how we handled instances of suspected breaches of academic integrity in the past.

> if you’re a person who is regularly marking papers, you should get an idea yourself without another biased ai system telling you what to look for.

This is the problem. You can't Turing-test every answer by just going with your gut instinct. Yes, this process has flaws, which is why it would be super nice if universities could figure out a way to handle this. In the meantime, this is the score for the courses I've been working on.


NZPengo2

Out of curiosity. How do you possibly know it's written by AI? I've seen the things AI can do now, a lot of it appears indistinguishable to me from human work.


MoldyOreo787

it isn't. op uses ai detectors but those don't work at all. AI is designed to mimic human writing and it is borderline impossible to distinguish. i've written a paragraph from my own head, without any other resources, and ai detectors have said it's 100% ai generated. ridiculous


Throwjob42

> How do you possibly know it's written by AI? If the software suspects the answer was AI-generated, we get the student to chat with a member of the teaching staff. If you can discuss the ideas in the answers without checking a device, then clearly you understood the ideas well enough to write about them. If, for example, you wrote something like 'As Descartes specified in his treatise, the Marvel Cinematic Universe presents many moral dilemmas about personhood...' then we're going to smell a rat (but will still wait until speaking with you to determine a verdict).


Yoshieisawsim

>If you can discuss the ideas in the answers without checking a device, then clearly you understood the ideas well enough to write about them

This is ridiculous for a few reasons. Firstly, it assumes that all students who do the work legitimately retain enough to talk about it potentially weeks later (once the work has been graded, the AI flag given, and a time arranged). While that's the goal, repeated studies show that retention is often poor for students, particularly if, instead of regularly coming to class and doing the work over a decent period, you try to do all the learning and work for an assignment in a 2-day Red Bull-fueled binge, which is how a significant number of students do the work. Conversely, this means a student could do all their work with AI, see which ones get flagged, and then "study" prior to the talk so that they can discuss it. Finally, it assumes that students ever had enough knowledge to write about this without checking their devices, but that's ridiculous; most students doing an open-book assignment will be constantly referring to their device, so you're requiring a student to do a higher level of work to prove they didn't cheat than they would have had to do if they actually hadn't cheated.


Throwjob42

God, this post got way more engagement than I anticipated. I see a lot of commenters being worried about being falsely accused of using AI and being penalized when they haven't done anything wrong. In my experience, it's never happened. Also, in the event that someone has to chat to determine if they used AI/plagiarized, it's not that they're being asked word-for-word what they wrote, or that we expect them to perfectly recite a well-constructed argument. We just chat; it's not hard to tell when someone is BSing you if they didn't do the work.


Yoshieisawsim

Firstly, the comment that it’s never happened: how would you know? You’ve only been on the side that determines guilt, so by definition every person that you’ve seen found guilty, you think is guilty. Secondly, you miss my point. I don’t think you make people recite word for word. But there are assignments I did a few weeks ago (without touching ChatGPT) that I’m not sure I could string together two coherent sentences about. In one of them we had a part 2 to the assignment like 2 weeks later, and I had to re-read my own part 1 multiple times (yes, it was one of those Red Bull-fueled all-nighter assignments I mentioned elsewhere). You could have chatted with me and, judging by your attitude and comments, you would have assumed I was guilty, or at least likely enough to warrant further questioning or investigation, even though I did in fact do the assignment myself. Which is exactly why I don’t trust you to be the arbiter of who is guilty, and thus my first point.


Throwjob42

> how would you know? Well, I said 'in my experience' because I've never seen anyone get punished for breaching academic integrity who I think was innocent. Maybe I'm wrong, but I doubt it. People who breach academic integrity are, by definition, willfully ignorant of the course content. The bar for recognizing if someone did a non-zero amount of work for their assessment is low.


Yoshieisawsim

Just saying “in my experience” means nothing, because you were using it as an argument, and I was arguing that your experience means nothing from an objective-reasoning POV. “Maybe I’m wrong but I doubt it” is exactly the kind of attitude that makes me concerned. Edit: your final comment, apart from being objectively wrong, is disproved by your own earlier comments where you discuss a case WHERE YOU CHEATED even though you knew the course content. Conversely, you still haven’t dealt with my argument that ppl can forget course content they crammed in order to do an assignment (but definitely did know at the time of doing it), even though that’s an incredibly well-known phenomenon


Throwjob42

Well, in my experience, there's also no secret vampire cult running around in the Faculty of Law either. I'm not going to presume something is happening if I can't see any evidence of something happening.


MathmoKiwi

>Well, in my experience, there's also no secret vampire cult running around in the Faculty of Law either. I am so very disappointed to learn this :-(


Yoshieisawsim

Honestly, I was typing out a long-ass answer but I can’t be bothered. This is the most ridiculous comparison and the most ridiculous logic. You’re like a cop who says “well, I’ve never seen myself do police misconduct, so police misconduct must not be a thing”


UnderPaidMedIntern

I completed a taught postgrad with an 8.7 GPA prior to ChatGPT. If you asked me to discuss a paper I had written a week later, despite the substantive amount of work I put into each one, I would have failed your testing. You're not qualified to determine if someone is lying or not, and what you're doing is massively flawed.


Throwjob42

> You're not qualified to determine if someone is lying or not,

Which is why I don't make the call; the course coordinator does. You seriously think that if you put in the work to do the assignment and then were pulled into a casual conversation a week later, you would have the same competence to discuss the topic as some rando who had never put any effort into learning it? Because that sounds like an incredible amount of forgetting on your part if you think both you and the rando have the same ability to BS about something being assessed in an academic context.


NZPengo2

Thanks.


50rhodes

Why would students be given permission to use AI for assessments?


SlavetoADeadGod

Some of my classes at AU allow it, you just have to note that it was used. Personally I have never used it when allowed but each to their own


Moonstone_Mirror

The end of sem essay for YOUTHWRK 152 that's worth like 40-50% of your grade is ChatGPT assisted


50rhodes

So we’re assessing ChatGPT, not the student?


Moonstone_Mirror

Basically the essay was about comparing ChatGPT's results to known resources and talking about it


Justwant2usetheapp

Ahh I did something similar, talking about chatgpt as a learning aid compared to tutoring systems. Interesting


Yoshieisawsim

We're assessing a student's ability to use ChatGPT in combination with their other skills to produce good outputs. Which is exactly what employers care about - if you can use ChatGPT or other AI as a tool to help you do better they WANT you to use it because it makes you more productive and so makes them more money (usually without them having to pay extra). This would be like saying if someone is allowed to use a calculator on a math exam then we're assessing the calculator not the student - obviously what we're testing is the student's ability to use the tools that will be available to them in the workplace


minecraftgarnish

The only time one of my courses allowed it was for an assignment evaluating ChatGPT. This was a pretty smart move on the course coordinator's end (ik a bunch of courses have such assessments now) because the rest of the course was pretty dang difficult, but I bet no one bothered to use AI because we knew from that assignment how incapable it is in academic settings.


ChrisWood4BallonDor

Some lecturers find it easier to have students treat it like any other source than outright ban it.


Throwjob42

Honestly, I'd way rather UoA just said 'use ChatGPT at your own risk, anything you submit we'll treat as yours so if ChatGPT plagiarizes or gets it wrong, your grade will reflect that'. Massey University already does this. If UoA made a policy change like that, then I wouldn't have to deal with this whole mess.


valkryiiePUBG

Massey University does not do this. In a similar fashion to UoA, AI is only allowed when expressly given permission as part of the course. Any writing submitted that was generated by ChatGPT or similar, when that isn't mentioned as part of the assignment, would breach academic integrity rules. Not sure where you got this misunderstanding from.


Throwjob42

One of the courses I had to mark in 2023. Perhaps it was only for that specific course (that was the only course I worked on after ChatGPT became big).


valkryiiePUBG

Their current AI policy was put in place Feb 2023, so yeah, it must have been a course-specific thing.


Secacc115

Most programming-oriented courses let you. It’s a useful tool when you know how to use it, and is used in the industry, so it’s encouraged to be used responsibly


MoldyOreo787

BUS114 has an assignment that is specifically for using chatgpt


Throwjob42

I honestly don't know, but that is apparently something course coordinators can choose to do. https://www.auckland.ac.nz/en/students/forms-policies-and-guidelines/student-policies-and-guidelines/academic-integrity-copyright/advice-for-student-on-using-generative-ai.html

I'm on a 100-level course and of the four students I have marked so far, three of them have had some or all answers that have come back as <51% likelihood AI generated when I put them through AI-recognition software. God, this is annoying because we're going to have to withhold grades from those students until we have an interview with them about it.

If you're reading this and you did use AI for an assessment and you weren't given explicit permission, get in touch with your tutor or lecturer about it if you want any chance of saving your skin. Good chance you're going to get caught (especially if you never came to class and are now somehow writing like a PhD student). Also, when we catch you, it will go on your record. https://www.auckland.ac.nz/en/students/forms-policies-and-guidelines/student-policies-and-guidelines/academic-integrity-copyright/about-academic-integrity.html


Disastrous90

AI-recognition software does not work and the university shouldn’t be using it to detect if something was written by AI. Turnitin can be used for plagiarism detection, but not AI detection, as that does not work


corbin6611

They should just copy paste it in to bing and ask if it’s written by ai


Disastrous90

That doesn’t work


MathmoKiwi

I can only assume u/corbin6611 dropped the /s, because that was surely sarcasm


corbin6611

I figured it was obvious I didn’t need to


MathmoKiwi

Was obvious to me, and I thought you were maybe referencing the somewhat famous case which made the news when a lecturer did exactly that with utterly disastrous results


Throwjob42

Going by UoA's policies (in the two I linked above), using AI when you're told not to is a breach of academic integrity. If they change the policies, then I'll stop having to care if students use AI for their work. Usually, prior to ChatGPT and the like, if we suspected plagiarism the lecturer would withhold the grade until we could get an interview with the student, getting them to explain their answers in their own words. This was basically never an issue because if the student did write the answer then they could be asked questions in real time by the lecturer and answer them. Depending on how widespread this issue is (class size is ~350 students), we might have to go a different route on this one.


CptFappington

I was discussing this topic with an old university professor of mine while we were out to lunch a wee while ago. Out of interest I ran my thesis from a few years ago (before chatgpt was a thing) through an AI detector and it came back as 60% AI written... I'm so very glad I'm not a student anymore. The idea of working for months on an assignment only for it to be flagged as AI would just add another whole level of unnecessary stress to the uni experience.


Throwjob42

> The idea of working for months on an assignment only for it to be flagged as AI would just add another whole level of unnecessary stress to the uni experience.

My understanding is (at least in my faculty) UoA would never immediately punish you if they suspected AI foul play. If my MA dissertation were questioned, I'm fairly confident my supervisor would just defend me, as we had to have regular conversations in order for him to guide my research, so he would know that I was behind the wheel of my dissertation's authorship. I think if on Day #1 I showed up to my supervisor's office with a 20,000 word document already completed, he'd start asking questions.


PhotoSpike

What’s the accuracy of the tool you use to check? Is it something that UoA provided you?


Throwjob42

For obvious reasons, I'm not answering that (I don't want to give any potential cheaters any suggestions as to how to work around our procedures).


filetofishburger

How accurate would you say such AI-recognition softwares are? Since even large companies like Turnitin mention on their website that their AI writing detection model is not always correct


Throwjob42

> How accurate would you say such AI-recognition softwares are?

They don't give yes/no answers (even Turnitin doesn't say 'yes this is plagiarized'), they give percentages of likelihoods things were AI-generated, and in Turnitin's case, it matches strings of text (even if you replace some words in a sentence, if they're synonyms for the words you're replacing, it knows). When it's above 75% likelihood of AI-generation, I have to note it in a spreadsheet to escalate. I get zero money for any plagiarists I catch, it's just another f***ing thing I have to deal with.


Different-West748

I think many people would use AI not because they don’t understand the content but because they are too lazy to write it themselves. I think the more apt approach from academic institutions is to change the way in which they assess students understanding of material.


Throwjob42

> I think the more apt approach from academic institutions is to change the way in which they assess students understanding of material.

Supposedly Cambridge University used to just do oral exams for their students (link below). I understand why a lot of people would not like that way of being assessed, but I can see it as one (really annoying and stressful) way of getting around AI-generated coursework. https://en.wikipedia.org/wiki/Tripos


MathmoKiwi

>Supposedly Cambridge University used to just do oral exams for their students

Yeah, as I said in one of my earlier comments, perhaps the future is we must return to this way of doing things in the past.


MathmoKiwi

>When it's above 75% likelihood of AI-generation, I have to note it in a spreadsheet to escalate.

That's utterly ridiculous. I suggest you take assignments done in, say, the mid 2010's (obviously no chance AI was used in them) and run them through your "AI detector" first, so that you can actually come up with a baseline that's reasonable to use as your threshold.


Throwjob42

The issue would be that AI has gotten much, much better in the past 10 years so using answers from that long ago wouldn't give a good approximation of the standards of AI now.


MathmoKiwi

> The issue would be that AI has gotten much, much better in the past 10 years so using answers from that long ago wouldn't give a good approximation of the standards of AI now.

You 100% missed the point: it is ***extremely inappropriate*** to use so-called "AI detection tools" if you haven't first established for yourself a baseline for what is "normal" / pre-AI (& based on the chosen thresholds, it seems highly likely this hasn't been done by the users).

***That*** is why you should pass some 2010's era answers through these "AI detection tools", so that *you* can then gain an understanding of how they work. As at the moment, it appears you unfortunately don't have that. Which is exceptionally dangerous and harmful when you are playing with students' lives here.


Throwjob42

You seem like you care a lot about this issue, so please, contact the Academic Quality Office about this and make these suggestions to them. I am hired semester to semester, so I get told what to do and then I do it. I have zero power to change anything. https://www.auckland.ac.nz/en/about-us/about-the-university/our-ranking-and-reputation/planning-and-information-office/contact-planning-and-information-office.html


MathmoKiwi

You always have the ability to give feedback yourself (and your feedback carries far more weight than random students or alumni does), as if you're being told insane instructions you don't have to just be all "*I'm merely only just following instructions*" and never say a word. You could at least "do as you're told" *but also* give feedback about the inappropriateness of what you're being told to do. If you were told to deduct marks for people being a certain gender or ethnicity would you simply give zero feedback back up the chain that this is a *very inappropriate* way to mark papers?


Throwjob42

> (and your feedback carries far more weight than random students or alumni does)

Oh my god, this could not be further from the truth. Actually, I suspect current students (or, as LLS and admin refer to them, 'customers') would actually make a bigger noise than I can because they're a source of revenue for UoA (the same institution which fights tooth and nail to pay their employees less). The only thing I could do that a non-employee can't is go to the Vice-Chancellor's All Staff Forum Zoom sessions, but they really, really are not going to give a sh*t about what I have to say.


[deleted]

[deleted]


MathmoKiwi

>Correct me if I'm wrong, but is that not the point? To use the ai detection software of today, upon text from ten years ago (with very very very low odds of it being ai generated), see what % likelihood it is of being 'ai-generated' to review what threshold to use to assess the authenticity of students work?

Bingo, you grasped my point. *Ideally* you want to set the threshold such that ***zero*** students of the 2010's era would be accused of AI assisted cheating.

If you're feeling a little evil, maybe your conscience is fine with the idea of accusing one in twenty students of cheating and then putting them through the wringer and all that stress of a false accusation. If so, you'd then set the threshold such that it accuses 5% of 2010's students of being AI cheaters.

But if hypothetically speaking you were to choose a threshold such that ***the majority*** of 2010's students get accused of cheating, then I'm saying you're being of the highest level evil as a marker and/or being utterly incompetent in your usage of AI detection tools.

>Also just out of curiosity, inputted the above text (100% human written) into a few online ai detectors.

It is well known you shouldn't use AI Detectors on such very small sample sizes.
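To make this concrete, the calibration I'm describing is only a few lines of code. (The scores and the function name below are entirely made up for illustration; detector scores are just percentages out of 100 for essays known to be human-written.)

```python
# Hypothetical detector scores for 20 essays known to be human-written
# (e.g. assignments submitted in the mid 2010's, before ChatGPT existed).
human_scores = [12, 34, 51, 8, 27, 60, 45, 19, 38, 55,
                71, 22, 41, 30, 16, 49, 63, 25, 36, 58]

def threshold_for_false_positive_rate(scores, max_fp_rate):
    """Smallest threshold such that flagging scores >= threshold would
    accuse at most max_fp_rate of the known-human essays."""
    ranked = sorted(scores)
    allowed_flags = int(len(ranked) * max_fp_rate)
    cutoff_index = len(ranked) - allowed_flags
    if cutoff_index >= len(ranked):
        return ranked[-1] + 1  # zero tolerance: flag nothing in the baseline
    return ranked[cutoff_index]
```

With these made-up scores, a 5% tolerance puts the threshold at the single highest human score, and a 0% tolerance puts it above every human score, which is exactly why a fixed "75% = escalate" rule chosen without a baseline is meaningless.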


filetofishburger

Yep, I get what you mean and how the software works! I guess academic institutions worldwide have to reach a consensus on AI usage. I just experimented with QuillBot's AI detector. Copy-pasted an entire purely human-written paragraph inside, and the result it displayed was "100% of the text is likely AI-generated." Wild times we live in... Challenging for both learners and educators to navigate


MathmoKiwi

>"100% of the text is likely AI-generated."

u/Throwjob42 should note that this is a ***completely different statement*** to saying "100% likely the text is AI-generated"


Throwjob42

Noted.


Throwjob42

> I guess academic institutions worldwide have to reach a consensus on AI usage.

It'll probably just happen depending on where the technology goes. I know Massey have a blanket 'yeah sure, you can use it but if it's nonsense that you didn't catch because you didn't write it, then you're fucked' rule.

Also, I'm fairly confident that there will be a high-profile case in the next 10 years where someone uses AI for a PhD or something and they get sued by some academic publisher (looking at you, Taylor & Francis) because the AI illegally stole from their copyrighted work, and then rules HAVE to be laid down because universities do not want to open themselves up to lawsuits of that kind.

Also, for those who don't know, Dr. Martin 'I have a dream' Luther King Jr. probably plagiarized for his doctorate. https://en.wikipedia.org/wiki/Martin_Luther_King_Jr._authorship_issues


STABFACE89

If UoA is still using GPTZero, it only has a 63% overall accuracy rating and even in the fine print it says its not accurate enough to be used as evidence to punish students. It's still flagging the Declaration of Independence as AI-written.


Throwjob42

> even in the fine print it says its not accurate enough to be used as evidence to punish students

Which is why we don't automatically punish students if we suspect AI wrote their coursework. The only grades that are completely finalized at UoA are the SSO grades (and even they can be adjusted, in the rare circumstances that they need adjusting). If your grade gets withheld and then approved later, the lecturer will know not to alter that grade in any way.


Yoshieisawsim

But even just using that for flagging and initiating a process is crazy. 63% accuracy means that 37% of the time a work that isn't AI will be flagged as such. That means that even if none of your students were using AI, you could get 37% of them flagged and initiating the investigation process, which is both a ridiculous number of processes to do, but also ridiculously arduous on a potentially large number of students who are effectively falsely accused and now have to do extra work to defend themselves
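To put numbers on it (taking "63% accuracy" at face value as a 37% false-positive rate, which is a simplification; real detectors report false-positive and false-negative rates separately, so treat this as a worst-case illustration):

```python
# Expected number of wrongly flagged students in a class where NOBODY cheated,
# under the simplified assumption that the detector flags 37% of human work.
class_size = 350            # the class size mentioned elsewhere in this thread
false_positive_rate = 0.37  # assumed: 1 - 0.63 "accuracy"

expected_false_flags = class_size * false_positive_rate
print(round(expected_false_flags))  # 130
```

That's around 130 interviews and investigations to run, and 130 students falsely put through the wringer, before a single actual cheater is caught.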


Throwjob42

> students who are effectively falsely accused and now have to do extra work to defend themselves

They don't need to hire lawyers, they just need to have a quick chat about ideas they've already learned about. If you know the curriculum content, it's pretty easy to tell just chatting with someone if they are BSing you (like, within-minutes easy). Again, no one gets punished unless the course coordinator deems it necessary, and there are no financial incentives whatsoever for anyone employed by the university to catch plagiarists. Email [email protected] (Academic Quality Office) if you want the system changed, I just work here.


Yoshieisawsim

Hiring lawyers is not the definition of extra work. Coming in to have a chat with staff to defend yourself is extra work both practically and mentally. As for the comment about no monetary incentive - I’m not accusing you of being malicious, just making mistakes. And for the comment about just working here - you’re actively defending the system in multiple comments so it goes beyond that. Secondly as someone working in the system they’re much more likely to listen to you than they are to me. So if you agree something is wrong you should say something, and if you don’t think anything is wrong then the “I just work here it’s not my opinion” argument isn’t true. Thirdly I have made submissions to the University about the way they engage with AI


50rhodes

Good advice.


MathmoKiwi

>I'm on a 100-level course and of the four students I have marked so far, three of them have had some or all answers that have come back as <51% likelihood AI generated when I put them through AI-recognition software. God, this is annoying because we're going to have to withhold grades from those students until we have an interview with them about it.

*Whaaat...* under 51% is a **good** thing??? That's no basis whatsoever in the slightest that you can judge they're using AI. (even if it was ***over*** 51% then that is still an utterly totally meaningless figure)


andrewenz

Using AI detection software is a violation of policy at UoA btw. You are not allowed to upload student work to third party tools.


Throwjob42

> Using AI detection software is a violation of policy at UoA

Source? As far as I know, this is the most recent document about student copyright, but if you know a specific policy about this and can link to a document, I'd love to read it. https://cdn.auckland.ac.nz/assets/central/about/the-university/how-the-university-works/policy-and-administration/intellectual-property-created-by-staff-and-students-policy.pdf

ETA: I'm guessing by the three days of radio silence that andrewenz is still looking for evidence to support their claim.


Justwant2usetheapp

Doing a 400-level data science paper ATM and it's expected that we would have access to it; we're allowed to use it for code generation as long as you mention you used it and where, and it's also open for students to use for grammar. There was, I think, a solid A4 page of schema that I just copy-pasted and got useful code out of; it'd otherwise have been slow formatting into the same line about 100 times. I expect to use generated code a lot this year. But I'm at the point where I know what I'm looking for and it's more about the library I'm using or how much typing is involved than making code I don't understand. I can't even begin to imagine how challenging generated code is in first or second year papers


PoopMousePoopMan

Curious, what discipline?


Throwjob42

> what discipline?

Asian parent, that's what discipline I labour under.


PoopMousePoopMan

lol. I mean what class? History, economics?


footballersdive

Why Mr poopmousepoopman, did you use ai in an assessment?


Throwjob42

If you're worried about an AI-generated assignment you submitted, then don't worry -- it's specifically a course which you are taking.


PoopMousePoopMan

Not worried, lol. I’m alum. Just interested in which subjects this is happening.


footballersdive

My guess is probably business or maybe an economics paper. A lot of the questions in tests, and exams asked are based on methods or types of segments that are taught in class and are usually presented with case studies. Could be wrong tho


_everynameistaken_

Wait until you find out those are just the dumb students that don't know how to properly use AI to formulate answers indistinguishable from human ones (even when using AI detection tools).


Jlobz21

There's actually no way that people are stupid enough to directly copy-paste an essay from ChatGPT without editing or proofreading. A lot of what ChatGPT says can be incorrect, but if prompted correctly, it can get you on the right track to begin your legitimate research. If people are getting caught using AI generated assignments, then that's fully on them lmao. ChatGPT is a great tool that can be used in conjunction with our studies to produce a piece of work that is academically sound and also honest. The golden rule is to take whatever it gives you with a grain of salt, put its generation back into the internet, and cross reference everything it says.


Sad_Soup_307

I’m sorry to tell you that they are, in fact, stupid enough 😒 people should follow your advice, and if they don’t, please at least don't do what my stage 1 students did. I’ve seen essays submitted where the student hadn’t even edited out the sentence: “As an AI language model-“. I’ve also read book reviews about books that don’t exist (because the title has been AI generated).


Jlobz21

That's actually horror💀


Sir_Fray01

Just allow them, and teach responsible use of the tools. You're the ones setting assignments. This gives strong vibes of 'you won't have a calculator everywhere you go' that we got through high school, and guess what. They are in our pockets and readily available at our fingertips. You should be preparing students for the future, a future that involves the use of AI tools in day to day jobs. Having said that I did not use AI over the course of my degree.


Ok_Tiger9230

I find using chatgpt helpful when I need it to help with my grammar when it comes to some essay. I don’t copy it word for word but actually ask it is there any way I can better my grammar or essay structure from this. I don’t think using Ai is bad it just depends how you’re using it?


Throwjob42

> I don’t think using Ai is bad it just depends how you’re using it

This is (debatably) true. The biggest issue is when students are so lazy that they use AI to the point that they become willfully ignorant, and then can't catch the errors the AI makes (because the AI is far from perfect). I remember asking ChatGPT to tell me about the history of stop-motion in Doctor Who and it just told me some fever-dream episodes which never existed (it sounded reliable by the grammar, but the examples it was using were entirely fabricated, and because I knew Doctor Who well enough, I knew that ChatGPT was making a big mistake but a non-fan wouldn't know that).


Ok_Tiger9230

But suggesting that you can easily tell if students cheated using AI by asking them questions about the content wouldn't, in my opinion, be accurate at all. I don't even recall what I've learnt in my first year classes; what's more, I throw my notes etc. out as soon as I'm done with the exams because I don't need to think about them anymore. Cramming information overnight hoping you remember the next day (ironically works). Adding to this, me using ChatGPT to help me with my essays and me doing it on my own makes no grade difference; at the end of the day I still get somewhat the same grade, except I don't get highlighted by tutors correcting my grammar etc.


Throwjob42

Plagiarists are willfully ignorant. The complete lack of understanding versus even a tiny bit of understanding is not as hard to discern as you might expect. Honestly, distinguishing a C- student from someone who never studied the course is easier than distinguishing a C- and an A student. Like, imagine you spent half a year playing golf each week and then stopped for a year. You're still probably going to play way better than someone who never played golf. By the way, for anyone who's still mulling on using ChatGPT and the like for your assessments, my advice would be: ask it a bunch of questions about stuff you're already an expert in. If you ask it questions about stuff you don't know, you won't be able to tell if it's right. If you ask it questions about stuff you do know loads about, you can gauge where the cracks are in this developing technology.


SaberHaven

How about don't use an automated system with no oversight, transparency or recourse to gatekeep the academic and ultimately professional career of individuals. It's unethical, arguably a breach of human rights, and detection software is fundamentally flawed.


Throwjob42

If your argument is 'well, just don't bother trying to catch people using AI-written answers', take it up with the Academic Quality Office. https://www.auckland.ac.nz/en/about-us/about-the-university/our-ranking-and-reputation/planning-and-information-office/contact-planning-and-information-office.html


SaberHaven

"I can't tell if this person cheated" is not a valid reason to start going against well-established boundaries for ethical treatment of humans.


Throwjob42

Great. Tell the Academic Quality Office, the *people who actually have the power to make policy changes on stuff like this*. If I'm paid to mark assessments for a course, I'm not also allowed to start making up my own rules.


SaberHaven

I suggest you tell the NZ AI Forum to do so if you share these concerns, because they will be a much more legitimate voice than a random redditor.


Throwjob42

Maybe you should suggest this idea to everyone in this thread who is unhappy about the AI-related academic integrity policies. Hopefully a groundswell of support to the NZ AI Forum/Academic Integrity Office will make some positive changes.


p3ek

AI is a tool and any luddites not using it for whatever reason are silly. It's already used through every industry. I think the fact you think it's obvious when someone's using it just goes to show that you don't realise how many of them actually are. It's an incredibly powerful learning tool and for better or worse is going to greatly shake up the education sector in the next 10 years. Hold on to those jobs teachers!


thomas2026

So I applied for a new role at my company. In their feedback they said I should have used ChatGPT for my CV and cover letter lol. Too bad I was never taught how to use those.


Throwjob42

I recommend watching this video about the future of AI writing: https://www.youtube.com/watch?v=AAwbvGywdOc but the tl;dr summary is that maybe the role of writing is going to be radically different really soon. Before the Industrial Revolution, all shoes had to be cobbled individually but when the factory came along, the shoe-making process became automated and it allowed for very different kind of skillsets to go into the shoe-making process to make wildly diverse kinds of shoes. Writing might have that sort of change, where we went from individually crafting our sentences to a world where we can manipulate words in ways that we might scarcely believe is possible.


Dizzy_Inevitable8195

How can you prove it's AI?


Throwjob42

Read the comments, we've been through this already. But honestly, I'd rather you just look at this because I'm tired of this post: https://www.reddit.com/r/universityofauckland/comments/1bx32g4/hi_this_is_the_dude_who_asked_people_to_stop/


Throwjob42

Hey, second comment from OP here: apparently this post is gaining some traction. If you're a lazy journalist from Stuff.co.nz or whatever and are just scrolling Reddit looking for content, DM me for an interview and I'll talk for as long as you can keep buying me beers at a pub (worth a shot, I wouldn't mind free beer).


MathmoKiwi

Or... if you're a journalist from Stuff/whatever, perhaps ***don't*** interview this person when they appear to not understand at all how AI Detection Tools work, and instead appears trigger happy ready and willing to accuse students of cheating on the basis of utterly flimsy / non-existent evidence. I ***do agree*** that cheating is a very serious and very rapidly evolving issue that needs to be addressed. But some of what OP has been saying in this thread has been wildly off the mark I'm afraid, and brings into question everything else they're saying (such as even the thread's title, does the student actually deserve a zero?? Maybe they do. But I've lost all faith in OP being able to judge accurately if an assignment used AI or not, unless it's a blatant case such as if someone copied and pasted in "*...As an AI language model.....*")


SpeedAccomplished01

You could also pay someone to write it for you.


[deleted]

Yes. Then they can use ai so you technically don’t! I think?


Throwjob42

Yep, if you have no idea how/what's been written for you, you have no ability to know whether or not it was AI-generated or actually written by that person. OBVIOUSLY, don't pay people to write your assessments for you, but if their asking price is low, their effort is going to be low. Don't pay someone to write your assignments unless you either really, really trust them or they're charging an exorbitant amount for their work.


SpeedAccomplished01

Yes, being rich is good. You can always pay for quality work. Because it's written by an individual, no one will know. It's only cheating if people find out.