> Finally, students are asked to figure out prompts that will make ChatGPT give a correct algorithm and proof. I haven't managed this myself! I'm looking forward to seeing what students manage.
Isn't there a probabilistic nature to ChatGPT replies? So even if a student finds a response that gives a correct proof, that doesn't mean it'll work every time. Or am I wrong here?
You're right, ChatGPT is probabilistic. None of this is graded by the way -- it's all just for fun and bragging rights.
I've asked students to share their full dialogue, both prompts and replies, so the whole class gets to see; and I'll invite one or two to talk through their attempts. This is all just a trick to make students engage with "how do you spot bugs in a proof?", hopefully more than they would from just reading CLRS! Often, students engage well when they're hearing the material from other students.
Aside from setting the temperature to 0, which makes the completion deterministic, careful prompt details and worked examples (i.e. in-context learning, or few-shot prompting) can make the results very reliable, say correct 8 times out of 10. At that point, sampling several completions and taking a majority vote gives consistent results even at non-zero temperature.
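The sample-and-vote idea can be sketched in a few lines. This is a minimal illustration, not an actual ChatGPT call: `noisy_sampler` is a hypothetical stand-in for a model queried at non-zero temperature that answers correctly about 8 times out of 10.

```python
import random
from collections import Counter

def sample_and_vote(sample_fn, n=10):
    """Draw n completions from a (possibly stochastic) sampler
    and return the most common answer (majority vote)."""
    answers = [sample_fn() for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Hypothetical stand-in for an LLM at non-zero temperature:
# returns the right answer ~80% of the time, a wrong one otherwise.
def noisy_sampler():
    return "O(n log n)" if random.random() < 0.8 else "O(n^2)"

random.seed(0)
print(sample_and_vote(noisy_sampler, n=10))  # the majority answer wins despite the noise
```

Even though any single sample can be wrong, the vote over 10 samples almost always lands on the majority answer, which is why the overall pipeline behaves consistently.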
Edit: I was not in any way rude, nor did I say anything incorrect.
If you want to see how to do what I’m talking about, here’s an almost finished article describing the above: