
> Finally, students are asked to figure out prompts that will make ChatGPT give a correct algorithm and proof. I haven't managed this myself! I'm looking forwards to seeing what students manage.

Isn't there a probabilistic nature to ChatGPT replies? So even if a student finds a response that gives a correct proof, that doesn't mean it'll work every time. Or am I wrong here?



You're right, ChatGPT is probabilistic. None of this is graded by the way -- it's all just for fun and bragging rights.

I've asked students to share their full dialog, both prompts and replies, so the whole class gets to see; and I'll invite one or two to talk through their attempts. This is all just a trick to make students engage with "how do you spot bugs in a proof?", hopefully more than they would from just reading CLRS! Often, students engage well when they're hearing the material from other students.


I think this is a great idea. I love when teachers do something fun and innovative like this!


You can set a ceiling on temperature, or simply dictate a low setting, to ensure repeatable performance from the LLM. This could be on your reading list as well: https://sites.google.com/view/automatic-prompt-engineer
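To make the point concrete, here's a minimal sketch of how temperature affects sampling at decode time (this is an illustrative model of the mechanism, not OpenAI's actual implementation): logits are divided by the temperature before the softmax, so temperature 0 reduces to greedy argmax and low temperatures make the output nearly repeatable.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Sample a token index from logits after temperature scaling.

    As temperature -> 0 this approaches greedy decoding (always the
    argmax), which is why a low setting gives repeatable completions.
    """
    if temperature == 0:
        # Greedy: deterministic, always pick the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [logit / temperature for logit in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]
```

With temperature 0 the call is fully deterministic; at a low temperature like 0.1 the distribution is so peaked that the argmax wins almost every draw.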


Aside from a temperature of 0, which always results in the same completion, detailed prompts and translation examples (aka in-context learning, or few-shot prompting) can force very reliable results, say, correct 8/10 times. At that reliability, sampling several completions and taking a majority vote gives consistent results even when the temperature is non-zero.

Edit: I was not in any way rude nor saying anything incorrect.

If you want to see how to do what I’m talking about, here’s an almost finished article describing the above:

https://github.com/williamcotton/empirical-philosophy/blob/m...


Upvoted you, don't mind the false flagging



