
In my experience, an agent with "fresh eyes", i.e., without the context of being told what to write and writing it, does have a different perspective and is able to be more critical. Chatbots tend to take the entire previous conversational history as a sort of canonical truth, so removing it seems to get rid of any bias the agent has towards the decisions that were made while writing the code.

I know I'm psychologizing the agent. I can't explain it in a different way.
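A minimal sketch of the "fresh eyes" pattern described above: instead of letting the reviewer inherit the writing session's conversation history, build a brand-new prompt that contains only the code to review. All names here (the message format, the helper function) are hypothetical illustrations, not any specific library's API.

```python
# Sketch of "fresh eyes" review: same review request, with or without
# the writing session's history. Names are illustrative, not a real API.

def build_review_messages(code, history=None):
    """Build a chat message list for a code review.

    If `history` is given, the reviewer inherits the writing session's
    context (and its biases). Passing None simulates "fresh eyes".
    """
    system = {"role": "system",
              "content": "You are a critical code reviewer. Find problems."}
    review_request = {"role": "user",
                      "content": "Review this code:\n\n" + code}
    return [system, *(history or []), review_request]

# The writing session's history: the agent tends to treat decisions
# recorded here as canonical truth when reviewing.
writing_history = [
    {"role": "user", "content": "Write a cache with no size limit."},
    {"role": "assistant", "content": "def cache(): ..."},
]

biased = build_review_messages("def cache(): ...", writing_history)
fresh = build_review_messages("def cache(): ...")

# The fresh prompt carries no trace of the earlier decisions.
assert len(biased) == 4
assert len(fresh) == 2
```

Sending `fresh` to a separate agent (or a different model entirely) is what strips the bias: the reviewer never sees the rationale that produced the code, only the code itself.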




I think of it as additive bias, i.e., "don't think about the pink elephant." Not only does this fail to help LLMs avoid pink elephants, it guarantees that pink-elephant information is now part of the inference when it was not before.

I fear that thinking about problem solving in this manner to make LLMs work is damaging to critical thinking skills.


Fresh eyes, some context, and another LLM.

The problem is information fatigue from all the agents plus the code itself.



