If there were any reasonable way to do something like this, I would love to see it.
Not necessarily a bond to be paid back when accepted, but rather, something to insure against AI: "If you assert this is not AI, you deposit $10. If a substantial number of people think your submission is AI, you lose the $10."
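To make that concrete, here's a minimal sketch of the stake-and-forfeit flow. Everything here is invented for illustration: the $10 stake, the flag threshold, the reviewer floor, and the class names are all hypothetical, not from any existing system.

```python
from dataclasses import dataclass

STAKE_AMOUNT = 10.00   # dollars deposited with each "this is not AI" assertion
FLAG_THRESHOLD = 0.5   # fraction of reviewers flagging as AI that forfeits the stake
MIN_REVIEWERS = 20     # "a substantial number of people" needs some floor

@dataclass
class Submission:
    author: str
    stake: float = STAKE_AMOUNT
    flags: int = 0     # reviewers who think the submission is AI
    reviews: int = 0   # total reviewers seen so far

    def record_review(self, thinks_ai: bool) -> None:
        self.reviews += 1
        if thinks_ai:
            self.flags += 1

    def settle(self) -> float:
        """Return the amount refunded to the author: the full stake,
        or nothing if enough reviewers flagged the submission as AI."""
        if self.reviews >= MIN_REVIEWERS and self.flags / self.reviews >= FLAG_THRESHOLD:
            return 0.0       # stake forfeited
        return self.stake    # stake returned
```

Even this toy version surfaces the open mechanism-design questions: who the forfeited stake goes to, how to stop brigading from triggering forfeits, and whether the threshold should scale with the author's history.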
Right. Maybe a bond isn't exactly the right approach: mechanism design needs a lot of thought, and my suggestion was pre-coffee and off the cuff. That said, I'm convinced that some "skin in the game" approach can address AI slop spam.
Agreed. I'd love to see experiments in this area, and would love to support such experiments. I think they'd go hand in hand with a trust-oriented model.
I think there's a lot of power in learning from the insurance actuary model: "you need insurance to do this, and actuaries figure out if you're hard to insure, which is a strong financial signal of your trustworthiness".
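As a rough sketch of what the actuarial side might look like: price the required stake off the author's track record, so authors who are "hard to insure" pay more. The base rate, the loading factor, and the new-author surcharge below are all made-up numbers, just to show the shape of the idea.

```python
def required_stake(base: float, accepted: int, forfeited: int) -> float:
    """Scale the required stake by the author's forfeiture history,
    the way an insurer loads a premium for observed risk."""
    total = accepted + forfeited
    if total == 0:
        return base * 2                   # unknown authors pay a "new driver" rate
    loss_rate = forfeited / total
    return base * (1 + 4 * loss_rate)     # premium grows with past forfeits
```

The financial signal then falls out for free: a low required stake is public, quantified evidence that the system trusts you.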