A Solution to the Deluge of Bug Bounty and FOSS AI Slop Submissions
It seems many maintainers are drowning in AI-generated submissions to bug bounties and FOSS projects. Many are frustrated by the volume of low-quality PRs, or by submitters who don't seem to understand what they are submitting.
Some maintainers are resorting to extreme measures or crying out for help. One is attempting to set up a whole new system of social trust to address the issue.
I'm happy to see experimentation in this area, and hope something shows efficacy and traction soon. I want to throw another idea out into the ether that I think could help solve the problem.
But first, let's analyze the problem:
Generative AI coding has become very cheap. Depending on how people use it, they either pay a flat fee, which gives them a marginal cost of zero for AI-generated code (up to some usage limit), or they pay a very small per-token fee. In either case, the marginal cost of producing bug bounty submissions and open source pull requests has fallen to near zero, so people are producing far more of them. With one of these subscriptions, it's easy to fire off hundreds or thousands of submissions quickly. Submitting PRs or bug reports never carried a monetary cost, but it used to take a person's time, which is valuable. With LLMs, even that cost has shrunk dramatically.
The potential benefits to submitters vary. For bug bounties, the draw is a possible financial reward; for PRs, it may be reputation, a résumé line, or a chance to test a new AI bot they whipped up.
Maintainers are annoyed by the low quality of submissions, in part because reviewing PRs or bug bounty reports costs them the opportunity cost of their time. The flood also crowds out reviews of submissions from new contributors, who cannot be trusted by default.
I believe there's a relatively easy solution: maintainers can charge a small fee to submit a PR or bug bounty report to their project. If the maintainers believe the submission was made in good faith (whether or not they accept it), they return the fee. If they don't believe it was made in good faith, they keep it. Maintainers have an incentive to return the fee to good-faith contributors so that good-quality submissions keep coming. Submissions not addressed in a timely manner could be refunded automatically.
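To make the deposit lifecycle concrete, here is a minimal sketch of the rules above as a tiny state machine. Everything here is illustrative and hypothetical (the `Deposit` class, the `review` method, and the 30-day `REVIEW_WINDOW` are my own inventions, not any real platform's API): a held fee is refunded if the maintainer judges the submission good faith or lets the review window lapse, and forfeited otherwise.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical auto-refund window; the post leaves the exact timeout open.
REVIEW_WINDOW = timedelta(days=30)

@dataclass
class Deposit:
    submitter: str
    amount_cents: int
    submitted_at: datetime
    status: str = "held"  # held -> refunded | kept

    def review(self, good_faith: bool, now: datetime) -> str:
        """Resolve the deposit: refund good-faith work, keep the rest."""
        if now - self.submitted_at > REVIEW_WINDOW:
            self.status = "refunded"  # timed out: automatic refund
        elif good_faith:
            self.status = "refunded"  # good faith, even if the PR is rejected
        else:
            self.status = "kept"      # bad faith: submitter forfeits the fee
        return self.status
```

Note that a rejected-but-honest PR still gets its fee back; only the good-faith judgment, not the merge decision, controls the money.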
That's it¹. That alone should greatly reduce AI slop submissions. It takes something with a marginal cost near zero and raises it enough to stop mass submission of slop, because submitters risk losing money in proportion to the number of slop submissions they make. I don't know exactly what the price should be; maybe $1, and maybe it varies by project. It's hard to know what price will be enough to stop the deluge, but I don't think it would need to be very high to deter mass offenders, and I'm confident such a price exists.
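The "losing money in proportion to slop" claim is simple expected-value arithmetic. The numbers below (a $1 fee, the refund rates, the submission volumes) are illustrative assumptions of mine, not data from any real program:

```python
FEE = 1.00  # dollars per submission; the $1 guess from the text

def expected_cost(submissions: int, good_faith_rate: float) -> float:
    """Expected money forfeited: fees come back only for good-faith work."""
    return submissions * FEE * (1 - good_faith_rate)

# A careful contributor whose work is almost always judged good faith
# risks pennies, while a bot spraying 1,000 slop submissions stakes
# nearly its whole deposit pile.
careful = expected_cost(5, good_faith_rate=0.95)      # ~$0.25 at risk
spammer = expected_cost(1000, good_faith_rate=0.05)   # ~$950 at risk
```

The asymmetry is the whole mechanism: the fee is nearly invisible to honest contributors and ruinous at slop scale.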
To avoid discouraging good submissions, the outcome of each submission should be transparent, and the record of whether a fee was returned should sit outside the maintainers' control. That way, would-be contributors can audit a project's track record of refunding good-faith efforts. Adding friction to the PR or bug bounty process will still risk deterring some good-faith submissions from people worried about losing their fee. But we should expect that to mostly filter out lower-quality good-faith submissions (which have always existed, just not in these quantities), because those submitters are the least confident their work will be judged as good faith.
One real-life drag on such a system is that financial transactions normally come with small transaction costs, so each project would need to decide who eats that cost. I'd guess that large, important projects would ask submitters to eat it, while small projects desperate for help might volunteer to absorb it themselves. Between that and the pricing, there's plenty of room to experiment. Transaction costs are typically very small, and if the fee solves the problem, it could easily be worth it.
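To put rough numbers on "who eats the cost," here is the arithmetic under an assumed card-style fee schedule of 2.9% plus $0.30 per transaction (a common ballpark, not a quote for any specific payment processor):

```python
def submitter_pays(deposit: float, pct: float = 0.029, fixed: float = 0.30) -> float:
    """Large project: submitter covers fees so the full deposit is held."""
    return deposit + deposit * pct + fixed

def project_receives(deposit: float, pct: float = 0.029, fixed: float = 0.30) -> float:
    """Small project: the project absorbs fees out of the deposit."""
    return deposit - deposit * pct - fixed

# On a $1 deposit: the submitter pays about $1.33, or the project
# holds about $0.67 -- the friction is cents either way.
```

Either allocation leaves the deterrent intact; the choice is mostly a signal about how badly the project wants contributions.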
No need to detect AI with more AI. No worrying about AI slop submitters changing GitHub usernames. No need to close off projects or end bug bounties. Just put a small, refundable cost on submissions and watch the slop drop dramatically. Even if I'm wrong and it doesn't stop the problem, at least you can fund more maintainers to review the slop!
Footnote
¹ I mean "that's it" conceptually. Building such an application and integrating it with social coding platforms would take effort, but whoever builds it would be rewarded with the transaction costs.