A thread appeared on UKBF this week that, on the surface, looks like a straightforward complaint about eBay. Look a little closer though, and it's actually a window into one of the most underappreciated risks facing small business owners right now.
@paulears, a long-standing member of this community, found his eBay account restricted after AI moderation removed several of his best-selling listings. The reason given? The items violated the policy on devices that can receive or transmit signals. The items in question? Aviation and marine radios, listed in the aviation and marine radios category. The category that exists specifically for aviation and marine radios.
It would be funny if it weren't costing him money.
It's Not Just eBay
Before we dig into what went wrong and what to do about it, it's worth acknowledging that this isn't an isolated quirk. @fisicx was quick to point out in the thread that Google operates in exactly the same way, with AI-managed penalties landing on businesses with no meaningful route to challenge them. @Paul Kelly ICHYB shared a remarkably similar experience with Stripe: a five-year-old web hosting account on a platform used by thousands of web hosts was suddenly flagged as non-compliant, almost certainly by an AI system, with no clear human in the loop to correct it.

The pattern is consistent: a large platform deploys AI moderation or compliance tooling, usually as a cost-saving measure; the system makes a contextually wrong decision; and the small business on the receiving end finds that human staff either can't or won't override it.
Why Does This Keep Happening?
The straightforward answer is that these systems are trained on patterns, not understanding.

An AI system trained to catch illegal surveillance equipment, signal jammers, or unauthorised transmitters will learn certain keyword and category associations. "Transmits." "Receives." "Radio frequency." Taken in isolation, those patterns look the same whether you're selling a black-market signal jammer or a perfectly legal aviation radio to a licensed pilot. The AI doesn't know the difference because nobody trained it to know the difference, and, critically, nobody built in a mechanism to correct it when it gets it wrong.
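To make the failure mode concrete, here is a deliberately toy sketch of keyword-based flagging. This is purely illustrative and is not eBay's actual system; the pattern list and function names are invented for the example. The point is that text matching alone assigns the same flags to a banned jammer and a legal aviation radio, because context never enters the decision.

```python
# Toy keyword flagger -- an illustration, NOT eBay's actual moderation logic.
# It shows why pattern matching alone cannot tell a legal aviation radio
# apart from a banned signal jammer.

RESTRICTED_PATTERNS = ["transmit", "receive", "radio frequency", "jammer"]

def flag_listing(title: str, description: str) -> list[str]:
    """Return every restricted pattern found anywhere in the listing text."""
    text = f"{title} {description}".lower()
    return [p for p in RESTRICTED_PATTERNS if p in text]

# A black-market jammer and a perfectly legal aviation radio both trip
# restricted keywords -- the flagger has no notion of category or legality.
jammer = flag_listing("GPS signal jammer", "Blocks radio frequency signals")
aviation = flag_listing("ICOM aviation radio", "Transmits and receives on airband")

print(jammer)    # flagged on jammer-related patterns
print(aviation)  # flagged anyway, despite being a legal product
```

Anything built on matching like this will keep producing exactly the false positives described above until someone adds category context or a human correction path.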
@Gecko001 raised an even thornier version of this in the thread: if the systems that enforce the rules were themselves built or refined using AI, and those same AI systems are what sellers must now use to work around the restrictions, you have a feedback loop with no obvious source of ground truth. Who's actually in charge of the loop?
The honest answer, in many of these cases, appears to be: nobody.
The Immediate Problem: What Can You Do Right Now?
If you find yourself in paulears' position, there are a few practical steps worth trying before you resign yourself to shouting into the void.

1. Use AI to reverse-engineer the trigger
This is the suggestion that @WaveJumper floated in the thread, and it's the right instinct. Take your removed listing, feed it to an AI tool (ChatGPT, Claude, or similar), explain the situation, and ask it to identify which phrases are most likely to have triggered a policy flag. Then ask it to help you rewrite the listing in a way that accurately describes the product without using language that pattern-matches to restricted categories.
It feels absurd to use AI to fix an AI problem, and it is absurd. But it works, and it's the fastest route back to trading while you pursue the longer-term appeal.
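If it helps to see the shape of the request, here is a minimal prompt-builder sketch for step 1. The exact wording of the prompt is an assumption, not a known-good formula, and the product details are just an example; adapt both to your listing and to whichever AI tool you use.

```python
# Hypothetical prompt template for step 1 -- wording is an assumption,
# adjust to your situation and your chosen AI tool.

PROMPT_TEMPLATE = """My eBay listing below was removed under a policy on \
devices that can receive or transmit signals. The product is legal and was \
listed in the correct category ({category}).

1. Which words or phrases most likely triggered the automated flag?
2. Rewrite the title and description so they accurately describe the \
product while avoiding that trigger language.

Title: {title}
Description: {description}"""

def build_appeal_prompt(category: str, title: str, description: str) -> str:
    """Fill the template with the details of the removed listing."""
    return PROMPT_TEMPLATE.format(
        category=category, title=title, description=description
    )

prompt = build_appeal_prompt(
    "Aviation Radios",
    "ICOM IC-A25 airband transceiver",
    "Handheld aviation radio. Transmits and receives on airband frequencies.",
)
print(prompt)
```

Paste the resulting prompt into the tool, then sanity-check the rewrite yourself: the goal is accurate product description with different phrasing, not evasion of a policy your product actually breaches.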
2. Make your category context explicit
Don't assume the AI knows where your listing sits. Reinforce the context directly in the listing title and description. "Aviation-band receiver for licensed aircraft use" is doing more work than just "radio receiver," even if the latter feels more natural. You're essentially adding training context that should already exist in the system but doesn't.
3. Document everything and escalate in writing
Most platform appeals processes respond better to a calm, structured written explanation than to a phone call. State the category, state the product, state the legal framework if applicable (e.g., CAA licensing for aviation equipment), and request a human review explicitly. It doesn't always work, but it creates a paper trail and occasionally it does.
4. Don't put all your eggs in one platform
This is the harder conversation, and we'll come back to it.
The Bigger Issue: Platform Dependency as a Business Risk
Here's what this story is really about, underneath the immediate frustration.

Thousands of small eBay sellers have built businesses on the assumption that if they follow the rules, play fair, and list in the right categories, they'll be fine. paulears did all of that. It didn't matter, because the system that enforces the rules is no longer accountable in any meaningful way.
When a human moderation team makes a wrong call, there's a process. You can escalate. You can speak to someone with authority. You can cite policy back at them and expect a considered response. When an AI system makes a wrong call, and the human staff above it don't have the power or the mandate to override it, that process disappears. You're not dealing with a bad decision, you're dealing with a locked system.
This is a structural business risk, and it's one that grows as more platforms hand more decisions to AI without adequate governance. For a seller whose best-performing listings are suddenly gone with no clear appeal route, the impact can be immediate and severe.
The lesson here isn't "don't use eBay." It's: know your dependency, and have a plan for when the platform fails you.
That means cross-listing on multiple channels where possible, owning your customer relationships directly (email lists, your own website), and treating platform access as something that can be withdrawn, because it can.
What Responsible AI Implementation Actually Looks Like
The broader frustration in this thread is one I share. AI is being rolled out by large organisations as a cost-cutting tool, slotted in where humans used to sit, without the training, nuance, or override mechanisms that those humans provided.

Here's a useful way to think about it. If you hired a new member of staff and on their first day handed them the authority to suspend customer accounts, remove products, and restrict business activity, with no manager able to overrule them, you'd consider that genuinely dangerous. Nobody would do it. Yet that's exactly the structure several major platforms appear to have built.
AI should be treated the same way you'd treat a new hire. That means:
- Training and oversight: a new team member learns the specifics of your context, your edge cases, your customers. An AI system needs the same, and that investment doesn't disappear just because it's software.
- A clear escalation path: when the new hire gets something wrong, there's a human with authority who can correct it. AI systems need the same structure built in, not bolted on as an afterthought.
- Accountability: someone in the organisation needs to own the AI's outputs. If no human is accountable for a decision, that decision has no accountability at all.
- Graduated authority: you'd start a new hire on tasks where mistakes are recoverable before you give them decisions with real consequences. The same logic applies.
The Takeaway
paulears' situation is specific to eBay and aviation radios. But the underlying dynamic, AI deployed as a replacement for human discretion without the governance to match, is playing out across platforms and industries right now.

If you're a small business owner who depends on any third-party platform, the questions worth asking are:
- If this platform's AI flags my account tomorrow, what happens to my revenue?
- Is there a human I can reach who has the authority to fix it?
- What's my fallback?
Do you have a similar experience with AI moderation on eBay, Google, Stripe, or another platform? Share it in the thread below, or join the original discussion.
