'Smart pricing' can look like discrimination if you can't explain it
Dynamic pricing is not new, but AI-based individualized pricing changes the optics and the stakes. When prices vary based on behavioral signals, customers tend to interpret it as unfair, even if the company frames it as 'optimization.'
The technical problem hiding inside the PR problem
- If your model uses proxies (location, device, browsing patterns), you can unintentionally create protected-class correlations without ever ingesting protected attributes (see the audit sketch after this list).
- Explainability becomes a product requirement: you'll need internally defensible answers to 'why did this user see that price?', and you'll need them quickly.
- Data minimization matters. The more data you feed the pricing system, the harder it is to argue you're not building surveillance-by-transaction.
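
To make the proxy point concrete, here is a minimal offline audit sketch. It assumes you can assemble an audit dataset that joins quoted prices with a protected attribute collected for audit purposes only; the column names, the pandas-based approach, and the 5% tolerance are illustrative assumptions, not a substitute for a real fairness methodology.

```python
# A minimal proxy-audit sketch, not a production fairness test.
# Assumes an offline audit set joining model-quoted prices with a
# protected attribute gathered for audit purposes only. Column names
# and the 5% tolerance are illustrative assumptions.
import pandas as pd

def price_disparity_report(audit: pd.DataFrame,
                           price_col: str = "quoted_price",
                           group_col: str = "audit_group",
                           tolerance: float = 0.05) -> pd.DataFrame:
    """Compare mean quoted price per group against the overall mean."""
    overall = audit[price_col].mean()
    by_group = audit.groupby(group_col)[price_col].agg(["mean", "count"])
    # Relative gap between each group's average price and the overall average.
    by_group["rel_diff"] = (by_group["mean"] - overall) / overall
    # Flag groups whose average quote drifts beyond the tolerance band.
    by_group["flagged"] = by_group["rel_diff"].abs() > tolerance
    return by_group

if __name__ == "__main__":
    # Tiny fabricated example purely to show the shape of the check.
    audit = pd.DataFrame({
        "quoted_price": [9.99, 10.49, 12.99, 13.49, 10.10, 12.75],
        "audit_group":  ["A", "A", "B", "B", "A", "B"],
    })
    print(price_disparity_report(audit))
```

A passing report doesn't prove fairness, but a failing one tells you a proxy is doing work you can't defend before a customer or a regulator does.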
What businesses should do before they ship this
- Put guardrails in place: caps on variance, fairness testing, and human-review workflows for edge cases (a minimal sketch follows this list).
- Create user-facing transparency that's actually readable. If customers feel tricked, you've already lost.
- Prepare for platform and regulator reactions: payment providers, marketplaces, and app stores can all impose restrictions if your pricing looks predatory.
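
As a rough illustration of the first bullet, the sketch below clamps a model-proposed price to a band around a reference price and flags borderline quotes for human review. The 10% cap, the 8% review threshold, and the names here are hypothetical; real limits would be set jointly by pricing, legal, and compliance.

```python
# A hedged sketch of two guardrails: a hard cap on deviation from a
# reference price, and a flag routing edge cases to human review.
# The thresholds and names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PricingDecision:
    final_price: float
    capped: bool
    needs_review: bool

def apply_guardrails(model_price: float,
                     reference_price: float,
                     max_deviation: float = 0.10,
                     review_deviation: float = 0.08) -> PricingDecision:
    """Clamp the model's price to a band around the reference price."""
    lower = reference_price * (1 - max_deviation)
    upper = reference_price * (1 + max_deviation)
    clamped = min(max(model_price, lower), upper)
    deviation = abs(clamped - reference_price) / reference_price
    return PricingDecision(
        final_price=round(clamped, 2),
        capped=clamped != model_price,
        # Borderline quotes go to a human queue instead of shipping silently.
        needs_review=deviation >= review_deviation,
    )

if __name__ == "__main__":
    print(apply_guardrails(model_price=14.80, reference_price=12.00))
    # PricingDecision(final_price=13.2, capped=True, needs_review=True)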
The core tension: AI can optimize revenue, but it can also optimize customer outrage if governance lags behind the model.
