Here's one theory for why Apple has done so little (i.e., nothing) to police fingerprinting violations after the
AppTrackingTransparency rollout: they're building a technical solution instead.
When we got Private Relay in iOS 15, that provided half of the answer (traffic that goes via the relay can't be fingerprinted, which is the whole point of it), but the missing piece was whether Apple would be willing to foot the bill for relaying all in-app iOS traffic in order to prevent 'tracking' inside apps too: your Google searches are one thing, but proxying all of your Netflix streams is an order of magnitude more traffic.
As Eric Seufert explains here, we now have a model for how Apple might close that loop: the SDK Runtime proposal in Google's Privacy Sandbox. If Apple brings the same concept to iOS, they'll have a technical way to block 'tracking' without needing to handle all native app traffic for every iOS user.
(For what it's worth, I also think this is likely.)
Over the last few months, AppsFlyer has been publishing a number of articles about their in-development Data Clean Room product. For those of us still getting up to speed on that space, here is another piece that ticks through a variety of real-world mobile use cases.
All of the examples given here strike me as valid, technically feasible, and quite interesting to consider… but there's still a critical missing link: how Apple, Google, and privacy regulators will view these solutions.
For example, ATT on iOS limits usage of data for ad 'tracking', period. That's a categorical prohibition, and there is no convenient carve-out that makes things OK if the data is going into a double-blind system (like a data clean room) first.
In other words, data clean rooms may amount to 'data laundering': even though the data passes through a double-blind system first, the usage is likely still going to count as tracking under the letter of the policy.