We should have a new kind of test expectation that precisely defines how many pixels may differ from the reference, to avoid regressions and to show improvements.
<rdar://problem/114138795>
An older proposal was to add a mask image that precisely defines which pixels are allowed to differ. That's more complicated to implement, of course, especially since we'd need tooling to create such masks.
Fuzzy matching can specify a range of differing pixels (but not which ones).
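For context, web-platform-tests reftests can already declare fuzziness inline via a meta tag; an illustrative example (values chosen arbitrarily here):

```html
<!-- Allows up to 100 pixels to each differ by at most 2 color-channel
     values before the reftest is considered a failure.
     (Syntax from the web-platform-tests fuzzy-matching convention.) -->
<meta name="fuzzy" content="maxDifference=0-2;totalPixels=0-100">
```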
Oh, I think the request here is to put the fuzzy matching data in TestExpectations, rather than in the test. Matthieu, could you clarify what you're asking for?
@Simon, yes, this is about our expectations of the current state, so it should go in TestExpectations (like the textual expectations with a bunch of PASS and FAIL). It's not about fuzzing per se (though it could reuse the fuzzy-matching infrastructure to determine the pixel differences).
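To make the request concrete, a sketch of what carrying that data in TestExpectations might look like. The `fuzzy=` modifier below is hypothetical syntax that does not exist today, and the test path is made up:

```
# Hypothetical syntax: the fuzzy= modifier is a sketch, not existing
# TestExpectations grammar. The idea is to record, per test, how many
# pixels may differ (and by how much) from the reference, so any
# regression past that bound fails and any improvement can tighten it.
fast/images/some-test.html [ Pass fuzzy=maxDifference=0-3;totalPixels=0-50 ]
```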