Unless you have a system that is a black box, it will always be open to some sort of manipulation (by folks such as myself). But you can drive a truck through the manipulation hole of the RPI ... so the bar is really low if you want to implement a non-black-box improvement to the system. It is certainly not my preference, but an improvement is not difficult ... one that goes deeper in determining real strength of schedule (SOS).
My preference would be a true learning system ... a machine learning approach (machine learning is really just statistical learning). The system would train on past outcomes, using team resumes and other features as the inputs from which it would learn ... and ultimately predict the optimal field of participants. While I think this would yield the best result, it would suffer from the same thing that sometimes plagues other machine learning systems: explainability. These models are so complex that they are not understandable to the typical human. And the average fan would likely not accept that when their team is left out or is not seeded in a manner that meets their expectations.
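To make the idea concrete, here is a minimal sketch of that kind of learner. Everything in it is an assumption for illustration: the resume features (win percentage, strength of schedule, quality wins), the synthetic stand-in for past committee decisions, and the simple logistic model are placeholders for a real system, which would train on actual historical selections and a much richer feature set.

```python
import math
import random

random.seed(0)

def make_team():
    # Hypothetical resume features (assumed for illustration):
    # win percentage, strength of schedule, quality wins.
    win_pct = random.uniform(0.3, 1.0)
    sos = random.uniform(0.0, 1.0)
    quality_wins = random.randint(0, 10)
    # Synthetic stand-in for past committee decisions: a hidden rule
    # the learner must recover from outcomes alone.
    score = 2.0 * win_pct + 1.5 * sos + 0.1 * quality_wins
    selected = 1 if score > 2.2 else 0
    return [win_pct, sos, quality_wins / 10.0], selected

train = [make_team() for _ in range(500)]

# Tiny logistic regression fit by full-batch gradient descent.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 1.0
for _ in range(500):
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for x, y in train:
        z = b + sum(wi * xi for wi, xi in zip(w, x))
        p = 1.0 / (1.0 + math.exp(-z))  # predicted selection probability
        for i in range(3):
            gw[i] += (p - y) * x[i]
        gb += p - y
    for i in range(3):
        w[i] -= lr * gw[i] / len(train)
    b -= lr * gb / len(train)

def predict(x):
    # Select a team if the model's score clears the decision boundary.
    return 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

accuracy = sum(predict(x) == y for x, y in train) / len(train)
print(f"training accuracy: {accuracy:.2f}")
```

In practice you would use real historical fields, regularization, and an off-the-shelf library (scikit-learn, gradient-boosted trees, etc.) rather than hand-rolled gradient descent; the point is only that the selection rule is learned from outcomes rather than hand-coded ... which is exactly what makes the resulting weights hard to explain to a fan.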
Even an ML approach can be subject to manipulation (not by teams ... but by the ML engineers who build the system) ... in the training data that is used to drive feature selection and to ultimately train the model. We see this with ChatGPT, Bard, and others. But at least here you can institute some controls that make this an honest approach to the problem. And again, it would not be subject to manipulation by schedule makers.
Brian