I do not have a strong opinion regarding DSR yet ... as the developers have offered very little information about it. Not nearly enough to open it up to proper statistical testing and scrutiny. The only thing you can really do is assess/back-test final results (which is not perfect either, since postseason play is non-neutral until you get to the CWS/WCWS ... but it is a start). I think for it to get any serious consideration, the developers will need to open it up for analysis.
I am also skeptical of a few things it claims to address. One is the heavy weight placed on SOS, which gives schools from major conferences an unfair advantage (certainly the case with the RPI ... and only partially addressed in the past). So far, I have seen no evidence that DSR does anything in this regard.
Having the value of a win or loss be fixed sounds great ... but how is that value accurately determined early in the season, when data is sparse? This is another example of the system needing to be explained in depth. What is being sacrificed to achieve that goal?
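To make the sparse-data concern concrete, here is a small simulation (this is my illustration, not anything from DSR ... the true win probability and game counts are invented) showing how much noisier a team-strength estimate is after a handful of games than after a fuller season:

```python
# Illustration only (not DSR's method): how unstable an early-season
# estimate of a team's strength is. We simulate a team whose true win
# probability is 0.60 and compare the spread of empirical win rates
# based on 5 games vs. 40 games.
import random

random.seed(1)
TRUE_P = 0.60  # assumed true win probability, for illustration

def estimate(n_games: int, trials: int = 10_000) -> list[float]:
    """Empirical win rate from n_games, repeated over many simulated seasons."""
    return [sum(random.random() < TRUE_P for _ in range(n_games)) / n_games
            for _ in range(trials)]

def spread(samples: list[float]) -> float:
    """Standard deviation of the estimates."""
    m = sum(samples) / len(samples)
    return (sum((x - m) ** 2 for x in samples) / len(samples)) ** 0.5

early = spread(estimate(5))   # roughly 0.22 ... estimates all over the place
late = spread(estimate(40))   # roughly 0.08 ... far more stable
```

Whatever "fixed value" is assigned to a February win is resting on estimates with that kind of noise in them ... which is exactly why the method needs to be spelled out.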
Detailed analysis would need to be done on Win Quality and Win Expectancy.
I am not a fan of Margin of Victory as a model feature. Pitching (and how a coach manages the pitching staff with the goal of winning ... not winning by as many runs as possible) plays too important a role in softball ... and especially in baseball ... for this to be a useful feature. It would also have unintended consequences and influence in-game decisions to the detriment of the game. The presence of the run rule further pollutes such a feature.
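The run-rule problem can be sketched simply: run-rule games end early, so the recorded margin is censored and mixes game length with dominance. A common mitigation (my illustration, not DSR's ... the cap of 7 and the sample scores are invented) is to cap the margin:

```python
# Sketch of why raw margin of victory is polluted by the run rule, and
# one common mitigation: capping the margin. The cap value (7) and the
# example games are assumptions for illustration only.

def raw_margin(winner_runs: int, loser_runs: int) -> int:
    """Uncapped run differential."""
    return winner_runs - loser_runs

def capped_margin(winner_runs: int, loser_runs: int, cap: int = 7) -> int:
    """Margin capped so blowouts (and run-rule games) stop accruing credit."""
    return min(winner_runs - loser_runs, cap)

# (winner_runs, loser_runs, innings_played)
games = [
    (9, 1, 5),   # run-rule win ... game stopped after 5 innings
    (3, 2, 7),   # full 7-inning one-run win
]
margins = [(raw_margin(w, l), capped_margin(w, l)) for w, l, _ in games]
```

Even with a cap, the feature still rewards running up the score below the cap ... which is the coaching-incentive problem above, not just a data problem.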
I really wonder whether the features of their model have been back-tested against real results. If so, publish the results. For now, if it is just entertainment ... it is what it is. But if there is ever a push for real use and adoption, it needs to be examined closely and tested rigorously.
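The kind of back-test being asked for can be very simple. Here is a minimal sketch (team names, ratings, and results are all invented for illustration): given pre-tournament ratings and actual game outcomes, how often did the higher-rated team win?

```python
# Minimal back-test sketch: fraction of real games won by the
# higher-rated team. Ratings and results below are made-up examples.

def backtest_accuracy(ratings: dict[str, float],
                      results: list[tuple[str, str]]) -> float:
    """results is a list of (winner, loser) pairs.
    Returns the share of games in which the rating agreed with the outcome."""
    hits = sum(ratings[winner] > ratings[loser] for winner, loser in results)
    return hits / len(results)

ratings = {"A": 0.82, "B": 0.75, "C": 0.60}
results = [("A", "B"), ("A", "C"), ("C", "B")]
accuracy = backtest_accuracy(ratings, results)  # 2 of 3 games agree
```

Run that over several seasons of final results (ideally only neutral-site postseason games, per the caveat above) and you have at least a first-order comparison of DSR against the RPI.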
Brian