Something missing here is process. Let's step back and answer some questions:
- Do we have a good (or at least agreed-upon) understanding of weapon mechanics? Do we know how one-hand, two-hand, and dual wield differ?
- Do we understand how weapon speed affects skill shots, vs how it affects auto attacks?
- Do we understand how skill shots affect auto attacks?
If we can't agree on that part, there's no point discussing relative damage, is there? That'd be putting the cart way before the horse.
Now, if we do understand the above, the next step should not be to go out and test it, but to use that information to predict what should happen. Build a little variance into the model, just to account for player reaction times, differing play styles, etc.
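To make that concrete, here's a minimal sketch of what such a model could look like. Every constant in it is a made-up placeholder rather than a real game value, and it deliberately ignores the skill-shot/auto-attack interactions from the questions above; the point is only that a model with variance built in predicts a range, not a single number.

```python
import random

# All of these constants are hypothetical placeholders -- substitute
# whatever values the mechanics discussion actually agrees on.
SWING_TIME = 2.4       # seconds per auto attack (assumed)
AUTO_DAMAGE = 100      # average damage per auto attack (assumed)
SKILL_DAMAGE = 250     # average damage per skill shot (assumed)
SKILL_COOLDOWN = 6.0   # seconds between skill shots (assumed)

def predict_dps(duration=300.0, reaction_sd=0.25):
    """Predicted DPS over `duration` seconds, with a normally
    distributed delay on each skill use standing in for player
    reaction time and play style."""
    damage = int(duration / SWING_TIME) * AUTO_DAMAGE
    t = 0.0
    while t < duration:
        damage += SKILL_DAMAGE
        # Human delay pushes each skill use back a little.
        t += SKILL_COOLDOWN + abs(random.gauss(0.0, reaction_sd))
    return damage / duration

# Run the model many times: the output is a predicted *range*,
# which is what the collected data will later be tested against.
runs = sorted(predict_dps() for _ in range(1000))
print(f"predicted DPS range: {runs[0]:.1f} .. {runs[-1]:.1f}")
```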
Now, once we have a working hypothesis, we test. Ideally, we get results from a variety of sources. One player, no matter how many iterations they perform, will fall into a distinct pattern. The data collected won't necessarily reflect how the system works as a whole, but how it works for that player. And we need to collect a reasonable sample size. Too few iterations and random chance looms large.
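A toy illustration of both points, again with made-up numbers: each simulated player drags a personal bias into every one of their parses, so piling up iterations from a single source never washes it out, while small samples leave a wide interval no matter who collected them.

```python
import random
import statistics

def ci95(samples):
    """Rough 95% confidence interval for the mean, assuming
    roughly normal parse-to-parse variation."""
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / len(samples) ** 0.5
    return mean - 1.96 * sem, mean + 1.96 * sem

random.seed(1)
TRUE_DPS = 83.0  # pretend this is what the system really produces

def parses_from(player_bias, n):
    # A player's habits shift all of their parses the same way, so
    # more iterations from one player never wash that bias out.
    return [TRUE_DPS + player_bias + random.gauss(0, 8) for _ in range(n)]

# Sample size: few parses leave a wide interval, pure random chance.
for n in (5, 50, 500):
    lo, hi = ci95(parses_from(0.0, n))
    print(f"{n:3d} unbiased parses -> 95% CI {lo:.1f} .. {hi:.1f}")

# Variety of sources: one heavy grinder vs. ten different players.
one_player = parses_from(player_bias=5.0, n=500)
ten_players = [x for _ in range(10)
               for x in parses_from(random.gauss(0, 5), 50)]
print("one player, 500 parses :", "%.1f .. %.1f" % ci95(one_player))
print("ten players, 500 parses:", "%.1f .. %.1f" % ci95(ten_players))
```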
Then we need to see how well the data fits our model. How good were our predictions? Do we have results that are way out of range? Skewed toward one edge of our predicted range? Did we miscalculate in our model? How do we adjust?
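Continuing the sketch, asking those questions of the numbers can be as simple as laying the model's runs alongside the collected parses and looking for offset, out-of-range results, and skew:

```python
import statistics

def check_fit(model_runs, observed):
    """Put the model's predicted runs next to the collected parses
    and report the symptoms worth acting on."""
    lo, hi = min(model_runs), max(model_runs)
    outside = [x for x in observed if not lo <= x <= hi]
    print(f"model mean {statistics.mean(model_runs):.1f}, "
          f"data mean {statistics.mean(observed):.1f}")
    print(f"{len(outside)} of {len(observed)} parses fall outside "
          f"the predicted range {lo:.1f} .. {hi:.1f}")
    # Skew: are the parses piling up against one edge of the range?
    below = sum(1 for x in observed if x < statistics.median(model_runs))
    print(f"{below} of {len(observed)} parses sit below the model's median")
```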
And adjust and repeat until the model reflects the data we're collecting to an acceptable degree.
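Closing that loop could be as crude as nudging one assumed constant until the model's mean lines up with the data's, then re-running the fit check. `AUTO_DAMAGE` here is the same hypothetical knob from the first sketch, so this is illustration, not a real adjustment procedure:

```python
import statistics

def calibrate(observed_mean, tol=0.5, max_rounds=20):
    """Nudge one assumed constant until the model's mean DPS sits
    within `tol` of the data's mean. Purely illustrative: a real
    adjustment would revisit the mechanics, not just scale a knob."""
    global AUTO_DAMAGE  # the hypothetical constant from the first sketch
    for _ in range(max_rounds):
        model_mean = statistics.mean(predict_dps() for _ in range(200))
        if abs(model_mean - observed_mean) <= tol:
            return model_mean                      # model and data agree
        AUTO_DAMAGE *= observed_mean / model_mean  # proportional nudge
    return model_mean
```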
Lastly, we publish the model. We don't need to publish the whole of the data. We know the data collected will fit the model. We can invite others to test the model for themselves, confident in our results. Not to mention, the model will be a lot easier to read and understand than all the data interpretation.
A lot of work, you say? Yes, it is. But what price are you willing to pay to publish correct and robust information? Anyone can publish a "This, I believe" treatise, and one almost certain to be challenged by someone with a differing belief. But, if you do it right, then even in the face of so-called "conventional wisdom," the work will stand.
I don't really expect too many people to go through all this. They ought to, but I'm a realist. What I mostly intended to show is that the failings we're seeing here are not failings of people, but failings of process. If we had a solid process for doing this kind of work, we could better pull people out of the equation.