Thank you for starting this discussion. My short response is: if the variability in student performance is caused by something more than random error, then a rubric can be applied in valid and reliable ways. People can develop poor rubrics, and the learning exercises within the curriculum that produce the students' work can be poorly designed, which makes it difficult to determine whether assessment rubrics are meaningful. If there is "expertise" in a discipline, then there is non-expertise. Consequently, there should be reliable ways of discriminating expert from non-expert performance (whether it's a report, a lab, a statistical analysis, research, a dance performance, etc.). The rubric is the methodology for reliably making this discrimination.
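To make "reliable discrimination" concrete: one standard way to check whether a rubric is being applied reliably is to have two raters grade the same work and compute Cohen's kappa, which measures agreement beyond what chance alone would produce. The formula below is the standard one; the "expert"/"novice" labels and the ten sample gradings are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the agreement two raters would reach by chance
    given their individual label frequencies.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two graders applying the same rubric to ten lab reports (invented data)
a = ["expert", "expert", "novice", "expert", "novice",
     "novice", "expert", "novice", "expert", "expert"]
b = ["expert", "expert", "novice", "expert", "expert",
     "novice", "expert", "novice", "expert", "expert"]
kappa = cohens_kappa(a, b)
```

A kappa near 1 suggests the rubric supports reliable expert/non-expert discrimination; a kappa near 0 suggests the agreement is mostly chance, i.e. the rubric (or the training in applying it) needs work.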
Of course, the same set of yardsticks can't work for all curricula, even though discipline-specific versions must still be invoked at the school, undergraduate, or post-graduate level in order to quantify evaluation as clinically as one can. But then they must be able to distinguish the random errors in the students' performance from the systematic ones, which might result from poor pedagogical skills on the teachers' part. The latter would commonly be visible among students as a definite downward shift, wrongly ascribed to them.
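The random-versus-systematic distinction above can be sketched statistically: random grading error scatters individual scores in both directions, while a systematic problem shifts the whole class mean by more than sampling error can explain. A minimal sketch, using only the standard library; the baseline mean and the cohort scores are invented for illustration.

```python
import math
import statistics

def shift_z(scores, baseline_mean):
    """z-statistic for a class mean against a historical baseline.

    A |z| well above ~2-3 suggests a systematic shift (e.g. a
    pedagogical problem) rather than ordinary random scatter.
    """
    n = len(scores)
    mean = statistics.mean(scores)
    sem = statistics.stdev(scores) / math.sqrt(n)  # standard error of the mean
    return (mean - baseline_mean) / sem

# Invented rubric scores (0-100) for one cohort; historical mean is 75
scores = [68, 71, 66, 70, 73, 69, 67, 72, 70, 68]
z = shift_z(scores, 75.0)  # strongly negative: a downward shift, not noise
```

A strongly negative z here points at the instruction or the rubric, not at the students, which is exactly the misattribution the paragraph above warns against.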
Nevertheless, at a researcher's level, such an attempt may be quite misleading, since such rubrics can in no possible way assure the evaluator that cognitive bias has been excluded from an experimenter's regress, especially when that influence becomes significant as confirmation bias. This bias can unfortunately construct or destroy physical understanding (see "Son of Seven Sexes: The Social Destruction of a Physical Phenomenon" by Harry Collins). The situation is further compounded in, e.g., Bell's inequality tests or other modern experiments on quantum measurement, with the observer inevitably (and unknowingly!) "producing" the results of the experiment. Then it is hard to draw a clear line between "social" and "natural" bias. It is perhaps because we do science with a heart, and are umbilically tied to our equipment, theories, and evidence...
I am sure that no single rubric will apply to all situations, but the utility of a rubric is two-fold. First, it gives the student a clear idea of the expectations and the required level of detail and explanation. This is invaluable for practical skill assessment, such as in a laboratory or in a discussion-based question-and-answer situation. Second, it makes the instructor look at what the learning outcomes of the lesson are supposed to be and, by expressing them, critically evaluate the instruction.
Not every situation will call for a rubric. An assessment involving a quantitative answer might be evaluated on a strictly correct-or-incorrect basis. But authentic assessment, which examines the process and the underlying skills we want a student to walk away with, can be thoughtfully developed in conjunction with a rubric.
I would argue that we as instructors often use our own internal rubrics when grading students. Having them 'externalized' makes the process (hopefully) transparent and constructive.