I am a college football fan and like many fans, this is the time of year when I am reminded about how flawed the poll (or ranking) system is. Truth is, I am reminded each week about how silly the whole thing is. It starts in the summer, when votes are cast long before the season starts. These pre-season polls rank teams from across the nation, 1 to 25, based on how good the ‘experts’ think they will be. These uninformed rankings serve as the initial anchor point from which just about every team will surely move. Over the course of the year, wins, losses, schedule strength, and perceived ‘style points’ influence weekly changes, and at the end of the year a final ranking is recorded. All of this is subjective and highly questionable, yet accepted as the way we do things. Each week, some ‘experts’ cast their votes based on who they believe to be the ‘best’ and others make a judgment based upon who they feel is the ‘most deserving.’ Without question, there is bias based on region, conference, and favorite team.
The same is true for personal accolades that attempt to recognize the best player, the best coach, and various other creative categorizations. In the military, we do the same each year. We rank individuals amongst their peers. We have standard criteria that we use, but the application of those criteria is highly subjective. Most would say that we rank people based upon a combination of performance and potential, but it usually isn’t that simple. Do we base it on relative seniority across a peer group? Do we create a list based upon scope of responsibility? Do we focus on the outcomes generated? Is it based on who is most ready to promote to the next level? Do we penalize those who are leaving the military merely because they are leaving and won’t benefit from a performance appraisal that they may have otherwise earned? There are countless other questions to ask, and for each one it is just as easy to argue that ‘yes’ or ‘no’ is the accurate and right answer.
Is an 11-1 Oklahoma better than an 11-2 Stanford team with no common opponents and no head-to-head competition? The ‘experts’ think so, but we will never know. Is Keenan Reynolds (Navy) the best football player in college football? Not likely. Is he the most deserving based on Heisman Trophy criteria? Hard to say, but many would say yes. Ranking anything from ‘1 to n’ is a challenge. Is the process used to do so infallible? Not unless math, science, or head-to-head competition is involved. Does the result make follow-on decisions easy? Absolutely. That is, after we finish arguing why our favorite team was #5 and not #4, or why we as individuals were ranked #15 and not #8. Once we put everything in an orderly line, names become numbers, and we accept the result, everything falls into place. If we can promote eight people, we pick the names next to numbers 1-8 and promote them. If we have one trophy to give, we give it to the name at the top of the numbered list. If we have four playoff positions, we know the teams ranked 1, 2, 3, or 4 get the opportunity to compete. In doing so, have we recognized the ‘best,’ the ‘most deserving,’ both, or neither? Once we unveil the names next to the numbers we chose, did we meet the desired intent?
To many, the answer is obvious: we should recognize the ‘best.’ But without head-to-head competition, there is no way to confidently assess who is best. And even then, the winner today may be the loser tomorrow, yet the result when time expires is final and difficult to argue. Only when the criteria are publicly shared, the rules are understood and agreed upon prior to the ‘kick-off,’ and the decision makers enjoy complete trust and confidence is the result likely to be accepted by those who did not directly benefit from it.
If we see value in the rankings that result, then we ought to care enough to clearly articulate the criteria we will be using, show our work as to how we measured each candidate against those criteria, and go out of our way to strengthen the trust that those affected by the result have in us. There are times when we want the ‘best’ to be at the front of the pack and there are times when we want to recognize those we deem ‘most deserving.’ The key is to define what we value before we assign value to it and to ensure the process is beyond reproach.
- Do you agree to the terms and conditions of the process? If no, find a new game.
- Do you clearly understand the criteria? If no, ask for clarification.
- Do you trust those overseeing the process? If no, we have a huge problem.
If the answer is yes to all three, congratulations! You are co-owner of the result no matter where you or your team may have landed in the final ranking.
I have seen this process both ways. As an XO, I ranked according to performance. Here at C6F we rank based on ‘most deserving,’ and I use that term loosely. Most deserving is based on when you arrive, time in grade, whether you are in zone, whether you are in a milestone job, and scope of responsibility.
The latter provides an easy, straightforward mechanism to rank folks. In my opinion this is a lazy practice. Ranking becomes easy, and leaders do not have to provide any meaningful feedback, mentor, or develop their Sailors. Bottom line, there is no incentive to perform above and beyond, but this seems to be the process generally understood by boards. Folks are expected to start low, trend to the right, and gradually exceed the reporting senior's average. The deserving model tends to fit this thought process. It does not allow Sailors to break out, because when you do break someone out, the Sailors who have been at the command longer no longer show the expected trend to the right, and the perception is that the breakout came at the expense of those who have waited their turn.
Using a performance model requires active involvement from the entire chain of command. The chain of command needs to be engaged with a Sailor's goals and look for opportunities for the Sailor that are in line with achieving those goals. Every member of the chain of command should be able to tell the Sailor why they ranked where they did and what they need to do to rank higher next time around.
If I were King for a day, I would decouple P, MP, and EP from the ranking process for both officers and enlisted. For the enlisted, I would give the CO/Senior Rater a number of exam points to distribute among Sailors, applicable only during the next exam cycle, so evaluations would still matter, but the Rater would have more latitude in determining bonus points. If a more equitable distribution of bonus points is needed, the points could be tied to how the Sailor's average compares to the Senior Rater's average: no points for being below the Senior Rater's average, but points for being at or above it.
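To make that last idea concrete, here is a rough sketch of how such a point pool might be split. This is purely illustrative and not any existing Navy system; the pool size, the 3.80 average, the half-even/half-proportional split, and the names are all my own assumptions. The only rules carried over from the proposal above are that Sailors below the Senior Rater's average get nothing, and Sailors at or above it share the points, with more going to those further above.

```python
# Hypothetical illustration only; the pool size, averages, and split rule are assumptions.

def distribute_exam_points(sailors, point_pool=10.0, senior_rater_avg=3.80):
    """Split a fixed pool of advancement-exam bonus points among Sailors.

    Sailors below the Senior Rater's average get nothing. Those at or above it
    split half the pool evenly; the other half is shared in proportion to how
    far each Sailor sits above that average.
    """
    eligible = {name: avg for name, avg in sailors.items() if avg >= senior_rater_avg}
    points = {name: 0.0 for name in sailors}
    if not eligible:
        return points

    base_share = (point_pool / 2) / len(eligible)
    total_margin = sum(avg - senior_rater_avg for avg in eligible.values())
    for name, avg in eligible.items():
        points[name] = base_share
        if total_margin > 0:
            points[name] += (point_pool / 2) * (avg - senior_rater_avg) / total_margin

    return {name: round(pts, 2) for name, pts in points.items()}


# Example with made-up Sailors and trait averages against a 3.80 Senior Rater average:
print(distribute_exam_points({"IT1 Smith": 4.00, "CTN2 Jones": 3.90, "IS2 Brown": 3.60}))
# -> {'IT1 Smith': 5.83, 'CTN2 Jones': 4.17, 'IS2 Brown': 0.0}
```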
With the changes coming to selection boards, the goal is to identify those Officers who truly should be promoted early and those above-zone Officers who must be promoted; I don’t see how the term ‘early promote’ is applicable to in-zone and above-zone Officers. For both Enlisted and Officers, I would tie their evaluations more closely to the Senior Rater's average. Bottom line: put the responsibility on the Senior Rater.
Having just wrapped up my Navy Civilian appraisals and their new-year performance objectives, I think that system has some merit. At least the command's expectations for individuals are clearly laid out at the start of the year, and I am allowed a greater degree of latitude in determining to what extent my civilians have or have not reached the goals and objectives we mutually agreed on. There is something to be said for evaluating individuals against themselves and being able to reward them for exceeding expectations and for their innovation. Additionally, it helps to have a clearly laid-out job description, something that is missing for the majority of our Officer and Enlisted billets.
CDR Brad “Stewie” Melichar
Deputy Director of Intelligence (N2A)
Force Cryptologist
U.S. Naval Forces Europe and Africa
U.S. SIXTH Fleet