I am passionate about feedback when it comes to hockey player development, because I believe it is probably the most important factor in improving player performance. Darryl Belfry, regarded as one of the best player development coaches in the world, uses actual game analysis as his primary way to give players feedback on areas for improvement.
As the governing body of hockey in the U.S., USA Hockey understands the importance of player feedback. At the USA Hockey 16/17 Girls Camp which took place in Oxford, OH this past June, feedback was highlighted in the parent meeting as a key component of the camp. In Part I of this post about the USA Hockey Girls Camp feedback, I wanted to focus on understanding the three levels of feedback utilized during and after the camp. Part II of this topic will discuss my thoughts on how effective the feedback process has been.
1. On-Ice Feedback – During Practice and Games
Just as with their regular teams, coaches were quite consistent in talking to players individually and in groups during practices, sharing specific, tactical ways to improve in a drill or situation. The same happened when a player came to the bench after a shift during a game: coaches would lean over and give advice on adjustments that could improve the player's effectiveness. These situations are quite comfortable for the coaches at an event like this, since most were DI coaches or former DI players. As I mentioned in my previous post about player feedback, in-game comments are the easiest for a coach to communicate.
2. One-on-One Feedback with one of the Team Coaches
All teams had two head coaches. On about the fourth day of the week-long camp, each player had a 10-15 minute conversation with one of her coaches. It is my understanding that most players were asked to do a self-review in advance of the meeting. From talking to several parents, I learned that the coach-player conversation then depended heavily on the coach: some coaches were well prepared and had video clips to show players as a way to communicate their feedback, some had a few basic priorities for the player to focus on, while others relied on the player's self-evaluation as the primary basis for the conversation. Given the variance in methods, I suspect the feedback meetings were not highly structured by the camp organizers.
3. Letter Grade and Player Development Performance Criteria
About four weeks after the end of the 16/17 Girls Camp, my daughter received by snail mail a form letter containing an evaluation that is supposed to serve as a benchmark of a player's performance at the camp. It consists of a letter grade and a rubric of "Player Development Performance Criteria". Here are the details.
At the top of the player evaluation sheet, each player was given a rating of A, B, or C with the following explanation:
“A” grade = Excellent – ranks in the top 1/3 of players at camp
“B” grade = Good – ranks in the middle 1/3 of players at camp
“C” grade = Below average – ranks in the bottom 1/3 of players at camp.
The Player Development Performance Criteria offered five possible selections (from best to worst):
- Very Good
Each skater then had an "X" marked in one of those five boxes for each attribute, across two categories: general attributes and position-specific attributes. Here are those attributes:
- Makes Possession Plays (i.e. keep team on offense; limited turnovers)
- Angling: pressure to take away time/space; dictate play with body/stick
- Stick Positioning
- Quick Transitions
- Off-Puck Habits & Puck Support
- Scoring Ability
- 200-Ft Player
- Skating Ability (north/south; agility; speed)
- D-Zone Execution First
- Puck Retrievals
- Good First Pass or Exit
- Win Race Back to D-Side of Play/Net
- Win Board Battles
- Deter Offensive Opportunities
- Scan to Make Exit Play; Fast Transition to Breakout
- Work Well with D-Partner
- Gap Control: (North/South & East/West)
- Puck Retrievals & Ability to Stay Off the Wall
- Ability to Leave Perimeter and Gain Inside Ice
- Owning Space with Puck
- Scanning/Awareness of Teammates & Opponents
- Use Teammates to Make Plays
- Zone Entry: Ability to create depth/layers/lanes
- Create & Maintain Offense
I don’t know the process that was used to aggregate the evaluators’ feedback, but I assume they collected a populated rubric from each evaluator for a position and then averaged the selections. (I hope they used some online tool for this, because there are lots of ways to simplify collecting the information.) I suppose this compiled data then became the rating for each item on a player’s Development Performance Criteria. I would further assume the average across all Development Performance Criteria was calculated, and each player was force-ranked into one of the three tiers to produce the letter rating of A, B, or C based on which third she ranked in.
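To make my speculation concrete, the aggregate-then-rank process I am imagining could be sketched as follows. This is purely hypothetical: the function names, the 5-to-1 numeric mapping of the rubric boxes, and the exact tier cutoffs are my assumptions, not anything USA Hockey has confirmed.

```python
# Hypothetical sketch of the speculated evaluation pipeline.
# Assumes each rubric selection is mapped to a number (5 = best box, 1 = worst).

def average_score(sheets):
    """Average one player's rubric selections across every evaluator
    sheet and every attribute. Each sheet is a list of numeric scores."""
    scores = [score for sheet in sheets for score in sheet]
    return sum(scores) / len(scores)

def assign_grades(player_averages):
    """Force-rank players by average score into thirds:
    top third -> A, middle third -> B, bottom third -> C."""
    ranked = sorted(player_averages, key=player_averages.get, reverse=True)
    n = len(ranked)
    cut_a, cut_b = n // 3, 2 * n // 3  # tier boundaries
    return {p: ("A" if i < cut_a else "B" if i < cut_b else "C")
            for i, p in enumerate(ranked)}
```

Under this scheme a player's letter grade is entirely relative: it says nothing about her absolute score, only which third of the camp she landed in.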
Other than the rating and the rubric box selections, no personalized information was included in the feedback: no short paragraph summary from the coach or evaluators (like you would see on a student report card) to provide additional context.
It is important to note that the ratings are based on the criteria described above. If different criteria were used (which I will discuss in the next post), a player's rating could change depending on how closely those criteria matched her capabilities.
In Part II on this topic I will share my perspective on the good, the bad and the ugly of this feedback process.