The salary cap has been an incredibly important part of NHL hockey since arriving in the 2005-06 season. It helped create parity between franchises rather than players repeatedly flocking to the wealthiest clubs. For fans and media, it’s another factor by which to evaluate players, creating folk heroes on league-minimum deals or villainizing former stars because they aren’t living up to their new contracts.
And for teams, it’s about squeezing every penny and getting the best out of the roster year by year. The teams that manage it best are usually the consistent contenders, while the teams whose rosters are littered with albatross contracts usually find themselves at the bottom of the league.
In the coming weeks, we’ll be bringing back the salary cap rankings at Daily Faceoff, in which we look at which teams do the best job of managing their money. However, it isn’t as simple as identifying who has the most cap space or which teams have all their star players locked up for cheap. There are several different factors to consider, some more complex than others.
That’s what this article is for. Before we dive into the rankings themselves, I figured I’d take the time to explain how the system works, including what each category actually represents and the process behind creating some of them.
This introduction has been published every year as a way to remind returning readers of the process, catch new readers up to speed and provide updates where needed, but it has generally contained the same information. Not this year, however. I’ve decided to overhaul some of the evaluation tools, particularly how I define good contracts, so if you want to stay informed, it’s more important than ever to read this year’s edition.
Without further ado, let’s begin.
In previous years, the first category was referred to as “good contract percentage,” and it was relatively straightforward. I used a system that evaluated players and tiered them into different levels, did the same with contracts, and depending on where the player and their contract ended up, determined whether the contract was good or bad. Then I tallied the results for each team and expressed the good contracts as a percentage of everything evaluated.
However, that tiered system didn’t seem to be good enough on its own. It treated every good contract as equally good and every bad contract as equally bad, which is certainly not the case. More context was needed to give an accurate evaluation of each team’s ability to sign quality contracts, so I’ve decided to overhaul the system.
Now, contracts will be rated on a scale. The way in which I evaluate a player will remain the same, but how a player relates to their contract in this process is significantly different and should give a more accurate reflection of how good or bad their contract is.
Below, I will provide both a refresher on the player evaluation system and an explanation of the new contract evaluation system. Once again, when putting this system together, I looked at every forward, defenseman and goalie under contract who has NHL experience in the last three seasons, from top minute-munchers like Quinn Hughes down to Carter Mazur, who has played just 1:17 of ice time in that span. That said, it’s regular-season data only, so Alexander Nikishin and Zeev Buium aren’t included in this database because they haven’t played a regular-season game yet.
The only exception is Gabriel Landeskog, who feels like too big an omission given his salary cap hit and his established legacy in the NHL. Since his is such a unique situation, I’ve put him in the model with his metrics from his most recent regular season, 2021-22. It’s not a perfect solution, but it’s better than pretending the Colorado Avalanche don’t have a $7 million cap hit on their books.
For the skaters, I created this statistical profile using 10 different stats in three different categories.
However, unlike previous years, I did not tier the players into elite, first line, top pair, etc. Instead, I just left their ranking as is among the total number of forwards and defensemen, a detail that will be more important in a moment.
For the goalies, I created a statistical profile using six different stats.
Like the skaters, the goalies were not tiered into elite, starter, backup and replacement levels this season; instead, their rank was left as is among all the goalies.
In past seasons, this would be the part where I’d tier the different contract values into levels similar to those used in the player evaluation, then use the two results to determine whether the contract was good or bad. But this is where things change significantly.
Along with all the data I collected for player and goalie stats, I also plugged their cap hits into my database, with all the information coming from PuckPedia. From there, I took the player’s rank in my model at their position (forward, defenseman or goaltender) and compared it to where they ranked among cap hits at that position to get what I’ll simply call their “contract rating.” In theory, the top-ranked player at each position should be the highest-paid player, and the lowest-ranked player should be the lowest-paid.
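For those who like seeing the idea spelled out, here’s a minimal sketch of that comparison in Python. The data shape, field names and tie-handling are illustrative assumptions on my part, not the actual database behind the rankings:

```python
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    position: str    # "F", "D" or "G"
    model_rank: int  # rank within the position in the stat model (1 = best)
    cap_hit: float   # AAV in dollars, per PuckPedia

def contract_ratings(players: list[Player]) -> dict[str, int]:
    """Compare each player's model rank to their cap-hit rank at the same position."""
    ratings: dict[str, int] = {}
    for pos in ("F", "D", "G"):
        group = [p for p in players if p.position == pos]
        # Rank 1 = highest cap hit within the position group
        by_cap = sorted(group, key=lambda p: p.cap_hit, reverse=True)
        cap_rank = {p.name: i + 1 for i, p in enumerate(by_cap)}
        for p in group:
            # Positive = outplaying the cap hit, negative = falling short of it,
            # near zero = paid roughly market value
            ratings[p.name] = cap_rank[p.name] - p.model_rank
    return ratings
```

Under this reading, a forward who is paid like the 15th-best forward but grades out around 30th in the model lands near -15, in line with the near-market-value range discussed below.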
The goal with this new system is to get a better grasp of how good or how bad a contract is. Instead of a binary good-or-bad call, the contract rating is much more dynamic. It rewards teams when they sign a player for well below his value, punishes them when they hand bad players high cap hits and neither rewards nor punishes them for getting players close to market value.
On top of that, it eliminates some of the pickiness that came with previous iterations of my model. There were instances where a player with, for example, an elite-level cap hit would rank only in the 30s or 40s of my model, just outside the elite level, and the contract was automatically deemed bad. The new system would still give that player a negative rating, but it might only be in the -10 to -20 range, which is closer to market value than “bad.”
That said, not all the pickiness has been eliminated. Because this looks at the past three seasons of data, players who were just okay in 2022-23 and 2023-24 but broke out in 2024-25 will be hampered by the model, even if that recent breakout is the player’s new established level and they were signed to a contract that reflects it. Since I’m not able to incorporate projections, whether for younger players getting better or older players getting worse, this remains one of the few flaws in the process.
Implementing the contract rating also means that, for the first time since the beginning of the cap rankings in 2022, all contracts will be included. Before, only cap hits valued at $1 million or higher were included in this section, while all the cap hits lower than that were put into a separate category called “quality cheap deals.” Since all players evaluated have contract ratings, there’s no need to separate them anymore.
So once all of the players have been evaluated and have their contract ratings, each team’s proficiency at signing contracts is measured by adding up the contract ratings of every evaluated player it has under contract. Since every rating is either positive, negative or zero, this gives every team its own cumulative rating.
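As a rough sketch of that roll-up, continuing the hypothetical data shapes from the earlier snippet (the player-to-team mapping is assumed):

```python
from collections import defaultdict

def team_contract_totals(ratings: dict[str, int], team_of: dict[str, str]) -> dict[str, int]:
    """Sum every evaluated player's contract rating into one cumulative number per team."""
    totals: dict[str, int] = defaultdict(int)
    for player, rating in ratings.items():
        totals[team_of[player]] += rating  # team_of maps player name -> team name
    return dict(totals)
```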
Okay, now we get back to the familiar categories for the cap rankings.
Anyone with a fair amount of hockey and salary cap knowledge should know what no-trade and no-move clauses are, so I won’t waste any time there. Any player with one is counted, no matter how much they make. I then rank teams by how few they have, the idea being that teams with fewer of these clauses have more flexibility to move pieces around and free up space, and don’t have their hands tied by forced protection in expansion drafts.
If you need an example as to why having fewer clauses on your roster can be beneficial to navigating the salary cap and making moves, take it up with the Toronto Maple Leafs, who probably wouldn’t have minded trading a soon-to-be-walking Mitch Marner for a long-term certainty in Mikko Rantanen last season, but couldn’t because of Marner’s no-trade clause.
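The tally itself is as simple as it sounds; here’s a quick sketch, with the counts entirely made up for illustration:

```python
# Hypothetical counts of players carrying a no-trade or no-move clause
clauses = {"Team A": 9, "Team B": 2, "Team C": 7}

# Fewest clauses first: more flexibility to make moves, fewer forced protections
ranking = sorted(clauses, key=clauses.get)
print(ranking)  # ['Team B', 'Team C', 'Team A']
```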
The next category looks at the money counted against each team’s cap that isn’t going to players on the NHL roster. This includes buyout cap hits, retained salary in trades, cap recapture penalties, termination penalties (looking at you, Los Angeles Kings), contracts buried in the minors, and performance bonus overages.
The idea here should be pretty self-explanatory. You want as much cap space as possible to build your roster, so any cap space lost to players not on the roster (or, in the case of performance bonuses, cap space lost to work performed in a prior season) is a waste.
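If it helps to see the components in one place, here’s a small sketch of what gets added together; the field names are mine, not anything official:

```python
from dataclasses import dataclass, fields

@dataclass
class DeadCap:
    """Cap dollars not going to players on the NHL roster."""
    buyouts: float = 0.0
    retained_salary: float = 0.0
    recapture_penalties: float = 0.0
    termination_penalties: float = 0.0
    buried_contracts: float = 0.0  # only the portion that still counts against the cap
    bonus_overages: float = 0.0

    def total(self) -> float:
        # Less is better: every dollar here is cap space not spent on the active roster
        return sum(getattr(self, f.name) for f in fields(self))
```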
When you think of a core player, you think of the players that you want to build your team around for the long term. However, a core player in this scenario is only half of that.
With this category, I look at the players on each team locked up to contracts with a term of four years or longer and evaluate the quality of that group. For this, I again use those players’ statistical profiles, average them, and rank teams from best to worst. Of course, with the statistical profile being altered this season, the category will be based on each player’s rating rather than their tier, but other than that, it shouldn’t be affected too much.
The idea here is to evaluate teams based on how effectively they identify top-end players and lock them up long term, and as such, only sign depth players to shorter deals.
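Here’s a hedged sketch of that average, assuming each player’s model rank and contract term are already on hand; since rank 1 is the best, a lower average means a better core:

```python
def core_quality(model_rank: dict[str, int], term_years: dict[str, int]) -> float:
    """Average the model rank of every player signed for four or more years (lower = better)."""
    core = [rank for name, rank in model_rank.items() if term_years.get(name, 0) >= 4]
    if not core:
        return float("nan")  # no players signed long term
    return sum(core) / len(core)
```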
And finally, the most important part of a team’s cap efficiency is how much cap space it actually has. However, it’s not quite as simple as ranking teams by cap space.
Instead, I take where each team ranks from 1 to 32 in cap space and compare it to where it ranks from 1 to 32 in roster quality, under the assumption that the best team should have the least cap space and the worst team should have the most. This gives more leniency to competitive teams up against the cap, because that’s exactly where they should be to be as competitive as possible, and punishes bad teams that are up against the cap, because they shouldn’t have that much salary on the books.
To rank the rosters, I once again turned to each player’s stat profile. I picked the 12 forwards, six defensemen and two goalies on each team with the highest average ice time, established that as the “main roster,” and then evaluated the team based on those players. Again, with the new stat model, this is now based on each player’s specific league-wide ranking rather than the tier they were in, so, much like Quality of Core, there will be some changes, but the category is largely unaffected.
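Here’s one way to read that comparison in code. The rank lists are assumed to already exist (with roster quality coming from each team’s top 12 forwards, six defensemen and two goalies by ice time), and the exact scoring is my own interpretation rather than the precise formula behind the rankings:

```python
NUM_TEAMS = 32

def cap_efficiency(cap_space_rank: dict[str, int], roster_rank: dict[str, int]) -> dict[str, int]:
    """Score how well each team's cap space lines up with its roster quality.

    cap_space_rank: 1 = most cap space available, NUM_TEAMS = least.
    roster_rank:    1 = best roster, NUM_TEAMS = worst.
    The best roster is expected to have the least space and the worst roster the most,
    so the expected cap-space rank is NUM_TEAMS + 1 - roster_rank. Positive scores mean
    more flexibility than the roster demands; negative scores flag teams that are capped
    out without the roster quality to justify it.
    """
    return {
        team: (NUM_TEAMS + 1 - roster_rank[team]) - cap_space_rank[team]
        for team in cap_space_rank
    }
```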
Getting the final ranking of all 32 teams is extremely simple: I take each team’s ranking in the five categories, average them, and order the teams by that average.
It gives us a good picture of which teams do the best overall job across the different facets of managing the salary cap and which do the worst. Doing poorly in one category won’t completely destroy a team’s ranking, just as doing well in one won’t save it.
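And one last quick sketch of that aggregation, assuming the five per-category ranks have already been collected for each team:

```python
def final_rankings(category_ranks: dict[str, list[int]]) -> list[str]:
    """Order teams by their average rank across the five categories (lower average = better)."""
    avg = {team: sum(ranks) / len(ranks) for team, ranks in category_ranks.items()}
    return sorted(avg, key=avg.get)
```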
And that’s it. Starting next week, the true rankings begin, and we’ll work our way through all 32 teams to figure out which one is the best of the best. I’m also going to try to update the rankings as we go along in this series, so it’s possible that teams may jump from one part of the series to another if their ranking changes, but I’ll make sure to keep you updated at the beginning of each article as we go along.