Treynor Ratio
The Treynor Ratio measures a portfolio's excess return per unit of systematic (market) risk, using beta as the denominator rather than total standard deviation.
Developed by Jack Treynor in the 1960s, the Treynor Ratio builds on the insight that in a well-diversified portfolio, unsystematic (company-specific) risk can be eliminated through diversification. What remains is systematic risk — the exposure to broad market movements that cannot be diversified away. The ratio therefore asks: how much excess return did the manager generate for each unit of this unavoidable risk?
The numerator is the portfolio return minus the risk-free rate (the same as in the Sharpe Ratio). The denominator is the portfolio's beta relative to a benchmark, typically the S&P 500. A portfolio with a beta of 1.2 that earns the same excess return as a portfolio with a beta of 0.8 will have a lower Treynor Ratio, reflecting that it took on more market exposure to achieve that result.
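The computation described above can be sketched in a few lines; the portfolio numbers here are hypothetical, chosen only to illustrate the comparison between a high-beta and a low-beta portfolio earning the same excess return:

```python
def treynor_ratio(portfolio_return: float, risk_free_rate: float, beta: float) -> float:
    """Excess return over the risk-free rate, per unit of beta (systematic risk)."""
    return (portfolio_return - risk_free_rate) / beta

# Two hypothetical portfolios, each returning 8% against a 2% risk-free rate,
# so both have the same 6% excess return -- but different market exposure.
high_beta_portfolio = treynor_ratio(0.08, 0.02, beta=1.2)
low_beta_portfolio = treynor_ratio(0.08, 0.02, beta=0.8)

print(high_beta_portfolio)  # ~0.05  -- lower: more market risk taken for the same excess return
print(low_beta_portfolio)   # ~0.075 -- higher: same excess return with less market exposure
```

Because beta sits in the denominator, doubling the market exposure while holding the excess return fixed halves the ratio, which is exactly the penalty the comparison in the paragraph above describes.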
The Treynor Ratio is most appropriate when comparing managers whose portfolios are components of a larger overall allocation. If an investor holds many funds simultaneously, the relevant risk for each fund is its contribution to the total portfolio's systematic risk, not its standalone volatility. In this context, the Treynor Ratio is more informative than the Sharpe Ratio.
For concentrated portfolios or those holding significant unsystematic risk, the Treynor Ratio can be misleading because it ignores the portion of risk that beta does not capture. A portfolio with a low beta but high idiosyncratic exposure can post an impressive Treynor Ratio while still carrying substantial total risk.
Beta estimates are also sensitive to the time window and benchmark chosen. Monthly versus daily returns, different lookback periods, and different index choices can produce meaningfully different beta values and therefore different Treynor ratios for the same portfolio.
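This sensitivity is easy to demonstrate. The sketch below estimates beta as the OLS slope of portfolio returns on market returns (covariance over variance) and compares a full-history estimate against a shorter recent window; the return series is synthetic, constructed so the portfolio's market sensitivity drifts over time:

```python
def estimate_beta(portfolio: list[float], market: list[float]) -> float:
    """OLS slope of portfolio returns on market returns: cov(p, m) / var(m)."""
    n = len(market)
    mean_p = sum(portfolio) / n
    mean_m = sum(market) / n
    cov = sum((p - mean_p) * (m - mean_m) for p, m in zip(portfolio, market)) / n
    var = sum((m - mean_m) ** 2 for m in market) / n
    return cov / var

# Synthetic monthly returns in which the portfolio tracks the market more
# aggressively in later months than in earlier ones.
market    = [0.010, -0.020, 0.030, 0.010, -0.010, 0.020, -0.030, 0.040]
portfolio = [0.012, -0.022, 0.034, 0.008, -0.015, 0.035, -0.050, 0.065]

full_window = estimate_beta(portfolio, market)            # all eight months
recent_only = estimate_beta(portfolio[-4:], market[-4:])  # last four months only

# The two windows disagree noticeably, and any Treynor Ratio built on them
# would inherit that disagreement.
print(full_window, recent_only)
```

The same portfolio yields materially different betas depending on the lookback chosen, so a Treynor Ratio should always be reported alongside the benchmark, frequency, and window used to estimate beta.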