Evaluation of Individual Stock and Sector Level
Published: Tue, 12 Sep 2017
Is there any method of asset allocation within a stock portfolio that can repeatedly and over time outperform a passive index (buy and hold strategy)? The objective of this study is to compare strategies that have been used over the last decades by academics and professionals alike, and to expand on that study to create a real-time portfolio at the end of year t, to observe the portfolio’s behaviour during the next year (t + 1). This portfolio, unlike those created in previous studies, is not limited to the study of individual stocks, but instead gives importance to sector allocation. In addition, this study also focuses on implementing a long-short strategy in those assets: with the same overall exposure to the market, will a long-short strategy that depends on financial metrics exhibit a better risk-adjusted return than a 100% long strategy? In other words, are financial metrics capable of not only detecting undervalued stocks, but also of detecting those overpriced?
Sector allocation is an especially important factor, yet previous studies have failed to consider sector allocation before seeking the best-ranked assets within each sector; rather, they have jumped directly to stock selection. One of the most common strategies for stock picking in the asset management industry today relies on initially choosing sectors that, from a macro perspective, are expected to outperform the market. From this stance, analysts proceed to evaluate specific stocks to choose potential winners. Despite this being the industry standard, only a scarce number of studies exist that use common ratios between sectors to analyse and devise allocation strategies that are first sector-based and then based on individual stocks.
The objective of this project, therefore, is to focus on a set of financial metrics, both at individual stock level and at sector level, to examine if there is a positive relationship between these ratios and alpha creation. In order to achieve this, a portfolio will be constructed and rebalanced yearly, according to previous end-of-year data. Several traditionally-appraised financial measures, such as the P/E ratio, the free cash flow to enterprise value ratio and the book-to-market value ratio, will be employed, as will certain profitability ratios that include data from the income statement, such as gross profit, operating profit and EBITDA. The reasoning behind using measures higher up the income statement relates to accounting choices: in comparison to net income, these measures are less affected by an individual company's accounting process. In fact, revenue and the other profit measures described are more consistent year to year than net income. This provides the rationale that these measures are better able to predict future cash flows and, consequently, next year's performance.
Forward-looking measures such as analysts' consensus recommendations and forward EPS will also be utilised and tested. Starting from the hypothesis that analysts conduct an exhaustive analysis of the financial data at year end t to predict performance during year t+1, the accuracy of their forecasts will be tested against the financial measures and valuation metrics existing at year end t.
Data will be extracted from a set of databases comprising Compustat, CRSP and I/B/E/S. Fundamental data will be extracted from end-of-fiscal-year filings, allowing a time lag of one quarter (3 months) for data release before a portfolio is rebalanced. Hence, with a fiscal year ending in December of year t, a lag in the release of data will always exist and portfolios will be rebalanced at the end of the first quarter of year t+1. Monthly returns for every stock will be compounded over the 12 months of that year.
Each year's universe of stocks will then be ranked by the different valuation metrics to construct a portfolio at year end t in order to assess the portfolio's performance during t + 1. Three different sets of portfolios will be constructed for each financial metric each year and, within each set, two strategies will be implemented. For a stocks-only portfolio, only those stocks ranking in the upper quintile (top 20%) will be used each year. For the sector-only portfolio, a market capitalisation average of each sector will be calculated, and the portfolio will be formed from the top 20% of sectors ranked in any given year. For the sector and stocks portfolio, both criteria will be applied: the portfolio is formed from the top-quintile stocks within the top-quintile sectors each year. As a whole, this assumes a long strategy, buying those stocks on a value-weighted basis each year to constitute a portfolio.
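The yearly ranking step described above can be sketched as follows. This is a minimal illustration, assuming a plain dictionary of hypothetical year-end metric values per ticker rather than the actual Compustat/CRSP extraction:

```python
def top_quintile(metrics):
    """Return the tickers in the upper quintile (top 20%) of a valuation
    metric, as used to form the stocks-only portfolio each year."""
    ranked = sorted(metrics, key=metrics.get, reverse=True)
    n = max(1, len(ranked) // 5)  # top 20% of the universe
    return ranked[:n]

def bottom_quintile(metrics):
    """Mirror-image ranking, used later for the short leg."""
    ranked = sorted(metrics, key=metrics.get)
    n = max(1, len(ranked) // 5)
    return ranked[:n]

# Hypothetical year-end gross-profit/EV values for ten stocks.
gp_ev = {"A": 0.05, "B": 0.12, "C": 0.08, "D": 0.20, "E": 0.03,
         "F": 0.15, "G": 0.09, "H": 0.11, "I": 0.07, "J": 0.18}
```

With ten stocks, the top quintile contains the two highest-ranked tickers; the same ranking is applied within sectors for the combined portfolio.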
In the long-short strategy, the bottom quintile of each respective category will be shorted, and the proceeds used to buy an extra 30% of the top quintile of stocks. Using a long-short strategy will help examine the feasibility of using these ratios to recognise overvalued stocks as well as undervalued companies, and to use this information to construct a more profitable portfolio. A 130/30 long-short strategy is used, although a 150/50 strategy or other proportions could also have been tested. The 130/30 split is chosen following the creation and later popularisation of 130/30 mutual funds and investment vehicles. This choice stems from an initial study suggesting that 130/30 was the optimal proportion of long-short positions in a portfolio, even though no empirical evidence has been found that a 130/30 split maximises alpha. Despite this, given its popularisation and position as an industry standard, our analysis proceeds with this strategy.
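One possible translation of the 130/30 description into portfolio weights is sketched below; the market-capitalisation figures are hypothetical, and the sketch assumes the short proceeds (30% of capital) are fully redeployed into the long leg:

```python
def weights_130_30(long_caps, short_caps):
    """Value-weighted 130/30 weights: 130% of capital spread over the top
    quintile, financed in part by shorting 30% against the bottom quintile.
    Net market exposure remains 100%."""
    total_long = sum(long_caps.values())
    total_short = sum(short_caps.values())
    weights = {t: 1.30 * c / total_long for t, c in long_caps.items()}
    weights.update({t: -0.30 * c / total_short for t, c in short_caps.items()})
    return weights

# Hypothetical market caps for the long and short quintiles.
w = weights_130_30({"D": 300, "J": 100}, {"E": 150, "A": 50})
# Net exposure: 1.30 - 0.30 = 1.00
```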
Ultimately, performance attribution and portfolio statistics will be calculated, such as average return, total payoff, standard deviation, Sharpe ratio and alpha according to the Fama-French three-factor model, correcting for small-minus-big and high-minus-low book-to-market value (Fama and French, 1992). This will help in our analysis of the results, providing a clear and concise indication of which ratios perform best under each strategy and at each level (sector and stock).
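The headline statistics can be computed roughly as follows; this is a sketch, and a full implementation would use the published Fama-French factor data rather than illustrative series:

```python
import numpy as np

def sharpe_ratio(returns, rf=0.0):
    """Mean excess return over its sample standard deviation
    (annualisation left aside for simplicity)."""
    excess = np.asarray(returns, dtype=float) - rf
    return excess.mean() / excess.std(ddof=1)

def ff3_alpha(portfolio_excess, mkt_rf, smb, hml):
    """OLS regression of portfolio excess returns on the three
    Fama-French factors; the intercept is the per-period alpha."""
    X = np.column_stack([np.ones(len(mkt_rf)), mkt_rf, smb, hml])
    coefs, *_ = np.linalg.lstsq(X, np.asarray(portfolio_excess), rcond=None)
    return coefs[0]  # alpha (intercept)
```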
Re-emphasising the importance of sector-level asset allocation strategies, particularly at a time in the financial industry when performance attribution analysis stresses the return from the relative weighting of sectors in portfolios, it is surprising that existing studies underscore the importance of certain ratios or fundamental data for stocks while lacking a method to identify undervalued sectors. Bunn and Shiller (2014) construct a 140-year regression series based on the relationship between the earnings of different sectors and their yields, creating a CAPE (Cyclically Adjusted Price Earnings) index that identifies sectors with upside potential. Their research indicates that market sectors show price mismatches that can be exploited; according to them, the CAPE index is capable of outperforming the market by an average of 4%. Therefore, the objective of this project is to expand on their results by examining a number of other ratios and financial fundamentals, particularly those related to profitability measures, and to investigate whether these, at both individual and sector level, are capable of forming a portfolio that outperforms the broader index and a buy and hold investment strategy.
Gray and Vogel (2012) seek to identify not only the ratios able to predict higher-performing stocks, but also those in the lower ranges; this implies detecting not only what are known in the financial investment world as value stocks, but also overvalued growth stocks. According to their research, some measures are more efficient than others in providing insight into which stocks are overpriced. Gray and Vogel (2012) conclude that EBITDA/EV and GP/EV are the metrics best able to identify overvalued stocks. The results in this dissertation agree that the GP/EV ratio is useful for identifying overvalued stocks and is hence a good metric for building long-short strategies, but they also single out free cash flow/EV as a favourite on a risk-adjusted basis for implementing a long-short strategy at stock level. The results of the following study show that stocks exhibiting a low FCF/EV experience low returns, demonstrating an ability to identify overvalued stocks. Such a discrepancy might be explained by the difference in the universe of stocks used or, more specifically, by the use of a lag for data release, which corrects the assumption that results are available to the public at the end of fiscal year t. This lag is introduced by Hughen and Strauss (2015) in their comparable study of profitability ratios in portfolio allocation.
The analysis in this project goes beyond the Gray and Vogel (2012) study and develops a portfolio strategy that buys stocks exhibiting higher ratios, complemented by a 130/30 strategy which short sells stocks exhibiting poor ratios and proportionally buys more of those exhibiting healthy ratios. As Miller (2001) shows, overvaluation of stocks is far more common, and of greater absolute value, than undervaluation, which supports the rationale for this work. However, care should be taken when dealing with long-short strategies. As suggested by Michaud (1993), costs stemming from short sales in a portfolio can prove quite significant. Jacobs and Levy (1993) argue, however, that these costs are not much higher than for a long-only portfolio, and are well under those charged by active management.
Professionals and practitioners alike have historically depended on several fundamental and financial measures to assist them in the portfolio selection process. Perhaps the most famous is the price-to-earnings ratio (P/E), along with the ratio between earnings before interest, taxes, depreciation and amortisation (EBITDA) and total enterprise value. Fama and French (1992) argue that the book-to-market ratio perhaps most accurately explains the cross-section of stock returns, and they later include it in their three-factor model.
In our approach, we include these traditional metrics, while also relying on profitability measures such as gross profit/EV, introduced by Novy-Marx (2010), and operating profit divided by market value, as presented in Fama and French's (2015) five-factor model. Novy-Marx (2010) proposes a very strong cross-sectional relation between gross profit and future returns, regardless of the financial leverage or structure of the firm; Ball et al. (2015) confirm this by constructing portfolios of highly profitable firms as measured by gross profit/enterprise value. Novy-Marx (2010) concluded that, because gross profit is the measure of profit least affected by accounting choices in the income statement, it allows a clear and normalised comparison between different companies. However, Ball et al. (2015) argue that gross profit is not significantly superior to net income (earnings) when analysing an extended time period. After analysing other measures of financial data, they conclude that operating profit, as a percentage of market value, does offer a significantly higher alpha. Therefore, this project continues with the aforementioned financial metrics, and focuses on sector and stock selection to create an annually-rebalanced real-time portfolio.
Hughen and Strauss (2015) use different financial measures to construct portfolios at sector, stock, and combined stock and sector levels. The following study complements and verifies their conclusion that profitability measures are superior indicators to traditional valuation measures such as P/E and book-to-market at all three levels, and extends their research by looking at forward-looking measures and a value-weighted approach to sector allocation, rather than the equal-weight approach used in their research. Assuming sectors to be equally weighted across the portfolio, rather than a function of the market value of their components, contradicts the notion of constructing a value-weighted portfolio: their construction of portfolios at stock level is value-weighted, whilst at sector level they equally weight each sector within their top quintile. This is a counterintuitive approach, and this paper tackles that limitation by weighting sectors according to their components' market capitalisation, making periodic rebalances within the year unnecessary and increasing operational efficiency in a real-life practical situation. It should be mentioned that the universe of stocks used in this study pertains to the S&P500, which by definition is a market-weighted index. The project finds some discrepancies with respect to Hughen and Strauss's paper, in particular surrounding the performance of the free cash flow ratio. A possible explanation is that this study states free cash flow as a percentage of total enterprise value, whilst Hughen and Strauss (2015) compute it as a percentage of market value. The approach taken here results in a much higher risk-adjusted return for the ratio, as measured by the Sharpe ratio, both for long strategies and for identifying overvalued stocks.
In their research on different financial ratios, Loughran and Wellman (2011) found that EBITDA over enterprise value offers superior performance to a predefined buy and hold benchmark. Their analysis, which covers data from 1963 to 2009, holds that EBITDA/EV has a highly significant regression coefficient with respect to future performance. Gray and Vogel (2012) confirm this hypothesis in their research on different financial metrics, analysing a 30-year period starting in 1980. This paper confirms that, at a stocks-only level, EBITDA along with gross profit, both measured as a percentage of enterprise value, offer the highest risk-adjusted returns. For the analysis at combined sector and stock level, however, EBITDA fails to show the same accuracy as in the stocks-only analysis. Therefore, the following study builds on the findings of previous studies by providing a more thorough examination at sector level.
Gray and Vogel (2012) extended their research further by considering periods of economic crisis, in order to identify which financial ratio is most appropriate during high volatility economic downturns. However, they were unable to conclude which ratio is able to identify winners or losers during periods of financial distress, because none behaves in the same systematic manner during selected periods of extreme economic contraction.
In their study of different economic coefficients and measures, Welch and Goyal (2007) conclude that the relationship between sector-level performance and macroeconomic industrial data is unstable and, at most, follows a random relationship. With that in mind, the focus of this paper is instead on building sector data as a market-weighted average of the individual microeconomic company ratios and forecasts. Each constituent's fundamental metrics at year end will be used to position the allocation of each asset for the next year based on a ranked system. This means that the analysis will be based on each stock's financial information at year end t, later aggregated to construct sector-level ratios and metrics, and not on macroeconomic or sector-level data that, according to Welch and Goyal (2007), do not provide any significant cross-relation with future performance.
Although the focus of previous literature is on attributing portfolio performance to the different ratios and metrics used, the objective of this paper is to examine whether these same metrics (mainly traditional measures, forward-looking estimates and profitability ratios) are able to exploit sector- and stock-level mispricing and generate real-time winner portfolios.
Given the availability of forward estimates in the I/B/E/S database, a period from 1990 to 2016 will be examined in this paper. The choice of time period is not a random one; rather, it is chosen from the start in order to have consistency in data across the analysis and throughout all the variables used. Gray and Vogel's (2012) work provides an exemplary illustration of the limitations of an extended data period. They use a 30-year period, starting in 1980, to evaluate which financial metric can predict future performance, and complement their analysis of fundamental metrics by looking at analysts' estimates and consensus forecasts. They recognise, however, that such information is lacking in the early years of their timeframe, and are therefore unable to test the forward-looking measures consistently across their full sample period.
Like Graham and Dodd (1934), we are interested in how normalising the different ratios and fundamentals can change our results. According to their studies, normalising, or averaging, these financial metrics over a certain period improves predictive power compared with a single-year estimate; in their analysis, the normalisation window should be between 7 and 10 years. Anderson and Brooks (2006) more recently confirmed this in a study of the P/E metric, which we also use in our analysis, albeit inverted (Earnings/Market Value). According to their study, based on the U.K. market, using an 8-year average of this ratio instead of the previous year's figure results in a 6% improvement in returns, as it filters out the noise in earnings. Following these analyses, our study will also cover ratios normalised over a series of years, concentrating on the S&P500 universe of stocks, to confirm whether this hypothesis holds in our analysis.
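This normalisation can be sketched as a trailing average; the window length defaults to the 8 years used by Anderson and Brooks (2006), and the earnings series below is hypothetical:

```python
def normalized_earnings_yield(earnings_history, market_value, window=8):
    """Graham-and-Dodd style normalisation: average the trailing `window`
    years of earnings before dividing by current market value, filtering
    year-to-year noise out of the earnings figure."""
    recent = earnings_history[-window:]
    return sum(recent) / len(recent) / market_value

# Hypothetical 10-year earnings series ($m) and current market value ($m).
history = [80, 120, 95, 110, 60, 140, 100, 105, 90, 100]
yield_8y = normalized_earnings_yield(history, market_value=2000)
```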
V.I Evaluation Metrics
This paper will focus on three different categories of data inputs. The accounting and financial research world offers an abundant choice of methods and variables with which to assess a firm's valuation, so, in order to establish the model, an initial differentiation between these variables should be made.
To start with, we look at the traditional metrics that have long been appraised by professionals in the financial industry. These comprise the inverse of the P/E ratio, given as Earnings over the Market Value of the firm; Book-to-Market Value; and Free Cash Flow to Enterprise Value. These ratios, introduced decades ago at the origins of value investing by Graham and Dodd (1934), show mixed results according to existing literature. Including these long-favoured measures in this research will prove useful when comparing them to the other measures.
Earnings will be computed following Fama and French’s (2001) approach:
Earnings = Earnings Before Extraordinary Items – Preferred Dividends + Income Statement Deferred Taxes
Book value/Market Value
Book Value will again be calculated as Fama and French (2001) propose. Following their definition,
Book Value = Stockholder’s Equity – Preferred Stock
Free Cash Flow/Enterprise Value
Analogous to Novy-Marx’s (2010) work, we compute free cash flow as
FCF = Net Income + Depreciation & Amortisation – Working Capital Change – Capital Expenditures
Enterprise Value will also need to be calculated. Following Loughran and Wellman (2011), we compute it as
EV = Market Value + Short-term Debt + Long-term Debt + Preferred Stock Value – Cash and Short-term Investments
The enterprise value variable will be used again in multiple valuation measures.
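The accounting definitions above can be collected into simple helper functions; the argument names are illustrative stand-ins for the corresponding Compustat items, not actual database field codes:

```python
def earnings(income_before_xi, preferred_dividends, deferred_taxes):
    """Fama and French (2001): earnings before extraordinary items,
    net of preferred dividends, plus income-statement deferred taxes."""
    return income_before_xi - preferred_dividends + deferred_taxes

def book_value(stockholders_equity, preferred_stock):
    """Stockholders' equity less preferred stock."""
    return stockholders_equity - preferred_stock

def free_cash_flow(net_income, dep_amort, working_capital_change, capex):
    """Free cash flow in the style of Novy-Marx (2010)."""
    return net_income + dep_amort - working_capital_change - capex

def enterprise_value(market_value, st_debt, lt_debt, preferred_stock, cash):
    """Enterprise value following Loughran and Wellman (2011)."""
    return market_value + st_debt + lt_debt + preferred_stock - cash
```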
Profitability measures as reported in the income statement will also be used as valuation methods. The focus will be on Gross Profit, EBITDA and Operating Profit. EBITDA and Gross Profit will be computed as a percentage of Total Enterprise Value, as suggested by the work of Gray and Vogel (2012), whilst Operating Profit will be taken as a percentage of Market Value. We then expand on this and compute an average of these three profitability measures, in order to analyse whether a composite metric is able to detect the cross-relation between fundamentals and future returns.
The reasoning behind using an average of these three measures stems from the work of Hughen and Strauss (2015), who find that the composite measure is less sensitive to changes in firm structure across and within sectors, as well as providing more information than a single variable. This implies that the average measure is less affected by differences in financial leverage across sectors, which results in a more standardised comparison between firms in different sectors.
Gross Profit/Enterprise Value
Once again following Novy-Marx (2010), we compute every year’s gross profit as
Gross Profit = Revenue – Cost of Goods Sold
Operating Profit/Market Value
Operating Profit, as defined in the income statement, will be used for this metric.
EBITDA, defined as Earnings Before Interest, Tax, Depreciation and Amortisation, is calculated as the simple sum of operating and non-operating income:
EBITDA = Operating Income before Depreciation + Non-Operating Income
Composite measure: equally weighted average of the three profitability ratios.
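A minimal sketch of the composite measure, assuming the three profitability figures and the two value bases are already computed:

```python
def composite_profitability(gross_profit, ebitda, operating_profit,
                            enterprise_value, market_value):
    """Equally weighted average of GP/EV, EBITDA/EV and OP/MV,
    the composite profitability signal used for ranking."""
    ratios = (gross_profit / enterprise_value,
              ebitda / enterprise_value,
              operating_profit / market_value)
    return sum(ratios) / len(ratios)
```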
The reason for selecting profitability measures higher up the income statement, rather than focusing solely on the inverse P/E ratio (Earnings/MV) or expectations of forward earnings, is that the higher up the income statement we go, the more consistent the data proves to be year on year: figures are more normalised and suffer fewer variations, which could explain why they prove to be better predictors, filtering out excessive noise. According to Dichev et al. (2013), profitability metrics are more persistent than earnings and forecast future performance more accurately than net income. Earnings data is affected by accounting choices, whereas gross profit and operating income suffer fewer distortions from these.
Analysing a set of past fundamental data will not be the only proxy used to rebalance the portfolio: analysts' stock recommendations will also be evaluated.
Two different sets of forward data will be used. In the first place, an average of the consensus forecast of next fiscal year’s EPS divided by the current market value of each firm will be used. This forecast will be an average of the estimates of each analyst throughout the fourth quarter of year t for year t+1.
The consensus mean recommendations from analysts in the fourth quarter of year t for year t+1 will also be employed. These recommendations are a ranking from 1 to 5, with 1 signalling a strong buy and 5 a strong sell; the figure used is the mean of the different analysts' recommendations existing at that time for each individual stock.
V.II Data Criteria and Universe
To ensure a minimum amount of liquidity in our analysis, we pick the historical constituents of the S&P500 Index as our universe of stocks. This ensures that our analysis is not driven by the performance of smaller-capitalisation firms, for which data might not be readily available. As our analysis involves implementing a long-short strategy, the ability to do so with large-capitalisation stocks proves much easier in practice. Every year, the constituents in our portfolio are updated to reflect changes in the overall index, which implies that our universe of stocks closely replicates the S&P 500 Index on a yearly basis. The constituents as of 1990 will first be extracted, and updated every year thereafter.
The analysis is then limited to those companies with a positive market capitalization as of December of year t, as well as to those companies with at least 2 years of data, in order to perform all the analysis in a consistent universe of stocks.
In order to conduct the analysis across sectors in a more uniform manner, certain companies were removed from the universe of stocks. This includes REITs, utility and financial firms, as denominated by CRSP.
From this, a benchmark is constructed with our new universe of stocks; that is, all those fulfilling the above criteria. This benchmark is a value-weighted portfolio of all the stocks for a given year, rebalanced yearly at the end of each previous year (December 31st). Therefore, being a market-value-weighted portfolio comprising most S&P500 stocks, it should closely resemble the S&P500 Index. Comparing the quarterly performance of our benchmark and the index for the analysis period, 1990 to 2015, and running a corresponding regression, it is found that they correlate with a coefficient of 99.17%. Our universe of stocks thus bears close similarity to the index, although the payoff at the end of the period differs: the benchmark provides a payoff of $11.13 for a $1 investment (1,113%) made at the start of the period in 1990, while the S&P500 Index returns a payoff of $8.86 (886%) at the end of the period. These figures assume complete reinvestment of capital and a compounded growth rate.
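The payoff figures quoted assume full reinvestment of capital; the compounded terminal value of a $1 investment can be sketched as:

```python
def terminal_payoff(period_returns, initial=1.0):
    """Compound a sequence of simple period returns into the terminal
    value of an `initial` investment, with full reinvestment."""
    value = initial
    for r in period_returns:
        value *= 1.0 + r
    return value

# Example: two periods at 10% each compound $1 into $1.21.
```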
We define year t+1 as the year for which the portfolio's performance will be monitored, and year t as the year from which the fundamental data used to estimate performance will be extracted. As most US companies have a fiscal year corresponding to the calendar year, our model will retrieve end-of-year fundamental data for these companies, corresponding to December of year t, allow for a data-release lag, and compute the portfolio. The lag in data release is introduced because companies do not disclose their annual financial statements until the quarter after their fiscal year end; historically, this has usually happened within two months. Taking this factor into account, the model will allow for a lag of one quarter, so that information is readily available to the public at each point in time. Denoting t.(x) as the xth quarter of year t, and t+1.(x) as the xth quarter of year t+1, the above implies extracting fundamental data as of t.(4), allowing for a lag in data release during t+1.(1), and constructing the portfolio at t+1.(2). The performance will then be measured over one year from that point.
This model so far deals only with companies that disclose their end-of-year information by the end of the calendar year, so a provision must be made for the proportionally low, but still significant, number of companies whose annual results are released at a different date. Hughen and Strauss (2015) tackled this issue by rebalancing their portfolio quarterly, but recognised the limitations of using quarterly results rather than normalising their ratios and profitability measures with annual ones. Gray and Vogel's (2012) work consists of a portfolio rebalanced annually as of June 30 every year. Their approach is to use, for firms with a fiscal year ending within the last quarter of the previous year or the first quarter of the current year, those fundamentals; for companies with fiscal years ending after March 30, the previous year's fundamentals are used. This implies that, regardless of when the fiscal year ends, the latest annual filing is always employed to construct their portfolio, even when this filing dates from the second quarter of the previous year. In the following model, the approach will be somewhat different.
Therefore, a differentiation between the two strategies implemented should first be made. Value-weighted buy-and-hold portfolios are attractive not only because they minimise trading costs, but also because they are simple to implement from an operational perspective.
Sector allocation will be carried out using SIC codes, obtained by merging the Compustat and CRSP databases.
Ball, R., Gerakos, J., Linnainmaa, J. and Nikolaev, V. (2015). Deflating profitability. Journal of Financial Economics, 117(2), pp.225-248.
Bunn, O. and Shiller, R. (2014). Changing times, changing values. Cambridge, Mass.: National Bureau of Economic Research.
Dichev, I., Graham, J., Harvey, C. and Rajgopal, S. (2013). Earnings Quality: Evidence from the Field. SSRN Electronic Journal.
Fama, E. and French, K. (1992). The Cross-Section of Expected Stock Returns. The Journal of Finance, 47(2), pp.427-465.
Fama, E. and French, K. (2001). Disappearing dividends: changing firm characteristics or lower propensity to pay?. Journal of Financial Economics, 60(1), pp.3-43.
Fama, E. and French, K. (2015). A five-factor asset pricing model. Journal of Financial Economics, 116(1), pp.1-22.
Gray, W. and Vogel, J. (2012). Analyzing Valuation Measures: A Performance Horse-Race Over the Past 40 Years. SSRN Electronic Journal.
Hughen, J. and Strauss, J. (2015). Portfolio Allocations Using Fundamental Ratios: Are Profitability Measures Effective in Selecting Firms and Sectors?. SSRN Electronic Journal.
Jacobs, B. and Levy, K. (1993). Long/Short Equity Investing. The Journal of Portfolio Management, 20(1), pp.52-63.
Loughran, T. and Wellman, J. (2011). New Evidence on the Relation between the Enterprise Multiple and Average Stock Returns. Journal of Financial and Quantitative Analysis, 46(06), pp.1629-1650.
Michaud, R. (1993). Are Long-Short Equity Strategies Superior?. Financial Analysts Journal, 49(6), pp.44-49.
Miller, E. (2001). Why the Low Returns to Beta and Other Forms of Risk. The Journal of Portfolio Management, 27(2), pp.40-55.
Novy-Marx, R. (2010). The other side of value. 1st ed. Cambridge, MA: National Bureau of Economic Research.
Welch, I. and Goyal, A. (2007). A Comprehensive Look at The Empirical Performance of Equity Premium Prediction. Review of Financial Studies, 21(4), pp.1455-1508.