I'm worried this is the calm before the storm, the storm being WW3. Iran is having problems (riots, religious killings), China's leader is facing protests, and there's the Russia/Ukraine crisis. We seem to have it under control, but do we really? That's just an armchair analysis (in terms of control). Now add a Trump 2024 run to that mix. I'm not saying Trump would do a good or a bad job, just that his actions would vary considerably from the current status quo, and this situation is very volatile. Right now the US seems to be maintaining its trajectory (no spike in unemployment), but a recession (which I don't think will happen from rate adjustments alone) could change US interests in terms of how they want to respond to world events (especially if oil pipelines are disrupted).
Here's an example of a custom volatility indicator I came up with. It's not 100% original, but it is my own creation (similar to the Fischer Aggregate Index, but using rolling standard deviations and a double rolling range of standard deviations).
I was curious how to build a volatility indicator from standard deviation and returns without relying on mean-reversion assumptions over a range of nominal prices versus the current price.

So I thought about MACD and diverging SMAs and figured I could do the same thing with a 20-day standard deviation ranged against the 40-day max/min of that standard deviation.
Which gave me this nice chart showcasing the volatility (blue line, between 0 and 100%; when it's near 0 or 100%, get ready for a reversal within a few days). You can see volatility spike at a controlled pace almost in unison with spikes in cumulative return (orange, separate y-axis).
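Here's a minimal sketch of that calculation (my reconstruction in pandas; the function name is mine, the windows are the ones described above):

```python
import pandas as pd

# 20-day rolling volatility, normalized by where it sits inside its own
# 40-day min/max range, so readings oscillate between 0 and 100%.
def vol_oscillator(close: pd.Series, sdev_window: int = 20, range_window: int = 40) -> pd.Series:
    returns = close.pct_change()
    sdev = returns.rolling(sdev_window).std()   # rolling standard deviation of returns
    lo = sdev.rolling(range_window).min()       # recent floor of that volatility
    hi = sdev.rolling(range_window).max()       # recent ceiling of that volatility
    return 100 * (sdev - lo) / (hi - lo)        # readings near 0 or 100 hint at a reversal
```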
I know how to build a Markowitz weighted portfolio, and how to 'hack it': just up the quantities associated with higher betas, which represent the risk premium (i.e. how much return over the risk-free rate is expected, based on the DGS3MO).

But I let it resolve to the optimal Sharpe ratio and simply display the betas as derived from MDYG (SP1500).

So based on CAPM expected return (the average risk premium for the past 5 years is .0142, or 1.42%), the CAPM return is 4.33% + 1.42% * the portfolio beta of 1.00116592, which comes out to 5.75% for next quarter.
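Spelled out in code, with the numbers copied from above:

```python
risk_free = 0.0433          # based on DGS3MO
risk_premium = 0.0142       # average risk premium over the past 5 years
portfolio_beta = 1.00116592

# CAPM: expected return = risk-free rate + beta * market risk premium
expected_return = risk_free + risk_premium * portfolio_beta
print(f"{expected_return:.2%}")  # 5.75%
```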
A different forecast, based on Markowitz simulations, has 9% for next quarter.

Another forecast comes from an expected-return factor model with a 13% MAPE; the weighted forecasted return is 13% for next quarter (i.e. 13% +/- 13%^2, i.e. 13% +/- 1.69%).
What's frustrating is knowing I hit the ball out of the park when it comes to CAPM portfolios and Markowitz, while also knowing that the academics who actively trade are not fans of the material they are hamstrung to teach. So I get various strong opinions about what works. Very cult-of-personality about methodologies, but not me. I'm open to trying as much as I can just for the opportunity to learn.
The Inefficient Stock Market is a gold mine in terms of what factors to look for. I've been doing my own research (FRED data, commodities, foreign exchanges, indexes, sectors, SP1500 prices, fundamentals, financial statements, critiques of Piotroski, Fama-French 3 and 5 factor models, Arbitrage Pricing Theory). The book suggests improved/revised factor models using a mix of financials and fundamentals, offering 30 factors to look out for.
If it works and the results match the projected expected returns within the risks shown, then this could be used to borrow money on margin, knowing your returns are modeled/controlled for, and you can make money on the spread. But it's risky. Borrowed money is usually near the risk-free rate, so you aim for a risk-premium return by controlling for risk.
The philosophy behind the filters is 'this vs. that' bifurcation. Split everything, somewhat subjectively, into a simple filter no matter how complex the calculation is on the back end: a 1 or 0 is coded for every value, with the default being 0 (such as for NAs), and these filters are added together across ETFs to sift out the top results. That lets me focus on revising and expanding the individual logic of each factor, encapsulated in SQL and/or Python files, for example by modifying thresholds that affect the proportion of occurrence for a given factor (field). If the query logic is based on medians, it's easy to pass 50% of the values every time for each factor.
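A minimal sketch of that scoring, with hypothetical column names (note that NaN comparisons evaluate to False in pandas, which matches the default-0 rule):

```python
import pandas as pd

def composite_score(df: pd.DataFrame) -> pd.Series:
    flags = pd.DataFrame(index=df.index)
    # Median-based splits pass ~50% of tickers per factor by construction.
    flags["gross_margin"] = df["gross_margin"] > df["gross_margin"].median()
    flags["asset_turnover"] = df["asset_turnover"] > df["asset_turnover"].median()
    # Threshold-based splits: the cutoff controls the proportion that passes.
    flags["current_ratio"] = df["current_ratio"] > 1
    # Each factor is a 1 or 0; summing them ranks the tickers.
    return flags.astype(int).sum(axis=1).sort_values(ascending=False)
```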
I finished my factor model, at least in the sense that I've coded up all the metrics. Revisions and additions will likely follow, but at the moment I have about 15 indicators, all equally weighted, and the highest-ranking stocks are listed at the top.
A summary of what I have (I gleaned the requirements from https://seekingalpha.com/article/4407684-why-piotroskis-f-score-no-longer-works and https://www.labsterx.com/blog/fundamental-analysis-using-yahoo-finance/), plus a few extra bells and whistles (like the average mean buy/sell recommendation from analysts, a trending indicator based on a quarterly rolling correlation with date, and preferred cycles chosen based on the business cycle); a sketch of how a couple of these checks code up follows the list.
* Positive three-year average retained earnings
* Sum of TTM cash flow from operations and cash flow from investments greater than 10% of revenue
* At least eight of last twelve quarters’ EPS greater than same quarter previous year
* Cash flow from operations greater than net income each of last three fiscal years
* TTM EBITDA greater than one-third of total debt
* Current ratio greater than one
* TTM equity purchased greater than equity issued
* TTM gross margin greater than subsector median
* Average five-year asset turnover greater than subsector median
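As mentioned above, here's a hedged sketch of two of these checks, assuming statement DataFrames with these (hypothetical) columns, one row per period, newest first:

```python
import pandas as pd

def cfo_exceeds_net_income(annual: pd.DataFrame) -> bool:
    # Cash flow from operations greater than net income, each of the last 3 fiscal years.
    last3 = annual.head(3)
    return bool((last3["cash_flow_ops"] > last3["net_income"]).all())

def eps_growth_8_of_12(quarterly: pd.DataFrame) -> bool:
    # At least 8 of the last 12 quarters' EPS above the same quarter a year earlier.
    eps = quarterly["eps"]
    yoy_up = eps.head(12) > eps.shift(-4).head(12)  # shift(-4) = same quarter, prior year
    return int(yoy_up.sum()) >= 8
```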
I have a lot more information in it than simply sector information.
What I'm showcasing here is a special metric based solely on a rolling quarterly return, shown as a ratio (1 = no return), alongside the top-performing stocks and sectors by this metric, plus the rolling quarterly return as a line chart showing how the sectors have performed over time.
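The metric itself is just the current price relative to the price one quarter earlier (a sketch; 63 trading days ≈ 13 weeks ≈ 1 quarter is my approximation):

```python
# 1.0 means no return; 1.08 means an 8% quarterly return.
df["rolling_q_return"] = df["close"] / df["close"].shift(63)
```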
I used a slew of guides to come up with a prediction interval for fbprophet (whose uncertainty interval is described as a confidence interval), deriving the approach from a mix of them.
I basically iterate over k folds deriving out-of-sample residuals, then take the pooled standard deviation of those residuals as the estimate of the prediction error. The standard deviation is computed across each fold's 13-point forecast (13 weeks = 1 quarter). The caveat is that this standard deviation is static across all forecast points, so I only draw the 95% prediction interval at the last position (normally prediction intervals grow).

Still a WIP, but the nuts and bolts are there. Considering I estimate the optimal sell date, I need to focus an interval around that date.
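Here's roughly what that looks like as a sketch (the expanding-window fold layout and weekly frequency are my assumptions; `df` is assumed to hold weekly data in Prophet's ds/y format):

```python
import numpy as np
from fbprophet import Prophet

HORIZON = 13  # 13 weeks = 1 quarter
# Expanding-window folds: each holds out the next 13 weeks (5 folds here).
fold_cutoffs = range(len(df) - 5 * HORIZON, len(df) - HORIZON + 1, HORIZON)

residuals = []
for cutoff in fold_cutoffs:
    train, test = df.iloc[:cutoff], df.iloc[cutoff:cutoff + HORIZON]
    fcst = Prophet().fit(train).predict(test[["ds"]])
    residuals.extend(test["y"].to_numpy() - fcst["yhat"].to_numpy())

sigma = np.std(residuals)  # one pooled std dev across every fold's 13-point forecast

# Static 95% band, drawn only at the final forecast point as described above:
model = Prophet().fit(df)
future = model.make_future_dataframe(periods=HORIZON, freq="W")
last = model.predict(future).iloc[-1]
interval = (last["yhat"] - 1.96 * sigma, last["yhat"] + 1.96 * sigma)
```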
I fixed an issue with the plots I had for ETS forecasts (contrasted with fbprophet).
It makes more sense not to append the dataframes, but to plot them separately to keep a clear line of distinction between actual and forecasted results.
This stock is CHEF, and the ETS forecast flat-lines, but the error scores for fbprophet are better (compared as a ratio between average error and price). So it looks like CHEF (a consumer defensive stock, which should be a safe bet if we are in a slowdown) will continue to climb.
v10.2: Include both ETS and fbprophet forecasts
I saw a discrepancy between the way fbprophet forecasts and ETS forecasts were plotting their outcome variable, so I decided to replot fbprophet using some different parameters to tease out the issue. I'm still not certain how to aptly describe the discrepancy, but I can plot it.
Problem: for fbprophet, the mean yhat is occasionally out of bounds of both confidence intervals (I've been noticing it on the latest value of recent forecasts), and this problem doesn't happen with ETS forecasts.

The top plot's forecast is ETS but doesn't list prior forecasts (making it hard to do a 1:1 comparison).
I'm thinking I need to do some more investigating, but my working theories on the issue are:

* ETS and fbprophet are using an unknown combination of confidence vs. prediction intervals (I presume ETS is prediction-based, going by the terms used in the documentation), and they're not matching actual results.
* Or fbprophet is plotting a confidence interval of the mean (not a prediction interval), and the latest price is simply out of the bounds around the mean, not a specific prediction interval.
I ditched ETS forecasts for fbprophet, mainly because I was getting too many flat-lined ETS predictions (i.e. the best model was naive). fbprophet doesn't have that problem and has the benefit of fewer hyperparameters to grid search over. The result so far is that the errors are much lower than with the prior ETS.
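For context, the grid really is tiny (a sketch; `cv_mape` is a placeholder for whatever cross-validated MAPE scorer you use, not a library function):

```python
from itertools import product
from fbprophet import Prophet

grid = {
    "changepoint_prior_scale": [0.01, 0.05, 0.5],
    "seasonality_mode": ["additive", "multiplicative"],
}
candidates = [dict(zip(grid, values)) for values in product(*grid.values())]
# cv_mape is a hypothetical helper: cross-validated MAPE for a model on df.
best_params = min(candidates, key=lambda p: cv_mape(Prophet(**p), df))
```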
All I really care about is the estimated outer CV error mean and standard deviation, which tell me the expected error (directly comparable to the current price) on real-world data.
Prior ETS forecasts
This is the inference part of a stock screener widget tool I've been working on.

It's kind of busy, but it's not meant to be clean; it's meant for quick, to-the-point inferences (utility > aesthetics = faster inferences).
The chart in the middle is the stock price (blue) mapped against a special formula for supply trend (yellow) and the trailing 2-year min/max (red/green). The two charts on the right are similar, but magenta is the sector and cyan is the index.

The two on the left are special metrics: one is volume, another is a special formula for volume, and the third is a risk_trend_factor (black). The black one is the special filtering momentum indicator.
I print out nested cross-validation scores of an ETS forecast, which is then plotted 13 weeks out (1 quarter). Here this stock (Health Care, LLY) is expected to get an 8% quarterly return.
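The outer loop is sketched below (my assumptions: weekly prices in a Series `y`, additive trend, 5 folds); the printed mean/std is the "outer cv error" mentioned earlier:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from statsmodels.tsa.holtwinters import ExponentialSmoothing

HORIZON = 13  # forecast 13 weeks (1 quarter) out
errors = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5, test_size=HORIZON).split(y):
    fit = ExponentialSmoothing(y.iloc[train_idx], trend="add").fit()
    fcst = fit.forecast(HORIZON)
    errors.append(np.mean(np.abs(np.asarray(fcst) - y.iloc[test_idx].to_numpy())))

# Mean/std of the out-of-sample error, directly comparable to price.
print(f"outer CV error: {np.mean(errors):.2f} +/- {np.std(errors):.2f}")
```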
And since we are in a slowdown, it's likely a good stock to ride out. Even if an expansion or a recession comes instead, Healthcare is still a safe bet.
I’ve been working on a revised sector performance calculator.
It's very frustrating trying to analyze sector performance. It seems an easy enough problem to solve, but different guides consider different factors, and there are different ways to work with the curve.
The metric was based on a YoY change of the LEI (but I opted for 'USPHCI', a Coincident Index I could get from FRED), and the curve is simply a case statement over the matrix (cartesian product) of the following conditions: is the change > 0 or < 0, and is it increasing or decreasing from the prior period?
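As a sketch, the case statement reduces to a 2x2 (sign of the YoY change x whether it's rising); the phase labels are my shorthand for the four cells:

```python
import numpy as np
import pandas as pd

def cycle_phase(usphci: pd.Series) -> pd.Series:
    yoy = usphci.pct_change(12)       # YoY change, assuming monthly USPHCI data
    rising = yoy > yoy.shift(1)       # increasing vs. decreasing from the prior period
    phase = np.select(
        [(yoy > 0) & rising,          # positive and accelerating
         (yoy > 0) & ~rising,         # positive but decelerating
         (yoy < 0) & ~rising,         # negative and falling
         (yoy < 0) & rising],         # negative but improving
        ["expansion", "slowdown", "recession", "recovery"],
        default="flat",
    )
    return pd.Series(phase, index=yoy.index)
```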
The guide I was using compared 6 variations of a sector's return (raw return, return > general market, return > other sectors within that cycle) to arrive at a [convoluted] z-score, whereas I'm simply comparing the sector's return within a given cycle.

I didn't arrive at any significant (t-tested) returns (i.e. deriving the p-value of a t-test on sector mean returns). A p-value < .05 would indicate that the return (represented as a t-score) is significantly different from 0.
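The test itself is just a one-sample t-test of a sector's within-cycle returns against zero (a sketch; the function name and inputs are mine):

```python
import numpy as np
from scipy import stats

def sector_is_significant(returns: np.ndarray, alpha: float = 0.05) -> bool:
    """Is this sector's mean within-cycle return significantly different from 0?"""
    t_stat, p_value = stats.ttest_1samp(returns, popmean=0)
    return p_value < alpha  # none of my sectors cleared this bar
```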