
# Category Archives: Economy

# The calm before the storm

I’m worried this is the calm before the storm.

The storm being WW3.

Iran is having problems (riots, religious killings). China’s leader is facing protests. Russia has the Ukraine crisis.

We seem to have it under control, but do we really? That’s just an armchair analysis (in terms of control). But add a Trump 2024 run to that mix. I’m not even saying Trump would do a good or a bad job, I’m just saying his actions would vary considerably from the current status quo, and this situation is very volatile.

Right now the US seems to be maintaining its trajectory (no spike in unemployment), but a recession (which I don’t think is going to happen from rate adjustments alone) could change US interests in terms of how they want to respond to world events (especially if oil pipelines are disrupted).

# Custom Volatility Indicator

Here’s an example of a custom volatility indicator I came up with. It’s not 100% original, but it is my own creation (similar to the Fischer Aggregate Index, but using rolling standard deviations and a double rolling range of standard deviations).

I was curious how to incorporate standard deviation and returns into a volatility indicator without relying on mean-reversion assumptions based on a range of nominal prices and the current price.

So I thought about MACD and diverging SMAs and realized I could do the same with a 20-day standard deviation normalized against a 40-day range between the max and min standard deviation.

Which gave me this nice chart showcasing the volatility (blue line, between 0 and 100%). When you are near 0 or 100%, get ready for a reversal in a few days. You can see volatility spikes at a controlled pace, almost in unison with spikes in cumulative return (orange, separate y-axis).

# CAPM Portfolios

# Stock Screener Factor Model

I finished my factor model, or at least I coded up all the metrics. Revisions and additions will likely follow, but at the moment I have about 15 indicators, all equally weighted, and the highest-ranking stocks are listed at the top.

A summary of what I have (I gleaned the requirements from https://seekingalpha.com/article/4407684-why-piotroskis-f-score-no-longer-works and https://www.labsterx.com/blog/fundamental-analysis-using-yahoo-finance/), plus a few extra bells and whistles: the average mean buy/sell recommendation from analysts, a trending indicator based on a quarterly rolling correlation with date, and preferred cycles chosen based on the business cycle.

* Positive three-year average retained earnings

* Sum of TTM cash flow from operations and cash flow from investments greater than 10% of revenue

* At least eight of last twelve quarters’ EPS greater than same quarter previous year

* Cash flow from operations greater than net income each of last three fiscal years

* TTM EBITDA greater than one-third of total debt

* Current ratio greater than one

* TTM equity purchased greater than equity issued

* TTM gross margin greater than subsector median

* Average five-year asset turnover greater than subsector median
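An equal-weight screen like the list above can be sketched as boolean checks summed into a score. The column names below are hypothetical stand-ins, not the actual data-mart schema, and only a subset of the criteria is shown:

```python
import pandas as pd

# Hypothetical per-ticker fundamentals (column names are assumptions).
df = pd.DataFrame({
    "ticker": ["AAA", "BBB"],
    "retained_earnings_3y_avg": [120.0, -5.0],
    "ttm_cfo_plus_cfi": [40.0, 2.0],
    "ttm_revenue": [300.0, 100.0],
    "eps_up_quarters_of_12": [9, 5],
    "current_ratio": [1.4, 0.8],
    "ttm_ebitda": [50.0, 3.0],
    "total_debt": [120.0, 90.0],
})

# Each criterion becomes a boolean column.
checks = pd.DataFrame({
    "retained_earnings_pos": df["retained_earnings_3y_avg"] > 0,
    "cash_flow_gt_10pct_rev": df["ttm_cfo_plus_cfi"] > 0.10 * df["ttm_revenue"],
    "eps_growth_8_of_12": df["eps_up_quarters_of_12"] >= 8,
    "current_ratio_gt_1": df["current_ratio"] > 1,
    "ebitda_gt_third_debt": df["ttm_ebitda"] > df["total_debt"] / 3,
})

# Equal weight: each passed check contributes one point; rank by score.
df["score"] = checks.sum(axis=1)
ranked = df.sort_values("score", ascending=False)
```

The median-vs-subsector criteria would follow the same pattern, with the comparison baseline computed via a `groupby` on subsector.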

# Tableau Dashboarding of Sectors

I’ve been working on my stock data.

Some sql queries used for the data mart.

https://gist.github.com/thistleknot/dcb21713f11dd3c30632dff990f2804d

https://gist.github.com/thistleknot/5678d338f57d9bbd2a6c3a6e91384857

I have a lot more information in it than simply sector information.

What I’m showcasing here is a special metric based solely on a rolling quarterly return, shown as a ratio (1 = no return), alongside the top-performing stocks and sectors ranked by this metric, plus the rolling quarterly return shown as a line chart of how the sectors have performed over time.
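The rolling quarterly return ratio can be sketched in a couple of lines, assuming weekly closes (so 13 observations approximate one quarter; synthetic data stands in for the data mart):

```python
import numpy as np
import pandas as pd

# Synthetic weekly closes stand in for real data.
rng = np.random.default_rng(1)
weekly_close = pd.Series(50 * np.exp(np.cumsum(rng.normal(0, 0.02, 120))))

# Rolling quarterly return as a ratio: 1.0 = no return over the quarter,
# 1.05 = +5%, etc. 13 weekly observations approximate one quarter.
q_ratio = weekly_close / weekly_close.shift(13)
```

Ranking stocks or sectors by the latest value of `q_ratio` gives the top performers on this metric.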

# Stock Screener

v10.3

I used a slew of guides to come up with a prediction interval for fbprophet (whose uncertainty interval is documented as a confidence interval).

I used a mix of the following URLs to derive it.

I basically iterate over k-folds, deriving out-of-sample residuals, then take the combined standard deviation of the residuals as the estimate of prediction error. The standard deviation is based on each fold’s 13-point forecast (13 weeks = 1 quarter). The caveat is that the standard deviation is static across all forecast points, so I only draw the 95% prediction interval at the last position (normally prediction intervals grow).
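The pooling logic can be sketched as follows. A naive last-value forecaster stands in for fbprophet (an assumption, to keep the sketch self-contained); the fold layout and the 13-step horizon follow the description above:

```python
import numpy as np

def kfold_prediction_interval(y, n_folds=5, horizon=13, z=1.96):
    """Pool out-of-sample residuals across expanding-window folds and
    return a static 95% prediction half-width (z * residual sdev).
    The last-value forecaster below is a stand-in for fbprophet."""
    residuals = []
    n = len(y)
    # Expanding-window fold boundaries over the second half of the series.
    fold_ends = np.linspace(n // 2, n - horizon, n_folds, dtype=int)
    for end in fold_ends:
        train, test = y[:end], y[end:end + horizon]
        forecast = np.repeat(train[-1], len(test))  # stand-in model
        residuals.extend(test - forecast)
    # Combined standard deviation of all out-of-sample residuals.
    sdev = np.std(residuals, ddof=1)
    return z * sdev

# Synthetic weekly prices stand in for real data.
rng = np.random.default_rng(2)
prices = np.cumsum(rng.normal(0.1, 1.0, 260)) + 100
half_width = kfold_prediction_interval(prices)
# Last-point interval: [forecast - half_width, forecast + half_width]
```

Because the residual standard deviation is pooled across all horizons, the interval is static, which matches the caveat above about only drawing it at the last forecast position.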

Still a WIP, but the nuts and bolts are there. Considering I estimate the optimal sell date, I need to focus an interval around that date.

v10.2.1

I fixed an issue with the plots I had for ETS forecasts (contrasted with fbprophet).

It makes more sense not to append the dataframes, but to plot them separately to keep a clear line of distinction between actual and forecasted results.

So this stock is CHEF, and the ETS forecast flat-lines, but the error scores for fbprophet are better (contrasted as a ratio between average error and price). So it looks like CHEF (a consumer defensive stock, which should be a safe bet if we are in a slowdown) will continue to climb.

v10.2

Include both ETS and fbprophet forecasts

I saw a discrepancy between the way fbprophet forecasts and ETS forecasts were plotting their outcome variable. So I decided to replot fbprophet using some different parameters to tease out the issue.

I’m still not certain how to aptly describe the discrepancy, but I can plot it.

Problem:

For fbprophet, the mean yhat is occasionally out of bounds of both confidence intervals (I’ve been noticing it on the latest value of recent forecasts), and this problem doesn’t happen with ETS forecasts.

The top plot’s forecast is ETS but doesn’t list prior forecasts (making a 1:1 comparison hard).

I’m thinking I need to do some more investigating, but my working theories on the issue are:

ETS and fbprophet are using an unknown combination of confidence vs. prediction intervals (I presume ETS is prediction-based, from terms used in the documentation), and they’re not matching the actual results.

Or fbprophet is plotting a mean confidence interval (not a prediction interval), and the latest price is simply out of bounds of the interval around the mean, not of a specific prediction forecast.

I ditched ETS forecasts for fbprophet mainly because I was getting too many flat-lined ETS predictions (i.e., the best model was naive). fbprophet doesn’t have that problem and has the benefit of fewer hyperparameters to grid-search over. The result so far is that the errors are much lower than with the prior ETS.

All I really care about is the estimated outer CV error mean and standard deviation, which tell me the expected error (directly comparable to the current price) on real-world data.

Prior ETS forecasts

This is the inference part of a stock screener widget tool I’ve been working on

It’s kind of busy, but it’s not meant to be clean; it’s meant for quick, to-the-point inferences (utility > aesthetics = faster inferences).

The chart in the middle is the stock price (blue) mapped against a special formula for supply trend (yellow) and the trailing 2-year min/max (red/green). The two charts on the right are similar, but magenta is the sector and cyan is the index.

The two on the left are special metrics: one is volume, another is a special formula for volume, and the third is a risk_trend_factor (black). The black one is the special filtering momentum indicator.

I print out nested cross-validation scores of an ETS forecast, which is then plotted 13 weeks out (1 quarter). Here this stock (Health Care, LLY) is expected to get an 8% quarterly return.

And since we are in a slowdown, I know it’s likely a good stock to ride out. Otherwise one can expect an expansion or a recession, and if either of those occurs, Healthcare is still a safe bet.

# Business Cycles

I’ve been working on a revised sector performance calculator.

It’s very frustrating trying to analyze sector performance. It seems an easy enough problem to solve, but different guides consider varying factors, as well as different ways to work with the curve.

The metric was based on a YoY change of the LEI (but I opted for ‘USPHCI’, a Coincident Index I could get from FRED), and the cycle is simply a case statement over the matrix (cartesian product) of two pairs of conditions: is the change above or below zero, and is it increasing or decreasing from the prior period? That gives four phases in total.
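The case statement can be sketched in pandas. Synthetic monthly index levels stand in for the FRED ‘USPHCI’ series, and the phase names are my own labels, not necessarily the guide’s:

```python
import numpy as np
import pandas as pd

# Synthetic monthly coincident-index levels stand in for FRED 'USPHCI'.
idx = pd.Series(
    100 + np.sin(np.linspace(0, 4 * np.pi, 60)) * 5 + np.linspace(0, 6, 60)
)

yoy = idx.pct_change(12)      # YoY change (12 monthly periods)
rising = yoy > yoy.shift(1)   # direction vs. prior period

# Cartesian product of (sign of change) x (direction): four phases.
# Phase names below are assumptions for illustration.
phase = pd.Series(np.select(
    [(yoy > 0) & rising,
     (yoy > 0) & ~rising,
     (yoy < 0) & ~rising,
     (yoy < 0) & rising],
    ["expansion", "slowdown", "contraction", "recovery"],
    default="n/a",
), index=idx.index)
```

Sector returns can then be grouped by `phase` to compare performance within each cycle.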

The guide I was using compared 6 variations of a sector’s return (raw return, return > general market, return > other sectors within that cycle) to arrive at a [convoluted] z-score, whereas I simply compare the sector’s return within a given cycle.

I didn’t arrive at any significant returns (i.e., I derived p values of t-test scores based on sector means). A p value < .05 would indicate that the return (represented as a t-score) is significantly different from 0.

Code: https://github.com/thistleknot/Python-Stock/blob/6c69e82d2b78479130ce2b95ecebcef4c2923f12/code/Screener/USPHCI_business_cycle_sector_analysis.ipynb

# Plato’s Allegory of the Cave

There was a scene in Game of Thrones where Baelish is talking to Varys about the story of Westeros and the lie.

HRC once said that if you tell a lie long enough it becomes the truth (an old quote, possibly originating in WWII-era Nazi Germany).

The point I’m trying to make is about Plato’s Allegory of the Cave.

I was thinking about how all my opinions of politicians are shaped and fed to me by the media (Plato’s cave wall and the puppeteers that control the shapes over the fire).

Chomsky said that a lot of money in our country is spent shaping people’s opinions.

Which ties back to what I said about Game of Thrones.

How much of our society (of all societies) is propaganda that we don’t even realize is a lie, but is just part of a palatable story (a collective mythos, if you will) we tell ourselves to maintain power structures? (What really drove this point home was a video I watched about Persia’s history: https://www.youtube.com/watch?v=KIh1v7MiyVM)

That I think is what Plato’s allegory of the cave is about.

I think the US has a lot of transparency, but with freedom of speech has come the cost of losing the free time to study things enough to even become aware. Ignorance is bliss when it comes to EULAs and the rights we give away when we choose something simply because it’s convenient (hard to explain that one succinctly).

We are the helots of Sparta trapped on the Peloponnese, but we don’t serve Spartans, we serve Corporations (board members), plying at their work to pay our landlords or mortgages (“the US is a plantation,” I heard that as a quote once), because we don’t own the land we live on while being told by politicians that we are free, all while running from job to job to make ends meet.

The freedom we think we have is equated with free speech and consumerism. The latter is used to appease us but also to drive innovation, and we don’t realize it’s also what keeps us in our perpetual state of need (the churn of consumer commodity capitalism drives the corporations we seek to be employed by, which also become our lords in a way). We jockey for prestigious positions and titles to outcompete each other, which is what our lords want in order to drive innovation.

That’s the puppeteering.

# SP500 Quarterly Returns

SP500 v5

I noticed I had a stale variable for my ADF metric, and after fixing it the test was coming in significant. So I differenced up to 2 years (vs 1 quarter) as well as derived 95% prediction intervals (vs confidence intervals).

Time period covered: 1970-01-01 to today

This is version 2 of an SP500 stock return analysis using rolling windows. I initially compared rolling windows of 10 years so I could get a positive-only distribution (it actually took 20 years to get positive-only returns). But then I thought I could reduce the scope from macro, get more granular, and focus on quarterly returns. I just needed to identify a stable distribution. I de-serialize by reducing to quarterly data and differencing, then test for serial correlation with the ADF test. A p value < .05 would mean we reject the null, but since it’s above .05 we fail to reject the null hypothesis.

I extrapolate the annual return from the quantiles (i.e. (1+Return)^4) to give rough best- and worst-case scenarios.
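The annualization step can be sketched as follows; synthetic quarterly returns stand in for the differenced SP500 series (the ADF test itself would come from `statsmodels.tsa.stattools.adfuller`):

```python
import numpy as np
import pandas as pd

# Synthetic quarterly returns stand in for the differenced SP500 data.
rng = np.random.default_rng(3)
q_returns = pd.Series(rng.normal(0.02, 0.07, 200))

# Extrapolate annual best/worst-case scenarios from the .02/.98
# quarterly quantiles by compounding over four quarters:
# (1 + quarterly return) ** 4 - 1.
lo_q, hi_q = q_returns.quantile([0.02, 0.98])
annual_worst = (1 + lo_q) ** 4 - 1
annual_best = (1 + hi_q) ** 4 - 1
```

Compounding a single quarterly quantile over four quarters assumes the same return repeats each quarter, so these are scenario bounds rather than a true annual distribution.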

The estimate based on 10 years will have the same average because it’s the same data. Despite this, you can see the data has been de-serialized. You can see the breakdown of quarterly returns alongside a yearly moving average (orange).

Red = Mean

Cyan/Green = Median

Yellow = .02 and .98 quantiles (used with the median; 96% inter-quantile range)

Blue = 2 standard deviations above/below mean