THE TAYLOR EFFECT

 By

 Drs. R.W. Bass, G.W. Masters, and

Robert D. Taylor

 

“There is a tide in the affairs of men

Which, taken at the flood,

leads on to fortune …”

                                    Shakespeare

ABSTRACT

The motion of the Moon relative to the Earth has long been known to affect human physiology, as in the human female menstrual cycle and gestation period.  From time immemorial, ‘lunacy’ has been associated with the Full Moon.  Recently, serious research papers and books ([4], [13]) by psychiatrists, psychologists, and statistically-oriented sociologists have provided overwhelming evidence of an extraordinary correlation between phases of the Moon and maximal episodes of violent behavior in mental wards, as well as cyclical increases in general human violence, such as in homicides, suicides, and fatal automobile accidents.

After years of research on the human circadian rhythm and on psychological aspects of human behavior, in such areas as the effectiveness of individual salespersons on different days of the lunar month, Robert D. Taylor discovered in 1987 that there is an undeniable correlation between local maxima and minima of prices in public auction markets (such as the Dow Jones Industrial Average [DJIA] or the S&P 500 Index) and indicia of the relative positions of the Sun, Earth, and Moon.  It has been known for centuries, in the highly developed field of Tidal Prediction [1] (which draws on Dynamical Astronomy, Fluid Mechanics, and Geophysics), how to predict the daily fluctuations of tidal levels at any point on Earth, with great precision, for millennia into the future.  Using commercially available nautical software to predict the tidal fluctuations, Taylor discovered that an identifiable sequence of times, which he calls Pivot Points, has an extraordinary correlation with the times of maximal likelihood of change of the smoothed plot of equities price-action from (upward) concavity to convexity, or vice versa.  The Pivot Points occur at variable times, but never less than 4 days apart, nor more than 10 days apart, so that in any 10-day interval of calendar time there are always at least two Pivot Points.  Taking any Pivot Point, Taylor has shown how to forecast the concavity or convexity of the [smoothed] price-action to the next Pivot Point with extraordinary accuracy.  He uses standard statistical procedures pertaining to the auto-correlation and cross-correlation of time series [11], such as the well-established engineering discipline of LTI System Identification via ARX technology (AutoRegression with an eXogenous input), as in the MATLAB System Identification Toolbox of L. Ljung, combined with the standard technology of Kalman-Bucy Filtering ([5], [12]) for extraction of signals from noise in estimating the state-vector of a state-space model of a time series (as advocated by M. Aoki and his school of econometrics [6], [14]).  With these tools Taylor has produced a forecasting methodology whose successes would in the Middle Ages have led to calls for his being burned for witchcraft, though we surmise that in our own more enlightened times he will be nominated for a Nobel Prize in Economics!

Cracking the Circadian Code

According to the cover-jacket synopsis of Robert Taylor’s 1994 book [10] on Optimum Performance Scheduling (OPS), the author has made a breakthrough in the secret to perfect timing for success:  “After studying human behavioral patterns for several years, he has discovered the secret to predicting when everyone’s peak and sub-par performance times will be.  The system, known as … [OPS], is keyed to the biological clocks that regulate all animals, plants, and people, making the patterns of productivity universally predictable on an hour-by-hour basis.  … With OPS you can know when customers are more likely to buy, a meeting will be most productive, your athletic ability will be at its highest, and when the stock market will rise and fall.”

In 1994 Taylor was the owner of one of the largest construction companies of its kind in the Southeast, and a careful observer of the times at which his employees were most successful in sales presentations, which appeared to fluctuate cyclically.

Taylor researched the field of chronobiology, inaugurated in the 1960s, and found that people isolated from sunlight and solar time-keeping cues have an internal clock which runs on a 25-hour day, and therefore gradually gets out of phase with the standard 24-hour solar day!  But is this internal pendulum synchronized to some perceptible influence independent of the standard solar clock?  There are published studies suggesting that living creatures can sense changes in the gravitational field in which they are immersed.  For example, it has been claimed that cockroaches caged in an opaque box will all gather to one side if a large spherical lead mass is positioned outside the cage.  Using state-government-gathered statistical data to augment his own studies based upon statistics gathered from both friends and employees, Taylor found that human emotional and physical intensity fluctuates several times per solar day, not unlike coastal tidal levels, which have maxima and minima 4 times on most days but, as a result of gradual phase-shifting, have extrema only 3 times on certain days, at least two of which will be found in any 10-day period.

Watson & Crick received a Nobel Prize for unraveling the double-helix structure of the DNA molecule, which led to “cracking the code of life.”  When Taylor disclosed his discovery to a physician, Dr. Ted Abernathy, this specialist in pediatric and adolescent medicine wrote: “… the more I studied, the evidence of some great pattern connecting the ticking of all the clocks became overwhelming.  Now you have cracked the code.”

The original draft of Taylor’s 1994 book on OPStime™ contained a chapter entitled “The Ups and the Dows,” but Taylor withdrew this chapter just before publication, and the book [10] as available now [ISBN 1-56352-187-3] contains only a sprinkling of hints of the applicability of the OPStime technology to forecasting market fluctuations, a mere suggestion for any motivated reader to follow up further on his own if so inclined.

But, continuing to pursue this facet of chronobiology privately during 1987-1999, Taylor spent literally thousands of hours of his own time comparing intra-day trading records (every 5 minutes, or 79 ticks per trading day) and daily records (every half-hour, or 14 ticks per trading day) with cyclical physical phenomena, such as the famous saros (6585.32 days [about 18 years], after which the relative positions of Earth, Sun and Moon repeat), which was known to the ancient Babylonian and Mayan astronomers and to the builders of Stonehenge, who discovered that from one saros cycle to the next an eclipse recurs about 8 hours later and nearly 120 degrees of longitude further west.

Taylor’s most remarkable discovery came from plotting tidal heights (either 79 ticks or 14 ticks per 6.5-hour trading day) and noticing Pivot Points, which characterize the days on which such plots switch from (downward) convexity to concavity (or vice versa).  Such a day can be discerned as a local maximum of successive local minima, or a local minimum of successive local maxima.  It is well-known ([6], [14]) that if the cross-correlation and auto-correlation series of two finite time-series define, for a given number n of lags, a Hankel matrix of rank n, then there is an LTI (Linear Time-Invariant) state-space model of the ARX type, of state-dimension n, under which one of the series is “caused” by the other series, in the sense that when the LTI system model is excited by the exogenous input series, the other series is reproduced as the system’s output.
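
To make the preceding criterion concrete, here is a minimal MATLAB sketch (ours, not Taylor’s code) of one common simplification: when the exogenous input is close to white, its cross-correlation with the output is proportional to the system’s impulse response, and the Hankel matrix built from that sequence has numerical rank n.  The series names, the lag count L, and the rank threshold are illustrative assumptions.

    % Sketch: estimate the state dimension n from the numerical rank of a
    % Hankel matrix of cross-correlations (assumes a roughly white input u).
    function n = hankel_rank_sketch(u, y, L)
        u = u(:) - mean(u);                      % de-mean both series
        y = y(:) - mean(y);
        N = length(u);                           % assumes N much larger than 2*L
        c = zeros(2*L, 1);                       % cross-correlation at lags 1..2L
        for k = 1:2*L
            c(k) = (u(1:N-k)' * y(1+k:N)) / (N - k);
        end
        H = hankel(c(1:L), c(L:2*L));            % L-by-(L+1) Hankel matrix of correlations
        s = svd(H);                              % singular values of H
        n = sum(s > 1e-8 * s(1));                % crude numerical-rank estimate
    end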

For definiteness, consider the case of data recorded every half-hour per trading day, i.e. 14 ticks per day.  Thus the raw data consist of a finite economic time-series of discrete-time type, wherein the total length N of the series is 14 times the number of days under consideration.

If one takes, for example, 30 days (i.e. N = 420 ticks) of historical data, then use of the MATLAB System Identification Toolbox’s highly-developed matrix-manipulation programs will produce an n-dimensional “black-box” model of the DJIA as the output of the LTI dynamics assumed to reside inside the box, when the input to the box is the tidal-height series taken as the known exogenous input.
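
As a hypothetical illustration of this identification step (the toolbox functions iddata, arx, and compare are real; the variable names, sample time, and trial order are our assumptions), one might write:

    % Sketch: identify an ARX "black-box" model of the DJIA driven by tide height.
    Ts    = 0.5;                            % half-hour tick, in hours (illustrative)
    data  = iddata(djia(:), tide(:), Ts);   % output = DJIA ticks, input = tidal heights
    n     = 6;                              % one candidate model order
    model = arx(data, [n n 1]);             % n output lags, n input lags, delay of 1 tick
    compare(data, model);                   % overlay measured vs. modeled output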

But this exogenous input can be computed for millennia in advance!  So, if one knows the final n-dimensional state-vector at the end of the historical data, then one can attempt to forecast the future evolution of this vector (and of its scalar-valued projection, defined to be the DJIA output) by running the model, say, 10 days into the future, using the predictably-“known” future exogenous input as the (putative) causative driving agent.
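
A minimal sketch of such a forecast, written as a bare difference equation so that nothing beyond base MATLAB is needed: the coefficient row vectors a and b are assumed to come from the identification step (e.g. the A and B polynomials of the identified ARX model), and the alignment conventions are illustrative.

    % Sketch: run an identified ARX difference equation K ticks into the future,
    % driven only by the precomputable tidal input.
    function yf = forecast_sketch(a, b, y_hist, u_all, K)
        % a, b   : ARX coefficient row vectors, with a(1) = 1
        % y_hist : known output up to the forecast origin (column vector)
        % u_all  : exogenous input over the history AND the K future ticks
        na = length(a) - 1;
        nb = length(b);
        T0 = length(y_hist);
        y  = [y_hist(:); zeros(K, 1)];
        u  = u_all(:);
        for t = T0+1 : T0+K
            % y(t) = -a(2)*y(t-1) - ... - a(na+1)*y(t-na) + b(1)*u(t-1) + ... + b(nb)*u(t-nb)
            y(t) = -a(2:end) * y(t-1:-1:t-na) + b * u(t-1:-1:t-nb);
        end
        yf = y(T0+1:end);                    % the forecasted ticks
    end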

In the over-idealized version of this scheme, wherein there is neither “measurement noise” (affecting the state-vector’s projection into the DJIA) nor “process noise” (affecting the assumed exogenous input), an excellent forecast will be obtained up until the future time at which the non-stationarity of the actual system begins to become important, i.e. until the LTI assumption begins to degrade (in that the computed “constant” coefficients of the identified model start to change).

So it is intrinsically impossible to make highly accurate long-term forecasts based upon LTI techniques, because the hypothesis of constant coefficients becomes less and less plausible the farther into the future one attempts to compute a forecast.

Worse yet, the actual empirical data are self-evidently from a noisy rather than a smoothly-varying source.  If there were no noise, then the final state of the system would be the last known historical datum of the DJIA, together with its immediately-preceding (n-1) lagged values. But use of these n-values as the initial n-dimensional state-vector defining the forecast is problematic because of the great differences in forecasts based upon relatively “nearby-seeming” initial states.

Fortunately, the powerful technique ([5], [12]) for extraction of signals from noise known as Kalman-Bucy Filtering (and readily available from the MATLAB Control System and Robust Control Toolboxes) enables this difficulty to be circumvented.  The KBF technology, when run through the historical data, produces a “smoothed” estimate of the DJIA whose final n values define a terminal state-vector suitable for use as an initial state-vector in computing often-reliable forecasts.
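
A minimal base-MATLAB sketch of this filtering step (our illustration, not Taylor’s code): the state-space matrices A, B, C and the noise covariances Q, R are assumed to be supplied by the identification step, and the filter is run through the history so that its terminal state estimate can seed the forecast.

    % Sketch: discrete Kalman filter for x(k+1) = A x(k) + B u(k) + w,  y(k) = C x(k) + v.
    function [x, y_filt] = kalman_terminal_state(A, B, C, Q, R, u, y)
        n      = size(A, 1);
        x      = zeros(n, 1);                 % state estimate
        P      = eye(n);                      % state-error covariance
        y_filt = zeros(size(y));
        for k = 1:length(y)
            x = A*x + B*u(k);                 % time update (predict)
            P = A*P*A' + Q;
            K = P*C' / (C*P*C' + R);          % Kalman gain
            x = x + K*(y(k) - C*x);           % measurement update (correct)
            P = (eye(n) - K*C) * P;
            y_filt(k) = C*x;                  % filtered estimate of the DJIA
        end
    end                                       % on exit, x is the terminal state-vector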

The Kalman-Bucy Filter is a generalization of the Wiener Filter, and in honor of Wiener (who coined the term Cybernetics), the present approach is called Xybernomics.  The X, or symbol of the unknown (namely the unknown LTI-system model to be identified), has a dual function; it is used also to remind the user that one is not merely using an autonomous XID methodology, but rather an XIDx methodology, which incorporates the presence of a known exogenous input.  (When allowed to choose his own auto license plates, Taylor selected XOG.)

Of course, there can be upon isolated occasions an external veritable “shock” applied to any economic time-series (such as an announcement by the Federal Reserve of interest-rate decisions, or, for individual stocks, unexpected earnings announcements).  In the present context, this is like a “hammer blow” which discontinuously knocks the state-vector to a new position in its n-dimensional state-space. Econometricians refer to the visible effects of such external shocks as ‘anomalies.’  No forecasting system can anticipate all anomalies, and therefore the presently outlined forecasting methodology may be affected temporarily during significant anomalies.

Only two theoretical questions remain; discussion of the second (and more vexing) will be postponed until later below.

The theorem which says that n is the rank of the Hankel matrix defined by the auto-correlation of the input series and the cross-correlation of the input and output series is true only in the ideal, noiseless case.  Because of the noise, every singular value of the Hankel matrix will typically be non-zero, and it is a vexing theoretical and practical question to determine the true rank of a numerically defined Hankel matrix when the data contain even very small amounts of noise.   It has been proved rigorously by one of us that one may clean the noise out of a Hankel matrix by taking its Singular Value Decomposition (SVD), inspecting all of the singular values, noting where they essentially level off at a very small value, setting all of the small values to zero while subtracting the mean value of the small values from each of the larger ones, and then inverting the SVD procedure to reconstitute a matrix which is demonstrably an optimal estimate of what the Hankel matrix would have been if there had never been any noise.  Thus the rank of the Hankel matrix, and so the sought-for dimension n, is the number of non-zero singular values of this “cleaned-up” Hankel matrix.  To some extent this works in numerical practice, but there is no objective test for how small a singular value should be before one decides to set it to zero, and one is reduced to guessing based upon experience.  This is the most correct approach theoretically, but we cannot see how to render it meaningfully objective.
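
Our reading of that procedure, as a short MATLAB sketch (the threshold tol is exactly the subjective choice complained about above):

    % Sketch: de-noise a Hankel matrix by truncating and shrinking its singular values.
    function [Hc, n] = clean_hankel(H, tol)
        [U, S, V] = svd(H, 'econ');
        s     = diag(S);
        small = s < tol * s(1);                      % singular values judged to be noise
        if any(small)
            s(~small) = s(~small) - mean(s(small));  % subtract the noise-floor mean
            s(small)  = 0;                           % zero out the noise directions
        end
        n  = sum(s > 0);                             % estimated true rank, i.e. dimension n
        Hc = U * diag(s) * V';                       % reconstituted, de-noised Hankel matrix
    end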

Taylor has elected to use instead an obvious “pragmatic” approach to the determination of n, which in practice works quite well.  In the example being discussed, he takes 75% of the historical data (i.e., of N = 420 ticks, N_pseudo-hist = 315 ticks) and designates this data to be all that is known.  The above-described XIDx-plus-Filtering technology is then applied to make a forecast of the remaining 25% of the historical data (i.e. N_pseudo-fut = 105 ticks), while assuming that this data is actually “out of sample” or truly unknown data.  But under the circumstances, the quality of this pseudo-forecast can be “graded” by taking the rms error between the forecast and the actual truth, preferably multiplied by the Akaike Information Criterion weighting factor

AIC = (N + n)/(N − n)

in order to penalize taking the dimension n so high as to “fit the noise” rather than the underlying signal.  Taylor then simply tries every n, in sequence, between say n = 2 and n = 32, by using his “pragdim” program.  This program automatically selects as the best estimate of n the value that worked best in this automatically-evaluated, numerically-graded pragmatic test.
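
An illustrative sketch of the “pragdim” idea as just described (fit_model and forecast_model are hypothetical stand-ins for the identification and forecasting steps sketched earlier; y and u are the 420-tick DJIA and tide series):

    % Sketch: choose n by grading AIC/FPE-weighted pseudo-forecasts of held-out data.
    best_n = 0;  best_score = Inf;
    N      = length(y);                           % e.g. 420 half-hour ticks
    split  = round(0.75 * N);                     % 315 "pseudo-historical" ticks
    for n = 2:32
        mdl   = fit_model(y(1:split), u(1:split), n);          % identify on the first 75%
        yf    = forecast_model(mdl, y(1:split), u, N - split); % pseudo-forecast the last 25%
        rmse  = sqrt(mean((yf - y(split+1:N)).^2));
        score = rmse * (N + n) / (N - n);         % penalize fitting the noise
        if score < best_score
            best_score = score;  best_n = n;      % keep the best-graded dimension
        end
    end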

Taylor’s major discovery, based upon countless thousands of hours of experiments, is that use of the preceding forecasting technique often works wonderfully well when the beginning of a 10-day forecast (based upon the preceding 30 days of historical data) is taken to be a Pivot Point, and then the reliability of the forecast is discounted at any time after the next succeeding Pivot Point.

Taylor’s best results have been achieved when the preceding forecasting technique has been applied to a future interval defined to be EXACTLY from one Pivot Point to the next.  However, since there will always be a second Pivot Point in any 10-day interval, for programming convenience he has elected to continue to make 10-day forecasts, but to have them displayed via a graph in which the second Pivot Point is manifested by a vertical Red Line which serves to flag the viewer’s attention to the limited size of the actual optimal span rather than the full displayed span.

The final difficulty concerns the phenomenon, discovered by Taylor, which he calls the “occasional phase-flip effect.”

Normally, the (upward or downward) concavity/convexity of the forecast turns out to be precisely that of the actual data (when known).

But, upon occasion, the concavity/convexity of the forecast is the precise opposite of what actually happened!  That is, if the forecast had been printed on translucent paper, and then turned upside down and placed with its face against an outdoor window, and the forecast traced in black ink on the back of the page, another “wonderfully accurate” forecast would have been obtained.  In electrical engineering terminology, somehow the model has suffered an anomaly which has put the signal “180 degrees out of phase.”  That this is not merely an excuse for a poorly-performing forecast, but an objectively real phenomenon, will be discussed later below.  Suffice it to say that Taylor’s presently-preferred “work-around” circumvention of this phenomenon is to be more skeptical of the determination of n, and to apply more refined tests than the automated “pragdim” test mentioned above.  For example, instead of letting n be decided automatically as described above, Taylor will inspect visually the quality of the grading of the pseudo-forecast within the known recent historical data as n ranges from 2 to 32.   If none of the forecasts, for any n, is strikingly excellent, then Taylor assumes that some un-modeled effect is at work, and elects to “stand aside” instead of paying attention to the automatically-produced forecast based upon a supposedly “optimal” choice of n.  Using this more stringent version of “pragdim”, Taylor has reduced the percentage of “bad forecasts that would have been acted upon” between August 1998 and January 2000 to a mere 14%!!!

In short, Robert Taylor has made a remarkable discovery regarding equities markets, contrary to all received wisdom, and effectively reduced it to practice via the appended suite of MATLAB-language computer programs.

Prior Partial Anticipations

Although Taylor made his discoveries completely independently, when he disclosed them privately to us, we immediately pointed out that the objective merit of his discovery is strengthened by regarding noteworthy prior similar conceptions, albeit seen only “as through a glass, darkly”, as enhancements of Taylor’s fundamental plausibility (rather than simply as potential detractions from his priority).

Leonardo da Vinci designed a toy helicopter 500 years before the first man-carrying helicopter was built and flown by Sikorsky.

Accordingly, if the Taylor Effect is an objective reality, it is only to be expected that prior students of equities price-action would have published discoveries that can in hindsight be marshaled to buttress the solidity of Taylor’s findings.  The two approaches to the study of equities prices are fundamental analysis, based upon such information as is studied by accountants in a corporation’s Annual Report, and technical analysis (popularly called “charting”).  Until quite recently, prevailing academic opinion has regarded “technical [market] analysis” as sheer lunacy [pun intended!], but for obvious reasons, and in view of a visibly-burgeoning sea-change ([5], [6], [9], [11], [12], [14]), we restrict attention only to technical analysts.

In particular, we have called to Taylor’s attention the earlier work of J.M. Hurst ([2], 1970), of Dr. Claud E. Cleeton ([3], 1976), of Dr. Arnold Lieber ([4], 1978), and of Jim Sloman & Welles Wilder ([7], 1983-1991), which will now be reviewed.  [For titles of their books, see the References below.]

J.M. Hurst

Hurst was a physicist who took up electronics and communications engineering (including radar) before and during World War II and worked in the aerospace industry for 25 years prior to retirement from Douglas Aircraft in Santa Monica in 1960.  He spent the next decade (by his own account, some 20,000 hours of full-time work) analyzing economic time-series with the tools (numerical spectral analysis and large-scale computing) with which he had become familiar in his career in radar and signal-processing R&D.

Hurst discovered that the DJIA could be represented as the sum of 3 signals of distinctly different characteristics.  About 75% of the amplitude was a smooth, relatively slowly-varying, long-term underlying trend-signal of no apparent causation, and the remaining 25% consisted of two radically different components.  Some 23% consisted of the sum of up to 10 or 11 semi-predictable curves, similar to modulated sine waves, of slowly-varying amplitudes, frequencies and phases.  (Each of these curves looks like a “slowly pulsing,” somewhat distorted but recognizable sine wave, say

y(t) = A(t)·sin(ω(t)·t + φ(t)),

where the amplitude A(t), frequency ω(t), and phase φ(t) remain nearly constant, but slowly “tremble” about their mean values.)  The final 2% consisted of truly random “white noise,” added to the underlying smooth 98% like a jagged fringe.

This white-noise fringe had misled many people into believing that equities prices are simply random walks (i.e. Brownian motion, or integrated white noise).

Oskar Morgenstern (co-author with von Neumann of The Theory of Games and Economic Behavior) of Princeton and his collaborator, the mathematical economist Clive Granger, had published monographs allegedly demonstrating the random character of both stock prices and commodities prices.  They had taken the differences between successive prices (after first replacing the prices by their logarithms), and sought to demonstrate “Gaussian normality” of these stochastic processes.  However, they had sufficient intellectual honesty to admit in the middle of their works that their numerical techniques were insufficient to detect the presence of an additive component of the series of the type y(t) = A·sin(ω·t + φ), 0 < ω ≪ 1; this of course constitutes a loophole in the random-walk theory big enough to drive a tank through, which is exactly what Hurst did.  In a personal conversation with Granger in 1980, one of us mentioned the possibility of identifying a “colored noise” model by means of standard electrical-engineering “ARX System-ID” and “Kalman-Filtering” techniques such as those now exploited by Taylor, referring to NASA-Ames’s mainframe computer program called MMLE3 (now available as a MATLAB System-ID Toolbox) which had been used to identify the aerodynamic derivatives of the Space Shuttle during its test flights, without knowing that Granger had already outgrown the random-walk theory and had in 1978 co-authored a book on bilinear time-series predictive models!

Unfortunately, Princeton economics professor Burton Malkiel had written a best-selling book, A Random Walk Down Wall Street, citing Granger et al., which convinced virtually the entire economics profession that the random-walk theory had been demonstrated to be correct beyond a reasonable doubt.  So that the reader will not blame Granger for this lamentable development, we hasten to add that in 1992 Granger served with distinction as the economics advisor to the Santa Fe Institute’s Time-Series Prediction Contest and contributed an excellent paper, Forecasting in Economics (cf. Sect. IV of [9]), which is perfectly compatible with the (LTI, system ID) methodologies mentioned hereinabove, and which specifically mentions “Kalman filter, state-space formulations” as often providing “superior forecasts” when used in their time-varying versions in cases wherein nonlinearity or nonstationarity renders the simpler LTI modeling inadequate.

Granger’s evolution from Random-Walker to Forecaster supports our position that Hurst’s 2% white-noise fringe is not of great importance.

Also, it appears that Harry S. Dent, Jr. has largely explained Hurst’s 75% trend component.  Dent [15] deserves the mantle of a prophet, because nearly all of the highly specific yet radically contrarian predictions regarding the late ‘90s which he published in 1993 [8] have startled everyone by coming true!  In one stunning revelation, Dent [8] showed that by adding the birth-rate of 46 years ago (weighted by 0.75) to the birth-rate of 25 years ago (weighted by 0.25), one obtains a profile whose long-term agreement with the S&P 500 is uncannily accurate!  Dent’s explanation is that the economy is largely consumer-driven, and that people begin to purchase major durable goods when starting families at age 25 and then reach their peak spending years by age 46 (sending children to college, buying final houses, etc.).
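
For concreteness, Dent’s demographic proxy amounts to the following short computation (a sketch only; the vector births and its indexing scheme are hypothetical, not Dent’s actual data):

    % Sketch: Dent's "spending wave" = 0.75*(births 46 years earlier) + 0.25*(births 25 years earlier).
    base  = 1900;                                % births(k) assumed to hold the births of year base+k-1
    years = 1956:1999;                           % illustrative span of predicted years
    wave  = 0.75 * births(years - 46 - base + 1) ...
          + 0.25 * births(years - 25 - base + 1);
    plot(years, wave);
    xlabel('year');  ylabel('Dent spending-wave proxy');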

This leaves unexplained Hurst’s all-important “semi-predictable” 23%.  By designing 22 “comb filters” (narrow-bandpass filters) whose overlapping bandpass intervals span the frequency range from 0 to 0.5 [the normalized anti-aliasing Nyquist frequency], passing the DJIA through each comb in turn, and then subtracting the result from the DJIA and repeating with the next comb, Hurst decomposed 23% of the DJIA into the 10 or 11 oscillatory signals mentioned above.  By identifying each of these 10 or 11 undulating, trembling sine-waves, and then “freezing” its amplitude A, period 2π/ω, and phase φ at its mean value, Hurst extrapolated each of these cycles into the future, with a stunningly effective aggregate result.  (Our only reservation about this procedure is that when we tried it, the results resembled simply assuming that the recent past would repeat itself almost exactly in the near future, the so-called Adam Theory of Sloman & Wilder [7].)  Hurst sold the coefficients of a bank of “optimal” comb filters to one of his fans, Ray Frechette, for $350,000, and in their heyday, prior to Hurst’s disappearance [!], they developed a large following who made money consistently by acting upon their forecasts.
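
A rough sketch of such a decomposition, using a crude FFT band-pass in place of Hurst’s hand-designed comb filters (band edges and series name are illustrative; this is our simplification, not Hurst’s actual filters):

    % Sketch: split a de-trended price series into oscillatory components, one per frequency band.
    function comps = comb_decompose(y, bands)
        % y     : de-trended price series (column vector)
        % bands : K-by-2 matrix of [f_lo f_hi] band edges in cycles/sample, within 0..0.5
        N = length(y);
        f = (0:N-1)' / N;                            % DFT bin frequencies
        Y = fft(y(:));
        comps = zeros(N, size(bands, 1));
        for k = 1:size(bands, 1)
            keep = (f >= bands(k,1) & f <= bands(k,2)) | ...
                   (f >= 1 - bands(k,2) & f <= 1 - bands(k,1));   % mirror band (negative frequencies)
            comps(:, k) = real(ifft(Y .* keep));     % one "Hurst cycle" component
        end
    end
    % Each column of comps can then be fitted by A*sin(w*t + phi), with A, w, phi frozen
    % at their recent mean values and extrapolated forward, as Hurst did.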

In his book [2], Hurst recounts a 30-day experiment done with the aid of friends, in which they proved that, even using crude hand-calculator and graphical methods, Hurst’s principles, applied while paper trading, sufficed to make a 9% profit every 9 days!

Alas, non-stationarity was to be the Achilles’ heel of Hurst’s approach.  In the late ‘70s the constant weighting coefficients which Hurst had optimized for earlier market conditions gradually became less and less effective.  Frechette finally sent a pathetic letter to his subscribers saying that he was abandoning the methodology and pleading for someone to suggest a better approach!  (If they had had PCs, and if the Hurst approach had been automated so as to be updated regularly, this debacle might not have occurred.)

Like Taylor, Hurst had discovered the phase-inversion phenomenon.  His basic tool was a weighted moving average (or digital filter in today’s terminology), and he used the frequency-response plot’s possibility of a negative gain (at inappropriately high frequencies [low periods]) to illustrate phase-inversion.  Hurst stated: “Similarly, slightly higher frequencies (than that of cutoff) come through attenuated and 180 degrees out of phase.  …  In using moving averages to help define turning points of price-motion model components, one is now aware of the potential for high-frequency ‘creep through’  —  either in-phase or 180 degrees out of phase.  Such knowledge permits identification of these unwanted residuals, so that they do not mislead conclusions.”  Hurst’s insightful remedy was to augment the standard moving average by an “inverse centered moving average,” whose frequency response is always positive and therefore cannot affect phase.  Phase inversion in the standard centered moving average can be detected by simultaneously plotting the inverse centered moving average; any phase discrepancy between the two shows that the former has produced, as a mere artifact of processing, a false phase inversion!
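
Hurst’s point can be seen in a few lines of MATLAB: the frequency response of an m-point centered moving average has negative lobes, and any spectral component that falls in a negative lobe comes through 180 degrees out of phase (the span m below is illustrative):

    % Sketch: frequency response of a centered m-point moving average.
    m = 10;                                       % moving-average span, in ticks
    h = ones(1, m) / m;                           % equal weights
    f = linspace(0, 0.5, 512);                    % normalized frequency, 0..Nyquist
    H = zeros(size(f));
    for k = 1:m
        H = H + h(k) * exp(-1i*2*pi*f*(k - (m+1)/2));   % centered (zero-lag) response
    end
    H = real(H);                                  % symmetric weights give a real response
    plot(f, H);
    grid on;  xlabel('frequency (cycles/tick)');  ylabel('gain');
    % Wherever H < 0, that frequency "creeps through" phase-inverted.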

We believe that a test of whether or not, in a particular modeling-forecast, Taylor’s procedures have introduced an artifactual phase-flip can be developed along the lines of Hurst’s just-cited solution to the problem.  Specifically, the dominant two periodic terms in the exogenous input (tide-height [1]) can be represented as the real part of the sum of two complex phasors.  Then the complex frequency response can be computed for all frequencies between zero and the Nyquist frequency, and known methods from communications engineering for computing the “complex envelope” of the output can be employed.  If the absolute value of this envelope becomes zero at any frequency, then a phase-flip occurs in passing (from below) through that frequency.  We have made a preliminary effort to implement this test in Taylor’s forecasting procedure, but in all cases tried, the envelope’s magnitude stayed bounded away from zero.  If this were universally true, then the phase-flips encountered by Taylor could not be artifacts but would have to have some more fundamental cause, such as another un-modeled exogenous input.

Note that Hurst’s model was autonomous, although, as we shall see, he speculated that in fact there must be an UNKNOWN exogenous factor, which he called “X motivation,” that was actually driving the cyclicity.  Hurst’s approach was kinematic rather than dynamic.  Just as Kepler found that the orbits of the planets are ellipses, while Newton explained how gravitational dynamics causes motion to evolve along such orbits (and, more generally, along conic sections), so Robert Taylor has taken the Hurst cycles and provided the causal dynamics that produces them.

The big question in Hurst’s mind was, of course, what is the cause of this identifiable, semi-predictable cyclicity?  He stated: “The conclusion is unavoidable that the cyclic regularities noted must be due to the lumped sum of all other possible contributions to the human decision processes  —  or to what we have called ‘X’ motivation.”

Hurst continues: “And this is the concept that is difficult to accept.  To do so, we must admit the possibility that something causes millions of investors operating from widely differing locations, making countless buy and sell decisions, at varying points in time, to behave more or less alike  —  and to do so consistently and persistently!  How can this be?  The answer to this is not known, although reasonable theories can be formulated.”

Then Hurst came awesomely close to discovering the Taylor Effect: “Recent experimental evidence tends to show that such intangibles as fatigue and frame of mind are influenced by the presence or absence of physical force fields.  [He cites experiments on U-2 pilots artificially shielded from the Earth’s magnetic field.]  Now all of us exist in an environment that is simply riddled with force fields  —  gravitational, electrostatic, etc.  If such fields can influence some physical and mental functions, might they not influence others  —  perhaps causing masses of humans to feel simultaneously bullish or bearish in the market, for example?  Conjecture, but a possibility.  …  Seeing how it might come about helps us to accept the evidence of our own eyes and analyses, even when the results seem irrational by past experience.”

In summary, Hurst concluded: “The unknowns in the [decision-making] process probably mask the true cause of price-motion cyclicality.  … Although the cause of cyclicality is unknown, the nature of the effect is certain. … The implications of cyclicality include possible external influences of the decision processes of masses of investors more or less simultaneously.  If this is fact, you must guard yourself carefully against the same influences.  …   Cyclicality is probably not related to rational decision factors.  …  The extent of non-random cyclicality precludes any major contribution to price action by random events.  …   All price fluctuations about smooth long-term trends in the market … are due to manifestations of cyclicality.”

The twin achievements of Robert Taylor are to have discovered the true nature of Hurst’s “external motivation X”   —  which, Hurst speculated, might result from the effects upon human emotions of “gravitational fields”[!]  —  and to have utilized that knowledge in successfully forecasting market fluctuations in a manner automatically adaptive to changing conditions.

Dr. Claud E. Cleeton

Dr. Cleeton, for many years prior to his retirement, was Director of Electronics Research at the Naval Research Laboratory.  He mastered everything in Hurst’s book and built upon it.  In particular he (like Taylor and Hurst) noted [3] the phase-flip problem, which he calls the “out-of-phase problem.”  Cleeton’s solution is to utilize two different centered moving averages of different spans.  In his book [3] Cleeton states: “Figure 7-3 illustrates the distortion in a moving average when the amplitude error is negative.  …  The 30-day and 60-day moving averages crisscross to show a cyclic component in the DJI average.  The period of the cyclic component is such that large-amplitude errors are introduced in the moving average and further the 60-day average is moving out of phase with the price data.”  In summarizing, Cleeton concludes: “For periods less than the span length, the output of the moving average may be 180° out of phase with the data.  This distortion shows up in the charts and may be corrected.”

In the Taylor procedure, the analog of the Cleeton test for phase-flips would be to make the forecast for several different choices of n and plot them on the same graph.  If one or more is out of phase with another, then at least one of the dimensions n is unacceptable.  Taylor had independently discovered this idea in his own experiments, and notes that as the hypothetical state-space dimension n runs from 2 to 32, there may be several forecasts at adjacent dimensions which dramatically disagree owing to a clearly visible phase-flip.  Taylor has concluded that “at least one of the trial-dimensions n nearly ALWAYS provides a correct forecast; the main difficulty is to select which dimension n to accept and which values of n to discard.”  The difficulty is not so much that of discrimination when the analyst has time to examine all plots visually; the difficulty is to develop a reliable method of selecting the optimal dimension n strictly automatically, without subjective human intervention.

Cleeton told one of us in the early ‘80s that, in the early and mid ‘70s (that is, during the 7 years between the appearance of Hurst’s book [2] and that of Cleeton’s [3]), this virtuoso technical analyst had amazed his friends by mailing out Christmas cards for 7 consecutive years which predicted the DJIA for the next 12 months, and that he had been consistently, awesomely correct!  In his book [3], Cleeton explains step-by-step and in great detail exactly how he did it.  The only complaint we have about Cleeton’s methods is that they involve too much painstaking manual labor, refitting, trial-and-error, and customization to be acceptable to anyone except a retired post-doctoral expert in numerical analysis who was willing to spend literally months on a single (one-year) forecast for a single time-series.

Busy twenty-first century investors want to consider many different stocks rather than just one index, and they want their analysis procedures fully automated so that they can be updated daily and/or weekly, rather than just annually.  In this sense, with all due respect to their pioneering brilliance, a fair comparison between the final results of Hurst & Cleeton and of Taylor would be that between da Vinci’s conceptual toy-helicopter design and Sikorsky’s operational man-carrying vehicle.

 

Jim Sloman & Welles Wilder

Welles Wilder, originally trained as a mechanical engineer, was widely regarded as the USA’s premier “technical” market-analyst when in September 1983 he received an unexpected phone call from Jim Sloman.  According to Wilder, young Sloman had placed among the top high-school students in the USA in a national exam given to all senior math students, and subsequently had been awarded a National Merit Scholarship to Princeton University where he studied math and physics in special advanced classes.

Wilder paid Sloman “a very large sum of money” to learn the rationale of the results which Sloman showed him.  Later Wilder printed 100 copies of his book The Delta Phenomenon and, in conjunction with a two-day private tutoring seminar for each purchaser, sold nearly all 100 copies for $35,000 cash each to new members of the secret Delta Society International.

Some years later one of the members broke his contract of confidentiality and a book was offered for public sale that was obviously a clone of Wilder’s book.  By threatening legal action, Wilder got the book withdrawn, but he knew that the secret was out.  With agreement from all of the members, he then sold  about 1000 new copies of the book for $3,500 each and gave the original purchasers three-quarters of the profits.

Quite recently, Wilder saw the 1996 reprint [13] of Dr. Arnold Lieber’s 1978 book [4], which he had not known about.  Realizing that further secrecy was futile, the members agreed to mass-market the book and it has now been offered to hundreds of thousands of potential buyers for less than $200/copy.

After college Sloman had tried several careers, including stock-broker and commodity trader, but decided that he lacked the right temperament for it.

Sloman began [7]: “Where my house is on the beach, when the tide is out there is a nice sandy beach, but when the tide is in, the water comes all the way to the rocks at the sea wall.  [In order to run on the beach, one uses] TIDE CHARTS to know when the tide will be out … .  A tide chart is a picture of an end result of the interaction of the sun, the moon, and the earth!  …  How long would it be until a day occurred that was identical to the current day?  Would this not present one complete interaction of the sun, the moon, and the earth?”

Sloman’s theory of “Short Term Delta” (STD) is that markets repeat directly or inversely [phase-flipped] “every 4 days.”  [Taylor’s independently discovered theory of Pivot Points says that the basic period varies between 4 and 9 days.]

Sloman’s theory of “Intermediate Term Delta” (ITD) is that markets repeat directly or inversely “every 4 lunar months.”

Sloman’s theory of “Medium Term Delta” (MTD) is that markets repeat directly or inversely every 12 lunar months.  Also stated: “Markets repeat directly or inversely every lunar year.”

The word “inversely” is there (in STD, ITD, MTD)  because Sloman & Wilder also discovered the phase-inversion problem that had earlier plagued Hurst and Cleeton, and was soon to plague Taylor.

Sloman further defined “Long Term Delta” (LTD) as every 4 calendar years and “Super Long Term Delta” (SLTD) as every 19 years and 5 hours.  (In the saros, the period is 18.02 years; in the modern theory of tidal heights [1], the period is 18.6 years; where they get 19 years and 5 hours, we do not know.)  However, interestingly, Wilder claims that in studying hundreds of years of data he has never seen a phase-inversion in either LTD or SLTD!

The LTD is a well-established phenomenon of cyclicality in the DJIA.  One of us in the late ‘70s asked the famous technical analyst, Joe Granville, at a public address to thousands of his fans, “why does the DJIA have a 4-year cycle?”  Granville (who had come on stage carrying a live goose) replied, “How would I know?  Why do the lemmings march into the sea every 4 years?”

We omit further discussion of the Sloman-Wilder discovery, because their method is not amenable to automated analysis; to decide whether or not to expect a phase-inversion at the end of one cycle, several highly subjective decisions have to be made while studying long-term historical charts.  Wilder admits that a few of the original Delta Society members never could quite get the hang of it!

We mention the Sloman-Wilder discovery, however, because no reader of their beautifully-printed, multi-colored chart-illustrated and breathlessly excitedly-written book can close it and still doubt that the Taylor Effect is an objectively real phenomenon.  If Wilder had been trained in digital signal processing (DSP) and electrical engineering rather than mechanical engineering, and if he had been able to do his own programming rather than being limited to hiring a programmer, he might have succeeded in not only getting an understanding of the nature of the problem and its central difficulty but in truly “solving” it and reducing its solution to practice, as has Taylor.

But Wilder admits that his “turning points” only have a 51% probability of occurring within a few days of when they are expected.

So, with all due respect to Sloman’s flash of genius, and to Wilder’s informed appreciation of the awesome magnitude of Sloman’s insight, they have not (in the sense of intellectual property law) truly reduced their ideas to practice as yet.

Conclusion

The several prior partial, yet incompletely practiced, anticipations of Hurst [2], Cleeton [3], and Sloman & Wilder [7], together with the unrelated but relevant psychiatric research of Lieber ([4], [13]), provide overwhelming evidence of the validity of Taylor’s discovery, without in the slightest detracting from his priority as the first person both to make the discovery and then to reduce it to a practically useful form.

 

REFERENCES

[1] Paul Schureman, Manual of Harmonic Analysis and Prediction of Tides, 1940, U.S. Dept. of Commerce, Coast & Geodetic Survey, Special Publication No. 98.

[2] J.M. Hurst, The Profit Magic of Stock Transaction Timing, 1970, Prentice-Hall.

[3] Claud E. Cleeton, Ph.D., The Art of Independent Investing: A Handbook of Mathematics, Formulas and Technical Tools for Successful Market Analysis and Stock Selection, 1976, Prentice-Hall.

[4] Arnold Lieber, M.D., The Lunar Effect: Biological Tides and Human Emotions, 1978, Doubleday & Co.

[5] Andrew C. Harvey, Forecasting, Structural Time Series Models and the Kalman Filter, 1989, Cambridge University Press.

[6] Masanao Aoki, State Space Modeling of Time Series, 1990, Springer-Verlag.

[7] Welles Wilder, The Delta Phenomenon, or The Hidden Order in All Markets, 1991, The Delta Society Int’l., P.O. Box 128, 5615 McLeansville Rd., McLeansville, NC 27301.

[8] Harry S. Dent, Jr., The Great Boom Ahead, 1993, Hyperion.

[9] Andreas S. Weigend & Neil A. Gershenfeld, eds., Time Series Prediction: Forecasting the Future and Understanding the Past, 1993, Addison-Wesley.

[10] Robert D. Taylor, OPS*time: The Secret to Perfect Timing for Success [*Optimum Performance Schedule], 1994, Longstreet Press, Atlanta, GA.

[11] D.R. Cox et al., eds., Time Series Models: In Econometrics, Finance and Other Fields, 1996, Chapman & Hall.  [cf. particularly Ch. 3 on Forecasting]

[12] Curt Wells, The Kalman Filter in Finance, 1996, Kluwer Academic.

[13] Arnold Lieber, M.D., How the Moon Affects You, 1996, Hastings House.

[14] Masanao Aoki & Arthur M. Havener, eds., Applications of Computer Aided Time Series Modeling, 1997, Springer-Verlag.

[15] Harry S. Dent, Jr., The Roaring 2000s, 1998, Simon & Schuster.