TECHNICAL APPENDIX

1.  What pre-processing is required to put market data into a form suitable for use in the Xybernomics program?

While historical values of actual trading data are available for only 6 ½ hours per trading day, and not at all on weekends and holidays, data for the exogenous input are available on a continuous basis.  If samples of actual trading data are taken every half-hour from 9:30 AM to 4:00 PM, with the 9:30 value assumed to be identical to that at the close of the preceding day, there are 14 values per day available for processing.  The exogenous data can, of course, be synchronously sampled at those same times.

There are several methods by which the data segments can be concatenated to form a continuous stream of data.  One such method is simply to ignore the gaps in the data, placing the segments side-by-side.  Such a method disrupts the time base, ignores the time-continuity of the dynamics of both the market and the effect of the exogenous input on the market, and will therefore degrade the accuracy of the market modeling and, hence, of the prediction.  Another method of linking the data segments is to fill in the gaps with constant values equal to the last known value of the variable.  While this method preserves the time base, it injects dynamically unrealistic values of the variable into the record and will, therefore, also create prediction errors.  A third method of linking the data segments is to interpolate linearly between the variable values on either side of the gap.  That method preserves the time base and improves the continuity of the values indicated by the actual record, but ignores the dynamics of both the exogenous input and the trading market.  A fourth method of linking the data segments is to utilize an interpolator based upon a dynamic model of the market derived from the immediately preceding interval during which actual data were available.

If sufficient data are available during the preceding interval to generate the dynamic model, this theoretically more-advanced method will preserve both the time base and the dynamics of the variables involved.  A final adjustment can be made to match the filled-in data to the actual data at the end points.  The only prediction errors introduced by this method are those caused by inaccuracies in the short-term model used for interpolation.  All of these methods of linking discontinuous data segments have been investigated and found to be viable.  The method employed in obtaining the results reported herein is that of filling in the data gaps with constant values equal to the latest known sample during non-trading days, and then taking each "day" to have only 14 ticks, as above.  The resulting input-output series are thus piecewise-composed of 14 consecutive ticks per trading day, followed by a jump over the 17 × 2 = 34 ignored overnight ticks; during non-trading days, the market data are held constant at their last known value.  The obviously problematic nature of this discontinuity is discussed further in Item 4 below.
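As an illustration only, the gap-handling scheme just described (14 half-hour ticks per trading day, overnight ticks skipped, non-trading days held constant at the last known value) could be mechanized along the following lines in Python; the input representation and the function name are our own illustrative choices, not part of the Xybernomics program listing.

    import numpy as np

    def concatenate_trading_days(daily_samples):
        """Concatenate per-day market samples into one series.

        daily_samples: list of (is_trading_day, samples) pairs, in calendar order.
            On a trading day, `samples` holds the 14 half-hour ticks (09:30-16:00),
            with the 09:30 tick already set equal to the preceding day's close.
            On a non-trading day, `samples` is None.

        Trading days contribute their 14 ticks side-by-side (the 34 overnight ticks
        are skipped); non-trading days contribute 14 ticks held constant at the
        last known value.
        """
        out = []
        last_value = None
        for is_trading_day, samples in daily_samples:
            if is_trading_day:
                out.extend(samples)            # 14 actual ticks
                last_value = samples[-1]       # remember the close
            elif last_value is not None:
                out.extend([last_value] * 14)  # hold the last close constant
        return np.asarray(out)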

2.  What mathematical model is employed for the market, and how are the model parameters determined?

The market model employed is that of a linear, time-invariant (over a reasonably short period of time) dynamical system driven by inputs which, except for unpredictable anomalies, are dominated, in the short term, by an exogenous input identified by one of our group, Robert Taylor.  The model is derived from a financial time series, { y(k) }, with bias removed, assumed to be (in the language of the MATLAB System Identification Toolbox) the noise-corrupted output of an ARX(p,q) process x(k), represented by the equations:

x(k) = a(1) x(k-1) + … + a(p) x(k-p) + b(1) u(k-1) + … + b(q) u(k-q) + w(k),

y(k) = x(k) + v(k),

where w(k) and v(k) are zero-mean random input noise and measurement error, respectively; the input noise is understood to include that part of the series not attributable to the ARX(p,q) process.
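For concreteness, the two equations above can be simulated with a few lines of Python; the coefficient values, the noise variances, and the sinusoidal stand-in for the exogenous input used in the example are arbitrary illustrative choices, not values identified from market data.

    import numpy as np

    def simulate_arx(a, b, u, q_var, r_var, rng=None):
        """Simulate x(k) = a(1)x(k-1)+...+a(p)x(k-p) + b(1)u(k-1)+...+b(q)u(k-q) + w(k)
        and y(k) = x(k) + v(k), with w ~ N(0, q_var) and v ~ N(0, r_var)."""
        rng = np.random.default_rng() if rng is None else rng
        p, q = len(a), len(b)
        n = len(u)
        x = np.zeros(n)
        for k in range(max(p, q), n):
            x[k] = (sum(a[i] * x[k - 1 - i] for i in range(p))
                    + sum(b[j] * u[k - 1 - j] for j in range(q))
                    + rng.normal(0.0, np.sqrt(q_var)))
        y = x + rng.normal(0.0, np.sqrt(r_var), size=n)
        return x, y

    # Illustrative use, with arbitrary coefficients and a periodic "exogenous" input:
    u = np.sin(2 * np.pi * np.arange(500) / 28.0)
    x, y = simulate_arx(a=[0.6, 0.3], b=[0.8, -0.2], u=u, q_var=0.01, r_var=0.01)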

3.  How are the order (p,q) and the p+q parameters, c = [a' b']', of the ARX(p,q) process model determined?

The ARX output x(k) is taken to be the market history series, y(k), in the regression method of estimating the parameters of an ARX model under the assumption that p = q.  The order p is chosen by use of the Akaike Information Criterion (AIC).  Specifically, for a given p and q, the model parameter vector, c, is estimated by stacking the ARX equations, with the measurements, y, substituted for all values of k, and finding the OLS (ordinary least-squares) estimate of c.  The linear regression equation resulting from stacking the ARX equations is:  y = [Y U] c + noise.  The OLS estimate of c, c^, is given by:  c^ = inv(X'X) X' y, where the matrix X = [Y U], inv(X'X) denotes the inverse of X'X (so that inv(X'X) X' is a left inverse of X), and X' denotes the transpose of X.
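A minimal Python sketch of this stacking-and-OLS step, and of the AIC-based choice of p, is given below (assuming p = q throughout, as above); the function names and the particular AIC bookkeeping are our own illustrative choices.

    import numpy as np

    def fit_arx_ols(y, u, p):
        """OLS estimate of c = [a' b']' for an ARX(p,p) model, obtained by stacking
        y(k) = a(1)y(k-1)+...+a(p)y(k-p) + b(1)u(k-1)+...+b(p)u(k-p) + noise."""
        rows, targets = [], []
        for k in range(p, len(y)):
            past_y = [y[k - 1 - i] for i in range(p)]   # y(k-1), ..., y(k-p)
            past_u = [u[k - 1 - j] for j in range(p)]   # u(k-1), ..., u(k-p)
            rows.append(past_y + past_u)
            targets.append(y[k])
        X = np.asarray(rows)                            # X = [Y U]
        t = np.asarray(targets)
        c_hat, *_ = np.linalg.lstsq(X, t, rcond=None)   # c^ = inv(X'X) X' y
        residuals = t - X @ c_hat
        return c_hat, residuals

    def choose_order_by_aic(y, u, max_p):
        """Pick the p minimizing AIC = n*log(RSS/n) + 2*(2p); for simplicity each
        candidate p uses its own effective sample length n = len(y) - p."""
        best_p, best_aic = None, np.inf
        for p in range(1, max_p + 1):
            _, res = fit_arx_ols(y, u, p)
            n = len(res)
            aic = n * np.log(np.sum(res ** 2) / n) + 2 * (2 * p)
            if aic < best_aic:
                best_p, best_aic = p, aic
        return best_p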

If the choice of p = q were to be questioned, there are several very good answers.  In the first place, it is necessary to take q < (p+1), for otherwise [as can be seen most easily by introducing the z-transform and replacing the above system by a model defined by a z-transform transfer function from the input to the output] we would be allowing a constant multiple of the input to be an additive component of the output, which is a priori absurd.  Secondly, this transfer function can be regarded as a ratio of polynomials in z, of which the numerator has degree q and the denominator has degree (p+1), where, as mentioned, it is necessary for plausibility that q < (p+1).  But there is no loss of generality in taking q = p, for if the numerator were actually of lesser degree, then the coefficients of its higher-order terms in z would simply be zero.  We have tested this on artificially-constructed numerical examples wherein the answer was known because the data were generated by a system with q < p, and the presently disclosed algorithms worked sufficiently well that the higher-order coefficients were "identified" as equal to zero to better than 17 decimal places; a sketch of such a test is given below.  Finally, the Ho-Kalman Lemma mentioned in Item 7 below implies that the system should be modeled via the state-space model (F,G,H) specified in Item 6 below, wherein the model is isomorphic to any model obtained by a linear change of variables on the state-variables, which is equivalent to replacement of (F,G,H) by (inv(T) F T, inv(T) G, H T), where abs(det(T)) > 0; this allows us to take F in companion-matrix form [i.e., to introduce "phase variables"], which then leads automatically to the ARX(p,p) model-form chosen above.
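The following short script reproduces the spirit of that numerical check, reusing simulate_arx and fit_arx_ols from the sketches above; the particular coefficients and the noiseless setting are our own illustrative choices.

    import numpy as np

    # Generate noiseless data from a system with p = 3 but only q = 1 input lag,
    # then fit an ARX(3,3) model and inspect the estimated b(2) and b(3).
    rng = np.random.default_rng(0)
    u = rng.normal(size=2000)
    x, y = simulate_arx(a=[0.5, 0.2, 0.1], b=[1.0], u=u, q_var=0.0, r_var=0.0, rng=rng)
    c_hat, _ = fit_arx_ols(y, u, p=3)
    b_hat = c_hat[3:]      # the last p entries of c^ are the input coefficients
    print(b_hat)           # b_hat[1] and b_hat[2] come out as ~0, at machine precision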

4.  How do you justify deleting 17 ½ hours of the exogenous input per trading day and setting the trade history series value to a constant for 24 hours on each no-trade day?

These practices are based upon the tacit assumptions that the market is dormant during non-trading hours, that the model of the market is stationary, and that the errors introduced into the model are acceptable.  The exogenous input is a physical variable which, with respect to physical, chronological time, runs 24 hours/day, 7 days/week, continuously, and is known to be a multiply-periodic function of time with 31 independent frequencies; omitting the weekends would therefore introduce fallacious dynamics into the exogenous input more severe than the errors introduced by assuming that the exogenous input drives market activity only while the market is open.  In the future, the equities markets will probably be open continuously, and the daily discontinuity noted in the present methodology will then disappear.  Until then, the ultimate justification for our present practices is that they have been shown to produce useful market forecasts.  (See Item 1 above.)

5.  How are the variances q and r of w(k) and v(k) estimated?

The estimates of both q and r are set equal to the square of the following expression:  std(yd) / [abs(ym) + max(abs(yd))], where ym = mean(y), yd = debiased(y) (i.e., y with its mean removed), std = standard deviation, abs = absolute value, and max = maximum value.  The values of both q and r are deliberately determined in a manner calculated to desensitize the Kalman filter's stability to variability in (F,G,H,q,r), i.e., to enhance its "robustness" to modeling errors.  More accurate estimates of q and r are obtainable using the MSE of

y^(k) – a(1) y^(k-1) – … – a(p) y^(k-p),

and using the MSE of y(k) – y^(k), where

y^(k) = a(1) y(k-1) + … + a(p) y(k-p).

The extent to which the improvement in forecast accuracy possibly attainable by using these more complicated but more precise estimates for q and r would justify the additional programming complexity is under investigation, but the question remains open because the "tradeoff" involved is more subjective than objective.  In fact, from the point of view of desiring the Kalman Filter to be as "robust" as possible, it is not necessarily advantageous to take q and r to be realistic; in his published theory of "Rhobust" filter design via "Rho-synthesis", R.W. Bass has shown that filter robustification can be maximized by taking completely artificial choices of q and r, selected entirely to make the filter robust irrespective of what the theoretically correct choices of q and r may be.  This is legitimate because the Kalman-Bucy Filter is provably optimal only when the model parameters are exactly correct; indeed, as Kalman himself pointed out in his exposition of the Bass theory of asymptotic state-estimators [see the book by Kalman, Falb, and Arbib], if the process noise and measurement noise are actually merely unknown wave-form disturbances [as in C.D. Johnson's theory of Disturbance-Accommodating Control] rather than stochastic processes, then the filter's estimate of the system's state-vector will converge (with a small residual error) in a manner more dependent upon the filter's stability and "structural stability" [robustness] than upon the choice of a filter gain matrix via the Riccati Equation defined by exact values of q and r.

(In aerospace engineering, the values of q and r are adjusted purely empirically, by trial and error; this is called "tuning the Kalman filter.")  The present methodology includes an adaptive feature in which q and r are adjusted, if necessary, until the Kalman filter is found to be stable, although in practice we have found that this adaptivity feature is exercised only in very unusual markets.
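A literal Python rendering of the heuristic estimate defined at the beginning of this Item is given below, together with a crude stand-in for the adaptive adjustment just mentioned; the inflation factor and the form of the stability test are our own illustrative assumptions, not the actual mechanism used in the program.

    import numpy as np

    def heuristic_noise_variance(y):
        """q = r = ( std(yd) / (abs(ym) + max(abs(yd))) )^2, with yd the debiased series."""
        y = np.asarray(y, dtype=float)
        ym = np.mean(y)
        yd = y - ym                                    # debiased(y)
        ratio = np.std(yd) / (abs(ym) + np.max(np.abs(yd)))
        return ratio ** 2                              # one value, used for both q and r

    def adjust_until_stable(q, r, filter_is_stable, max_tries=10):
        """Illustrative stand-in for the adaptive feature: rescale r relative to q
        until the supplied stability test reports that the Kalman filter is stable."""
        for _ in range(max_tries):
            if filter_is_stable(q, r):
                break
            r *= 10.0                                  # arbitrary inflation factor
        return q, r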

6.  How is the Kalman-Bucy Filter used to determine the initial state for predicting the future?

The ARX-in-measurements model discussed in Item 3 is defined by the standard state-space-model equations:

X(k) = F X(k-1) + G U(k-1) + G w(k) ,

y(k) = H X(k) + v(k),

where X(k) = [x(k-p+1), … , x(k)]'; the matrices F, G, and H follow directly from the ARX coefficients of Item 3, the covariances q and r of w and v are determined as specified in Item 5 above, and, without loss of generality, it may be assumed that F is in companion-matrix form.  Applying the Kalman-Bucy Filter for this state-space model to a market data series of selected length up to the immediate past, we obtain an estimate of the state vector, X(t).  This estimate is the initial state then used for running the ARX model to predict future values.
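The construction of the companion-form model and the Kalman-filter pass can be sketched in Python as follows.  Here a and b are the ARX(p,p) coefficients of Item 3 and q and r are the scalar noise variances of Item 5; where the text writes a single matrix G for both the input and the process noise, the sketch separates them into an input-distribution matrix Gu and a process-noise term entering only the newest state component, which is our own bookkeeping choice.

    import numpy as np

    def companion_model(a, b):
        """Build (F, Gu, H) for the ARX(p,p) model with state X(k) = [x(k-p+1),...,x(k)]'."""
        p = len(a)
        F = np.zeros((p, p))
        F[:-1, 1:] = np.eye(p - 1)          # shift rows: X(k)[m] = X(k-1)[m+1]
        F[-1, :] = np.asarray(a)[::-1]      # last row: a(p), ..., a(1)
        Gu = np.zeros((p, p))
        Gu[-1, :] = np.asarray(b)[::-1]     # input feeds only the newest component
        H = np.zeros((1, p))
        H[0, -1] = 1.0                      # y(k) measures the newest component x(k)
        return F, Gu, H

    def kalman_state_estimate(y, u, a, b, q, r):
        """Run a Kalman filter over the measured series to obtain the current state
        estimate X(t), which becomes the initial state for prediction."""
        F, Gu, H = companion_model(a, b)
        p = len(a)
        X = np.zeros((p, 1))
        P = np.eye(p)
        Q = np.zeros((p, p))
        Q[-1, -1] = q                       # process noise drives the newest component
        for k in range(p, len(y)):
            U_prev = np.array([u[k - p + m] for m in range(p)]).reshape(-1, 1)  # u(k-p)...u(k-1)
            # time update (predict)
            X = F @ X + Gu @ U_prev
            P = F @ P @ F.T + Q
            # measurement update with y(k)
            S = (H @ P @ H.T).item() + r
            K = (P @ H.T) / S
            X = X + K * (y[k] - (H @ X).item())
            P = (np.eye(p) - K @ H) @ P
        return X                            # estimate of [x(t-p+1), ..., x(t)]'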

7.  How do you justify a system model identifying the state variables, x(k), with the sequential measurements, y(k)?

A system model that identifies the state vector, x(k), with sequential measurements of the output, y(k), is one of many valid models for a single-output system.  There is a published theorem by Packard, Farmer, et al. (Phys. Rev. Lett., 1980) concerning "geometry from a time-series" in which it was demonstrated that collecting the lagged values of the series, with an optimal choice of the number of lags, defines a state-space which is differentiably homeomorphic to the actual state-space, provided that the time-series was generated by projection from a state-space dynamical system.  In the case of an unknown system excited by a known input, the assumptions of linearity, stationarity, and finite-dimensionality require, in the noiseless case, via the fundamental Ho-Kalman Lemma, that matrices (F,G,H) exist which define the type of state-space model specified in Item 6 above; this leads automatically to the use of lagged outputs as a state-vector when one makes a linear transformation that puts the matrix F into companion-matrix form.  It may be objected that both the Packard-Farmer Theorem and the Ho-Kalman Lemma depend upon assumptions which may not be realized perfectly in the case of economic time-series modeling.  The only reply to that objection is to note that the empirical demonstration of our remarkably reliable market forecasts is the ultimate practical validation of our modeling choice.

8.  Do you estimate the error covariance of the estimates of the system parameters?

No.  The error covariance is not used here.  If it were, the standard regression estimate would be s inv(X'X), where s = y' [I – X inv(X'X) X'] y / (N – 2p) is the estimated residual variance and N is the number of stacked equations.
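If that covariance estimate were wanted, it could be computed from the stacked regressor matrix X = [Y U] and the OLS residuals of Item 3 roughly as follows; the degrees-of-freedom correction shown is a conventional choice on our part.

    import numpy as np

    def parameter_covariance(X, residuals):
        """Estimated covariance of the OLS parameter estimate c^:
        Cov(c^) ~ s * inv(X'X), with s the estimated residual variance."""
        n, m = X.shape                                   # m = 2p parameters
        s = float(residuals @ residuals) / (n - m)       # residual variance estimate
        return s * np.linalg.inv(X.T @ X)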

9.  How is the ARX modeling procedure used to forecast the market?

It is used to make relatively short-term forecasts between two consecutive predictable anomalies observable in the exogenous input, which can be called inflection points; Robert Taylor had previously termed them Pivot Points, although "inflection points" appears to be the more precise geometric term.  Note that, in predicting future market trends, the needed future values of the exogenous input are known, but the future values of the market data are not known and must be predicted by the ARX process itself.  The absolute levels of the predicted market values are generally not in good agreement with the actual values.  However, the market trends indicated by those predictions are, generally, in good agreement with the actual market trends.  Since inflection points are known to interfere with the prediction of market trends, it is important to flag these points, and they are clearly identified by the methodology employed in our process.  The market trends indicated by the market forecast plots, in conjunction with the inflection-point flags provided, have proved to be a remarkably good indicator of the actual market trends to be expected over that interval.
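The forward-prediction step just described, in which the known future exogenous samples are fed into the ARX recursion while the unknown future market values are supplied by the recursion itself, can be sketched as follows; the function name and the indexing conventions are our own.

    import numpy as np

    def forecast_arx(x_init, u, t, horizon, a, b):
        """Predict x(t+1), ..., x(t+horizon) from the ARX(p,p) recursion, seeding the
        lagged x-values with the state estimate x_init = [x(t-p+1), ..., x(t)] from
        the Kalman filter of Item 6, and drawing the known exogenous samples from the
        full series u (indexed by absolute time, covering both past and future)."""
        p = len(a)
        x = {t - p + 1 + m: float(x_init[m]) for m in range(p)}   # seed with the state estimate
        preds = []
        for k in range(t + 1, t + 1 + horizon):
            x[k] = (sum(a[i] * x[k - 1 - i] for i in range(p))
                    + sum(b[j] * u[k - 1 - j] for j in range(p)))
            preds.append(x[k])
        return np.asarray(preds)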

10.  How are the inflection points identified?

Linking together the 14-sample segments of the exogenous input produces "cusps" in the resulting series, when displayed graphically.  These "cusps" in the resulting curve point upward for a number of segments, and then flip over and point downward for about the same number of segments.  An inflection point can be recognized visually as the point in the composite exogenous series at which the cusps change direction.  Robert Taylor presently renders his selection of inflection points more numerically precise by choosing either the maximum of a sequence of local minima, or the minimum of a sequence of local maxima; these generally occur at the points at which the visible "cusp-flip" from a row similar to UUU… to a row of upside-down U's can be seen, but he finds this to be a less subjective decision than picking the point of visible "flip" by visual inspection of a graphical display.  Earlier, Taylor had selected inflection points as those days on a Calendar of Daily Tides at which 3 rather than the normal 4 daily extrema occurred; although this discovery had great heuristic value, and generally selects inflection points within a day or two of the two other methods discussed, it involves additional subjective complications which Taylor was relieved to discard when he discovered the more mechanical/numerical procedure specified above.
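One possible mechanization of the "maximum of the local minima / minimum of the local maxima" rule is sketched below; the strict-extremum criterion and the function names are our own illustrative choices, and any grouping of extrema by segment, which the actual procedure may require, is omitted for brevity.

    import numpy as np

    def local_extrema(series):
        """Indices of the strict local minima and maxima of a 1-D series."""
        s = np.asarray(series, dtype=float)
        k = np.arange(1, len(s) - 1)
        minima = k[(s[k] < s[k - 1]) & (s[k] < s[k + 1])]
        maxima = k[(s[k] > s[k - 1]) & (s[k] > s[k + 1])]
        return minima, maxima

    def candidate_inflection_points(series):
        """Return the index of the largest local minimum and of the smallest local
        maximum of the composite exogenous series, the two candidate markers
        described in Item 10."""
        s = np.asarray(series, dtype=float)
        minima, maxima = local_extrema(s)
        max_of_minima = int(minima[np.argmax(s[minima])]) if len(minima) else None
        min_of_maxima = int(maxima[np.argmin(s[maxima])]) if len(maxima) else None
        return max_of_minima, min_of_maxima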

11.  How do you explain the high rate of success of the procedure in forecasting trends in the market?

The exogenous input is a consistently important driving factor for any financial time series, driving its upward and downward fluctuations.  This is the essence of the "Taylor Effect".

To be patentable, an invention or discovery must be demonstrated to possess novelty, utility, and non-obviousness.

Novelty is demonstrated here by the fact that although Hurst and Cleeton recognized that some exogenous input was driving the “quasi-predictable” quasi-periodic fluctuations which they exploited, and though Hurst speculated that “gravitational” or electrostatic or magnetic or electromagnetic fields might be involved, neither Hurst nor Cleeton [nor any predecessor] was able to derive a valid exogenous-input model.

Non-obviousness is demonstrated by the fact that although Sloman and Wilder correctly identified the exogenous input as the influence of the lunar gravitational field on human biological/emotional rhythms (as in psychiatrist Lieber's "biological tides" effect), and obtained a rough estimate of the periodicity of the inflection points (which they specified as "every four days", in comparison to Taylor's more accurate adaptive specification of "every four to nine days, depending upon the particular recent history of the exogenous input"), they were unable to reduce their discovery to useful algorithmic practice, despite tremendously diligent efforts over several years.

Finally, utility of the Taylor Effect is made manifest by the remarkable success of our unique market-data processing algorithms and procedure.

In conclusion, we have demonstrated that the discovery documented in the preceding technical essay and the present technical appendix, as reduced to practice in the computer program whose listing is appended, possesses each of the three desiderata of novelty, utility, and non-obviousness to a high degree, in a field which has been studied diligently by thousands of highly-motivated and talented investigators, without prior complete success.  Accordingly, the present disclosure constitutes an enabling disclosure in the sense of intellectual property law.  Moreover, the Taylor Effect is an excellent example of what one Supreme Court patent decision called a "result long sought, seldom approached, and never attained."