Today presents an unusual but interesting problem for how future networks will be structured. As our new data input algorithm yields interesting new results, the networks have to adapt to them, but when your signal can only exist within a predefined band, how do you use that band to tell the network when something doesn't matter? We have two approaches: Signals A and Signals B. They might have actual industry names, but we came up with them independently of anyone else, so we can call them whatever we want. Before diving into what these are, it is important to define which variables we can actually change: magnitude and neutral point. Magnitude is the correct term here, but I have no idea what the proper name for a signal's point of oscillation is, so "neutral point" it is. Both terms matter when defining what a network input might look like, but it may be easiest to explain by example.
Imagine you have a stock that varies each day. Each day you take whatever that variation is (say +$2) and add it to your graph. On days the stock goes up, the point you plot is positive; on days it goes down, the point you plot is negative. After a while you'll start to see some interesting oscillating activity. Not really enough to tell you when to buy or when to sell, but enough that a network can start making some sense of it. These up and down variations are the magnitude of the signal, and the point the ups and downs vary around (in our case, 0) is the neutral point.
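As a quick sketch of that example (with made-up prices), the daily variations are just the difference between consecutive closes:

```python
# Hypothetical closing prices over a week.
prices = [100, 102, 101, 104, 103, 103, 100]

# Daily variation: today's close minus yesterday's close.
deltas = [b - a for a, b in zip(prices, prices[1:])]
print(deltas)  # [2, -1, 3, -1, 0, -3]

# The deltas swing above and below 0, so the neutral point is 0;
# the size of each swing is the magnitude.
```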
Now, because a network will have a bad day if you give it inputs that fall outside its expected range (in this case, between 0 and 1), we scale every variation into that range by dividing by twice the maximum absolute variation and adding 0.5. If we didn't do this, it would be a little like trying to run a car on Marine Diesel Oil. But with this new data input algorithm we now have multiple inputs, and not all inputs are created equal. This means we need to scale both the magnitude and the neutral point accordingly, and this is where we bring in the different methods.
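That scaling step can be written out directly, continuing the delta example above (a minimal sketch; the function name is my own):

```python
def scale_to_unit(deltas):
    """Map raw variations into [0, 1]: divide by twice the maximum
    absolute value, then add 0.5 so the neutral point lands at 0.5."""
    m = max(abs(d) for d in deltas)
    return [d / (2 * m) + 0.5 for d in deltas]

scaled = scale_to_unit([2, -1, 3, -1, 0, -3])
# max |d| is 3, so +3 maps to 1.0, -3 maps to 0.0, and 0 maps to 0.5
```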
Signals A:
Logic:
The network can theoretically reduce the weights of less important inputs, but let's give it a hand.
Method:
Take the scaled data (neutral point at 0.5) and multiply it by its importance factor. The importance factor tells you how much impact the input has on the accuracy of the network; this is what the fancy new network input algorithm determines. The nice thing about this method is that anything basically worthless to the network will only ever have values near 0 and won't have any noticeable impact on the actual network calculations.
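A minimal sketch of the Signals A step (the function name and example importance factor are mine):

```python
def signals_a(scaled, importance):
    """Signals A: multiply the whole [0, 1] scaled signal by its
    importance factor. For a near-zero importance factor, both the
    magnitude and the neutral point collapse toward 0, so the input
    barely registers in the network's calculations."""
    return [s * importance for s in scaled]

signals_a([0.0, 0.5, 1.0], 0.1)  # -> approximately [0.0, 0.05, 0.1]
```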
Signals B:
Logic:
The network only really cares about variation in signals, so anything unimportant will be automatically filtered out by training; we just have to scale the magnitude.
Method:
With the Signals B method you subtract 0.5 from your nicely scaled data, re-scale it by multiplying it all by the importance factor, and then add the 0.5 back. This has the effect of maintaining the neutral point while reducing the variation. Theoretically this should reduce the significance of bad variables on the overall network calculation, and the network will by nature reduce their impact further by lowering the associated weights.
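The Signals B step looks like this as a sketch (again, the function name and example importance factor are mine):

```python
def signals_b(scaled, importance):
    """Signals B: re-center around 0, shrink the variation by the
    importance factor, then shift back so the neutral point stays
    at 0.5 while the swings around it get smaller."""
    return [(s - 0.5) * importance + 0.5 for s in scaled]

signals_b([0.0, 0.5, 1.0], 0.1)  # -> approximately [0.45, 0.5, 0.55]
```

Note the contrast with Signals A: here an unimportant input flattens toward a constant 0.5 rather than toward 0.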
Conclusion
Given these two methods, we are opting for Signals A for the time being, since it more directly takes into account the insignificance of certain input variables. Signals B might also hold merit, but until it can be properly evaluated, Signals A offers the more promising and more logical solution.