Or, let's start simpler: why should I care about entropy rate?
A lot of machine learning research nowadays is focused on finding minimal sufficient statistics of prediction (a.k.a. "causal states"), or just sufficient statistics, of some time series, whether it be a time series of Wikipedia edits or of Amazon purchases. Most of my research assumes that we know these causal states, and then tries to use that knowledge to calculate a range of quantities (including entropy rate and predictive information curves) more accurately.
This leads to the question... why? Why care about these quantities? Entropy rate enjoys a privileged status thanks to Shannon's first theorem (it sets the fundamental limit on lossless compression), so let's focus on predictive information curves for just a second.
For the initiated, the predictive information bottleneck is an application of the information bottleneck method to time series prediction, in which we compress the past as efficiently as possible to understand the future to some desired extent. For the uninitiated, predictive information curves tell us the tradeoff between the resources required to predict the future and predictive power. In one of the first papers on the subject, Still et al. identified causal states as one limiting case of the predictive information bottleneck. With that theorem in mind, one might reasonably ask the following question: why study causal states? Just study the predictive information bottleneck, and causal states pop out as a special case.
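For reference, the predictive information bottleneck is usually posed as a Lagrangian (this follows the standard formulation of Still et al.; the notation here is mine): find a stochastic compression $R$ of the past that solves

```latex
\min_{p(r \mid \overleftarrow{x})} \; I[\overleftarrow{X}; R] \;-\; \beta\, I[R; \overrightarrow{X}]
```

where $\overleftarrow{X}$ is the past, $\overrightarrow{X}$ is the future, and the Lagrange multiplier $\beta$ sets how much predictive power we demand. Sweeping $\beta$ traces out the predictive information curve, and $\beta \to \infty$ is the limit in which the causal states appear.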
Surprisingly, or maybe not so surprisingly, it turns out that calculating predictive information curves and lossy predictive features is much easier when you already have the lossless predictive features, a.k.a. the causal states. For instance, check out some of the examples in this paper. So we end up in a sort of Catch-22: the bottleneck was supposed to hand us the causal states as a limiting case, but to get accurate lossy predictive features, we need accurate lossless predictive features in the first place.
The jaded among us might finally wearily ask the following: now what? We set out to find causal states (lossless predictive features). Some smart people promised us that we could calculate these using the predictive information bottleneck, but now someone else has told us that those calculations are likely to be crappy unless we already have access to causal states.
At this point, I "pivoted", provoked by the following question: how can we tell if a sensor is excellent at extracting lossy predictive features? One way to find out is to send input with known causal states to the sensor, and then calculate how well the sensor performs relative to the corresponding predictive information curve, as was done in this inspiring paper. If we know the input's causal states, then we can calculate its predictive information curve rather accurately, and therefore can be confident in our assessment of the sensor's predictive capabilities.
At this point, Professor Crutchfield pointed something else out: a coarse-grained dynamical model might be desired if the original model is too complicated to be understood. Imagine generating a very principled low-dimensional dynamical description of complicated genetic or neural circuits. It's not yet clear that the predictive information bottleneck provides the best way of doing so, but it's at least a start.
These two applications are summed up by the following paragraph: "At second glance, these results may also seem rather useless. Why would one want lossy predictive features when lossless predictive features are available? Accurate estimation of lossy predictive features can be, and has been, used to further test whether or not biological organisms are near-optimal predictors of their environment. Perhaps more importantly, lossless models can sometimes be rather large and hard to interpret, and a lossy model might be desired even when a lossless model is known."
Calculating the entropy rate (the conditional entropy of the present symbol given all past symbols) or the excess entropy (the mutual information between all past symbols and all future symbols) is not as easy as it may seem. Why? Because there are infinities: an infinite number of past symbols and/or an infinite number of future symbols.
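In symbols (my notation; $X_{-\infty:0}$ denotes all past symbols and $X_{0:\infty}$ the present-and-future symbols):

```latex
h_\mu = H[X_0 \mid X_{-\infty:0}], \qquad \mathbf{E} = I[X_{-\infty:0} \,;\, X_{0:\infty}].
```

Both expressions involve infinitely long blocks of symbols, which is exactly where the trouble starts.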
You can certainly make a lot of progress by tackling this problem head on, looking at longer and longer pasts and/or longer and longer futures.
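Here's a minimal sketch of that head-on approach (my own toy example, not taken from any of the papers mentioned here): estimate the entropy rate of a simple binary Markov chain by computing block-entropy differences $h(L) = H(L) - H(L-1)$ for longer and longer pasts.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Sample path of the "golden mean" process: a binary Markov chain in
# which a 1 is never followed by another 1. Its true entropy rate is
# 2/3 bit per symbol, which makes it a convenient sanity check.
n = 100_000
x = np.zeros(n, dtype=int)
for t in range(1, n):
    x[t] = 0 if x[t - 1] == 1 else rng.integers(0, 2)

def block_entropy(seq, L):
    """Plug-in Shannon entropy (in bits) of length-L blocks."""
    if L == 0:
        return 0.0
    counts = Counter(tuple(seq[i:i + L]) for i in range(len(seq) - L + 1))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

# h(L) = H(L) - H(L-1): the conditional entropy of the next symbol
# given a past of length L-1. Longer pasts give better estimates.
for L in range(1, 6):
    h_L = block_entropy(x, L) - block_entropy(x, L - 1)
    print(f"h({L}) = {h_L:.4f} bits")
```

For this order-1 Markov process the estimates settle down almost immediately; for long-memory processes, convergence in $L$ can be painfully slow, which is the whole problem.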
I'm pretty lazy, so I usually look for shortcuts. Here's my favorite shortcut: identify the minimal sufficient statistics of prediction and/or retrodiction, also known as forward- and reverse-time "causal states". Then you can rewrite most of your favorite quantities that have the "right" kind of infinities in terms of these minimal sufficient statistics. If you're lucky, manipulating the joint probability distribution of these forward- and reverse-time causal states is tractable.
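This is exactly the trick exploited in "Exact complexity" (the notation below is mine): for a stationary process, the infinities collapse once you condition on the forward-time causal states $\mathcal{S}^+$ and reverse-time causal states $\mathcal{S}^-$,

```latex
h_\mu = H[X_0 \mid \mathcal{S}^+], \qquad \mathbf{E} = I[\mathcal{S}^+ \,;\, \mathcal{S}^-],
```

so the entropy rate needs only the present symbol and the current forward causal state, and the excess entropy needs only the joint distribution of the two kinds of causal states.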
My favorite paper illustrating this point is "Exact complexity", but for the more adventurous, I self-aggrandizingly recommend four of my own papers: "Predictive rate-distortion of infinite-order Markov processes", "Signatures of Infinity", "Statistical Signatures of Structural Organization", and the hopefully-soon-to-be-published "Structure and Randomness of Continuous-Time Discrete-Event Processes".
And finally, here's a copy of my talk at APS (that I missed due to sickness) that covers the corollary in "Predictive rate-distortion of infinite-order Markov processes".