Why do we use logarithmic returns in Quantitative Finance

There are several reasons, each of whose importance varies by problem domain.

We begin by defining a return r_i at time i, where p_i is the price at time i and j \equiv (i - 1):
   
 r_i = \frac{p_i - p_j}{ p_j }
A benefit of using returns, versus prices, is normalization: all variables are measured in a comparable metric, enabling evaluation of analytic relationships among two or more variables despite originating from price series of unequal values. This is a requirement for many multidimensional statistical analyses and machine learning techniques. For example, interpreting an equity covariance matrix is made sane when the variables are all measured in percentages.
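To make this concrete, here is a minimal sketch (using numpy, with hypothetical price arrays) showing how two price series on very different scales become directly comparable once converted to returns:

    import numpy as np

    # Two hypothetical price series on very different scales
    # (e.g. a ~$1000 stock versus a ~$10 stock).
    p_a = np.array([1000.0, 1010.0, 1005.0, 1020.0, 1015.0])
    p_b = np.array([10.0, 10.2, 10.1, 10.4, 10.3])

    # Simple returns: r_i = (p_i - p_j) / p_j, with j = i - 1
    r_a = np.diff(p_a) / p_a[:-1]
    r_b = np.diff(p_b) / p_b[:-1]

    # Covariance of raw prices is dominated by the larger series' scale;
    # covariance of returns is measured in comparable (percentage) units.
    print(np.cov(p_a, p_b))
    print(np.cov(r_a, r_b))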
There are several benefits to using log returns, both theoretical and algorithmic.
First, log-normality: if we assume that prices are distributed log-normally (which, in practice, may or may not be true for any given price series), then \log(1 + r_i) is conveniently normally distributed, because:
    1 + r_i = \frac{p_i}{p_j} = e^{\log(p_i / p_j)}
This is handy given much of classic statistics presumes normality.
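As a quick sanity check, here is a minimal sketch (assuming numpy and scipy are available) that simulates log-normally distributed price relatives and verifies that their logs pass a normality test:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Simulate gross returns p_i / p_j as log-normal draws,
    # i.e. log(p_i / p_j) ~ Normal(mu, sigma).
    gross_returns = rng.lognormal(mean=0.0005, sigma=0.01, size=10_000)
    log_returns = np.log(gross_returns)

    # D'Agostino-Pearson normality test: a large p-value means we fail
    # to reject normality, consistent with the log-normal assumption.
    print(stats.normaltest(log_returns))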
Second, approximate raw-log equality: when returns are very small (common for trades with short holding durations), log returns are close in value to raw returns, via the first-order approximation:
    \log(1 + r) \approx r, \quad r \ll 1
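For instance, a quick check (using Python's standard library) shows the gap shrinking as r gets smaller:

    import math

    for r in (0.10, 0.01, 0.001):
        # log(1 + r) versus r: the error shrinks quadratically (~ r^2 / 2).
        print(r, math.log1p(r), math.log1p(r) - r)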
Third, time-additivity: consider an ordered sequence of n trades. A statistic frequently calculated from this sequence is the compounding return, which is the running return of this sequence of trades over time:
    \displaystyle (1 + r_1)(1 + r_2)  \cdots (1 + r_n) = \prod_i (1+r_i)
This formula is fairly unpleasant, as probability theory reminds us that the product of normally-distributed variables is not normal. Instead, the sum of normally-distributed variables is normal (important technicality: provided the variables are jointly normal, for which independence suffices), which is useful when we recall the following logarithmic identity:
    \log(1 + r_i) = \log(p_i / p_j) = \log(p_i) - \log(p_j)
Thus, compound returns, expressed as sums of log returns, are normally distributed. Finally, this identity leads us to a pleasant algorithmic benefit: a simple formula for calculating compound returns, since the sum telescopes:
    \displaystyle \sum_i \log(1+r_i) = \log(1 + r_1) + \log(1 + r_2)  + \cdots + \log(1 + r_n) = \log(p_n) - \log(p_0)
Thus, the compound return over n periods is merely the difference in log prices between the initial and final periods. In terms of algorithmic complexity, this simplification reduces O(n) multiplications to a single O(1) subtraction. This is a huge win for moderate to large n. Further, this sum is useful for cases in which returns diverge from normal, as the central limit theorem reminds us that the sample average of the log returns will converge to normality (presuming independent, identically distributed draws with finite first and second moments).
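A minimal sketch (with a hypothetical price array) confirming that the O(n) product and the O(1) telescoped log difference agree:

    import numpy as np

    p = np.array([100.0, 101.5, 100.8, 102.3, 103.0])  # hypothetical prices
    r = np.diff(p) / p[:-1]                            # simple returns

    compound_product = np.prod(1 + r) - 1                           # O(n) multiplications
    compound_telescoped = np.exp(np.log(p[-1]) - np.log(p[0])) - 1  # O(1)

    print(compound_product, compound_telescoped)  # equal up to float error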
Fourth, mathematical ease: from calculus, we are reminded (ignoring the constant of integration):
    e^x = \int e^x dx = \frac{d}{dx} e^x = e^x
This identity is tremendously useful, as much of financial mathematics is built upon continuous time stochastic processes which rely heavily upon integration and differentiation.
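For instance (a standard illustration): in the canonical geometric Brownian motion model dp_t = \mu p_t dt + \sigma p_t dW_t, applying Itô's lemma to \log(p_t) yields:
    d \log(p_t) = (\mu - \frac{\sigma^2}{2}) dt + \sigma dW_t
so the log return \log(p_t / p_0) is normally distributed with mean (\mu - \frac{\sigma^2}{2}) t and variance \sigma^2 t, tying this identity back to the log-normality benefit above.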
Fifth, numerical stability: adding small numbers is numerically safe, while multiplying many small numbers is not, as it is subject to arithmetic underflow. For many interesting problems, this is a serious potential issue. To solve it, either the algorithm must be modified to be numerically robust, or the product can be transformed into a numerically safe summation via logs.
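A minimal sketch demonstrating the failure mode: the raw product of many small factors underflows to zero, while the equivalent sum of logs remains representable:

    import math

    factors = [1e-5] * 100  # 100 small positive factors

    product = 1.0
    for f in factors:
        product *= f        # underflows to 0.0 long before the loop ends

    log_sum = sum(math.log(f) for f in factors)  # finite: 100 * log(1e-5)

    print(product)   # 0.0 (arithmetic underflow)
    print(log_sum)   # about -1151.29, still representable in log space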

As suggested by John Hall, there are downsides to using log returns. Here are two recent papers to consider (along with their references):
