Quantitative Analysis, Risk Management, Modelling, Algo Trading, and Big Data Analysis

Fast Walsh–Hadamard Transform in Python

Lesson 9

I felt a bit unsatisfied after my last post on Walsh–Hadamard Transform and Tests for Randomness of Financial Return-Series, having left you all with a slow version of the Walsh–Hadamard Transform (WHT). Someone wise once said: in order to become a champion, you need to fight one round longer. So here I go, one more time, on the WHT in Python. Please excuse me, or learn from it. The choice is yours.

This post is not only a lesson on how to convert a slow algorithm into its faster version, but also on how to time it and benefit from its optimisation for speed. Let’s start from the beginning.

1. From Slow WHT to Fast WHT

In the abovementioned post we outlined that the Slow Walsh–Hadamard Transform of any signal $x(t)$ of length $n=2^M$, where $M\in\mathbb{Z}^+$ and $M\gt 2$, may be derived as:
$$
WHT_n = \mathbf{x} \bigotimes_{i=1}^M \mathbf{H_2}
$$ where $\mathbf{H_2}$ is the Hadamard matrix of order 2 and the signal $x(t)$ is discrete and real-valued.
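To make the formula concrete, the slow transform can be sketched with NumPy by building the order-$n$ Hadamard matrix from repeated Kronecker products (a minimal sketch for illustration only; `slow_wht` is a hypothetical name, not the `WHT` function from the previous post):

```python
import numpy as np

def slow_wht(x):
    # Slow WHT via an explicit Kronecker-built Hadamard matrix.
    # O(n^2) time and memory -- fine for illustration, not for long signals.
    x = np.asarray(x, dtype=float)
    n = len(x)
    assert n & (n - 1) == 0  # n must be a power of 2
    H2 = np.array([[1., 1.], [1., -1.]])
    H = np.array([[1.]])
    while H.shape[0] < n:
        H = np.kron(H, H2)   # H_{2k} = H_k (x) H_2
    return H @ x / n         # normalised transform

print(slow_wht([1, 1, 1, -1, 1, -1, 1, -1]))
# [ 0.25  0.75  0.25 -0.25  0.25 -0.25  0.25 -0.25]
```

The output agrees with the `WHT(X)` result shown later in this post.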

The Kronecker product of two matrices plays a key role in the definition of WH matrices. The Kronecker product of $\mathbf{A}$, a square matrix of order $n$, and $\mathbf{B}$, a square matrix of order $m$, is the square matrix $\mathbf{C}$ of order $nm$ such that:
$$
\mathbf{C} = \mathbf{A} \bigotimes \mathbf{B}
$$ Fino & Algazi (1976) showed that if $\mathbf{A}$ and $\mathbf{B}$ are unitary matrices then $\mathbf{C}$ is also unitary and can be computed by the fast algorithm shown below:

[Figure: flow diagram of the Fino–Algazi fast in-place algorithm]

which can be computed in place. The technical details are beyond the scope of this post; however, the paper of Fino & Algazi (1976) is a good place to start your research on the Fast Walsh–Hadamard Transform algorithm.
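The Kronecker product itself is available in NumPy as `np.kron`, so the construction of $\mathbf{H_4}$ from $\mathbf{H_2}$ can be checked directly:

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]])
# each entry of the left factor scales a full copy of the right factor
H4 = np.kron(H2, H2)
print(H4)
# [[ 1  1  1  1]
#  [ 1 -1  1 -1]
#  [ 1  1 -1 -1]
#  [ 1 -1 -1  1]]
```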

The algorithm requires a bit-reversal permutation. This tricky subject has been covered by Ryan Compton in his post entitled Bit-Reversal Permutation in Python. For me, it’s a gateway (or a shortcut) to porting Matlab’s bitrevorder function, which permutes data into bit-reversed order. Therefore, for any Python list we obtain its bit-reversed order making use of Ryan’s functions, namely:

import numpy as np
 
def bit_reverse_traverse(a):
    # (c) 2014 Ryan Compton
    # ryancompton.net/2014/06/05/bit-reversal-permutation-in-python/
    n = a.shape[0]
    assert not n & (n-1)  # assert that n is a power of 2
    if n == 1:
        yield a[0]
    else:
        even_index = np.arange(n//2)*2      # integer division for Python 3
        odd_index = np.arange(n//2)*2 + 1
        for even in bit_reverse_traverse(a[even_index]):
            yield even
        for odd in bit_reverse_traverse(a[odd_index]):
            yield odd
 
def get_bit_reversed_list(l):
    # (c) 2014 Ryan Compton
    # ryancompton.net/2014/06/05/bit-reversal-permutation-in-python/
    n = len(l)
    indices = np.arange(n)
    b = []
    for i in bit_reverse_traverse(indices):
        b.append(l[i])
    return b

which can be called for any Python list or 1D NumPy array as follows:

from random import randrange, seed
from WalshHadamard import *
 
seed(2015)
 
X=[randrange(-1,2,2) for i in range(2**3)]
print("X = ")
print(X)
 
x=get_bit_reversed_list(X)
x=np.array(x)
print("\nBit Reversed Order of X = ")
print(x)

which returns:

X = 
[1, 1, 1, -1, 1, -1, 1, -1]
 
Bit Reversed Order of X = 
[ 1  1  1  1  1 -1 -1 -1]

i.e. exactly the same output as we obtain for the same signal $X$ with Matlab’s bitrevorder(X) function, namely:

>> X=[1, 1, 1, -1, 1, -1, 1, -1]
X =
     1     1     1    -1     1    -1     1    -1
 
>> bitrevorder(X)
ans =
     1     1     1     1     1    -1    -1    -1
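As an independent sanity check, the same ordering can be obtained by reversing each index’s binary representation directly (`bit_reversed_indices` is a hypothetical helper written for this check, not part of WalshHadamard.py):

```python
def bit_reversed_indices(n):
    # indices 0..n-1 with their m-bit binary representations reversed
    m = n.bit_length() - 1
    assert 1 << m == n, "n must be a power of 2"
    return [int(format(i, '0%db' % m)[::-1], 2) for i in range(n)]

X = [1, 1, 1, -1, 1, -1, 1, -1]
print(bit_reversed_indices(8))          # [0, 4, 2, 6, 1, 5, 3, 7]
print([X[i] for i in bit_reversed_indices(8)])
# [1, 1, 1, 1, 1, -1, -1, -1]
```

The permuted signal matches both Ryan’s functions and Matlab’s bitrevorder.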

2. Fast Walsh–Hadamard Transform in Python

Given the above, we obtain the Fast WHT by adapting a Python version of the MathWorks’ algorithm and making use of Ryan’s bit-reversed order, for any real-valued discrete signal $x(t)$, as follows:

from math import log
 
def FWHT(X):
    # Fast Walsh-Hadamard Transform for 1D signals
    # of length n=2^M only (not error-proofed for now)
    x = get_bit_reversed_list(X)
    x = np.array(x)
    N = len(X)
 
    # first stage: butterflies over adjacent pairs
    for i in range(0, N, 2):
        x[i] = x[i] + x[i+1]
        x[i+1] = x[i] - 2*x[i+1]
 
    # remaining log2(N)-1 stages
    L = 1
    y = np.zeros_like(x)
    for n in range(2, int(log(N, 2))+1):
        M = 2**L
        J = 0; K = 0
        while K < N:
            for j in range(J, J+M, 2):
                y[K]   = x[j]   + x[j+M]
                y[K+1] = x[j]   - x[j+M]
                y[K+2] = x[j+1] + x[j+1+M]
                y[K+3] = x[j+1] - x[j+1+M]
                K = K + 4
            J = J + 2*M
        x = y.copy()
        L = L + 1
 
    return x/float(N)

where an example call of this function in your main Python program can look as follows:

from random import randrange, seed
from WalshHadamard import *
 
seed(2015)
 
X=[randrange(-1,2,2) for i in range(2**3)]
 
print("X = ")
print(X)
print("Slow WHT(X) = ")
print(WHT(X)[0])
print("Fast WHT(X) = ")
print(FWHT(X))

returning the output, e.g.:

X = 
[1, 1, 1, -1, 1, -1, 1, -1]
Slow WHT(X) = 
[ 0.25  0.75  0.25 -0.25  0.25 -0.25  0.25 -0.25]
Fast WHT(X) = 
[ 0.25  0.75  0.25 -0.25  0.25 -0.25  0.25 -0.25]
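For comparison, a common alternative formulation of the fast transform works in natural order and needs no bit-reversal pass at all (a sketch under the same power-of-two assumption, normalised by $n$ to match the outputs above; `fwht_inplace` is a hypothetical name, not the FWHT function from WalshHadamard.py):

```python
import numpy as np

def fwht_inplace(x):
    # Natural-order fast WHT via in-place butterflies; O(n log n).
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    assert n & (n - 1) == 0, "length must be a power of 2"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):     # each block of width 2h
            for j in range(i, i + h):    # butterfly within the block
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x / n

print(fwht_inplace([1, 1, 1, -1, 1, -1, 1, -1]))
# [ 0.25  0.75  0.25 -0.25  0.25 -0.25  0.25 -0.25]
```

It reproduces the same coefficients for the test signal, which suggests both formulations implement the same (Hadamard-ordered) transform.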

3. Slow vs. Fast Walsh-Hadamard Transform in Python

When it comes to making speed comparisons I always feel uncomfortable due to one major factor: the machine I use to run the test. And since I do not have a better one, I use what I’ve got: my Apple MacBook Pro 2013, with 2.6 GHz Intel Core i7, and 16 GB 1600 MHz DDR3. That’s it. Theodore Roosevelt once said: Do what you can, with what you have, where you are. So, that’s the best I have, where I am.

Let’s design a simple test of the performance of both the Fast and Slow WHT in Python as defined above. We will measure the computation times of both transforms for a variable length of the discrete signal $X(t)$ (kept identical across runs thanks to Python’s seed function, which freezes the random signal to be the same every time) defined as $n=2^M$ for $M$ in the interval $[4,16]$, as an example.

First we will plot the wall-clock time it takes to compute both transforms, followed by a graph presenting the speedup we gain by employing the Fast WHT. The main code that executes our plan looks as follows:

from random import randrange, seed
from WalshHadamard import *
import matplotlib.pyplot as plt
import time
 
maxM=16
 
m=[]; s=[]; f=[]
for M in range(4,maxM+1):
    shwt=fwht=t0=fast=slow=0
    # generate random binary (-1,1) signal X
    # of length n=2**M
    seed(2015)
    X=[randrange(-1,2,2) for i in range(2**M)]
    # compute Slow WHT for X
    t0=time.time()
    shwt,_,_=WHT(X)
    slow=time.time()-t0 # time required to get SWHT
    s.append(slow)
    del shwt, slow, t0
    # compute Fast WHT for X
    t0=time.time()
    fwht=FWHT(X)
    fast=time.time()-t0 # time required to get FWHT
    m.append(M)
    f.append(fast)
    del fwht, fast, t0
 
speedup=np.array(s)/np.array(f)
 
f1=plt.figure(1)
plt.plot(m,s,"ro-", label='Slow WHT')
plt.plot(m,f,"bo-", label='Fast WHT')
plt.legend(loc="upper left")
ax = plt.gca()
ax.set_yscale("log")
plt.xlabel("Signal length order, M")
plt.ylabel("Computation time [sec]")
 
f2=plt.figure(2)
plt.plot(m,speedup,'go-')
plt.xlabel("Signal length order, M")
plt.ylabel("Speedup (x times)")
plt.plot((4,M),(1,1),"--k")
plt.xlim(4,M)
 
plt.show()

where we obtain:
[Figure 1: computation times of the Slow and Fast WHT vs. signal length order M, log-scale y-axis]
and the speedup plot:
[Figure 2: speedup of the Fast WHT over the Slow WHT vs. signal length order M]
From the graphs we notice an immediate benefit of the Fast WHT when it comes to much longer signals. Simplicity of complexity in action. A piece of cake, a walk in the park.
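As a side note, single time.time() deltas are noisy for short runs; Python’s timeit module repeats each measurement and reports the best case, which is more robust. A sketch (with a hypothetical stand-in transform; substitute FWHT or WHT for dummy_transform in practice):

```python
import timeit
import numpy as np

def dummy_transform(x):
    # stand-in for FWHT(x); replace with the real function
    return np.asarray(x) / len(x)

x = np.ones(2 ** 10)
# best of 3 repeats, 10 calls each, to reduce timer noise
best = min(timeit.repeat(lambda: dummy_transform(x), number=10, repeat=3))
print("best time for 10 calls: %.6f s" % best)
```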

Fast & Furious 8? Well, QuantAtRisk.com delivers you the trailer before the official trailer. Enjoy! But only if you feel the need… the need for speed!

DOWNLOAD
     WalshHadamard.py

REFERENCES
     Fino, B. J., Algazi, V. R., 1976, Unified Matrix Treatment of the Fast Walsh–Hadamard Transform, IEEE Transactions on Computers (pdf)

  • Jeffrey

    If anything, you’ve provided evidence that the direction(binary up/down) of price change is random using the WHT. You’ve left out magnitude information.

    • Pawel (quantatrisk.com)

      That’s exactly what I intended to achieve.

      • Jeffrey

        Fair enough. I don’t have any proof, but my opinion is that binary up/down has less than 15% of the total information content.

        • Pawel (quantatrisk.com)

          Someone once asked: “How do you eat an elephant?” And the answer is: “One bite at a time.” That same is true for tests of randomness. Big subject. Small tools applied, one at a time.
