
Is There A Faster Way To Add Two 2-D Numpy Arrays?

Let's say I have two large 2-D numpy arrays of the same dimensions (say 2000x2000). I want to sum them element-wise. I was wondering if there is a faster way than np.add(). Edit: I am adding a rolled copy of the second array to the first inside a loop over an array of shifts (see the loop in Solution 1).
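For the literal one-off element-wise sum, np.add is already a single vectorized pass over the data, so about the only easy win is avoiding the temporary result array. A minimal sketch of that idea, assuming float inputs -

import numpy as np

a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)

c = a + b            # allocates a brand-new 2000x2000 result array
np.add(a, b, out=a)  # writes into a instead; same effect as a += b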

Solution 1:

Approach #1 (Vectorized)

We can use modulo arithmetic to simulate the circulating behavior of roll/circshift, and with broadcasted indices covering all rows we get a fully vectorized approach, like so -

n = b.shape[0]
# Row indices of every cumulatively-rolled copy of b, via modulo + broadcasting
idx = n-1 - np.mod(shift.cumsum()[:,None]-1 - np.arange(n), n)
a += b[idx].sum(0)
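As a quick sanity check (a small sketch, not part of the original answer), the gathered rows b[idx] should reproduce the explicitly rolled-and-accumulated copies -

import numpy as np

n = 5
b = np.random.randint(0, 9, (n, n))
shift = np.array([2, 3, 1])

# Reference: roll b cumulatively and accumulate, as in the original loop
ref = np.zeros_like(b)
rolled = b
for s in shift:
    rolled = np.roll(rolled, s, axis=0)
    ref += rolled

# Vectorized: row i of the j-th rolled copy is b[(i - shift[:j+1].sum()) % n]
idx = n - 1 - np.mod(shift.cumsum()[:, None] - 1 - np.arange(n), n)
assert np.array_equal(ref, b[idx].sum(0))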

Approach #2 (Loopy one)

n = b.shape[0]
b_ext = np.row_stack((b, b[:-1]))             # b followed by its first n-1 rows
start_idx = n-1 - np.mod(shift.cumsum()-1, n) # start row of each iteration's window
for j in range(start_idx.size):
    a += b_ext[start_idx[j]:start_idx[j]+n]   # contiguous view, no copy
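Why the extended array works: stacking b on top of its own first n-1 rows makes every cyclic shift of b available as one contiguous window, so each "roll" becomes a plain slice. A small sketch of that equivalence (np.vstack is equivalent to the np.row_stack used above) -

import numpy as np

n = 4
b = np.arange(n * 3).reshape(n, 3)
b_ext = np.vstack((b, b[:-1]))

# Rolling b down by s rows == the length-n window starting at row (n - s) % n
for s in range(n):
    start = (n - s) % n
    assert np.array_equal(np.roll(b, s, axis=0), b_ext[start:start + n])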

Colon notation vs using indices for slicing

The idea here is to do minimal work once we are inside the loop. We pre-compute the start row index of each iteration before entering the loop, so all we need to do inside the loop is slice with colon notation, which gives a view into the array, and add it up. This should be much better than np.roll, which has to compute all of those row indices and produce an expensive copy on every iteration.

Here's a bit more on the view and copy behavior when slicing with a colon versus with indices -

In [11]: a = np.random.randint(0,9,(10))

In [12]: a
Out[12]: array([8, 0, 1, 7, 5, 0, 6, 1, 7, 0])

In [13]: a[3:8]
Out[13]: array([7, 5, 0, 6, 1])

In [14]: a[[3,4,5,6,7]]
Out[14]: array([7, 5, 0, 6, 1])

In [15]: np.may_share_memory(a, a[3:8])
Out[15]: True

In [16]: np.may_share_memory(a, a[[3,4,5,6,7]])
Out[16]: False
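To see the cost difference directly (a hedged sketch; absolute numbers depend on the machine), we can time view creation against the copy forced by an index array -

import timeit
import numpy as np

a = np.random.randint(0, 9, (2000, 2000))
idx = np.arange(500, 1500)

# Basic slice: creates a view, no data is copied
print(timeit.timeit(lambda: a[500:1500], number=10000))
# Fancy indexing: copies 1000 rows on every call
print(timeit.timeit(lambda: a[idx], number=10000))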

Runtime test

Function definitions -

def original_loopy_app(a,b):
    # Roll b cumulatively and accumulate into a (a is modified in place)
    for j in range(shift.size):
        b = np.roll(b, shift[j], axis=0)
        a += b

def vectorized_app(a,b):
    # Gather all cumulatively-rolled copies of b at once and sum them
    n = b.shape[0]
    idx = n-1 - np.mod(shift.cumsum()[:,None]-1 - np.arange(n), n)
    a += b[idx].sum(0)

def modified_loopy_app(a,b):
    # Pre-compute each window's start row, then add cheap slice views
    n = b.shape[0]
    b_ext = np.row_stack((b, b[:-1]))
    start_idx = n-1 - np.mod(shift.cumsum()-1, n)
    for j in range(start_idx.size):
        a += b_ext[start_idx[j]:start_idx[j]+n]

Case #1:

In [5]: # Setup input arrays
   ...: N = 200
   ...: M = 1000
   ...: a = np.random.randint(11,99,(N,N))
   ...: b = np.random.randint(11,99,(N,N))
   ...: shift = np.random.randint(0,N,M)
   ...: # Independent copies, since each function mutates a in place
   ...: a1, b1 = a.copy(), b.copy()
   ...: a2, b2 = a.copy(), b.copy()
   ...: a3, b3 = a.copy(), b.copy()
   ...: 

In [6]: original_loopy_app(a1,b1)
   ...: vectorized_app(a2,b2)
   ...: modified_loopy_app(a3,b3)
   ...: 

In [7]: np.allclose(a1, a2) # Verify results
Out[7]: True

In [8]: np.allclose(a1, a3) # Verify results
Out[8]: True

In [9]: %timeit original_loopy_app(a1,b1)
   ...: %timeit vectorized_app(a2,b2)
   ...: %timeit modified_loopy_app(a3,b3)
   ...: 
10 loops, best of 3: 107 ms per loop
10 loops, best of 3: 137 ms per loop
10 loops, best of 3: 48.2 ms per loop

Case #2:

In [13]: # Setup input arrays (10x as many shifts as in Case #1)
    ...: N = 200
    ...: M = 10000
    ...: a = np.random.randint(11,99,(N,N))
    ...: b = np.random.randint(11,99,(N,N))
    ...: shift = np.random.randint(0,N,M)
    ...: a1, b1 = a.copy(), b.copy()
    ...: a3, b3 = a.copy(), b.copy()
    ...: 

In [14]: %timeit original_loopy_app(a1,b1)
    ...: %timeit modified_loopy_app(a3,b3)
    ...: 
1 loops, best of 3: 1.11 s per loop
1 loops, best of 3: 481 ms per loop

So, we are looking at a 2x+ speedup with the modified loopy approach! Note that the fully vectorized version actually loses in Case #1 because b[idx] materializes a large (M, n, n) intermediate array, while the slice-based loop only ever touches views.
