This is a short guide to features present in Numba that can help with obtaining the best performance from code. Two examples are used, both are entirely contrived and exist purely for pedagogical reasons to motivate discussion. The first is the computation of the trigonometric identity `cos(x)**2 + sin(x)**2`, the second is a simple element-wise square root of a vector with reduction over summation. All performance numbers are indicative only and, unless otherwise stated, were taken from running on an Intel i7-4790 CPU (4 hardware threads) with an input of `np.arange(1.e7)`.
Note
A reasonably effective approach to achieving high performance code is to profile the code running with real data and use that to guide performance tuning. The information presented here is to demonstrate features, not to act as canonical guidance!
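As a minimal sketch of this measure-first approach (the `measure` helper is illustrative, not part of Numba), `time.perf_counter` can bracket a call on representative data; for jitted functions, a warm-up call first excludes compilation cost from the timings:

```python
import time

def measure(fn, *args, repeats=3):
    # warm-up call: for jitted functions this triggers compilation, so
    # the timed calls below measure execution only
    fn(*args)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    # report the best of several runs to reduce noise from other processes
    return best
```

The same harness can then be reused for every variant discussed below, so the comparisons are at least consistently measured.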
NoPython mode
The default mode in which Numba's `@jit` decorator operates is nopython mode. This mode is the most restrictive about what can be compiled, but results in faster executable code.
Note
Historically (prior to 0.59.0) the default compilation mode was a fall-back mode whereby the compiler would try to compile in nopython mode and, if it failed, would fall back to object mode. You are likely to see `@jit(nopython=True)`, or its alias `@njit`, in use in code/documentation, as this was the recommended best-practice method to force use of nopython mode. Since Numba 0.59.0 this is no longer necessary, as nopython mode is the default mode for `@jit`.
Loops
Whilst NumPy has developed a strong idiom around the use of vector operations, Numba is perfectly happy with loops too. For users familiar with C or Fortran, writing Python in this style will work fine in Numba (after all, LLVM gets a lot of use in compiling C lineage languages). For example:
```python
import numpy as np
from numba import njit

@njit
def ident_np(x):
    return np.cos(x) ** 2 + np.sin(x) ** 2

@njit
def ident_loops(x):
    r = np.empty_like(x)
    n = len(x)
    for i in range(n):
        r[i] = np.cos(x[i]) ** 2 + np.sin(x[i]) ** 2
    return r
```
The above run at almost identical speeds when decorated with `@njit`; without the decorator the vectorized function is a couple of orders of magnitude faster.
Function Name | `@njit` | Execution time
---|---|---
`ident_np` | No | 0.581s
`ident_np` | Yes | 0.659s
`ident_loops` | No | 25.2s
`ident_loops` | Yes | 0.670s
A Case for Object mode: Loop Lifting
Some functions may be incompatible with the restrictive nopython mode but contain compatible loops. You can enable these functions to attempt nopython mode on their loops by setting `@jit(forceobj=True)`. The incompatible code segments will run in object mode.
Whilst using loop lifting in object mode can provide some performance increase, compiling functions entirely in nopython mode is key to achieving optimal performance.
Fastmath
In certain classes of applications strict IEEE 754 compliance is less important. As a result it is possible to relax some numerical rigour with a view to gaining additional performance. The way to achieve this behaviour in Numba is through the use of the `fastmath` keyword argument:
```python
import numpy as np
from numba import njit

@njit(fastmath=False)
def do_sum(A):
    acc = 0.
    # without fastmath, this loop must accumulate in strict order
    for x in A:
        acc += np.sqrt(x)
    return acc

@njit(fastmath=True)
def do_sum_fast(A):
    acc = 0.
    # with fastmath, the reduction can be vectorized as floating point
    # reassociation is permitted.
    for x in A:
        acc += np.sqrt(x)
    return acc
```
Function Name | Execution time
---|---
`do_sum` | 35.2 ms
`do_sum_fast` | 17.8 ms
In some cases you may wish to opt in to only a subset of possible fast-math optimizations. This can be done by supplying a set of LLVM fast-math flags to `fastmath`:
```python
import numpy as np
from numba import njit

def add_assoc(x, y):
    return (x - y) + y

print(njit(fastmath=False)(add_assoc)(0, np.inf))               # nan
print(njit(fastmath=True)(add_assoc)(0, np.inf))                # 0.0
print(njit(fastmath={'reassoc', 'nsz'})(add_assoc)(0, np.inf))  # 0.0
print(njit(fastmath={'reassoc'})(add_assoc)(0, np.inf))         # nan
print(njit(fastmath={'nsz'})(add_assoc)(0, np.inf))             # nan
```
Parallel=True
If code contains operations that are parallelisable (and supported), Numba can compile a version that will run in parallel on multiple native threads (no GIL!). This parallelisation is performed automatically and is enabled by simply adding the `parallel` keyword argument:
```python
import numpy as np
from numba import njit

@njit(parallel=True)
def ident_parallel(x):
    return np.cos(x) ** 2 + np.sin(x) ** 2
```
Execution times are as follows:
Function Name | Execution time
---|---
`ident_parallel` | 112 ms
The execution speed of this function with `parallel=True` present is approximately 5x that of the NumPy equivalent and 6x that of standard `@njit`.
Numba parallel execution also has support for explicit parallel loop declaration similar to that in OpenMP. To indicate that a loop should be executed in parallel the `numba.prange` function should be used; this function behaves like Python `range` and, if `parallel=True` is not set, acts simply as an alias of `range`. Loops induced with `prange` can be used for embarrassingly parallel computation and also reductions.
Revisiting the reduce-over-sum example, assuming it is safe for the sum to be accumulated out of order, the loop in `n` can be parallelised through the use of `prange`. Further, the `fastmath=True` keyword argument can be added without concern in this case as the assumption that out-of-order execution is valid has already been made through the use of `parallel=True` (as each thread computes a partial sum).
```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def do_sum_parallel(A):
    # each thread can accumulate its own partial sum, and then a cross
    # thread reduction is performed to obtain the result to return
    n = len(A)
    acc = 0.
    for i in prange(n):
        acc += np.sqrt(A[i])
    return acc

@njit(parallel=True, fastmath=True)
def do_sum_parallel_fast(A):
    n = len(A)
    acc = 0.
    for i in prange(n):
        acc += np.sqrt(A[i])
    return acc
```
Execution times are as follows; `fastmath` again improves performance.
Function Name | Execution time
---|---
`do_sum_parallel` | 9.81 ms
`do_sum_parallel_fast` | 5.37 ms
Intel SVML
Intel provides a short vector math library (SVML) that contains a large number of optimised transcendental functions available for use as compiler intrinsics. If the `intel-cmplr-lib-rt` package is present in the environment (or the SVML libraries are simply locatable!) then Numba automatically configures the LLVM back end to use the SVML intrinsic functions wherever possible. SVML provides both high and low accuracy versions of each intrinsic and the version that is used is determined through the use of the `fastmath` keyword. The default is to use the high accuracy versions, which are accurate to within 1 ULP; however, if `fastmath` is set to `True` then the lower accuracy versions of the intrinsics are used (answers to within 4 ULP).
First obtain SVML, using conda for example:
```
conda install intel-cmplr-lib-rt
```
Note
The SVML library was previously provided through the `icc_rt` conda package. The `icc_rt` package has since become a meta-package and as of version `2021.1.1` it has `intel-cmplr-lib-rt` amongst other packages as a dependency. Installing the recommended `intel-cmplr-lib-rt` package directly results in fewer installed packages.
Rerunning the identity function example `ident_np` from above with various combinations of options to `@njit` and with/without SVML yields the following performance results (input size `np.arange(1.e8)`). For reference, with just NumPy the function executed in `5.84s`:
`@njit` kwargs | SVML | Execution time
---|---|---
`None` | No | 5.95s
`None` | Yes | 2.26s
`fastmath=True` | No | 5.97s
`fastmath=True` | Yes | 1.8s
`parallel=True` | No | 1.36s
`parallel=True` | Yes | 0.624s
`parallel=True, fastmath=True` | No | 1.32s
`parallel=True, fastmath=True` | Yes | 0.576s
It is evident that SVML significantly increases the performance of this function. The impact of `fastmath` in the case of SVML not being present is zero; this is expected as there is nothing in the original function that would benefit from relaxing numerical strictness.
Linear algebra
Numba supports most of `numpy.linalg` in nopython mode. The internal implementation relies on a LAPACK and BLAS library to do the numerical work and it obtains the bindings for the necessary functions from SciPy. Therefore, to achieve good performance in `numpy.linalg` functions with Numba it is necessary to use a SciPy built against a well optimised LAPACK/BLAS library. In the case of the Anaconda distribution, SciPy is built against Intel's MKL, which is highly optimised, and as a result Numba makes use of this performance.