REVISITED: Julia vs Python Speed Comparison: Bootstrapping the OLS MLE

I originally switched to Julia because Julia was estimating a complicated MLE about 100 times faster than Python. Yesterday, I demonstrated how to bootstrap the OLS MLE in parallel using Julia. I presented the amount of time required on my laptop to bootstrap 1,000 times: about 21.3 seconds on a single processor, 8.7 seconds using four processors.

For comparison, I translated this code into Python, using only NumPy and SciPy for the calculations, and the multiprocessing module for the parallelization. The Python script is available here. For this relatively simple script, I find that Python requires 110.9 seconds on a single processor, 66.0 seconds on four processors.
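For readers who want a feel for what the Python version involves, here is a minimal sketch of the serial bootstrap. It is not the author's script: the function names (`neg_loglik`, `mle`, `bootstrap`), the starting values, and the data-generating assumptions are my own illustrative choices. The MLE treats the OLS error as Gaussian and maximizes the likelihood over the coefficients and the log of the error standard deviation, using `scipy.optimize.fmin_cg` as the post describes.

```python
import numpy as np
from scipy.optimize import fmin_cg

def neg_loglik(theta, y, X):
    # Negative Gaussian log-likelihood for OLS.
    # theta = (beta_0, ..., beta_k, log_sigma); log-sigma keeps sigma > 0.
    beta, log_sigma = theta[:-1], theta[-1]
    resid = y - X @ beta
    n = y.size
    return (0.5 * n * np.log(2 * np.pi)
            + n * log_sigma
            + 0.5 * np.sum(resid ** 2) / np.exp(2 * log_sigma))

def mle(y, X):
    # Conjugate-gradient minimization with numerical gradients.
    theta0 = np.zeros(X.shape[1] + 1)
    return fmin_cg(neg_loglik, theta0, args=(y, X), disp=False)

def bootstrap(y, X, B, seed=0):
    # Nonparametric bootstrap: resample rows with replacement, re-estimate.
    rng = np.random.default_rng(seed)
    n = y.size
    draws = np.empty((B, X.shape[1] + 1))
    for b in range(B):
        idx = rng.integers(0, n, n)
        draws[b] = mle(y[idx], X[idx])
    return draws
```

Each row of the returned array is one bootstrap replicate of the parameter vector; bootstrap standard errors are the column standard deviations.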

Thus, Julia performed more than 5 times faster than Python on a single processor, and about 7.5 times faster on four processors.
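The four-processor timings above rely on distributing the bootstrap draws across worker processes. A minimal sketch of that parallelization, again with my own illustrative function names rather than the author's script, might look like the following: each worker re-estimates the Gaussian OLS likelihood on one resampled dataset, and `multiprocessing.Pool` maps the draws across processes.

```python
import numpy as np
from multiprocessing import Pool
from scipy.optimize import fmin_cg

def neg_loglik(theta, y, X):
    # Negative Gaussian log-likelihood; last parameter is log(sigma).
    beta, log_sigma = theta[:-1], theta[-1]
    resid = y - X @ beta
    n = y.size
    return (0.5 * n * np.log(2 * np.pi)
            + n * log_sigma
            + 0.5 * np.sum(resid ** 2) / np.exp(2 * log_sigma))

def one_draw(args):
    # One bootstrap replicate: resample rows, re-run the CG minimizer.
    y, X, seed = args
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, y.size, y.size)
    theta0 = np.zeros(X.shape[1] + 1)
    return fmin_cg(neg_loglik, theta0, args=(y[idx], X[idx]), disp=False)

def bootstrap_parallel(y, X, B, processes=4):
    # Seed each draw separately so replicates are independent and reproducible.
    with Pool(processes) as pool:
        return np.array(pool.map(one_draw, [(y, X, b) for b in range(B)]))
```

Because each replicate is independent, the speedup is limited mainly by process start-up and pickling overhead, which is one reason the four-processor gain is less than 4x.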

I also considered increasing the number of bootstrap samples from 1,000 to 10,000. Julia requires 211 seconds on a single processor and 90 seconds on four processors. Python requires 1,135 seconds on a single processor and 598 seconds on four processors. Thus, even as the task grew tenfold, Julia remained more than 5 times faster on one processor and around 7 times faster on four processors.

In this simple case, Julia is between 5 and 7.5 times faster than Python, depending on configuration.

Bradley J. Setzler


  1. You are essentially comparing the performance of two different minimization strategies for this particular problem. In your Python script a simplex minimizer is used, whereas in Julia you use a conjugate-gradient-based algorithm. Simply switching to scipy.optimize.fmin_cg (or scipy.optimize.fmin_powell) will put this comparison on a more equal footing!

    1. Hi Robert,

      Thanks for letting me know about the bug in the version of the Python script on Github. The speed comparison was done using fmin_cg, as can be verified by running the two scripts.

      I also tried out some other optimizers for comparison, finding to my surprise that Python’s version of Nelder-Mead fails on a large fraction of the estimates while Julia’s version of Nelder-Mead is successful. The version of the file on Github was still using Nelder-Mead, so it would have crashed if you had run it, and would not provide a meaningful speed comparison.
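To illustrate the point about swapping optimizers, here is a small hedged sketch using the modern `scipy.optimize.minimize` interface, where the algorithm is selected via the `method` argument. The likelihood function and data-generating setup are my own illustrative choices, not the script from the post; on a well-behaved problem like this one, both methods typically recover the true coefficients, but their robustness can differ on harder likelihoods, as discussed above.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, y, X):
    # Negative Gaussian log-likelihood; last parameter is log(sigma).
    beta, log_sigma = theta[:-1], theta[-1]
    resid = y - X @ beta
    n = y.size
    return (0.5 * n * np.log(2 * np.pi)
            + n * log_sigma
            + 0.5 * np.sum(resid ** 2) / np.exp(2 * log_sigma))

# Simulated data: intercept 1.0, slope 2.0, unit-variance errors.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=200)
theta0 = np.zeros(3)

# Run the same likelihood through conjugate gradient and Nelder-Mead.
results = {}
for method in ("CG", "Nelder-Mead"):
    results[method] = minimize(neg_loglik, theta0, args=(y, X), method=method)
    print(method, results[method].success, np.round(results[method].x[:2], 2))
```

Checking `res.success` per replicate, as this loop does, is one way to detect the kind of silent optimizer failures mentioned in the reply.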

      All the best,
