Tuesday 27 February 2018

Calling all students: Google Summer of Code and CSA Trust Grant both happening now!

Deadlines are fast approaching for the following:

Open Chemistry Google Summer of Code: If you're a student interested in hacking on open source chemistry software (and getting paid a bit for the privilege), then get on over to the Google Summer of Code (GSoC) page of the Open Chemistry project. A whole bunch of open source chemistry projects have banded together (including Open Babel, natch) and come up with project ideas that will hopefully spark your interest. If you've always wanted to get involved with Open Source but didn't know how, this is a good chance to do so.

CSA Trust Grant: I was a recipient of a CSA Trust Grant myself, and funny story, I'm now a CSA Trustee and on the Grant Committee. Applications are now invited - it's pretty straightforward. If you look at the details of previous recipients you can see the sorts of things they applied for. The success rate is pretty high, so I really do encourage you to apply. And if you don't get it this year, well, try again next year (it worked for me).

Saturday 3 February 2018

Using those other processors

This year, I've decided it's time to make more use of those other processors that my PC has. At some point, it gets embarrassing to see my single-CPU job trundling along at full speed while 80 or 90% of the machine's processing power sits idle.

My typical input is a SMILES file, such as one from ChEMBL or PubChem, a file I've created by processing these, or a set of reaction SMILES extracted from patents. In any case, the problem is the same - a large number of inputs that can be processed in parallel. Sure, you can split the file (see the 'split' command, though be careful to split on lines rather than bytes) and process the pieces separately, but I tend to avoid that as it's a bit clunky. On Linux, there's also the magical GNU parallel. But what I really should be doing is making use of multiple CPUs directly in my Python scripts.

And actually, that's not too hard. The multiprocessing module makes it pretty easy once you've figured it out. Here's my go-to template for these calculations. I try to keep it as slimmed down as possible, because there's some magic going on behind the scenes and I don't want to have to debug any complex problems. (Note that there's also a threading module, but CPython cannot parallelize CPU-bound calculations using threads due to its Global Interpreter Lock.)

import multiprocessing as mp
import pybel

def calculate(data):
    return pybel.readstring("smi", data).molwt

if __name__ == "__main__":
    POOLSIZE = 4 # the number of worker processes (e.g. the number of CPUs)
    CHUNKSIZE = 1000 # the number of input lines handed to a worker at a time
    pool = mp.Pool(POOLSIZE)
    with open("output.txt", "w") as out:
        with open(r"C:\Tools\LargeData\chembl_23.smi", "r") as inp:
            for result in pool.imap(calculate, inp, CHUNKSIZE):
            # for result in pool.imap_unordered(calculate, inp, CHUNKSIZE):
            # for result in map(calculate, inp): # no multiprocessing
                out.write("%f\n" % result)

Notes:
1. This blog is now Python 3.
2. By editing the commented-out code, you can choose one of three variations. During development, you can avoid multiprocessing entirely by just using a regular map (which in Python 3 is lazy, equivalent to itertools.imap in Python 2). If using multiprocessing, you can choose to have the output in the same order as the input, or just take the results as they come; the latter is presumably faster, but then you'll want to keep track of which result belongs to which input (see the first sketch after these notes).
3. If you need to kill a multiprocessing job, CTRL+C just won't do it: each worker you kill is simply replaced by a freshly spawned one. You need to open the Task Manager, choose the parent Python process and kill that one. (A workaround I've seen suggested is sketched at the end of these notes - if anyone knows a more robust way, please let me know.)
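
Regarding note 2, here's a minimal sketch of the unordered variation. The input filename is just a placeholder, and returning the molecule's title alongside the value is one way of keeping track of which result is which:

import multiprocessing as mp
import pybel

def calculate(data):
    mol = pybel.readstring("smi", data)
    # Return the title (the text after the SMILES string) along with the
    # value, since imap_unordered yields results in arbitrary order
    return mol.title, mol.molwt

if __name__ == "__main__":
    pool = mp.Pool(4)
    with open("output.txt", "w") as out:
        with open("input.smi", "r") as inp: # placeholder filename
            for title, molwt in pool.imap_unordered(calculate, inp, 1000):
                out.write("%s %f\n" % (title, molwt))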
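
And regarding note 3, one workaround that's often suggested (I haven't tested it thoroughly, particularly on Windows) is to have the workers ignore SIGINT, so that CTRL+C is delivered only to the parent, which can then shut the pool down itself:

import multiprocessing as mp
import signal

def init_worker():
    # Each worker ignores SIGINT, leaving CTRL+C to the parent process
    signal.signal(signal.SIGINT, signal.SIG_IGN)

def calculate(data):
    return len(data) # stand-in for the real calculation

if __name__ == "__main__":
    pool = mp.Pool(4, init_worker)
    try:
        for result in pool.imap(calculate, ["CCO", "c1ccccc1"], 1):
            print(result)
    except KeyboardInterrupt:
        pool.terminate() # stop the workers immediately
        pool.join()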