Python's multithreading dream come true


Suppose you're writing some CPU-intensive Python code and really trying to find a way out of the single-threaded prison.

Introduction

At this point, you might be using forked processes with multiprocessing, Numba's nopython parallel mode, or writing your own CPython extension code, just like the creators of Google's TensorFlow did.
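
For instance, the usual escape hatch is process-based parallelism with the standard multiprocessing module; here is a minimal sketch (cpu_heavy is just a stand-in for real work):

# Minimal sketch: each worker process has its own interpreter and its own GIL,
# so CPU-bound work runs in parallel, at the cost of pickling arguments and
# results between processes.
from multiprocessing import Pool

def cpu_heavy(n):
    # stand-in for real CPU-bound work
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    with Pool(processes=4) as pool:
        results = pool.map(cpu_heavy, [10**6] * 8)
    print(results[:2])

Each worker process is fully isolated, which is exactly why everything passed to and from it has to be serialized and copied; that overhead is part of what motivates a shared, database-like approach instead.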

In this article, I'm focusing on my personal project, where I try to implement facilities for general-purpose multitasking usable from simple Python code. That means employing a database-like approach to communication between interpreters while keeping the GIL.

This approach becomes even handier with the upcoming support for multiple interpreters in Python.

What is the GIL?

According to the [Python Wiki](https://wiki.python.org/moin/GlobalInterpreterLock), the global interpreter lock, or GIL, is a mutex that protects access to Python objects, preventing multiple threads from executing Python bytecodes at once.

This lock is necessary mainly because CPython's memory management is not thread-safe. (However, since the GIL exists, other features have grown to depend on the guarantees that it enforces.)

The GIL is controversial because it prevents multithreaded CPython programs from taking full advantage of multiprocessor systems in certain situations.
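
You can observe this directly. The toy benchmark below (my own illustration, not part of the project) runs the same CPU-bound countdown twice sequentially and then in two threads; on CPython the threaded version is no faster, because only one thread executes bytecode at a time:

import time
from threading import Thread

def count(n):
    # pure-Python, CPU-bound countdown
    while n > 0:
        n -= 1

N = 10_000_000

start = time.perf_counter()
count(N)
count(N)
print('sequential:', time.perf_counter() - start)

start = time.perf_counter()
t1 = Thread(target=count, args=(N,))
t2 = Thread(target=count, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
print('two threads:', time.perf_counter() - start)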

Let's look at an example:

# CreateSharedDict() is the project's constructor for a dict-like object
# shared between threads (and, eventually, between interpreters)
a = CreateSharedDict()

def func(SharedDict):
    SharedDict['a'] = 1
    SharedDict['b'] = 2
    return SharedDict

print(func(a))

Although the code doesn't seem troubling, some big questions arise in a concurrent environment. What happens when another thread reads SharedDict at the same time? What happens when another thread modifies SharedDict at the same time? Even though the code above does not use SharedDict's fields for any conditional execution or evaluation of other values, it could have, e.g.:

def func(SharedDict):
    SharedDict['a'] = 1
    SharedDict['b'] = 1
    SharedDict['b'] += SharedDict['a']
    return SharedDict

The programmer really wants to see the output:

{'a': 1, 'b': 2}

However, with another thread running the same func(a) concurrently, we might see the output:

{'a': 1, 'b': 3}

This is not what the programmer usually expects. So we really want to isolate the two threads and make each of them feel like it is the only user of SharedDict. A common solution is transactional memory.
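
To make that interleaving concrete, here is a small demonstration of my own (plain threads, an ordinary dict, and two Events that force exactly the unlucky schedule described above):

from threading import Thread, Event

shared = {}
t2_stored = Event()   # thread 2 has stored b = 1
t1_added = Event()    # thread 1 has finished its increment

def thread1(d):
    d['a'] = 1
    d['b'] = 1
    t2_stored.wait()       # let thread 2 also store b = 1 first
    d['b'] += d['a']       # b becomes 2
    t1_added.set()

def thread2(d):
    d['a'] = 1
    d['b'] = 1
    t2_stored.set()
    t1_added.wait()        # increment strictly after thread 1's
    d['b'] += d['a']       # b becomes 3

workers = [Thread(target=thread1, args=(shared,)),
           Thread(target=thread2, args=(shared,))]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(shared)              # {'a': 1, 'b': 3}

With proper isolation, each thread would instead observe its own consistent result, {'a': 1, 'b': 2}.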

But when would the transaction start and finish? The "a = CreateSharedDict()" variable is global and will not go out of scope until the program finishes, so we cannot just keep a transaction running forever: that would mean other transactions never see changes to SharedDict. Most likely the programmer wants something like this:

from copy import copy  # shallow copy, so the returned snapshot is detached

def func(SharedDict):
    start_transaction()               # mark the start of the transaction
    try:
        SharedDict['a'] = 1
        SharedDict['b'] = 1
        SharedDict['b'] += SharedDict['a']
        return copy(SharedDict)       # snapshot taken while still inside the transaction
    finally:
        commit_transaction()          # publish the changes to other threads

This requires the programmer to explicitly mark the transaction's start and commit points. What can really ease the task is a decorator that wraps a function into a transaction:

@transacted
def func(SharedDict):
    SharedDict['a'] = 1
    SharedDict['b'] = 1
    SharedDict['b'] += SharedDict['a']
    return SharedDict

In the example above, the data dependencies are clear, so it's possible to determine the required data cells in advance and, for example, lock them for the duration of the transaction; then the transaction will always succeed once it has started successfully.
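
As a rough illustration of that idea (my own sketch, not the project's API), a decorator could ask the programmer to declare the keys a transaction touches, acquire per-key locks up front, and release them at the end, so the body runs in isolation:

import functools
from copy import copy
from threading import Lock

# One lock per known data cell; in a real system these would live next to
# the shared dict itself.
_key_locks = {'a': Lock(), 'b': Lock()}

def transacted(*keys):
    """Lock the declared keys for the whole call (hypothetical decorator)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(shared):
            locks = [_key_locks[k] for k in sorted(keys)]  # fixed order avoids deadlocks
            for lock in locks:
                lock.acquire()
            try:
                return func(shared)
            finally:
                for lock in reversed(locks):
                    lock.release()
        return wrapper
    return decorator

@transacted('a', 'b')
def func(SharedDict):
    SharedDict['a'] = 1
    SharedDict['b'] = 1
    SharedDict['b'] += SharedDict['a']
    return copy(SharedDict)

This lock-based emulation is much cruder than real transactional memory, but it shows why knowing the data dependencies in advance guarantees that a transaction which managed to start will also finish.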

Immutable data

Although Python is mostly based on mutable data, having some immutable data really helps in a multitasking environment.

Immutable data is fundamentally uncontended: you cannot modify it, and readers don't conflict with each other.

But immutable data is not idiomatic for Python, which is why a programmer would need to explicitly declare this non-idiomatic concept with something like:

@transacted
def func(SharedDict):
    SharedDict['a'] = 1
    SharedDict['b'] = 1
    SharedDict['b'] += SharedDict['a']
    return immutable(SharedDict)
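
What could such an immutable() call actually do? One deliberately simplified sketch of my own (not the project's implementation) is to return a read-only snapshot of a flat dict:

from types import MappingProxyType

def immutable(d):
    """Return a read-only snapshot of a flat dict (illustrative only)."""
    return MappingProxyType(dict(d))   # copy first, then expose a read-only view

snapshot = immutable({'a': 1, 'b': 2})
print(snapshot['b'])    # 2
# snapshot['b'] = 3     # TypeError: 'mappingproxy' object does not support item assignment

Because nobody can modify the snapshot, readers never contend for it, which is exactly the property we want from data shared between threads.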

A big thanks to byko3y for his contribution.