Exploring Generators and Coroutines

Let’s revisit the idea of generators in Python, in order to understand how the support for coroutines was achieved in the latest versions of Python (3.6, at the time of this writing).

By reviewing the milestones of generators, chronologically, we can get a better idea of the evolution that led to asynchronous programming in Python.

We will review the main changes in Python that relate to generators and asynchronous programming, starting with PEP-255 (Simple Generators), then PEP-342 (Coroutines via Enhanced Generators) and PEP-380 (Syntax for Delegating to a Subgenerator), and finishing with PEP-525 (Asynchronous Generators).

Simple Generators

PEP-255 introduced generators to Python. The idea is that when we process some data, we don’t actually need all of it to be in memory at once. Most of the time, having one value at a time is enough. Lazy evaluation is a good trait to have in software, because in this case it means that less memory is used. It’s also a key concept in other programming languages, and one of the main ideas behind functional programming.

The new yield keyword was added to Python, with the meaning of producing an element that will be consumed by the calling code.

The mere presence of the yield keyword anywhere in a function automatically makes it a generator function. When called, this function will create a generator object, which can be advanced, producing its elements one at a time. Each time the generator is advanced with the next() function, it runs up to the next yield statement, producing a value, and then the generator is suspended, waiting to be called again.

Take the range built-in function, for example. In Python 2, this function returns a list with all the numbers in the interval. Imagine we want to come up with a similar implementation of it, in order to get the sum of all numbers up to a certain limit.

LIMIT = 1_000_000
def old_range(n):
    numbers = []
    i = 0
    while i < n:
        numbers.append(i)
        i += 1
    return numbers

print(sum(old_range(LIMIT)))

Now let’s see how much memory is used:

$ /usr/bin/time -f %M python rangesum.py
499999500000
48628

The first number is the result of the print, whilst the second one is the output of the time command, printing out the memory used by the program (~48 MiB).

Now, what if this is implemented with a generator instead?

We just have to get rid of the list, and place the yield statement instead, indicating that we want to produce the value of the expression that follows the keyword.

LIMIT = 1_000_000
def new_range(n):
    i = 0
    while i < n:
        yield i
        i += 1

print(sum(new_range(LIMIT)))

This time, the result is:

$ /usr/bin/time -f %M python rangesum.py
499999500000
8992

We see a huge difference: the implementation that holds all the numbers in a list in memory uses ~48 MiB, whereas the implementation that uses just one number at a time uses much less memory (< 9 MiB) 1.

We see the idea: when the yield <expression> is reached, the result of the expression is passed to the caller code, and the generator remains suspended at that line in the meantime.

>>> import inspect
>>> r = new_range(1_000_000)
>>> inspect.getgeneratorstate(r)
'GEN_CREATED'
>>> next(r)
0
>>> next(r)
1
>>> inspect.getgeneratorstate(r)
'GEN_SUSPENDED'

Generators are iterable objects. An iterable is an object whose __iter__ method constructs a new iterator every time it is called (with iter(it), for instance). An iterator is an object whose __iter__ returns itself, and whose __next__ method contains the logic to produce a new value each time it is called, and to signal the stop (by raising StopIteration).

The idea of iterators is that we advance through their values by calling the built-in next() function on them, and this produces values until the StopIteration exception is raised, signalling the end of the iteration.
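
To make the protocol concrete, here is a minimal sketch of an iterator written by hand (the Countdown class is a hypothetical name, just for illustration):

class Countdown:
    """Produce n, n - 1, ..., 1, implementing the iterator protocol by hand."""

    def __init__(self, n):
        self.n = n

    def __iter__(self):
        # An iterator's __iter__ returns the iterator itself.
        return self

    def __next__(self):
        if self.n <= 0:
            # Signal that there is nothing left to produce.
            raise StopIteration
        value = self.n
        self.n -= 1
        return value

>>> list(Countdown(3))
[3, 2, 1]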

>>> def f():
...     yield 1
...     yield 2

>>> g = f()
>>> next(g)
1
>>> next(g)
2
>>> next(g)
StopIteration:

>>> list(f())
[1, 2]

In the first case, calling f() creates a new generator. The first two calls to next() advance it to the next yield statement, producing the values it has set. When there is nothing else to produce, the StopIteration exception is raised. Something similar to this actually runs when we iterate over this object in the form of for x in iterable: …; only that Python internally handles the exception that determines when the for loop stops.
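
As a rough sketch (not the exact interpreter implementation), iterating with for x in f(): print(x) can be thought of as:

iterator = iter(f())
while True:
    try:
        x = next(iterator)
    except StopIteration:
        break  # Python handles this internally to know when to stop
    print(x)   # the body of the for loop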

Before wrapping up the introduction to generators, I want to make a quick comment, and highlight something important about the role of generators in the language, and why they’re such a neat abstraction to have.

Instead of using the eager version (the one that stores everything in a list), you might consider avoiding that by just using a loop and counting inside it. It’s like saying “all I need is just the count, so I might as well just accumulate the value in a loop, and that’s it”. Something slightly similar to:

total = 0
i = 0
while i < LIMIT:
    total += i
    i += 1

This is something I might consider doing in a language that doesn’t have generators. Don’t do this. Generators are the right way to go. By using a generator, we’re doing more than just wrapping the code of an iteration; we’re creating a sequence (which could even be infinite), and naming it. This sequence is an object we can use in the rest of the code. It’s an abstraction. As such, we can combine it with the rest of the code (for example to filter on it), reuse it, pass it along to other objects, and more.

For example, let’s say we have the sequence created with new_range(), and then we realize that we need the first 10 even numbers from it. This is as simple as:

>>> import itertools
>>> rg = new_range(1_000_000)
>>> list(itertools.islice(filter(lambda n: n % 2 == 0, rg), 10))
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

And this is something we could not accomplish so easily, had we chosen to ignore generators.

For years, this was pretty much all there was to generators in Python. Generators were introduced with the idea of iteration and lazy computation in mind.

Later on, there was another enhancement, by PEP-342, adding more methods to them, with the goal of supporting coroutines.

Coroutines

Roughly speaking, the idea of coroutines is to pause the execution of a function at a given point, from where it can be later resumed. The idea is that while a coroutine is suspended, the program can switch to run another part of the code. Basically, we need functions that can be paused.

As we have seen in the previous example, generators have this feature: when the yield <expression> is reached, a value is produced to the caller, and in the meantime the generator object is suspended. This suggested that generators could be used to support coroutines in Python, hence the name of the PEP: “Coroutines via Enhanced Generators”.

There is more, though. Coroutines have to support being resumed from multiple entry points to continue their execution. Therefore, more changes are required. We need to be able to pass data back to them, and to handle exceptions. For this, more methods were added to their interface.

  • send(<value>)

  • throw(ex_type[, ex_value[, ex_traceback]])

  • close()

These methods allow sending a value to a generator, throwing an exception inside it, and closing it, respectively.

The send() method implies that yield becomes an expression, rather than a statement (as it was before). With this, it is possible to assign the result of a yield to a variable, and its value will be whatever was sent to the generator.

>>> def gen(start=0):
...     step = start
...     while True:
...         value = yield step
...         print(f"Got {value}")
...         step += 1
...
>>> g = gen(1)
>>> next(g)
1
>>> g.send("hello")
Got hello
2
>>> g.send(42)
Got 42
3

As we can see from this code, the value produced by yield becomes the result of the send() call (in this case, the consecutive numbers of the sequence), while the parameter passed to send() is what the yield expression evaluates to inside the generator; it is assigned to value and printed out on the following line.

Before sending any values to the generator, it has to be advanced to the first yield. In fact, advancing is the only allowed operation on a newly created generator. This can be done by calling next(g) or g.send(None), which are equivalent.

Warning

Remember to always advance a generator that was just created, or you will get a TypeError.
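
For example, sending a value to a generator created from the gen() function above, before advancing it:

>>> g = gen()
>>> g.send("hello")
TypeError: can't send non-None value to a just-started generator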

With the .throw() method, the caller can make the generator raise an exception at the point where it is suspended. If this exception is handled internally by the generator, it will continue normally, and the value returned by throw() will be the one of the next yield line that is reached. If it’s not handled by the generator, the generator will fail, and the exception will propagate to the caller.
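
A quick sketch of both cases, with a hypothetical generator that only handles ValueError:

>>> def handler():
...     while True:
...         try:
...             received = yield
...             print("Got", received)
...         except ValueError:
...             print("ValueError handled, continuing")

>>> g = handler()
>>> next(g)
>>> g.throw(ValueError)   # handled inside, so the generator keeps going
ValueError handled, continuing
>>> g.throw(KeyError)     # not handled, so it propagates to the caller
KeyError: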

The .close() method is used to terminate the generator. It raises the GeneratorExit exception at the point where the generator is suspended. If we wish to run some clean-up code, this is the exception to handle. When handling this exception, the only allowed action is to return a value.
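
And a small sketch of running clean-up code when the generator is closed:

>>> def stream():
...     try:
...         while True:
...             yield
...     except GeneratorExit:
...         print("releasing resources...")

>>> g = stream()
>>> next(g)
>>> g.close()
releasing resources...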

With these additions, generators have now evolved into coroutines. This means our code can now support concurrent programming, suspend the execution of tasks, perform non-blocking I/O, and so on.

While this works, handling many coroutines, refactoring generators, and organizing the code became a bit cumbersome. More work had to be done if we wanted to keep a Pythonic way of doing concurrent programming.

More Coroutines

PEP-380 added more changes to coroutines, this time with the goal of supporting delegation to sub-generators. Two main things changed in generators to make them more useful as coroutines:

  • Generators can now return values.

  • The yield from syntax.

Return Values in Generators

The keyword def defines a function, which returns values (with the return keyword). However, as stated in the first section, if that def contains a yield, it is a generator function. Before this PEP, it would have been a syntax error to have a return in a generator function (a function that also has a yield). However, this is no longer the case.

Remember how generators stop by raising StopIteration. What does it mean that a generator returns a value? It means that it stops. And where does that value go? It’s contained inside the exception, as the StopIteration.value attribute.

def gen():
    yield 1
    yield 2
    return "returned value"

>>> g = gen()
>>> try:
...     while True:
...         print(next(g))
... except StopIteration as e:
...     print(e.value)
...
1
2
returned value

Notice that the value returned by the generator is stored inside the exception, in StopIteration.value. This might not sound like the most elegant solution, but doing so preserves the original interface, and the protocol remains unchanged. It’s still the same kind of exception signalling the end of the iteration.

yield from

Another syntax change to the language.

In its most basic form, the construction yield from <iterable> can be thought of as:

for e in iterable:
    yield e

Basically this means that it extends an iterable, yielding all the elements that this internal iterable can produce.

For example, this way we could create a clone of the itertools.chain function from the standard library.

>>> def chain2(*iterables):
...:     for it in iterables:
...:         yield from it

>>> list(chain2("hello", " ", "world"))
['h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd']

However, saving two lines of code is not the reason why this construction was added to the language. The raison d’être of this construction is to actually delegate responsibility into smaller generators, and chain them.

>>> def internal(name, limit):
...:     for i in range(limit):
...:         got = yield i
...:         print(f"{name} got: {got}")
...:     return f"{name} finished"

>>> def gen():
...:     yield from internal("A", 3)
...:     return (yield from internal("B", 2))

>>> g = gen()
>>> next(g)
0
>>> g.send(1)
A got: 1
1

>>> g.send(1)   # a few more calls until the generator ends
B got: 1
------------------------------------------------------
StopIteration        Traceback (most recent call last)
... in <module>()
----> 1 g.send(1)
StopIteration: B finished

Here we see how yield from handles proper delegation to an internal generator. Notice that we never send values directly to internal, but to gen instead, and these values end up in the nested generator. What yield from actually does is create a channel to the nested generators: values produced by them are passed to the caller of gen, values sent to gen are passed along to the internal generators (the same goes for exceptions), and even the return value is handled, becoming the value of the yield from expression (in this case, since gen returns it in turn, the string with the name of the last generator ends up as the resulting StopIteration.value).

We now see the real value of this construction. With this, it’s easier to refactor generators into smaller pieces, compose them, and chain them together while preserving the behaviour of coroutines.

The new yield from syntax was a great step towards supporting better concurrency. We can now think of generators as “lightweight threads” that delegate functionality to an internal generator and pause the execution, so that other things can be computed in that time.

Because syntactically generators are like coroutines, it was possible to accidentally confuse them, and end up placing a generator where a coroutine would have been expected (the yield from would accept it, after all). For this reason, the next step was to define the concept of a coroutine as a proper type. With this change, it also followed that yield from evolved into await, and a new syntax for defining coroutines was introduced: async.

async def / await

A quick note on how this relates to asynchronous programming in Python.

With asyncio, or any other event loop, the idea is that we define coroutines, and make them part of the event loop. Broadly speaking, the event loop will keep a list of the tasks (which wrap our coroutines) that have to run, and will schedule them to run.

In our coroutines, we delegate the I/O functionality we want to achieve to some other coroutine or awaitable object, by calling yield from or await on it.

Then the event loop will call our coroutine, which will reach this line, delegating to the internal coroutine and pausing the execution, which gives control back to the scheduler (so it can run another coroutine). The event loop will monitor the future object that wraps our coroutine until it is finished, and when it’s needed, it will update it by calling the .send() method on it, which in turn will pass the value along to the internal coroutine, and so on.
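
A minimal sketch of this interplay with asyncio (the worker and main coroutines are hypothetical names): each coroutine delegates its “I/O” to asyncio.sleep, and while one of them is suspended the event loop is free to run the other.

import asyncio

async def worker(name, delay):
    # Awaiting asyncio.sleep suspends this coroutine and gives control
    # back to the event loop until the delay has elapsed.
    await asyncio.sleep(delay)
    print(f"{name} done after {delay}s")

async def main():
    # Both coroutines are scheduled on the loop and run concurrently.
    await asyncio.gather(worker("first", 1), worker("second", 2))

loop = asyncio.get_event_loop()
loop.run_until_complete(main())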

Before the new syntax for async and await was introduced, coroutines were defined as generators decorated with asyncio.coroutine (types.coroutine was added in Python 3.5, when the coroutine type itself was created). Nowadays, async def creates a native coroutine, and inside it, only the await expression is accepted (not yield from).

The following two coroutines, step and coro, are a simple example of how await works similarly to yield from, delegating the values to the internal generator.

>>> import types
>>> @types.coroutine
...: def step():
...:     s = 0
...:     while True:
...:         value = yield s
...:         print("Step got value ", value)
...:         s += 1

>>> async def coro():
...:     while True:
...:         got = await step()
...:         print(got)


>>> c = coro()
>>> c.send(None)
0
>>> c.send("first")
Step got value  first
1

>>> c.send("second")
Step got value  second
2

>>> c.send("third")
Step got value  third
3

Once again, like in the yield from example, when we send a value to coro, it reaches the await expression, which means the value is passed on to the step coroutine. In this simple example, coro is something like what we would write, while step would be an external function we call.

The following are two different ways of defining coroutines.

# py 3.4
@asyncio.coroutine
def coroutine():
    yield from asyncio.sleep(1)

# py 3.5+
async def coroutine():
    await asyncio.sleep(1)

Basically this means that this asynchronous way of programming is kind of like an API for working with event loops. It doesn’t really relate to asyncio in particular; we could use any event loop (curio, uvloop, etc.) for this. The important part is to understand that an event loop will call our coroutine, which will eventually reach the line where we defined the await, and this will delegate to an external function (in this case asyncio.sleep). When the event loop calls send(), this is also passed along, and the await gives control back to the event loop, so a different coroutine can run.

The coroutines we define are therefore in between the event loop and the third-party functions that know how to handle the I/O in a non-blocking fashion.

The event loop then works through a chain of await calls. Ultimately, at the end of that chain, there is a generator that pauses the execution of the function and handles the I/O.

In fact, if we check the type of asyncio.sleep, we’ll see that it is indeed a generator:

>>> asyncio.sleep(1)
<generator object sleep at 0x...>

So with this new syntax, does this mean that await is like yield from?

Only with respect to coroutines. It’s correct to write await <coroutine>, as well as yield from <coroutine>, but the former won’t work with other iterables (for example generators that aren’t coroutines, sequences, etc.). Conversely, the latter won’t work with awaitable objects.

The reason for this syntax change is correctness. Actually, it’s not just a syntax change; the new coroutine type is properly defined:

>>> from collections import abc
>>> issubclass(abc.Coroutine, abc.Awaitable)
True

Given that coroutines are syntactically like generators, it would be possible to mix them up, and end up placing a generator in asynchronous code where in fact a coroutine was expected. By using await, Python checks the type of the object in the expression, and if it doesn’t comply, it raises an exception.
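
A quick sketch of that check: awaiting a plain generator (one that was not decorated with types.coroutine) fails with a TypeError along these lines:

>>> def regular_gen():
...     yield 1

>>> async def awaits_it():
...     await regular_gen()

>>> awaits_it().send(None)
TypeError: object generator can't be used in 'await' expression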

Asynchronous Generators

In Python 3.5, not only was the proper syntax for coroutines added (async def / await), but so was the concept of asynchronous iterators. The idea of having an asynchronous iterable is to iterate while running asynchronous code. For this, new methods such as __aiter__ and __anext__ were added under the concept of asynchronous iterators.

However, there was no support for asynchronous generators. That is analogous to saying that for asynchronous code we had to use iterators (like __iter__ / __next__ in regular code), but we couldn’t use generators (having a yield in an async def function was an error).

This changed in Python 3.6, and now this syntax is supported, with the semantics of a regular generator (lazy evaluation, suspending and producing one element at a time, etc.), while iterating.

Consider this simple example in which we want to iterate while calling some I/O code that we don’t want to block on.

import asyncio
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


async def recv(no, size) -> str:
    """Simulate reading <size> bytes from a remote source, asynchronously.

    It takes a time proportional to the bytes requested to read.
    """
    await asyncio.sleep((size // 512) * 0.4)
    chunk = f"[chunk {no} ({size})]"
    return chunk


class AsyncDataStreamer:
    """Read 10 times into data"""
    LIMIT = 10
    CHUNK_SIZE = 1024

    def __init__(self):
        self.lecture = 0

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.lecture >= self.LIMIT:
            raise StopAsyncIteration

        result = await recv(self.lecture, self.CHUNK_SIZE)
        self.lecture += 1
        return result

async def test():
    async for read in AsyncDataStreamer():
        logger.info("collector on read %s", read)

The test function simply exercises the asynchronous iterator: elements are produced one at a time, while an I/O task is simulated (in this example, the asyncio.sleep inside recv).

With asynchronous generators, the same could be rewritten in a more compact way.

async def async_data_streamer():
    LIMIT = 10
    CHUNK_SIZE = 1024
    lecture = 0
    while lecture < LIMIT:
        lecture += 1
        yield await recv(lecture, CHUNK_SIZE)
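
Consuming it looks just like before; a small sketch, assuming the event loop and the logger from the previous example:

async def consume():
    async for chunk in async_data_streamer():
        logger.info("collector on read %s", chunk)

asyncio.get_event_loop().run_until_complete(consume())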

Summary

It all started with generators. It was a simple way of having lazy computation in Python, and running more efficient programs that use less memory.

This evolved into coroutines, taking advantage of the fact that generators can suspend their execution. By extending the interface of generators, coroutines provided more powerful features to Python.

Coroutines were also improved to support better patterns, and the addition of yield from was a game changer that allows us to have better generators, refactor them into smaller pieces, and reorganize the logic better.

The addition of an event loop to the standard library helps to provide a reference way of doing asynchronous programming. However, the logic of the coroutines and the await syntax is not bound to any particular event loop. It’s an API 2 for doing asynchronous programming.

Asynchronous generators were the latest addition to Python that relates to generators, and they help build more compact (and efficient!) code for asynchronous iteration.

In the end, behind all the logic of async / await, everything is a generator. Coroutines are in fact (technically) generators. Conceptually they are different, and have different purposes, but in terms of implementation, generators are what make all this asynchronous programming possible.


Notes

1

Needless to say, the results will vary from system to system, but we get an idea of the difference between both implementations.

2

This is an idea by David Beazley, that you can see at https://youtu.be/ZzfHjytDceU