Inverting request invocation with coroutines for parallelism

This may originate from a fairly sophisticated use case: we have a list of data, and we want to transform it with a function that may issue an indefinite number of requests to a rate-limited web API and is prone to errors. Our goal is to run as many requests at a time as possible, and to issue as few requests as possible once an error occurs.

The transform function has synchronous logic: the next request is not issued until the previous one has received its response. This prevents us from directly parallelizing all requests, which would otherwise be the default strategy of any request library and would be easy to adjust to the rate limit. So our parallelism can only take place between different calls to the function, which means batching the calls and synchronizing state (such as the current API usage and errors) between the site performing the requests and the site calling the functions.
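
For concreteness, such a transform with synchronous logic might look like the following sketch (transform, callApi, and the argument shapes are made up for illustration); the second request cannot start before the first response arrives:

// Hypothetical API client standing in for the rate-limited web API.
declare const callApi: (args: { endpoint: string; [k: string]: unknown }) => Promise<any>;

// A made-up transform with synchronous logic: each request depends on the
// previous response, so the requests inside one call cannot be parallelized.
const transform = async (item: string): Promise<number> => {
    const user = await callApi({ endpoint: "user", name: item });
    const posts = await callApi({ endpoint: "posts", user: user.id });
    return posts.length;
};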

A natural idea is to merge the two sites: let the calling site actually call the web API, while the function only yields the arguments of the request it wants to make. This inverts the usual dependency order of the functions, and in most languages such an inversion is impossible without coroutines. The resulting architecture looks like this: a controller polls the coroutines in batches, performs the requests, feeds the results back, and handles the other requirements, such as tracking the quota and dealing with errors.

An implementation in TypeScript might look like this:

// `data` is an iterator over transform coroutines: each coroutine yields the
// arguments of the request it wants to make and receives the response back
// through next(). `callApi` stands in for the actual rate-limited API call.
const data = ...;
const batchsize = 5; // illustrative; tune to the API rate limit

const reqlist = [];  // pending [args, coroutine] pairs
let datadone = false;

// Perform one request on behalf of a coroutine and resume it with the response.
const poll_co = async ([args, co]) => {
    const { done, value } = co.next(await callApi(args));
    // If the coroutine yields again, queue its next request; when it is done,
    // `value` is the transform result (collecting it is omitted here).
    if (!done) reqlist.push([value, co]);
};

do {
    const ps = [];
    for (const _ of Array.from({ length: batchsize })) {
        const req = reqlist.shift();
        if (req) {
            // Resume an already-started coroutine with its next request.
            ps.push(poll_co(req));
        } else if (!datadone) {
            // Start a new coroutine from the data iterator.
            const { done, value: co } = data.next();
            if (done) {
                datadone = true;
                break;
            }
            const { done: noreq, value: args } = co.next();
            if (noreq) continue; // this transform needed no request at all
            ps.push(poll_co([args, co]));
        } else break;
    }
    try {
        await Promise.all(ps);
    } catch (e) {
        // Process the error, e.g. back off or drop the failing coroutine
    }
    // Just for simplicity, since we implemented poll_co in this way :)
    await new Promise(res => setTimeout(res, 60000));
} while (reqlist.length != 0 || !datadone);
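
For completeness, here is what the function side might look like: the transform from the earlier sketch, rewritten as a generator that yields the request arguments instead of performing the request itself, plus one way to produce the data iterator the controller consumes (transform, makeData, and the items are again illustrative):

// The same hypothetical transform, inverted: it yields the arguments of each
// request and receives the corresponding response back through next().
function* transform(item: string): Generator<unknown, number, any> {
    const user = yield { endpoint: "user", name: item };      // was: await callApi(...)
    const posts = yield { endpoint: "posts", user: user.id }; // was: await callApi(...)
    return posts.length;
}

// `data` as the controller expects it: an iterator whose values are the
// not-yet-started coroutines, one per input item.
function* makeData(items: string[]) {
    for (const item of items) yield transform(item);
}
const data = makeData(["alice", "bob"]);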

The idea of the whole article is quite simple... With reasonable designs, a future is just a special kind of coroutine, and converting between coroutine functions and async functions is usually not complex; it mostly amounts to some keyword replacement and syntax tweaks. But the conversion is still unavoidable if we are refactoring an implementation that executes the transform functions serially: even in languages like C++ that have a unified coroutine/async system, every function along the whole call chain has to get a new signature, because the type of the function does change. And the actual web API calls are doomed to be replaced one by one, as sketched below.
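
To illustrate that chain-replacing step, suppose the transform delegated its requests to a helper somewhere down the call chain. Once the helper yields instead of awaiting, every caller above it must change its signature as well and delegate with yield*, even though the logic stays the same (fetchUserPosts and the argument shapes are, again, made up):

// Hypothetical API client, as in the earlier sketches.
declare const callApi: (args: { endpoint: string; [k: string]: unknown }) => Promise<any>;

// Before: an async helper buried in the call chain, and its async caller.
async function fetchUserPosts(name: string): Promise<any[]> {
    const user = await callApi({ endpoint: "user", name });
    return callApi({ endpoint: "posts", user: user.id });
}
async function transformAsync(item: string): Promise<number> {
    return (await fetchUserPosts(item)).length;
}

// After: both functions change type; await becomes yield, and the caller has
// to delegate with yield* all the way up the chain.
function* fetchUserPostsCo(name: string): Generator<unknown, any[], any> {
    const user = yield { endpoint: "user", name };
    return yield { endpoint: "posts", user: user.id };
}
function* transformCo(item: string): Generator<unknown, number, any> {
    return (yield* fetchUserPostsCo(item)).length;
}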

But if the language is effect-based, with a reasonable type system that implicitly propagates the effects (which doesn't exist at all XD), we can skip the chain-replacing step and keep only the call-replacing step and the controller part, since the added effect would implicitly infect every function along the chain.

Well... So this is only an article showcasing one of the benefits of algebraic effects: for implicitly-propagated effects, only the use site and the handler site need attention; all the other layers adapt to them automatically.