October 13, 2020 · 8 Min Read

Test-driven API Docs: Using Flask and Akita to Autogenerate OpenAPI Specs

by Sébastien Portebois

Editor’s note: Since we published this blog post, the Akita team has taken the ideas from this post and implemented an official Flask integration. (See the blog post here!) We believe this blog post remains interesting and helpful to people interested in integrations not yet supported by Akita!

I originally posted this to my Medium here.

✨ The promise

A couple of months ago, I came across Akita Software, a startup that automatically watches API traffic in order to generate OpenAPI definitions and detect breaking changes.

As an architect at Ubisoft, I work on building our internal platform-as-a-service, a job that involves building and integrating lots of internal APIs. We constantly worry about the best way to help partner teams integrate their APIs into the platform while providing strong quality guarantees to the platform's consumers.

I was curious to investigate if Akita SuperLearn could become a trustworthy source of API definitions and the super-canary in our API coal mine to detect any breaking change.

The catch? Akita operates by watching network traffic. I wanted to try Akita out on our tests in CI. But the problem is that our tests don’t actually generate network traffic. This post describes the workaround I found for our Flask integration tests, based on automatically creating real HTTP traffic from the tests, and sending it to a dummy server.

I heard that other Akita users were also wondering about this, so I thought I would share.

🤔 The problem

We wanted to use Akita SuperLearn, which watches API traffic to learn API specs. At my company, we develop our APIs with Flask, and we run a lot of integration tests using Flask's test client fixture. This is great for tests because it makes integration tests really fast and reliable! It abstracts away all the underlying layers of the stack, yet it lets you test 100% of your code without other fakes; you only have to insert mocks or stubs for external dependencies. In short, it makes integration tests as fast as unit tests. Who wouldn't want that? But that means the test requests aren't sent over "real" HTTP, and the same goes for the responses.
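To make that concrete, a typical test looks something like this (the endpoint here is made up): the call goes straight into the WSGI app, and nothing ever touches the network.

def test_get_user(client):
    # This "request" is dispatched directly to the Flask app in-process:
    # no socket is opened, so there is nothing for a network sniffer to see.
    resp = client.get("/users/42")
    assert resp.status_code == 200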

Therefore there’s no traffic that the Akita SuperLearn client can observe, so no learning material.

To get the best of both Akita and the Flask test client, I had to figure out how to hook into Flask to send this traffic over to Akita, ideally without having to change the tests.

💡 The solution, high-level view

Looking at this problem, I realized I could generate some real network traffic as a side effect of the tests, without changing the tests themselves. Akita would then be able to observe fake traffic matching the tests and generate an API definition.

I set out to find a solution that has the smallest possible impact on the tests: it should not require rewriting or updating the tests.

The key idea we leverage is to use our problem as our solution: all the incoming requests and responses pass through the Flask test client, which means they’re not going over the network. But if we override the Flask test client, we can then observe this information and expose it.

How to expose it? I describe what I did below in four steps.

  1. Since we use Flask, we already have the Werkzeug web application library in our dependencies, so the simplest solution is to start a small Werkzeug HTTP server in a separate thread, listening on some port.
  2. We now have a place to send the traffic we want to make visible to Akita. Next, we need to send the requests to this server and make sure it can send back the correct responses. We do this by making our overridden Flask test client store each response in a map, keyed by the request's correlation ID, in addition to its regular work. This updated Flask test client then sends a real HTTP request to localhost:thread-port before handing the response back to the test framework.
  3. The next piece is what happens in our dummy HTTP server. When it receives a request, it reads the correlation-ID header, fetches the response from the shared hashmap, and sends it back. From a functional standpoint it's useless, but it makes these requests and their corresponding responses visible to the Akita client!
  4. The last part is to make the Akita client listen on that dummy HTTP server's port so that it can observe this traffic. The incoming requests/responses visibility problem is now solved!

👷‍♂️ The solution, piece by piece

In your tests, you probably already leverage Flask test client, either with `app.test_client()` directly, or with a fixture like:

@pytest.fixture
def client():
    my_project.app.config['TESTING'] = True

    with my_project.app.test_client() as client:
        with my_project.app.app_context():
            pass  # Do some initialization stuff
        yield client

Or:

@pytest.fixture
def test_client():
    configure_app(flask_app, config_name=Environments.TESTS)

    # I use this a lot for custom test client classes to inject custom things
    my_project.app.test_client_class = CustomApiTestClient

    client = my_project.app.test_client()

    yield client

The main idea here is to use this client as the entry point to manage the “exposer” HTTP server:

@pytest.fixture
- def test_client():
+ def test_client(exposer_thread):
    configure_app(flask_app, config_name=Environments.TESTS)
    # I use this a lot for custom Test client classes to inject custom things
    my_project.app.test_client_class = CustomApiTestClient
    client = my_project.app.test_client()
    yield client

And we create this new fixture in our conftest.py:

@pytest.fixture(scope="session")
def exposer_thread(request):
    start_exposer_thread()

    def close():
        stop_exposer_thread()

    request.addfinalizer(close)

This is not perfect, but since pytest doesn’t offer clean global setup/teardown mechanisms, it gets the job done without adding new dependencies.
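If you prefer, the same session-scoped fixture can also be written in yield style; both forms behave the same here:

@pytest.fixture(scope="session")
def exposer_thread():
    # Start the dummy HTTP server once for the whole test session...
    start_exposer_thread()
    yield
    # ...and shut it down when the session ends.
    stop_exposer_thread()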


We now have to implement these start_exposer_thread and stop_exposer_thread functions to actually start (and stop) the exposer code in a thread.

The code here is a little bit more verbose, because we need to handle a thread, but the main idea is simple:

  • we run start_exposer to start an HTTP server, which handles all the requests with reply_with_stored_response
  • reply_with_stored_response checks the request correlation ID, and looks for a corresponding response stored in a shared hashmap.

You can check the complete implementation in the gist.
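To give a rough idea of the shape of that code, here is a simplified sketch built on Werkzeug's own development server. It is not the gist verbatim: it folds start_exposer into start_exposer_thread, and the port, the correlation header name, and the stored_responses dict are assumptions.

import threading

from werkzeug.serving import make_server
from werkzeug.wrappers import Request, Response

EXPOSER_PORT = 5999                      # assumption: any free local port works
CORRELATION_HEADER = "X-Correlation-ID"  # assumption: whatever header your client sets

# Shared map: correlation ID -> response captured by the custom test client
stored_responses = {}

_server = None
_server_thread = None


@Request.application
def reply_with_stored_response(request):
    # Read the correlation ID, fetch the matching captured response and replay it
    correlation_id = request.headers.get(CORRELATION_HEADER, "")
    stored = stored_responses.pop(correlation_id, None)
    if stored is None:
        return Response("No stored response for this correlation ID", status=404)
    return stored


def start_exposer_thread():
    # Start the dummy HTTP server in a background thread
    global _server, _server_thread
    _server = make_server("127.0.0.1", EXPOSER_PORT, reply_with_stored_response)
    _server_thread = threading.Thread(target=_server.serve_forever, daemon=True)
    _server_thread.start()


def stop_exposer_thread():
    # Shut the dummy server down at the end of the test session
    if _server is not None:
        _server.shutdown()
        _server_thread.join()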

The missing piece is how we add the responses to this shared hashmap. _publish_to_exposer takes a request descriptor and the full Flask Response object. It makes sure a correlation ID is set (creating a new one otherwise), stores the response in the map under this ID, and then sends the request to the localhost port the exposer thread listens on.
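Here is a hedged sketch of what that function can look like, reusing the names from the sketch above. The exact shape of the request descriptor depends on how your tests call the test client, so the method/path/headers/data keys are assumptions, and any HTTP client (requests is used here) will do.

import uuid

import requests  # assumption: any HTTP client works for the replay


def _publish_to_exposer(request_kwargs, response):
    # Make sure the request carries a correlation ID, creating one if needed
    headers = dict(request_kwargs.get("headers") or {})
    correlation_id = headers.get(CORRELATION_HEADER) or str(uuid.uuid4())
    headers[CORRELATION_HEADER] = correlation_id

    # Store the captured response so the exposer thread can replay it
    stored_responses[correlation_id] = response

    # Send the "same" request over real HTTP to the dummy server,
    # which is the traffic the Akita agent actually observes
    requests.request(
        method=request_kwargs.get("method", "GET"),
        url=f"http://127.0.0.1:{EXPOSER_PORT}{request_kwargs.get('path', '/')}",
        headers=headers,
        data=request_kwargs.get("data"),
    )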

Finally, we need to call this _publish_to_exposer function. That's where our custom test client comes in handy. Because we can override it, we can make sure we call this publish function from our overridden Flask test client's open() method:

from copy import deepcopy

from flask import Response
from flask.testing import FlaskClient


class CustomApiTestClient(FlaskClient):
    def open(self, *args, **kw):
        resp: Response = super().open(*args, **kw)

        # Send a copy of the request so that it's visible to the Akita agent
        resp.freeze()
        _publish_to_exposer(kw, deepcopy(resp))

        return resp

Since all our tests usually have a fixture to inject the test client, the real trick here is to make sure the client we inject in all our tests is this custom Flask test client, which takes care of generating real HTTP requests to our dummy server and providing that server with the responses to send back. With this custom Flask test client, we don't have to change anything in our tests: only a few fixtures are updated (or added, if you weren't using them already).

The complete (cleaner) code is available here, including the less relevant parts omitted here: https://gist.github.com/sportebois/86eebf5221b2ab104614ecd9a77f7bdc

🛠 The solution's limitations

The first conclusion of this experiment: it works! With this little tweak to generate observable requests and responses that Akita can track, we can now make Akita generate API definitions from our integration tests.

That said, it’s not as good as the real thing.

  • The main limitation for me: we don't expose any outgoing traffic, so Akita can't give any insights about it. The same idea could be leveraged here, but it really depends on what you use to send outgoing requests. The initial assumption of pytest plus Flask probably covers most Flask development; monkey-patching requests might not cover all your outgoing requests, and any mocks you have would make those calls invisible anyway.
  • The other limitation is that, although I tried hard to minimize the contact surface, it still requires some custom code. The tests themselves don't need to be updated, but there's still some custom code to write to be able to monitor an API and generate API definitions with Akita. The main promise of the tool is to observe traffic so that no integration is required; the solution I present here lets us use Akita, but betrays the philosophy of the tool.

🔑 Key take-away

Initially, I had been worried about our ability to use Akita to watch network traffic because so many of our tests did not go over the network, but an hour or so of experimentation helped me find this quick workaround. If you’re looking to use Akita without network tests—or if you are looking to generate network tests for any other purpose—you may find this technique of echoing API traffic useful!

The implementation was very easy in this case because the Flask test client made it easy to capture all the requests and responses without any impact on the existing tests, and Werkzeug is as helpful as ever when it comes to doing anything HTTP-related.

But I don’t see what would prevent anyone from applying the same idea in other languages or frameworks.
