May 2, 2023 · 4 Min Read

Try Akita’s Drop-In API Metrics with Our New Demo

by Versilis Tyson

When your users find issues before you do, you end up scrambling to catch up.

To help solve this problem, Akita provides the fastest time-to-value metrics solution, letting you find and fix issues with your APIs in real time, before your users notice. Akita does this by passively watching your network traffic and automatically analyzing your API behavior to identify which endpoints are slow and which are throwing errors.

To try Akita out, all you need to do is let Akita watch some API traffic and let it get to work. But what if you don’t have a local project with enough traffic? Or what if you want to see what you’ll get before you put a new tool in your staging or production environments?

I have good news! I’ve put together a demo, so you can now connect to Akita and see the good, the bad, and the ugly of an API before you run Akita on your own system. In this post, I’ll tell you more about what I built and how.

Meet the Akita Demo

With the Akita demo, you can see how Akita tracks your API traffic, errors, and latency at a per-endpoint level, without needing a functioning app or following any complicated steps.

You can try the Akita demo out by cloning our GitHub repository and then running `chmod +x ./run.sh && ./run.sh`.
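End to end, that looks something like this (the repository URL below is a placeholder; follow the link above for the real one):

```bash
# Clone the demo repository (placeholder URL -- use the link above)
git clone https://github.com/akitasoftware/akita-demo.git
cd akita-demo

# Make the launcher executable and start the demo
chmod +x ./run.sh && ./run.sh
```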

How the Demo Works

When we set out to build the demo, the requirements were:

  1. Easy deployment and setup.
  2. A showcase of the core functionality of our API monitoring and metrics tooling.
  3. A demonstration of the interaction between a producer (a mock API server) and a consumer (a service sending mock API requests).
  4. Collection and display of per-endpoint metrics such as latency (p90) and errors.

Before settling on the current design, we explored several other options. Initially, we attempted to create our own services that would send real traffic to each other. Although this approach seemed promising, it turned out to be inflexible, as well as time-consuming to set up and maintain for both us and our users. It was difficult to iterate quickly and adapt the demo to replicate the interactions between a user’s API services and Akita, which is crucial for keeping the demo current with the latest features of our API monitoring and metrics.

Another idea we considered was a simple echo server: a service that receives API requests and echoes the data straight back. While this option would have been easier to implement, it fell short because it doesn’t provide enough configurability (such as stubbing) to simulate the complexities of real-world API interactions. Because an echo server merely reflects the client’s messages without any processing or modification, it can’t vary response statuses, introduce latency, or inject errors, and those “problematic” calls are exactly what we need to show off Akita’s functionality for identifying issues.

To address these challenges, we designed a demo built from three Docker containers: one for our agent, one for the producer, and one for the consumer. Docker Compose manages and orchestrates these containers, so users can try the demo simply by pulling the GitHub repository and running `chmod +x ./run.sh && ./run.sh`. The agent container monitors the API traffic between the producer and the consumer, collecting metrics and deriving API models from the traces it observes. The producer container serves as a mock API server, exposing a set of predefined API endpoints for the consumer to interact with. The consumer container sends mock API requests matching the stubs specified for the producer, simulating real-world API usage.
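To make that architecture concrete, here’s a minimal sketch of what such a Compose file could look like. The service names, images, volume paths, project name, and command are my illustrative assumptions, not the demo’s actual configuration; check the repository for the real file.

```yaml
version: "3"
services:
  producer:
    image: wiremock/wiremock              # mock API server serving stubbed endpoints
    volumes:
      - ./stubs:/home/wiremock/mappings   # JSON stub mappings mounted in

  consumer:
    image: curlimages/curl                # fires mock requests at the producer in a loop
    entrypoint: ["/bin/sh", "/scripts/send-requests.sh"]
    volumes:
      - ./scripts:/scripts
    depends_on:
      - producer

  akita-agent:
    image: akitasoftware/cli              # the Akita agent
    command: apidump --project akita-demo # capture traffic for a demo project
    environment:
      - AKITA_API_KEY_ID                  # your Akita credentials, read from the host
      - AKITA_API_KEY_SECRET
    network_mode: "service:producer"      # share the producer's network namespace so
                                          # the agent can passively sniff its traffic
    depends_on:
      - producer
```

Putting the agent in the producer’s network namespace is what lets it watch the producer–consumer traffic without either service knowing it’s there.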

For the demo, we chose WireMock as our mock API server due to its flexibility and powerful features. WireMock allows us to easily stub API endpoints with varying response statuses, such as 4XXs and 5XXs, simulating the errors that occur in real-world scenarios. It also lets us replicate varying latency across endpoints using delay distributions. These features help showcase how Akita measures latency and detects errors, giving users a realistic demonstration of our monitoring capabilities.
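For a taste of what that looks like in practice, here’s the general shape of a WireMock stub that returns a 500 and draws its response delay from a lognormal distribution. The endpoint path, body, and numbers are invented for illustration; the demo ships its own stubs.

```json
{
  "request": {
    "method": "GET",
    "urlPath": "/api/v1/orders"
  },
  "response": {
    "status": 500,
    "jsonBody": { "error": "internal server error" },
    "delayDistribution": {
      "type": "lognormal",
      "median": 250,
      "sigma": 0.4
    }
  }
}
```

A few stubs along these lines, mixed in with healthy 2XX endpoints, are enough to give the dashboard both well-behaved and “problematic” calls to report on.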

Example Akita demo dashboard.

Built on these features, the demo gives users insight into the performance and reliability of the API, including per-endpoint metrics such as latency (p90) and errors. It demonstrates how Akita helps users understand their API’s behavior and address potential issues before they become critical. I’ve also hand-picked a set of API issues (high latency and errors) to bake into the demo, so users can explore the full range of Akita’s capabilities in a hands-on environment that reflects the complexities and challenges of operating APIs in the real world.

Run the Demo Today

Try out the Akita Demo today, and let us know what you think! Have you created a demo API recently? Tell us about it @akitasoftware.
