Jun 30, 2022 · 5 Min Read

What Drop-in API Observability Looks Like, Pre-Launch and Post-Launch

by
Guilherme Mori

This is a guest post by Guilherme Mori, CTO at zMatch. Guilherme has previously worked in startups for over twelve years in development and devops roles. If you are interested in building the future of electric mobility and clean energy in Brazil, zMatch is hiring!

Seven months ago, I joined zMatch, an electric car and clean energy provider based in Brazil, as CTO. In the early days, I hired contractors to help build the initial service. As we were getting ready for launch, my job was to assess the system the contractors had built and make sure the site was ready to scale to our live users, which led us to make technology a core part of the business and hire an internal team.

As the CTO of zMatch, I first came across Akita while looking for better observability tools. At the time, I was working only with contractors and preparing to move our software development in-house, which meant getting visibility into what those third-party partners were building. I initially used Akita to understand the service I had been handed. Since then, as our engineering team has grown, I have used Akita to keep an eye on our services, which range from conversational interfaces (chatbots) that capture users' needs around car buying/selling and clean energy consumption, to our virtual garage showroom and mobility subscription. These days, I check Akita every day to understand how our microservices are evolving in performance (response times) and success rate (error responses).

This blog post is about my experience using Akita, a new kind of API observability solution. I believe this post will be helpful to anyone who needs to quickly monitor REST endpoints, especially in a per-endpoint manner. For context, I’m running services written in Python (Flask), Java (Spring Boot), and TypeScript (Angular) on AWS Fargate, using AWS CloudWatch for monitoring. (See the Akita on Fargate docs here.)

Trying Akita out

What excited me most about Akita was the fact that you can simply drop it into your system, give it permission to passively watch network traffic, and let it do its magic. The recommended way to run Akita is to let it listen to traffic in staging or production, so I tried it out by adding a side task inside existing services on staging environments to first understand if it would work and what to expect from the tool.

I run AWS Fargate and was able to run the Akita agent simply by configuring, through our IaC (Pulumi), a task that uses the provided Akita image and allocating part of the container's memory and CPU to it. What impressed me was that Akita had little to no footprint on our deployment. Importantly, Akita caused no noticeable processing overhead or extra AWS costs, a main concern at our company's stage.
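To make the sidecar setup concrete, here is a minimal sketch of how the Akita agent might be added as a second container in a Fargate task's container definitions. The image name, entry point, command, and environment variable names below are illustrative assumptions rather than a copy of our Pulumi code; check the Akita on Fargate docs for the exact values your deployment needs.

```python
# Sketch: adding an Akita agent sidecar to an ECS Fargate task definition.
# The agent image, command, and env-var names are assumptions for illustration.

def with_akita_sidecar(app_container: dict, service_name: str,
                       cpu: int = 128, memory: int = 256) -> list:
    """Return container definitions for a Fargate task that runs the
    application alongside a passive Akita capture agent."""
    akita_container = {
        "name": "akita-agent",
        "image": "akitasoftware/cli",          # assumed agent image
        "essential": False,                    # app keeps running if agent dies
        "cpu": cpu,                            # small slice of the task's CPU
        "memory": memory,                      # and memory, as described above
        "entryPoint": ["/akita", "apidump"],   # assumed passive-capture command
        "command": ["--service", service_name],
        "environment": [
            # Credentials would normally come from a secret store, not literals.
            {"name": "AKITA_API_KEY_ID", "value": "<key-id>"},
            {"name": "AKITA_API_KEY_SECRET", "value": "<key-secret>"},
        ],
    }
    return [app_container, akita_container]

# Example: wrap an existing (hypothetical) app container definition.
app = {"name": "api", "image": "zmatch/api:latest", "essential": True}
definitions = with_akita_sidecar(app, service_name="zmatch-api")
```

Marking the agent container as non-essential matters: if the capture agent ever stops, the application container keeps serving traffic, which is what makes this a genuinely drop-in, passive addition.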

Using Akita to understand my inherited API

My first use case for Akita was simply understanding what APIs I had. Since the contractors had not implemented any documentation or Swagger spec, it was hard to understand what endpoints were available, how they behaved, and how they were expected to be called.

Almost immediately after installation, Akita surfaced all the endpoints’ requirements, along with examples of expected values, which allowed us to better understand the service the contractors had built. Once we understood the data flow (not only the request body, but also headers and authorization), improving the system became a lot easier. With that mapping in hand, we started working on response times and on endpoint simplification and consolidation.

Akita API model overall discovery for service developed by third-party contractors, with counts redacted.
Akita API model for a specific endpoint with field lists and expected type of value, with counts redacted.

Day-to-day monitoring with Akita

Once I learned what the APIs were, I started managing my API endpoints with the Kong API Gateway, and I shifted to using Akita’s API monitoring feature more than its API modeling feature. With Akita’s API monitoring, I get per-endpoint information about my endpoint usage. Here’s how I use it:

  • Daily, I check main endpoints to understand if there are issues regarding performance and/or error rate.
  • Weekly, I check that newly implemented endpoints are being used and how many requests they receive.

Although CloudWatch monitors our Fargate services and tasks, it provides only overall insights into microservice execution. Kong, our gateway provider, was an alternative source of endpoint information. However, making that data available takes too much extra effort, data transfer between services, cost, and time, all factors that weighed on our investment decisions. Akita gave us better granularity on our APIs, allowing smarter focus and prioritization on where we should improve and develop.

Most tools are fairly complex and require a real implementation commitment, so Akita came at a good time, giving us the resources we needed without extra implementation costs (such as more infrastructure to run) or any lead time before seeing results. Adding the solution was straightforward, and the learning curve for the reports is so gentle that new team members can get insights almost immediately after joining the company.

Akita implementation using Pulumi (IaC) and AWS ECS Fargate.
Akita Metrics & Errors, with counts redacted.

What API observability means to me

As a startup veteran of over twelve years, I know we have a long way to go before we have the kind of monitoring and observability that I would like to have. I also know that, in most cases, understanding usage and errors for my most heavily trafficked endpoints is enough; I don’t necessarily need to understand complete system behavior or get full traces to keep my service alive and support my users. I like API observability because it lets me not have to worry about logs and traces when I don’t need to. After installing Akita, I can update my services however I want and my Akita dashboards stay up to date without any additional work.

In a startup environment, quickly understanding and acting on possible bottlenecks matters more than having the perfect observability and monitoring implementation. Akita has given us the knowledge and tooling to handle this path in a fast, easy, and convenient way.

