Mar 2, 2023 · 6 min read

Launching Akita’s Open Beta

by Jean Yang

Any software team knows how important it is to quickly find and fix customer-impacting issues. 

To help, we at Akita have built the fastest time-to-value monitoring tool, no code changes or custom dashboards necessary.

From one of our users, JM Doerr at Threads:

"Akita’s simplicity appealed to me. I came in with the expectation that it was really easy to set up and use. You just turn things on, and then you don’t have to worry about it. It’s lived up to that."

We’re excited to announce our beta is now open to everyone and would love for you to try it out. With the beta, it’s possible to set up Akita within 30 minutes to see what API endpoints are in use, which endpoints are slow, and which endpoints are throwing errors. 

How Akita came to simplify monitoring

Out of all of the challenges in monitoring and observability, how did we come to focus on time-to-value? The short answer: our private beta brought us here.

Akita’s goals

Given the major shifts in software architectures this last decade, it should not be surprising that monitoring needs have shifted, too.

From the RapidAPI 2019-2020 Developer Survey.

The growth of the API economy, combined with the rise of service-oriented architectures, means that most web applications are a pile of APIs. In these applications, developers are not responsible for just their own service, but how their service interacts with other services. Each software application has become its own ecosystem, with its own emergent behaviors. Gone are the days when a developer could focus on simple, monolithic applications.

When I first started Akita, the goal was simple: make it easier for users to find and fix issues in production. To help teams pay off monitoring debt faster than they accumulate it, I had a few hard-and-fast constraints about what the first product should look like:

  • Since we were building Akita to combat complexity with the rise of APIs across heterogeneous tech stacks, the solution needed to be as language-agnostic as possible.
  • To easily scale across complex, multi-service environments, the solution should require as little developer intervention as possible.

But I wanted user research to drive both the underlying technology we used and the questions we helped developers answer.

Choosing the (e)BPF route

Because tech R&D takes time, the first thing we settled on was a technological approach. After dozens of user interviews, we concluded there was not enough standardization across service meshes or OpenTelemetry. Instead, we chose to start with network packet capture: as long as software teams were sending API traffic across the network, we could watch it.

By mid-2020, we had built a way to passively listen to unencrypted API traffic using a technology called Berkeley Packet Filter (BPF), via GoPacket. BPF let us capture and reconstruct network packets in order to automatically generate API specs. We intended this API discovery capability to be the foundation for the rest of our product.
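To give a sense of what this route looks like (this is a minimal sketch, not Akita's actual code), here is a GoPacket program that passively captures unencrypted HTTP traffic using a BPF filter; the interface name "eth0" and port 80 are assumptions for illustration.

    package main

    import (
        "fmt"
        "log"

        "github.com/google/gopacket"
        "github.com/google/gopacket/layers"
        "github.com/google/gopacket/pcap"
    )

    func main() {
        // Open a network interface for passive capture ("eth0" is an assumption).
        handle, err := pcap.OpenLive("eth0", 65535, true, pcap.BlockForever)
        if err != nil {
            log.Fatal(err)
        }
        defer handle.Close()

        // A BPF filter keeps only unencrypted HTTP traffic on port 80.
        if err := handle.SetBPFFilter("tcp port 80"); err != nil {
            log.Fatal(err)
        }

        // Walk captured packets and look at TCP payloads; a real system would
        // reassemble streams and parse HTTP to infer API endpoints.
        source := gopacket.NewPacketSource(handle, handle.LinkType())
        for packet := range source.Packets() {
            if tcp, ok := packet.Layer(layers.LayerTypeTCP).(*layers.TCP); ok && len(tcp.Payload) > 0 {
                fmt.Printf("captured %d bytes of application payload\n", len(tcp.Payload))
            }
        }
    }

The appeal of this approach is that it sits entirely outside the application: nothing in the services themselves has to change for their traffic to become observable.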

Iterating towards simplicity

From mid-2020 to mid-2022, we iterated on top of API spec generation to figure out what was most valuable. As soon as we shipped v0 of our API spec generation tool, our users told us this was too much information. No, they did not want to make dashboards from this data: they wanted less information.

Initially, we thought the way to simplify the product was to tell users how their API behavior was changing. Not only were our users requesting this, but they were also downloading our API specs to diff by hand. This led us to start iterating over prototypes of change analysis features that told users about breaking changes to their APIs.

At the same time, we started making our traffic-watching algorithms real-time, since up-to-date information is critical for teams keeping up with their APIs. It turned out that real-time monitoring, rather than change analysis, led us to our product’s “aha” moment.

Doubling down on time-to-value

This brought us to a turning point in our private beta: May 2022, when our automatic traffic-watching algorithms became near real-time.

Up to this point, we thought that change analysis was the minimum viable product and drop-in monitoring was simply Part One of building that. But as soon as drop-in monitoring became real-time, we started getting feedback that automatic API endpoint discovery, combined with near real-time latency and error information, was the "wow." Drop-in monitoring gave our early users what they needed to find and fix issues; change analysis was a nice-to-have.

In summer 2022, we started hearing from our inbound, self-serve users about the value they were getting from us. For instance, one of our users adopted Akita over Datadog because we helped them localize an issue with their payments endpoint within five minutes of integration.

Guilherme Mori, CTO at Brazilian clean energy startup zMatch, told us:

"Akita’s API maps and per-endpoint monitoring have given us the visibility we need into our system, without the work of setting up our custom dashboards. Setting up Akita on Fargate was fast, easy, and more straightforward than alternatives we considered."

The user feedback told us it was time to double down on time-to-value.

Getting ready to open our beta

We had one final challenge before opening our beta: making the product self-serve and seeing if more users were picking it up every month.

As of July 2022, platform-specific quirks meant many of our users needed to talk to us in order to install Akita. After spending a quarter obsessively instrumenting our onboarding funnel and talking to every user who would talk to us, we were able to determine the automation and docs changes needed to get around many of these issues. Then it became a matter of watching who showed up.

In the last few months, not only have users been able to integrate fully self-serve, but they’ve been able to get results they are happy with without our help. We’ve been able to scale up our user base while growing Akita across teams and retaining happy users, without overwhelming our systems.

We are now at the point where a developer can join our beta and start seeing meaningful information about their APIs shortly after starting onboarding. As David Gomez-Urquiza, CTO of Deal Engine, says:

"I had heard of eBPF, but I never thought it would be that easy to drop Akita into Kubernetes. I just redeployed with a different deployment template and Akita as a sidecar. It was really valuable to just put Akita in our cluster and look at the traffic right away."

Welcome to our open beta

Today, the monitoring and observability industry is obsessed with giving software teams more data about their system, faster. More data sounds appealing—until you are looking through thousands of logs to figure out what to alert on, or to find what is leading to a specific user error.

In a time when most other solutions are focused on giving users more information, Akita turns the problem on its head and asks: what is the most important, smallest amount of information a developer needs to know in order to find and fix a high-impact issue? More and more, teams do not need to know all the details about a system in order to make sure it stays up: they just need to know if there is a problem—and where.

We’re excited to invite you to the launch of our open beta.

With Akita’s open beta, you can integrate on Docker, container platforms, and Kubernetes in under 30 minutes, no code changes necessary.

Our users have reported being able to integrate in under 30 minutes and catch issues within five minutes of integration, no custom dashboards required.

Theo Budiyanto from Saweria says:

"The Akita set-up experience has been a breeze. I used to work for Splunk, and tried a bunch of other tools, including Sumo Logic and Datadog, before. Akita is literally just plug and play."

Try us out here! (And check out our docs here.)

Photo by Jae Lee on Unsplash.
