December 22, 2020

From Balenciaga to Basics: Lessons from Our Pivot to API Tooling

by Jean Yang

From the beginning, our goal at Akita has been to make software more robust and trustworthy. This past year, our learnings led us to significantly shift our strategy for how to get there.

In 2020, we began the year as an enterprise security company. We are finishing the year as an API dev tools company. Thanks to this change, we went from counting our users on one hand to building a growing community of users. More importantly, we went from guessing what would help get users to having existing users tell us what problems they want us to solve next.

Here are the lessons that led to our pivot; they may be helpful to anyone else who thinks a lot about how programming tools get adopted. Note that this is more of a checkpoint than a post-mortem: Akita is very much a work in progress!

If your tool requires developer buy-in, make life easier for developers.

When I started Akita, the main problem I set out to tackle was this: in modern, cloud-based environments, it is hard to detect when developers introduce problematic code, and hard to understand how to make things better when something goes wrong. When a cloud system has many interconnected services, its emergent behaviors are hard to understand. Given recent privacy laws like GDPR and CCPA and the increasing amount of money companies lose to security breaches, I figured that building a security solution that detects data leaks was a good place to start.

Here’s where the security hypothesis went wrong: we had built a security tool that depended on developer adoption, landing us straight in what I now call the security/developer gap. It’s not that developers don’t care about security; they simply have higher priorities, like making sure their features ship on time. Akita did not help with building features, and it created more work by finding more issues to fix. We sat through meeting after meeting where security or privacy teams would bring us into their companies to pitch developers on our tool, and the developers would rather have been anywhere else.

Every now and then, something interesting happened: a developer would ask if they could use our tool for non-security purposes. It turned out that what we built was also useful for helping developers find any breaking change—and that this was far more useful to developers because it helped them ship features faster. This observation led us to shift to building for these use cases, instead of focusing on the security ones. (And, perhaps unsurprisingly, this newfound popularity with developers has made us more appealing to security teams as well!)

If someone doesn’t have jeans or t-shirts, they’re not going to buy your Balenciaga.

The world I come from, the world of programming languages research, is often about designing the high fashion of programming: more about inspiring ideas than addressing immediate needs. But mainstream programming tools usually get adopted because they solve everyday problems, so we at Akita have learned a lot about building basics instead of Balenciaga. (Balenciaga is currently the most name-dropped designer brand in hip hop.)

In 2020, we started the year building an expensive-to-run, costly-to-integrate API fuzzing tool. We ended the year shipping a lightweight, five-minutes-to-integrate API traffic analyzer. Our fuzzer was precise; it was cutting-edge; it gave guarantees that security and privacy teams told us they had only dreamed of having. The problem? The calculus usually didn’t work out for integrating us. Even for the teams that got over the security/developer gap, our state-of-the-art fuzzer was expensive in every dimension: the amount of compute it guzzled, the amount of developer effort required to integrate us, and the amount we needed to charge because of how hard it was to build. On top of all this, we were solving only one of many problems security teams needed to solve, and it was more important to them to have solutions for all of their problems than to solve a single one very well.
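
For flavor, here is a deliberately toy sketch of what spec-driven API fuzzing looks like in spirit: read an API spec, fire randomized requests at each endpoint, and flag server errors. To be clear, this is not our fuzzer, which was far more sophisticated than random inputs; every file and function name here is invented for illustration.

```python
# fuzz_sketch.py -- a hypothetical, heavily simplified illustration of
# spec-driven fuzzing. Names are invented; a real fuzzer does far more.
import random
import string
import sys

import requests  # pip install requests
import yaml      # pip install pyyaml


def random_value(schema):
    """Produce a random value for a (tiny subset of a) JSON schema."""
    t = schema.get("type", "string")
    if t == "integer":
        return random.randint(-1000, 1000)
    if t == "boolean":
        return random.choice([True, False])
    return "".join(random.choices(string.ascii_letters, k=8))


def fuzz(spec, base_url, trials=10):
    """Send randomized query parameters to every operation in the spec."""
    for path, item in spec.get("paths", {}).items():
        for method in ("get", "post", "put", "delete"):
            op = item.get(method)
            if op is None:
                continue
            for _ in range(trials):
                params = {
                    p["name"]: random_value(p.get("schema", {}))
                    for p in op.get("parameters", [])
                    if p.get("in") == "query"
                }
                resp = requests.request(method, base_url + path, params=params)
                if resp.status_code >= 500:  # flag server-side failures
                    print(f"{method.upper()} {path} {params} -> {resp.status_code}")


if __name__ == "__main__":
    # usage: python fuzz_sketch.py openapi.yml http://localhost:8080
    with open(sys.argv[1]) as f:
        fuzz(yaml.safe_load(f), sys.argv[2])
```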

As we worked on making our fuzzer less of a luxury good, we got lucky. Our fuzzer used API specs to automatically generate test traffic to APIs, but it turned out that developers generally do not have API specs. This led us to work on automatically generating OpenAPI specs by watching API traffic. When we rolled this out, we noticed that developers were using the specs for diffing across pull requests. When we dug into why, we learned that breaking changes were a huge issue for developers, a higher priority than data leaks, and that they believed our solution could help more than the source diffs and production monitoring they currently had. After we started advertising our tool for catching breaking changes, we went almost overnight from security teams dragging their feet to developers installing and using us immediately. (And for everybody who has been asking about the fuzzer: don’t worry! We’ve figured out how to make the fuzzer much more accessible. More about that soon.)
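
To make the diffing idea concrete, here is a minimal sketch of catching breaking changes from two OpenAPI specs, say the ones generated on a pull request’s base and head. This is a hypothetical illustration, not our implementation: a real checker also has to handle response schemas, required-ness, renames, and much more.

```python
# breaking_diff.py -- a hypothetical, simplified sketch of spec diffing.
# Flags two kinds of breaking change between two OpenAPI specs:
# removed endpoints, and removed or retyped query/path parameters.
import sys

import yaml  # pip install pyyaml


def endpoints(spec):
    """Map (path, method) -> operation for every operation in a spec."""
    return {
        (path, method): op
        for path, item in spec.get("paths", {}).items()
        for method, op in item.items()
        if method in {"get", "put", "post", "delete", "patch"}
    }


def param_types(op):
    """Map parameter name -> declared type for one operation."""
    return {
        p["name"]: p.get("schema", {}).get("type")
        for p in op.get("parameters", [])
    }


def breaking_changes(old_spec, new_spec):
    old, new = endpoints(old_spec), endpoints(new_spec)
    changes = []
    for key, old_op in old.items():
        if key not in new:
            changes.append(f"removed endpoint: {key[1].upper()} {key[0]}")
            continue
        old_params, new_params = param_types(old_op), param_types(new[key])
        for name, old_type in old_params.items():
            new_type = new_params.get(name)
            if name not in new_params:
                changes.append(f"removed parameter {name!r} on {key[0]}")
            elif new_type != old_type:
                changes.append(
                    f"parameter {name!r} on {key[0]} changed type: "
                    f"{old_type} -> {new_type}"
                )
    return changes


if __name__ == "__main__":
    # usage: python breaking_diff.py base.yml head.yml
    with open(sys.argv[1]) as f:
        base = yaml.safe_load(f)
    with open(sys.argv[2]) as f:
        head = yaml.safe_load(f)
    changes = breaking_changes(base, head)
    for change in changes:
        print(change)
    sys.exit(1 if changes else 0)
```

A nonzero exit status makes a check like this easy to wire into CI, which is roughly the pull-request workflow our users gravitated toward.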

An enlightening moment for me came when we asked one of our first users to stack-rank our new API spec features against the fuzzer-supported features. Even though we had a hefty contract for the fuzzer features, all of the new features came out on top. What he said was this: our API spec-based tooling didn’t give the same precision as the fuzzer, for sure, but it was so much easier to use, and the results were still helpful. Across the board, when we asked our users what they found most compelling, the five-minute install was at the top of the list. This solidified another lesson: a five-minute tool with 50% insights will be more popular than a five-day tool with 90% insights.

Another lesson we learned from this is that you don’t necessarily have to choose between basics and Balenciaga. It’s not that developers eschew fancy techniques, but they tend to only adopt “high fashion” programming tools that meet them where they are. That is, the tool solves a problem they have and doesn’t require too much money or effort to deploy. And just as collaborations like Isaac Mizrahi for Target have brought high fashion design to affordable basics, what we’ve been building at Akita still uses cutting-edge technical methods, but is now far easier to install and use. And, who knows: giving our users a taste of our “basics” might one day lead them back to our luxury items! 🧐

If you’re building something that’s never existed, build for iteration.

When I started Akita, there was some discussion of whether we should go into a cave for two years and emerge with The Product, or whether we should build iteratively with a growing set of users. The former is the standard for most developer infrastructure and language/compiler projects; the latter is the standard for many early-stage startups. Iteration is also necessary for getting a full picture of user constraints when the thing you’re building has never existed before.

Reconciling the usual timelines of infrastructure/languages projects with the rapid iteration characteristic of a startup became a key challenge for us. A typical programming tool may take months to years per iteration, of which there may be several, and then often takes years, if not decades, to reach mainstream adoption. We did not have this kind of time. To optimize our chances of aligning with mainstream engineering tastes, I spent months interviewing security teams and developers before bringing anybody else onto the team, and then spent months more doing in-depth user research as we built out our first prototypes. This is how we came to start out building the fuzzer. After building the fuzzer, every major insight in our product came from getting people to use a version of the product: the realization that specs alone were useful; the focus on breaking changes; smaller innovations about how we present API specs and diffs. In optimizing for insights, my team and I have learned to favor user learnings (for instance, decreasing onboarding friction) over guarantees, performance, and other technical metrics.

Note: even though everyone says you should iterate with users, I’ve come across very few examples of iterating on hard tech products while simultaneously doing principled user work. On the one hand, this makes sense. Not only is it difficult to mix user/product work with deeply technical work, but it’s often not rewarded in the highly technical environments that produce new languages and tools. On the other hand, there’s often a lot of usability lost in the creation of hard tech tools. I’d love to talk to people who are also thinking about this.

What’s next?

In 2020, we found our people and learned a lot about the pressing, everyday problems we can solve for them. These next few months, we’re excited to double down on catching breaking changes on every pull request, as the next step towards building the API graph. Look for an open-sourced CLI from us in 2021! 👯‍♀️

P.S. If this kind of tool sounds like it would be useful to you, try out our private beta.

P.P.S. If working like this sounds fun to you, come join us!

With thanks to Will Crichton, Kayvon Fatahalian, Quinn Wilton, Cole Schlesinger, and Nelson Elhage for edits.
