Jun 23, 2021 · 10 Min Read

Developer Experience: Stuck Between Abstraction and a Hard Place?

by Jean Yang

A few weeks ago, I tweeted about how there are really two different kinds of developer tools, ones that abstract over functionality and ones that reveal complexity:

My Twitter thread about abstraction tools vs. complexity-revealing tools.

The story behind the thread is this. Even though I’ve been working on developer tools for over a decade, it was only last year that I had two major revelations about developer experience. First, there are really two kinds of developer tools: abstraction tools and complexity-revealing tools. Second, it’s crucial to think about developer experience differently for these two categories.

This blog post is an elaboration of both revelations as they’re relevant to mainstream developers building web applications. I believe it’s important not just for developer tool creators to think about this distinction, but for users of dev tools as well. After all, it’s the users who determine relative popularity, and thus which tools live and die! ⚔️

But first, my life story

I’ve been told I need to slow down my writing to give it the feel of high quality, so let me first take you on a leisurely tour of my life.

First, I came into computers well before even user experience was A Thing, let alone developer experience. On my first computer, a Gateway 2000, I happily played DOS games and wrote line numbers in my BASIC programs. I was on the Internet as JavaScript was being created.

My undergraduate professor Radhika Nagpal's door.

Second, when I went to school for Computer Science, it was still somewhat credible that abstraction could solve all problems. My introductory computer science professor literally had an ABSTRACTION BARRIER on her door. My programming languages professor taught me to believe wholly in the power of using beautiful logic to hide all that is messy and error-prone.

When I decided to spend my life making programming less messy and error-prone, I believed I was going to do it by building the right abstractions.

The cracks in the abstraction

A crack in the space/time abstraction.

My belief in the power of abstraction led me to a PhD in programming languages. But this belief quickly started to crumble.

My life’s progression towards the Dark Side of Abstraction started with my main PhD project, for which I created a programming language for automatically enforcing privacy policies. The cool thing about my language, Jeeves, is that you could associate privacy policies directly with sensitive data values—and rely on the language runtime, rather than the programmer, to enforce them.

The first thing I decided to do with this programming language was build a web app, a conference management system. I called out to the database—and realized that abstraction was a lie. My language only enforced its guarantees in the jurisdiction of its own runtime. The guarantees did not extend to database calls!
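To make both halves of that story concrete, here’s a minimal sketch of the idea in plain Python. This is not actual Jeeves syntax; the `Sensitive` class, the policy function, and the conference-review example are hypothetical stand-ins for what the language runtime did automatically. The last few lines show exactly where the abstraction gave out: once a value crosses into the database, the policy no longer travels with it.

```python
# Illustrative sketch only -- not Jeeves syntax.
import sqlite3


class Sensitive:
    """A value bundled with the policy that decides who may see it."""

    def __init__(self, secret, public, policy):
        self.secret = secret    # the real value
        self.public = public    # what everyone else sees
        self.policy = policy    # context -> bool

    def show(self, context):
        # In a policy-enforcing language, the runtime does this check at
        # every output point; here we call it by hand.
        return self.secret if self.policy(context) else self.public


# A paper's author list is visible only to the program chair.
authors = Sensitive(
    secret="Alice, Bob",
    public="<anonymous>",
    policy=lambda ctx: ctx.get("role") == "chair",
)

print(authors.show({"role": "chair"}))     # -> Alice, Bob
print(authors.show({"role": "reviewer"}))  # -> <anonymous>

# ...but the guarantee ends at the runtime's edge. Writing the value to a
# database means picking one concrete string, and the policy does not follow
# it; anyone who reads the row later gets whatever was stored.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE papers (title TEXT, authors TEXT)")
db.execute("INSERT INTO papers VALUES (?, ?)", ("My Paper", authors.secret))
print(db.execute("SELECT authors FROM papers").fetchone()[0])  # -> Alice, Bob
```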

So I did what any PhD student would do in this situation: write the next paper solving the problem I had uncovered. This paper was about how to extend my privacy language across the application and database. But the damage had been done. At night when I closed my eyes, my mind would fill with all the ways to subvert language-level abstraction. Different kinds of databases. Foreign function calls. Remote procedure calls!! 🥀

After years of slow-motion existential crisis, I came to believe there was a bigger problem for me to solve than that of building any single abstraction. I became obsessed with what I now call the Software Heterogeneity Problem: the issue that any system of sufficient size and maturity will involve multiple languages and runtimes. This is what ultimately led me to what we’re doing with API-centric observability at Akita.

“Non-abstraction” tools

When we were getting started with Akita, we knew we weren’t building your typical abstraction tool. We weren’t building a new programming model. We weren’t building a new API. What we didn’t initially realize was that we weren’t building an abstraction tool at all.

A little context for the uninitiated: at Akita, we’re going after the problem of helping developers understand their API-based systems. The rise of software-as-a-service and APIs means that software has moved from being monoliths to entire ecosystems with components moving in and out. Time writing software has turned into time operating software. Much of the time previously spent with a traditional debugger is now spent divining monitoring graphs. We want to build one-click observability tools: tools that help developers quickly understand their systems—and how those systems change.

At first blush, the goal of “one-click” is certainly in line with abstraction tools. But what turns out not to be are many of the “observability” parts. At Akita, our approach works by passively monitoring network traffic, instead of by instrumenting code in any way. Once you turn on the monitoring, getting traces and adding new services is, in fact, one-click. But understanding traces is often not. While it’s possible to completely abstract away simple things like identifying egregious API practices, the bulk of what we’re doing—helping developers understand what their APIs are doing; helping developers identify breaking changes—is best done with some developer input.
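To see why the “understanding” half resists full automation, here’s a toy sketch of what passively watching API traffic gives you. This is not Akita’s implementation, and the traffic records and path normalization are made up; the point is that the mechanical summary is easy, while the judgment calls in the comments are not.

```python
# A toy summary of hypothetical captured API traffic.
from collections import defaultdict

# (method, path, status code, fields observed in the response body)
traffic = [
    ("GET",  "/users/42", 200, {"id", "name", "email"}),
    ("GET",  "/users/43", 200, {"id", "name"}),   # no "email": breaking change, or optional field?
    ("POST", "/users",    500, {"error"}),
    ("GET",  "/users/44", 404, {"error"}),
]

endpoints = defaultdict(lambda: {"statuses": defaultdict(int), "fields": set()})
for method, path, status, fields in traffic:
    # Naive normalization: treat numeric path segments as parameters.
    key = (method, "/".join("{id}" if seg.isdigit() else seg for seg in path.split("/")))
    endpoints[key]["statuses"][status] += 1
    endpoints[key]["fields"] |= fields

for (method, path), info in sorted(endpoints.items()):
    print(method, path, dict(info["statuses"]), sorted(info["fields"]))
```

Producing this kind of summary is genuinely one-click; deciding what it means for your system is where the developer comes back in.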

Realizing that we could not do away with developer input led me to an important revelation: we had been treating one-click observability as the problem of designing a weird kind of abstraction tool, when really it was a different kind of challenge, that of designing a complexity-revealing tool.

What happens when we favor abstraction tools

Before we continue, some definitions.

When I say abstraction, I’m starting from the Computer Science concept of making models that can be used and reused without having to rewrite things when certain implementation details change. Here I’m using the term “abstraction” a bit more loosely, to refer to tools that automate tasks away. Here are some examples of abstraction tools that fit my definition:

  • APIs like Stripe and Twilio. A web API fits the definition of functional abstraction! As complex as an API might be, it’s taking functionality and making it reusable. An API designer has complete control over the API, calls to which provide a clear specification of what to automate. (There’s a small sketch of this after the list.)
  • SaaS infrastructure tools like Netlify. With a software-as-a-service tool that becomes part of your tech stack, there’s a clearly defined interface of where the rest of your environment ends and where the tool begins. Again, it’s clear what these services are supposed to automate and what developers can set and forget.
  • Programming languages from GraphQL to Elm to Java. As complex as they are, programming languages also have a well-defined interface between the user and the tool—and a compiler or interpreter fully automates the running of your programs.
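As a concrete illustration of the first bullet, here’s what functional abstraction looks like from the caller’s side. The endpoint, payload fields, and API key below are hypothetical stand-ins, not any real provider’s API; the point is that everything behind that URL is the provider’s problem, not yours.

```python
# One function call stands in for carrier integrations, retries, and queuing.
# The endpoint and payload are hypothetical, not a real provider's API.
import json
import urllib.request


def send_sms(to: str, body: str) -> dict:
    req = urllib.request.Request(
        "https://api.example.com/v1/messages",   # hypothetical endpoint
        data=json.dumps({"to": to, "body": body}).encode(),
        headers={
            "Authorization": "Bearer TEST_KEY",  # placeholder credential
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```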

For abstraction tools, the tool designer has a lot of control in what the interface between the developer and the tool looks like. There’s a lot that’s possible to polish about developer experience with this level of control!

Complexity-revealing tools, on the other hand, aid developers in solving problems themselves by revealing the necessary information. The difference is that a complexity-revealing tool cannot automate the problem away, and must focus instead on giving the developer the appropriate information to solve the problem themselves. Here are some examples:

  • A debugger. A debugger shows you a stack trace; it shows you a call graph. It does not find your bug for you, but instead helps you find the bug by giving you the tools you need to explore a complex system. Sometimes I see people drink the Kool-Aid that a good type-checker obviates the need for a debugger. I have two words for you: Halting Problem. Any static checker is limited in what it can discern about your program!
  • A performance profiler. Performance profilers don’t make your program faster, but instead show you what’s slow so you can figure out where to focus your optimization efforts. Sure, people have spent a lot of time working on compilers, language runtimes, and JITs to automatically make your program faster, but clearly people still want and need to do work themselves. (There’s a small profiler example after this list.)
  • Observability tools are complexity-revealing tools! This should not come as a surprise, given that my entire life story earlier in this post led to this discovery. The myriad issues that can occur, and that matter, in our incidental distributed systems make it necessary for developers to have a hand in digging themselves out.
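Here’s the profiler bullet in miniature, using Python’s built-in cProfile. The slow function is contrived, but the division of labor is the point: the tool shows you where the time goes; swapping the list for a set is still your call.

```python
# cProfile reveals where the time goes; it does not make anything faster.
import cProfile


def slow_lookup(items, targets):
    # O(n * m) membership checks: the profiler will point here,
    # but it will not suggest "use a set".
    return [t for t in targets if t in items]


def main():
    items = list(range(50_000))
    targets = list(range(0, 100_000, 3))
    slow_lookup(items, targets)


cProfile.run("main()", sort="cumulative")
```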

For complexity-revealing tools, it’s significantly harder for the tool developer to determine whether the tool is working properly, unless they’re building solely for users like themselves, experiencing the same problems they do.

Unfortunately, when people think about and look for developer tools, they often have abstraction tools in mind. Here are some of the implications:

  • People want “silver bullets” in the form of abstraction tools. For many problems, it is possible to automate the problem away. But some problems (for instance, anomaly detection) require some user input and cannot be fully abstracted. (There’s a small sketch of why after this list.) When a tool developer treats a complexity-revealing problem like an abstraction problem, there’s a ceiling on how good the tool can be. And when a user believes that story, they suffer.
  • When an abstraction tool has complexity-embracing parts, everybody often looks the other way. We have a lot of language for things being “easy,” “one-click,” and “like magic.” In part because this is what’s associated with good developer experience and products that make money, tool creators and users alike often ignore the parts of products that are not like this. The non-automatable parts of tools become the elephants in the room, weighing down developer experience. Problems can only get fixed if you acknowledge them!
  • People often assume complexity-revealing tools to be expert tools. We live in a polarized world right now. Things are either “super easy” or for “the hard-core.” Complexity-revealing tools often fall into the second category. What I’ve seen happen often is that both tool creators and tool users assume a high learning curve. This mutual expectation limits the ultimate impact and usefulness of the tool.
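To make the anomaly-detection point from the first bullet concrete, here’s a deliberately tiny sketch. The latency numbers and the z-score rule are made up; what matters is that the same data is either “anomalous” or “fine” depending on a sensitivity knob that only the team running the service can set meaningfully.

```python
# The detector is trivially automatable; choosing the sensitivity is not.
from statistics import mean, stdev

latencies_ms = [102, 98, 110, 95, 105, 340, 101, 99]  # one slow request


def anomalies(samples, sensitivity):
    mu, sigma = mean(samples), stdev(samples)
    return [x for x in samples if abs(x - mu) > sensitivity * sigma]


print(anomalies(latencies_ms, sensitivity=1.5))  # [340]
print(anomalies(latencies_ms, sensitivity=3.0))  # []
```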

The result is that developer experience for complexity-revealing tools gets left behind.

Towards better developer experience for all tools

The fact that the terms for these two kinds of developer tools are a mouthful should be evidence enough that these are early days for talking through this distinction. I’d love to hear what you think about it and how we can improve our collective body of awareness and knowledge about complexity-embracing tools.

Moving forward, here are a few things that could improve developer experience for all tools, and not just abstraction tools:

  • Question the pursuit of the silver bullet. As I mentioned, not all problems have a one-click, full-abstraction kind of solution! Here’s a good rule of thumb for everyone: if you can’t describe how your typical developer should solve a problem, whether you’re the tool creator or the tool user, it’s probably not possible to have a silver-bullet, fully automated solution. Expecting to use a tool that requires some developer input will help save everybody from the pain of catastrophic disappointment.
  • Herald good examples of complexity-revealing tools. I see a lot of languages and APIs get held up as great examples of design. I also see people wondering why their debuggers and monitoring tools can’t provide the same experience. I would love to see more people realizing that they are holding all tools to the standard of full automation and appreciating good design in complexity-revealing tools when they see it!
  • Talk more explicitly about developer experience for complexity-embracing tools. A big challenge for complexity-revealing tools is the difficulty of validating that they work without testing on users, but I see very little conversation about this. I’d love to see people who are actually UX experts (not me 😊) talking more about this.

Something important to point out is that the best tools combine abstraction with revealing complexity. Even if you drive a low-maintenance car, it’s incredibly important to be able to peek under the hood if there’s a problem, without needing to go back to the dealership.

And, as I mentioned, I’ve been working on a very specific complexity-embracing tool. If what we’re doing sounds interesting, we’d love to have you join the private beta. 😉


Photo by Daniel Eledut on Unsplash.
