By Julie Pitt
If you have ever seen or written vague error handling code; if you’ve ever been frustrated by an unhelpful error message like “something went wrong”; if you’ve ever designed an API, this article is for you. I’ll begin with a short story that describes the problems caused by ambiguous failures in client/server protocols and then explore ways to address them.
Enter Application Developer
Say you’re an application developer. You’re writing this awesome app and everything’s going great. It looks very pretty, the UI is responsive and best of all, it’s easy to use. Now all you need is data. Chances are, you’re going to get it from someone else’s API, which invariably requires access to a network and data store of some kind. You’re not too familiar with this API, so you start with something like this (you know, just to try it out):
try {
    // call the API
} catch (Exception e) {
    // error gobbling sasquatch
    print("me want error. nom nom.")
}
That is utterly…un…awesome. You wonder, how can I give this error-gobbling sasquatch the precision of Wolverine, with his nifty retractable claws and whatnot? How can I make my application responsive and resilient so that my users like it? You are determined to do better, so you try again:
try {
    // call the API
} catch (SQLException s) {
    // hmm wait... what does SQLException mean?
} catch (IOException i) {
    // should I try again, or give up? Probably try again?
} catch (TimeoutException t) {
    // Retry. Definitely.
} catch (Exception e) {
    // uh...
    print("fail")
}
I guess that was a little better. At least now you have discrete code blocks that allow you to recover in different ways. It’s kinda like you taped some claws onto Frankenstein’s fists and told him to have at it.
Now say that the API has been updated with a new error condition called ServerBusyException. You probably want to retry like you would with a timeout, but without changing your code, the ServerBusyException falls into the sasquatch bucket. Nom nom. Worse yet, when you do change your code, you have to map both TimeoutException and ServerBusyException to the retry logic.
Can you do better? Not really. But not to worry; I am here to tell you that it is not your fault. In fact, I would point the finger at the API designer. Whoever designed this API did not properly separate two very different concerns:
- Alleviate the pain
- Gain insight
As the application developer, you should only have to care about the first one. The API designer needs to worry about both.
Alleviate the pain
Alleviating pain means taking action. When you frame it this way, understanding exactly what went wrong is not a prerequisite to handling failures. Another way to look at it is that there is really only a discrete set of possible actions that an application will take to recover from failure. The goal of the API designer is to explicitly define those actions and enumerate them in the contract.
Let’s go back and look at the errors you had to catch in the last section:
SQLException
IOException
TimeoutException
ServerBusyException
How can we make these actionable? The first step is to map them onto specific actions the client application should take:
SQLException -> DoNothing
IOException -> Retry
TimeoutException -> Retry
ServerBusyException -> Retry
We call these action codes, which we can now enumerate:
enum ActionCode {
    Retry,
    DoNothing
}
Generally, any error that is due to some transient failure in the service should be acted upon by retrying the same request using a well-defined retry policy. On the other hand, if there is a bug in the client (e.g., corrupt data or a malformed request), the action taken should be to never try that request again. It is a good idea to limit the number of action codes to the smallest set of recovery scenarios that will lead to a resilient and responsive application.
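To make "a well-defined retry policy" concrete, here is a minimal sketch in plain Java of one such policy: bounded attempts with exponential backoff. The class name, attempt cap, and delays are illustrative assumptions, not part of any API contract:

import java.util.concurrent.Callable;

// A sketch of one possible retry policy: up to maxAttempts tries with
// exponential backoff between them. How "retryable" is decided is shown
// later, once errors carry an action code.
class RetryPolicy {
    private final int maxAttempts = 3;          // illustrative cap
    private final long initialBackoffMs = 100;  // illustrative first delay

    <T> T execute(Callable<T> operation) throws Exception {
        long backoffMs = initialBackoffMs;
        for (int attempt = 1; ; attempt++) {
            try {
                return operation.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts) {
                    throw e;  // out of attempts: surface the failure to the caller
                }
                Thread.sleep(backoffMs);
                backoffMs *= 2;  // back off a little longer before the next attempt
            }
        }
    }
}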
Once the action codes are defined, we wrap the errors into a generic exception that conveys both which action to take, and detailed information about the failure:
class MyAppException extends Exception {
    ActionCode actionCode

    // We'll get to this one a little later:
    MyAppError error
}
The client code then becomes:
try {
    // call the API
} catch (MyAppException e) {
    if (e.actionCode == Retry) {
        // do a retry!
    } else if (e.actionCode == DoNothing) {
        // do nothing!
    }
    // Here you would want to log what the action and error are
    logger.error(e)
}
Notice how this code completely ignores WHAT went wrong, aside from recording the particulars of the failure in logs and/or metrics. What it does care about is the actionCode field, which it uses to determine the course of action to take. I wrote this example using pseudocode that looks like Java, but there is no reason why you could not model MyAppException in JSON as part of a REST API.
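For instance, a REST flavor of the same idea might return a failure body like the sketch below. The field names mirror the classes in this article, but the values and exact schema are illustrative only:

{
    "actionCode": "Retry",
    "error": {
        "name": "ConnectionPoolExhausted",
        "errorCode": 1042,
        "description": "No free connections to the backing data store were available"
    }
}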
This model has several properties that are worth noting:
- The API designer is free to add as many MyAppError types as he wants, without breaking client applications. To maintain this property, the client code must never act upon or interpret any information in MyAppError.
- The client application only needs to handle each type of action code once. There is no longer a need to figure out which exceptions can be thrown and handle them in multiple places.
- Multiple client applications may implement the same API with consistent and unambiguous failure handling logic. This reduces maintenance costs for service maintainers.
- Action codes are extensible, provided the API is properly versioned. For example, you could introduce one called RenewAuthentication to indicate that a user must be prompted for her username and password. Each new action code is a change to the API contract and requires changes in the client code. Luckily in practice, such changes are infrequent once the initial API stabilizes.
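To illustrate the last point, here is a hedged sketch, in the same pseudocode style as above, of what a new action code might look like once the API is versioned to include it; the prompting logic is a placeholder, not a prescription:

enum ActionCode {
    Retry,
    DoNothing,
    RenewAuthentication  // added in a new, properly versioned revision of the API
}

try {
    // call the API
} catch (MyAppException e) {
    if (e.actionCode == Retry) {
        // retry using the well-defined retry policy
    } else if (e.actionCode == DoNothing) {
        // never resend this request
    } else if (e.actionCode == RenewAuthentication) {
        // prompt the user for credentials, then retry the original request
    }
    logger.error(e)
}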
Now that we have a model for conveying actions in our API, why not dispense with error types altogether? Unfortunately, your code will have bugs and you’ll need enough insight to detect and fix them.
Gain insight
Detailed error types are the mechanism for understanding what is happening in the application and debugging when something unexpected happens. Remember that action code called DoNothing? Unless we have fine-grained error types, there is no way we can chip away at precisely what the underlying causes are. Thus, the API designer should add as many error types as necessary to understand the failures in the application.
Let’s take a look at what you might want to put into MyAppError, in order to understand failures:
class MyAppError {
    // An easily distinguishable, unique name for the error that is also human-readable.
    // This is what you would use in the name of a counter, for example.
    String name

    // Helpful to put into a diagnostic screen, in case customers need to tell customer service
    Integer errorCode

    // Human-readable description of the problem, for developers (not user visible)
    String description

    // The exception that caused this error
    Exception cause
}
This is only one possible representation. The point is that as the API designer you can create a rich model of errors with enough metadata that you can tell what is going on in your application and debug if there is a problem. The consuming code should log and collect metrics on these errors to expose both specific failures and aggregates.
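As one possible shape for that, here is a sketch, continuing the pseudocode from above, of consuming code that logs each error and bumps counters keyed by the error's name; the metrics object is a made-up placeholder rather than any specific library:

catch (MyAppException e) {
    // act on e.actionCode as in the earlier example, then record what happened
    logger.error("API call failed: " + e.error.name + " - " + e.error.description, e.error.cause)
    metrics.increment("api.errors." + e.error.name)  // counter for this specific failure
    metrics.increment("api.errors.total")            // aggregate counter across all failures
}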
Summary
To keep your error handling code pumped full of adamantium, consider the following:
- Design your application to be resilient and responsive by enumerating specific actions it will take in response to failure
- Include each such action in your API specification
- Separate actions from error metadata. Do not act upon error metadata.
- Log and collect metrics on the error metadata so that diagnosis is possible after the fact.
Stay tuned for the sequel, which will discuss protocol layering and failure.