Anticipating evolution

API designers need to resolve an apparent paradox: how to keep APIs virtually unchanged while responding to ever-changing customer requirements. This is a more intricate skill than simply applying specific API evolution techniques; it is comparable to a chess master's ability to anticipate several upcoming moves of a game. Like beginners at chess, we start by learning the specific API evolution techniques, but we become true experts only when we can plan ahead for at least a couple of API releases. Mastering this skill makes us far more likely to design long-lasting, successful APIs.

Let’s start with the fundamental rule of API evolution: existing clients must work with a new product release without any changes, not even a recompilation. While breaking changes can be tolerated in internal code, they are prohibited in public APIs. We must either limit ourselves to binary compatible changes or keep the old API unchanged while introducing a new API in parallel, a method called API versioning.

Maintaining backwards compatibility

Backwards compatible changes are preferable because clients can upgrade smoothly and without any human intervention, taking advantage of new features at their convenience. Conversely, API versioning demands an explicit decision to upgrade because code changes are required. Clients frequently choose to defer upgrades, requiring a long period of support for multiple API versions. We should plan to evolve APIs primarily through backwards compatible changes. We should avoid API versioning if possible.

Anticipating evolution means choosing designs that allow the largest number of backwards compatible changes. For example, C++ developers know that adding a field to a C++ class changes its size and breaks binary compatibility with client code, into which the compiler hard-coded the class size. Similarly, adding a virtual method modifies the virtual method table, causing clients to call the wrong virtual functions (see Listing 1). Because the need for new fields and methods is likely to arise, smart designers move all fields and virtual methods into a hidden implementation class (see Listing 3), leaving only public methods and a single private pointer in the public class (see Listing 2):

Listing 1: Original API class design is hard to evolve

#include <vector>  //exposed direct dependency on STL
#include "Node.h"  //exposed implementation class Node
class OriginalClass {

public:
	int PublicMethod(...);

protected:
	std::vector<Node> children; 

	// Adding a field modifies the size, breaks compatibility
	int count; 

	// Adding a method modifies the vtable, breaks compatibility
	virtual void ProtectedMethod(...);
};

Listing 2: New API class design using the Façade pattern

class ImplementationClass; //declares unknown implementation class

class FacadeClass {

public:
	int PublicMethod(...); 

private:
	ImplementationClass *implementation; //size of a pointer
};

Listing 3: The implementation details are never exposed to the client

#include <vector>  //OK, client code never includes it
#include "Node.h"  //OK, client code never includes it

class ImplementationClass {

public:
	int PublicMethod(...);

protected:
	std::vector<Node> children; 

	//OK, the client never instantiates it directly
	int count; 

	//OK, the client has no direct accesses to the vtable
	virtual void ProtectedMethod(...);
};

Which changes are binary compatible differs from platform to platform. Adding a private field or a virtual method is a breaking change in C++, but a backwards compatible change in Java. As one of our teams recently discovered, extending SOAP Web Services by adding an optional field is a compatible change in JAX-WS (Java) but a breaking change in .Net. Providing lists of compatible changes for each platform is outside the scope of this document; this information can be found on the Internet. For example, the Java Language Specification states the binary compatibility requirements and Eclipse.org gives practical advice on maintaining binary compatibility in Java. The KDE TechBase is a good starting point for developers interested in C++ binary compatibility.

While we are comparing platforms, we should mention that standard C is preferable to C++ for API development. Unlike C, C++ does not have a standard Application Binary Interface (ABI). As a result, evolving multi-platform C++ APIs while maintaining binary compatibility can be particularly challenging.

Keeping APIs small and hiding implementation details help maintain backwards compatibility. The less we expose to the clients, the better. Unfortunately, compatibility requirements also extend to implementation details inadvertently leaked into the API. If this happens, we cannot modify the implementation without using API versioning. Carefully hiding implementation details prevents this problem.

We can break backwards compatibility (without modifying method signatures) by changing the behavior. For example, if a method always returned a valid object and it is modified so that it may also return null, we can reasonably expect that some clients will fail. Maintaining the functional compatibility of APIs is a crucial requirement, one that requires even more care and planning than maintaining binary compatibility.

The only backwards compatible behavior changes are weakened preconditions or strengthened postconditions. Think of it as a contractual agreement. Preconditions specify what we ask from the client. We may ask for less, but not more. Postconditions specify what we agreed to provide. We may provide more, but not less. For example, exceptions are preconditions (we expect clients to handle them). It is not allowed to throw new exceptions from existing methods. If a method is an accessor, a part of its postcondition is a guarantee that the method does not change internal state. We cannot convert accessors into mutators without breaking the clients. The invariant is part of the method’s postcondition and should only be strengthened.
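To make the contract rules concrete, here is a minimal Java sketch of a backwards compatible evolution (ReportService and Report are hypothetical types invented for this illustration):

class Report {
   private final String name;
   Report(String name) { this.name = name; }
   String getName() { return name; }
}

class ReportService {

   //Version 1 precondition: name must not be null.
   //Version 2 weakens it: null is now accepted and means "untitled".
   //Existing callers, who never passed null, are unaffected.
   Report createReport(String name) {
      return new Report(name == null ? "untitled" : name);
   }

   //Version 1 postcondition: returns a possibly approximate count.
   //Version 2 strengthens it: the count is now guaranteed to be exact.
   //Again, existing callers continue to work unchanged.
   int getReportCount() {
      return 0; //implementation elided
   }
}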

API behavior changes are likely to go undetected since developers working with implementation code often do not realize the full impact of their modifications. When we talked about specifying behavior, we already noted the importance of explicitly stating the preconditions, postconditions and invariants, as well as providing automated tests for detecting inadvertent modifications. Now we see that those same practices also help maintain functional compatibility as the API evolves.

SPIs (Service Provider Interfaces) evolve quite differently from APIs because responsibilities of the client and the SPI implementation are often reversed. APIs provide functionality to clients, while SPIs define frameworks into which clients integrate. Clients usually call methods defined in APIs, but often implement methods defined in SPIs. We can add a new method to an interface without breaking APIs, but not without breaking SPIs. The way pre- and postconditions can evolve is often reversed in SPIs. We can strengthen preconditions (this is what the SPI guarantees) and weaken postconditions (this is what we ask from the client to provide) without breaking clients. The differences between APIs and SPIs are not always clear. Adding simple callback interfaces will not turn APIs into SPIs, but callbacks evolve like SPI interfaces.
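The asymmetry is easy to demonstrate in Java (the interface names below are invented for this sketch):

//API case: clients only call this interface; the library provides the
//single implementation, so adding a method breaks no client code.
interface TagStore {
   void save(String tag);
   //String findByTag(String tag); //a safe addition in an API
}

//SPI case: clients implement this interface. Adding a method breaks
//every existing implementation: it no longer compiles, or it fails at
//run time with AbstractMethodError when the new method is called.
interface PluginLifecycle {
   void onStartup();
   //void onShutdown(); //a breaking addition in an SPI
}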

Surprisingly, we need to worry less about source compatibility, which requires that clients compile without code changes. While binary and source compatibility do not fully overlap, all but a few binary compatible changes are also source compatible. Examples of exceptions are adding a class to a package or a method to a class in Java. These are binary compatible changes, but if the client imports the whole package and also references a class with the same name from another package, compilation fails due to a name collision. If a derived class declares a method with the same name as a method added to the base class, we have a similar problem. Source incompatibility issues like these are rare in binary compatible APIs, and fixing them requires only minor changes in client code.

If we focus too much on source compatibility, we increase the risk of breaking binary compatibility, since not all source compatible changes are binary compatible. For example, if we change a parameter type from HashMap (derived type) to Map (base type), the client code still compiles. However, when attempting to run an old client, the Java runtime looks for the old method signature and cannot find it. The risk of breaking binary compatibility is real because during their day-to-day work, developers are more concerned about breaking the build than about maintaining binary compatibility.
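A sketch of the scenario (the Directory class and lookup method are invented names):

import java.util.HashMap;
import java.util.Map;

class Directory {
   //Version 1: old client binaries reference the exact descriptor
   //lookup(Ljava/util/HashMap;)V in their compiled .class files.
   //public void lookup(HashMap<String, String> options) {...}

   //Version 2: all existing call sites still compile (source
   //compatible), but old binaries fail with NoSuchMethodError because
   //the method signature they were linked against no longer exists.
   public void lookup(Map<String, String> options) {
      //implementation elided
   }
}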

Versioning

API versioning cannot be completely avoided. Some unanticipated requirements are impossible to implement using backwards compatible changes. Software technologies we depend on do not always evolve in a backwards compatible fashion (just ask any Visual Basic developer). API quality may also degrade over time if our design choices are restricted to backwards compatible changes. From time to time, we need to make major changes in order to upgrade, restructure, or improve APIs. Versioning is a legitimate method of evolving APIs, but it needs to be used sparingly since it demands more work from both clients and API developers.

Anticipating evolution in the case of explicit versioning means ensuring that an incompatible API version is also a major API version. We should deliberately plan for it to avoid being forced by unexpected compatibility issues. The upgrade effort must be made worthwhile for clients by including valuable new functionality. We should also use this opportunity to make all breaking changes needed to ensure smooth backwards compatible evolution over the several following releases.

API versions must coexist at runtime. How we accomplish this is platform-dependent. Where available, we should use the built-in versioning capabilities; .Net assemblies have them and so does OSGi in Java, although OSGi is not officially part of the Java platform. If there is no built-in versioning support, the two API versions should reside in different namespaces, to permit the same type and method names in both versions. The old version keeps the original namespace while the new version has a namespace with an added version identifier. The API versions should also be packaged into separate dynamic link libraries, assemblies, or archives. Since C does not support namespaces, separate DLLs are needed to keep the same method names. We should make sure we change the service end point (URL) when versioning Web Services APIs, since all traffic goes through the same HTTP port. We should also change the XML namespace used in the WSDL. This ensures that client stubs generated from different WSDL versions can coexist with each other, each in its namespace.
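In Java, for instance, the parallel versions might live in packages like these (package and type names are invented for illustration):

//File 1 - the old version keeps the original namespace:
package com.example.search;
public interface QueryService { /* original, frozen contract */ }

//File 2 - the new version adds a version identifier:
package com.example.search.v2;
public interface QueryService { /* new, incompatible contract */ }

//Both can be loaded side by side because their fully qualified names
//(and, ideally, their archives) differ.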

It is often advantageous to re-implement the old API version using the new one. Keeping two distinct implementations means code bloat and increased maintenance effort for years. If the new API version is functionally equivalent to the old one, implementing a thin adaptor layer should not require much coding and testing. As an added benefit, the old API can take advantage of some of the improvements in the new code, such as bug fixes and performance optimizations.
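A sketch of such an adaptor layer (all type names hypothetical):

//The frozen v1 class is re-implemented as a thin adaptor over the v2
//code, so v1 callers automatically benefit from v2 bug fixes.
public class DocumentStore {                    //the original v1 API
   private final com.example.store.v2.DocumentStore delegate =
      new com.example.store.v2.DocumentStore();

   //v1 took a plain path string; v2 uses a typed DocumentId
   public byte[] read(String path) {
      return delegate.read(
         com.example.store.v2.DocumentId.fromPath(path));
   }
}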

Conclusion

Designing for evolution can be challenging and time consuming. It adds constraints to API design which frequently conflict with other design requirements. It is essentially a “pay now versus pay later” choice. We can spend some effort up front designing easy-to-evolve APIs or we can spend more effort later when we need to evolve the API. Nobody can reasonably predict how an API is likely to evolve; hence nobody can claim with authority that one approach is better than the other. It is thought provoking, however, that nobody has yet come forward saying they regretted making APIs easier to evolve.

Creative Commons Licence
This work is licensed under a Creative Commons Attribution-ShareAlike 2.5 Canada License.

Making it safe

Being safe means avoiding the risk of pain, injury, or material loss. A safety feature is a design element added to prevent inadvertent misuse of dangerous equipment. For example, one pin of the North American electric plug is intentionally wider to prevent incorrect insertion into a socket. But it was Toyota who first generalized the principle of poka-yoke (“mistake avoidance”), making it an essential part of its world-renowned manufacturing process. When similar principles of preventing, avoiding, or correcting human errors are applied to API design, the number of software defects is reduced and programmer productivity improves. Rico Mariani calls this the “pit of success”:

The Pit of Success: in stark contrast to a summit, a peak, or a journey across a desert to find victory through many trials and surprises, we want our customers to simply fall into winning practices by using our platform and frameworks. To the extent that we make it easy to get into trouble we fail.

Preventing unsafe use

Engineers place all dangerous equipment and materials – high voltage, extreme temperature, or poisonous chemicals – safely behind locked doors or inside sealed casings. Programming languages offer controlled access to classes and methods, but time and again we forget to utilize it. We leave public implementation classes in the API package. We forget to declare methods users shouldn’t call as private. We rarely disallow class construction, and seldom declare classes we don’t want callers to extend as final. We declare public interfaces even when we cannot safely accept implementations other than our own. These oversights are the equivalent of leaving the boiler room unlocked. When inadvertent access to implementation details is possible, accidents are likely to happen.

Our next line of defense is type checking. In a nutshell, type checking attempts to catch programming mistakes at the language level, either at compile time in statically typed languages, or at run time in dynamically typed languages. If you are interested in the details of what type checking can and cannot do for you in various languages, you should read Chris Smith's excellent “What to know before debating type systems”. For various theoretical and practical reasons, type checking cannot catch all usage errors. It would be ideal if every statically typed API call which compiles executed safely, but present-day compilers are just not sophisticated enough to make this a reality. However, this does not mean that we should not take advantage of type checks where we can. We may be stating the obvious, yet we often see APIs which are not as type safe as they could be. The ObjectOutputStream class from the Java I/O library declares the

public final void writeObject(Object obj) throws IOException

method which throws an exception if the argument is not Serializable. The alternative method signature

public final void writeObject(Serializable obj) throws IOException

could turn this runtime verification into a compile time check.

Whenever a method works only for a small subset of all possible parameter values, we can make it safer by introducing a more restrictive (read: safer) parameter type. String, integer, and map parameters especially deserve close examination, because we often use these versatile types unsafely. We take advantage of the fact that practically every other type can be converted into a string or represented as a map, and integers can be many more things than just numbers. This may be reasonable or even necessary in implementation code, where we often need to call low-level library functions and where we control both caller and callee. APIs are, yet again, special. API safety is very important and we need to weigh design trade-offs accordingly.

When evaluating design trade-offs, it helps to understand that we are advocating replacing method preconditions with type invariants. This moves all safety-related program logic into a single location, the new type implementation, and relies on automatic type checking to ensure API safety everywhere else. If it removes strong and complex preconditions from multiple methods, it is more likely to be worth the effort and additional complexity. For example, we recommend passing URLs as URL objects instead of strings. Many programming languages offer a built-in URL type precisely because the rules governing which strings are valid URLs are complicated. The obvious trade-off is that callers need to construct a URL object when the URL is available as a string.
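A small sketch of this trade-off in Java, using the standard java.net.URI type (the Browser class is invented for illustration):

import java.net.URI;
import java.net.URISyntaxException;

class Browser {
   //The safer signature: malformed URLs are rejected before this
   //method is ever entered, so it needs no string-parsing logic.
   void open(URI url) {
      //implementation elided
   }

   //The cost appears at the call site when only a string is available:
   void openFromText(String text) throws URISyntaxException {
      open(new URI(text)); //validation happens here, exactly once
   }
}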

Weighing type safety against complexity is a lot like comparing apples and oranges: we must rely on our intuition, use common sense, and get lots of user feedback. It is worth remembering that API complexity is measured from the perspective of the caller. It is difficult to tell how much the introduction of a custom type increases complexity without writing code for the use cases. Some use cases may become more complex while others may stay the same or even become simpler. In the case of the URL object, handling string URLs is more complex, but returning to a previously visited URL is roughly the same if we keep URL objects in the history list. Using URL objects results in simpler use cases for clients that build URLs from fragments or validate URLs independently from accessing the resource they refer to.

As a third and final line of defense – since type checking alone cannot always guarantee safe execution – all remaining preconditions need to be verified at run time. Very rarely, performance considerations may dictate that we forgo such runtime checks in low-level APIs, but such cases are the exception. In most cases, returning incorrect results, failing with obscure internal errors, or corrupting persisted data is unacceptable API behavior. Errors resulting from incorrect usage (violated preconditions) should be clearly differentiated from those caused by internal problems and should contain messages clearly describing the mistake made by the caller. Reporting that a call caused an internal SQL error is not a helpful error message.
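For example, a precondition check that names the caller's mistake might look like this (a sketch; the method and parameter names are invented):

void schedule(Job job, int priority) {
   if (job == null) {
      throw new IllegalArgumentException("job must not be null");
   }
   if (priority < 1 || priority > 10) {
      throw new IllegalArgumentException("priority was " + priority
         + " but must be between 1 and 10");
   }
   //... proceed knowing the preconditions hold
}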

We should be particularly careful when providing classes for extension because inheritance breaks encapsulation. What does this mean? Protected methods are not a problem. Their safety can be ensured the same way as for public methods. Much bigger issues arise when we allow derived classes to override methods. Overriding is risky because derived classes may observe inconsistent state from within the methods they override (known as the “fragile base class problem”) or may make inconsistent updates (known as the “broken contract problem”). In other words, calling otherwise safe public or protected methods from within overridden methods may be unsafe. There is no language mechanism to prevent access to public and protected methods from within overridden methods, so we often need to add additional runtime checks, as illustrated below:

public class Job {

   private boolean cancelling = false;

   public void cancel() {
      ...
      cancelling = true;
      onCancel();
      cancelling = false;
      ...
   }

   //Override this to provide custom cleanup when cancelling
   protected void onCancel() {
   }

   public void execute() {
      if (cancelling) throw new IllegalStateException(
         "Forbidden call to execute() from onCancel()");
      ...
   }
}

It is generally safer to avoid designing for class extension where possible. Unfortunately, simple callbacks may expose similar safety issues, though only public methods are accessible from callbacks. In the example above, the runtime check is still needed after we make onCancel() a callback, since execute() is a public method.

Preventing data corruption

A method can only be considered safe if it preserves the invariant and prevents the caller from making inconsistent changes to internal data. The importance of preserving invariants cannot be overstated. Not long ago, a customer who used the LDAP interface to update their ADS directory reported an issue with one of our products. Occasionally the application became sluggish and consumed a lot of CPU cycles for no apparent reason. After lengthy investigations, we discovered that the customer had inadvertently corrupted the directory by making an ADS group a child of itself. We fixed the issue by adding specific runtime checks to our application, but wouldn’t it be safer if the LDAP API didn’t allow you to corrupt the directory in the first place? The Windows administration tools don’t allow this, but since the LDAP interface does, applications still need to watch out for infinite recursions in the group hierarchy.

The invariant must be preserved even when methods fail. In the absence of explicit transaction support, all API calls are assumed atomic. When a call fails, no noticeable side effects are expected.
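A common way to approximate this atomicity is to validate everything before mutating anything, as in this sketch (the class and method names are invented):

import java.util.HashMap;
import java.util.Map;

class JobParameters {
   private final Map<String, String> parameters = new HashMap<>();

   void addAll(Map<String, String> newParameters) {
      //Validate first: if any entry is bad, fail before touching our
      //state, leaving the object exactly as it was (no side effects).
      for (Map.Entry<String, String> e : newParameters.entrySet()) {
         if (e.getKey() == null || e.getValue() == null) {
            throw new IllegalArgumentException("null key or value");
         }
      }
      //Only now apply the change.
      parameters.putAll(newParameters);
   }
}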

Special care must be taken when storing references to client-side objects internally, as well as when returning internal object references to the client. The client code can unexpectedly modify these objects at any time, creating an invisible and particularly unsafe dependency between the client code (which we know nothing about) and the internal API implementation (which the client knows nothing about). On the other hand, it is safe to store and return references to immutable objects.

If the object is mutable, it is a great deal safer to make defensive copies before storing or returning it rather than relying on the caller to do it for us. The submit() method in the example below makes defensive copies of jobs before placing them into its asynchronous execution queue, which makes it hard to misuse:

JobManager    jobManager  = ...; //initializing
Job           job = jobManager.createJob(new QueryJob());      

//adding parameters to the job
job.addParameter("query.sql", "select * from users");
job.addParameter("query.dal.connection", "hr_db");      

jobManager.submit(job); //submitting a COPY of the job to the queue      

job.addParameter("query.sql", "select * from locations"); //it is safe!
jobManager.submit(job); //submitting a SECOND job!

For the same reason, we should also avoid methods with “out” or “in-out” parameters in APIs, since they directly modify objects declared in client code. Such parameters frequently force the caller to make defensive copies of the objects prior to the method call. The .Net Socket.Select() method usage pattern shown below frustrated Michi Henning enough to complain about it in his article “API Design Matters”:

ArrayList readList = ...;   // Creating sockets to monitor for reading
ArrayList writeList = ...;  // Creating sockets to monitor for writing
ArrayList errorList;        // Sockets to monitor for errors

while (!done) {

    ArrayList readReady  = (ArrayList)readList.Clone();  //making defensive copy
    ArrayList writeReady = (ArrayList)writeList.Clone(); //making defensive copy
    errorList            = (ArrayList)readList.Clone();  //making defensive copy

    Socket.Select(readReady, writeReady, errorList, 10000);
         // readReady, writeReady, and errorList were modified in place!
    ...
}

Finally, APIs should be safe to use in multi-threaded code. Sidestepping the issue with a “this API is not thread safe” comment is no longer acceptable. APIs should be either fully re-entrant (all public methods are safe to call from multiple threads), or each thread should be able to construct its own instances to call. Making all methods thread safe may not be the best option if the API maintains state because deadlocks and race conditions are often difficult to avoid. In addition, performance may be reduced waiting for access to shared data. A combination of re-entrant methods and individual object instances may be needed for larger APIs, as exemplified by the Java Messaging Service (JMS) API, where ConnectionFactory and Connection support concurrent access, while Session does not.
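In JMS terms, that division of labor might look like this minimal sketch (error handling abbreviated):

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Session;

//The Connection is documented as thread safe and is shared by all
//threads; Session is not thread safe, so each worker creates its own.
class Worker implements Runnable {
   private final Connection sharedConnection;

   Worker(Connection sharedConnection) {
      this.sharedConnection = sharedConnection;
   }

   public void run() {
      try {
         Session session =
            sharedConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
         //... produce or consume messages using this thread's session
      } catch (JMSException e) {
         //... handle or report the failure
      }
   }
}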

Conclusion

Safety has long been neglected in programming in favor of expressive power and performance. Programmers were considered professionals, expected to be competent enough to avoid traps and smart enough to figure out the causes of obscure failures. Programming languages like C or C++ are inherently unsafe because they permit direct memory access. Any C API call – no matter how carefully designed – may fail if memory is corrupted. However, the popularity and wide-scale adoption of Java and .Net clearly signal a change. It appears that developers are demanding safer programming environments. Let’s join this emerging trend by making our APIs safer to use!

Creative Commons Licence
This work is licensed under a Creative Commons Attribution-ShareAlike 2.5 Canada License.

Specifying behavior

In the paper “Six Learning Barriers in End-User Programming Systems”, Andrew J. Ko and his colleagues show that programmers make numerous assumptions when working with unfamiliar APIs, over three-quarters of them about API behavior. While programmers can directly examine type definitions and method signatures, they need to infer behavior from method and parameter names. It is not entirely surprising that many such assumptions turn out to be incorrect. Ko’s paper documents a total of 130 cases when programmers failed to complete the assigned task. In 36 of those cases, the programmers did not succeed in making the API call at all. In a further 38 cases, they were unable to understand why the call behaved differently than expected and what to do to correct it. In another 25 cases, they were unable to successfully combine two or more method calls to solve the problem.

Why self-documenting APIs are rare

Under-specified behavior causes serious usability issues in numerous APIs. Many developers honestly believe in self-documenting APIs, but as we will show, fully self-documenting APIs are an ideal towards which we should aim, rather than a result we can realistically expect to achieve. Despite our very best efforts, subtle and unintuitive behavior is present in most APIs.

Even in the seemingly clear-cut cases, figuring out the precise behavior without additional help can be unexpectedly daunting. Take the TeamsIdentifier class shown below as an example:

//Uniquely identifies an entity.
class TeamsIdentifier {

   //Constructs an identifier from a string.
   TeamsIdentifier(String id) {...}

   //Returns the id as a String.
   java.lang.String asString() {...}

   //Convenience method to return this id as an array.
   TeamsIdentifier[] asTeamsIdArray() {...}

   // Returns a copy of the object.
   java.lang.Object clone() {...}

   //Checks if two ids are equal.
   boolean equalsId(TeamsIdentifier id) {...}

   // Intended for Hibernate use only.
   java.lang.String getTeamsId() {...}

   boolean equals(java.lang.Object o) {...}
   int hashCode() {...}
   void setTeamsId(java.lang.String id) {...}

   //Returns a string representation of the id.
   java.lang.String toString() {...}
}

It looks straightforward enough, you say. Let’s see if you can answer, in total confidence, the following questions:

Expression                                                True or False?

TeamsIdentifier id1 = new TeamsIdentifier("name");
TeamsIdentifier id2 = new TeamsIdentifier("Name");

id1.equals(id2)                                                 ?
id1.equalsId(id2)                                               ?
id1.toString().equals("name")                                   ?
id1.getTeamsId().equals("name")                                 ?

TeamsIdentifier id = new TeamsIdentifier("a.b.c");
id.asTeamsIdArray().length == 3                                 ?

TeamsIdentifier id = new TeamsIdentifier("a:b:c");
id.asTeamsIdArray().length == 3                                 ?

Knowing that AssetIdentifier and UserIdentifier both extend TeamsIdentifier, can you answer, again in total confidence, the questions below?

Expression                                                True or False?

AssetIdentifier assetId = new AssetIdentifier("Donald");
UserIdentifier userId = new UserIdentifier("Donald");

assetId.equals(userId)                                          ?
assetId.equalsId(userId)                                        ?
assetId.toString().equals(userId.getTeamsId())                  ?

Of course, we can make sensible assumptions about what the correct behavior should be, but we have to honestly admit that we don’t really know. For that we either need to test the API or look at the implementation. Looking at the implementation is rarely a practical option. Learning by trial and error is time consuming and it doesn’t tell us which observed behavior is by design as opposed to merely accidental. For example, if we get the same AssetIdentifier object back every time, we might incorrectly assume that we can write id1 == id2 instead of id1.equals(id2). Our program works correctly only until the next version of the API comes out.

We provide a huge service to our users when we remove guesswork from API usage by properly documenting behavior.

Using code for specifying behavior

Code is more concise and precise than words. It is difficult to think of a good reason not to use code for specifying API behavior. We are documenting for developers, who should welcome, and have no problem understanding, code. The above tables document the behavior of TeamsIdentifier and its derived classes once we enter the appropriate True or False values into the second column. You probably noticed that the code in the first column is similar to what we would write for unit tests. In the case of APIs, unit tests are twice as useful because they also document the expected behavior. Some developers call these code snippets assertions, while those familiar with the work of Professor Bertrand Meyer call this particular method of specifying behavior Design by Contract. Starting with version 4.0, the .Net Framework natively supports design by contract, while third-party tools exist for many other programming languages.

No matter what we call it or what tool we use, we should precisely specify API behavior using code.
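For example, the TeamsIdentifier questions above become executable documentation when expressed as JUnit-style tests. (A sketch: the expected values asserted below are assumptions for illustration, since only the implementation can confirm them.)

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import org.junit.Test;

public class TeamsIdentifierBehaviorTest {

   //Documents (and enforces) that comparison is case sensitive.
   @Test
   public void equalityIsCaseSensitive() {
      TeamsIdentifier id1 = new TeamsIdentifier("name");
      TeamsIdentifier id2 = new TeamsIdentifier("Name");
      assertFalse(id1.equals(id2));
   }

   //Documents which separator, if any, splits a composite id.
   @Test
   public void dotSeparatedIdSplitsIntoThreeParts() {
      TeamsIdentifier id = new TeamsIdentifier("a.b.c");
      assertEquals(3, id.asTeamsIdArray().length);
   }
}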

Indicating stateless, accessor and mutator methods

The existence of observable internal state is a primary cause of unintuitive behavior, since it allows a method call to modify the result of the next (seemingly unrelated) call. Consider, for example, the stateful algorithm that controls access rights in a multi-user system. Is it possible to discover, from studying the API alone, how moving a document into a different folder affects its access rights? Isn’t it true that this depends not only on the security settings assigned to the document itself and those of the destination folder, but also on the security settings of its parent folder, and so on recursively up to the root folder? Doesn’t it also depend on the user’s assigned roles, group memberships, and perhaps on the security model currently in use? All these settings may be accessible via the API, but they alone won’t tell us how the access control algorithm actually works.

Realizing that state prevents us from designing self-documenting APIs, we could be tempted to stick to stateless APIs. While this isn’t always possible, it is an excellent idea to isolate the impact of internal state to the smallest possible part of APIs. We should have as many stateless methods as possible, since their behavior only depends on the parameter values. In object-oriented environments we should also favor immutable objects, which have state that cannot be changed once the objects are created. Fixed state is obviously less predictable than no state, but more predictable than evolving state.
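A minimal sketch of such an immutable type (the Tag class is invented for illustration): once constructed, its state can never change, so every method result is predictable.

public final class Tag {              //final: no mutable subclasses
   private final String label;        //final field, assigned exactly once

   public Tag(String label) {
      if (label == null) {
         throw new IllegalArgumentException("label must not be null");
      }
      this.label = label;
   }

   public String getLabel() {
      return label;
   }

   //"Modification" returns a new object; the original never changes.
   public Tag withLabel(String newLabel) {
      return new Tag(newLabel);
   }
}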

Where we cannot avoid modifiable state, we should group the affected methods into two distinct categories: accessors, which can only read the state, and mutators, which can also change it. Accessors are like gauges on a control panel, and mutators are like switches and buttons. The accessors produce the same result when called a second or third time in a row, while mutators may produce a different result every time. Inserting a call to an accessor into the middle of an existing program is safe, while inserting a mutator may change the behavior of the subsequent API calls, breaking the program’s logic.

We must explicitly tell callers if a method is stateless, an accessor, or a mutator to help them use it correctly. We cannot rely on them guessing correctly or on naming conventions alone. We won’t be able to start all accessor names with “get” or “is” – show() or print() are accessors, as are many other, less obviously named methods. Because mutators are the most challenging, it is a good idea to keep their number to an absolute minimum and pay careful attention to their design.
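Lacking direct language support, even documentation comments can carry this information, as in this sketch (the Document interface and its methods are invented):

public interface Document {

   /** Stateless: the result depends only on the argument. */
   boolean isValidTag(String tag);

   /** Accessor: reads state only; repeated calls return the same value. */
   String getTitle();

   /** Mutator: changes state and may alter the results of later calls. */
   void tag(String tag);
}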

Using strong invariants

Not all mutators are equally problematic. The stronger the invariant, the more predictable and intuitive the behavior becomes. The invariant is a set of statements (assertions) about behavior that always hold true, regardless of state. It is essentially guaranteed, predictable behavior. We will illustrate this with an API that helps us cover a geometric shape with a triangular mesh, as shown in the figure below:

[Figure: Triangular mesh]

Depending on our design, some or all of the following statements may be true after each API call:

  1. The whole geometric area is fully covered with the mesh
  2. All triangles in the mesh are regular (the triangle’s area is not zero, no two nodes overlap each other, the three nodes don’t lie on the same straight line, etc.)
  3. There are no unconnected nodes
  4. No two triangles overlap each other
  5. Every node lies either inside or on the boundary of the geometric shape
  6. Every edge lies either inside or on the boundary of the geometric shape

The simplest API we can imagine, which requires us to insert and connect nodes directly, cannot guarantee any of this and would be rather difficult to use (remember, you cannot see the mesh when programming with an API!). We intuitively know that an API that could guarantee all of the above invariants would be much easier to use, but is such an API feasible? While they are not easy to figure out, such mutators exist, and they are known as the Delaunay mesh refinement operators. Here are four of them:

Triangle split – splits a triangle into three smaller ones by adding a new node in the middle

Edge split – replaces two adjacent triangles with four smaller ones by splitting the common edge into two halves

Edge flip – changes the shape of two adjacent triangles by flipping the shared edge to the other diagonal of the bounding rectangle

Node nudge – changes the shape of the connected triangles by repositioning a node inside the polygon defined by the neighboring nodes

[Figure: Delaunay mesh refinement operators]
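A sketch of what such an API surface might look like, with one mutator per refinement operator (all type and method names are invented):

class Node {}
class Edge {}
class Triangle {}

public interface Mesh {

   //Each mutator below is a Delaunay refinement operator; after any
   //call, all six invariants listed above still hold.

   Node splitTriangle(Triangle t);               //adds a new node in the middle
   Node splitEdge(Edge e);                       //splits the common edge in two
   void flipEdge(Edge e);                        //flips to the other diagonal
   void nudgeNode(Node n, double dx, double dy); //repositions within the neighbor polygon
}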

Notice how simple it is to describe what each method does? To see the big difference this design makes, try to describe how to correctly refine a mesh by inserting and (re)connecting nodes, and then do it again using the Delaunay operators. Which is easier?

Great APIs have strong invariants, but as we just saw, this doesn’t happen by itself; it requires careful design.

Using weak preconditions

Weak preconditions help callers just like strong invariants. If invariants are constraints on the API designer, preconditions are constraints on the caller: conditions that must be met for the call to succeed. From the caller’s perspective, the invariants should be strong and the preconditions weak. In an ideal world, all API calls would succeed and produce correct results for all possible arguments. In the real world, this is either impossible or it conflicts with other design requirements. The trick is to stay as close to the ideal solution as possible.

For example, one of our APIs limits the length of string method parameters to less than 255 characters for efficient database storage and better performance. On the other hand, it would be easier to use without these limitations. Web Services APIs, in general, are infamous for taking complex data structures as arguments, yet they only work when these data structures are appropriately constructed. The documentation rarely states the preconditions explicitly, leading to backbreaking trial-and-error style programming.
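Even when a precondition cannot be removed, stating it explicitly and checking it spares callers the trial and error. A sketch, using the 255-character limit mentioned above (the Asset class and method name are invented):

public class Asset {

   private String description;

   /**
    * Stores a description for the asset.
    *
    * Precondition: description is non-null and shorter than
    * 255 characters (a database storage limit).
    *
    * @throws IllegalArgumentException if the precondition is violated
    */
   public void setDescription(String description) {
      if (description == null || description.length() >= 255) {
         throw new IllegalArgumentException(
            "description must be non-null and shorter than 255 characters");
      }
      this.description = description;
   }
}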

To sum it up, weak preconditions (or no preconditions) are better than strong ones, and documented preconditions are far preferable to undocumented ones.

Conclusion

Observable state is just one of the many reasons why self-documenting APIs are a largely unreachable ideal. Reentrancy, performance characteristics, extensibility via inheritance, the use of callbacks, caching, clustering and distributed state can all lead to complex, unintuitive behavior. While careful design using strong invariants and weak preconditions can make API behavior more predictable, behavior still needs to be explicitly specified. The recommended way of specifying behavior is with code in the form of unit tests, assertions or contracts.

Creative Commons Licence
This work is licensed under a Creative Commons Attribution-ShareAlike 2.5 Canada License.

Choosing memorable names

Choosing good names is half art, half science; part of it is learned from books, the rest comes from experience. After working with APIs for a while, we develop a taste and appreciation for good names. It is comparable to wine tasting: at the beginning, all wines seem to taste the same, but after a while we develop the capacity to detect subtle flavors and to tell great vintages apart from mediocre wines. But a sophisticated wine connoisseur doesn’t necessarily know how to make a good wine; for that he needs to learn the technique of wine making. It is this combination of art and science, intuitive thinking and logical reasoning, which makes naming difficult.

Avoiding naming mistakes

Bad habits are the cause of many common naming blunders. In the early days of computing(1), strict technical limitations forced programmers to write almost indecipherable code. When identifiers were limited to 8 characters and punch cards were only 80 characters wide, abbreviated names – like strcpy or fscanf – were unavoidable. It used to be standard practice to prefix C function names to prevent name conflicts at link time. Underscores(2) and other special characters in names made sense when computer terminals had no separate uppercase and lowercase characters. Hungarian notation is useful for differentiating integers representing genuine numbers (nXXX) from integers representing handles, the equivalent of pointers to complex data structures (hwndXXX – handle to a Window) in languages with fixed type systems and lacking true pointers, such as BASIC or FORTRAN. The name stuck, because developers found it just as incomprehensible as a foreign language (Charles Simonyi, its inventor, was born in Hungary). Today, unlimited identifier lengths, full namespace support, object-oriented programming, and powerful IDEs make these practices unnecessary. We should start our quest for better names by ditching these antiquated and hard-to-read naming conventions.

The next step is to use correct English spelling, grammar, and vocabulary. It is hard enough to memorize APIs, let’s not make users also remember spelling errors, made-up words or other creative use of language. Automated spell checking turned spelling errors into the least forgivable API design mistakes. US English is the de-facto dialect of programming: no matter where we live, we should spell “color” and not “colour” in code. While Printer or Parser are valid English words, appending “-er” to turn a verb into a noun doesn’t always work. We can delete, but there is no “Deleter” in the dictionary(3). The same care should be taken when turning verbs into adjectives: saying that something is “deletable” is incorrect. Finally, we should be aware of the correct word order in composite names: AbstractNamingService sounds better, and it is easier to remember than NamingServiceAbstract, while getNameCaseIgnore is a hopelessly mangled name.

Names are a precious and limited resource, not to be irresponsibly squandered. It is wasteful to use overly generic words, like Manager, Engine, Agent, Module, Info, Item, Container, or Descriptor in names, because they don’t contribute to the meaning: QueryManager, QueryEngine, QueryAgent, QueryModule, QueryInfo, QueryItem, QueryContainer and QueryDescriptor all sound fantastic, but when we see a QueryManager and a QueryEngine together in an API, we have no clue which does what. While synonyms make prose more entertaining, they should be avoided in APIs: remove, delete, erase, destroy, purge or expunge mean essentially the same thing, and API users won’t be able to tell the difference in behavior based on the name alone. Using completely meaningless words in names should be a criminal offense. You would think this never happens, but see if you can recognize an old product name in TeamsIdentifier, or guess what CrusadeJDBCSource does from the fantasy code name of a long-forgotten R&D project. The words we choose should also accurately describe what the API does. This sounds obvious, yet we have seen a BLOB type which is not an actual Binary Large Object and a Set type which is not a proper Set. Such mistakes can only happen if we don’t slow down to think about the names we are choosing.

Finding names first

Finding meaningful names for some API constructs is so difficult that it should be completely avoided. This is not a joke. We should stay away from naming types, methods and parameters after they are designed. It is almost guaranteed that if we throw unrelated fields together into a type, the best name we will find for this concoction is some sort of Info or Descriptor. Even a marathon brainstorming session fails to find a better name for DocumentWrapperReferenceBuilderFactory, because it is undeniably a factory for producing builders, which can generate references to document wrappers (whatever those are). The method VerifyAndCacheItem both verifies and caches an Item (whatever that is) and an IndexTree is a rather odd data structure indeed. On the other hand, when we know our core concepts before we start thinking about the structure of the API, we can rely on the nouns, verbs, and adjectives to guide us through the process. Similar to the “writing the client code first” guideline, “finding the names first” proposes to revise the traditional sequence of design steps in search of a better outcome.

It may be helpful to speak to non-programmers to find out which words they use to talk about the problem domain. Understandably, the idea of collecting words into a glossary without thinking about how and where they will be used in code sounds somewhat counter-intuitive to us, but people in numerous other professions do this exercise regularly. Let’s pretend that we need to add Web 2.0 style tagging to our API, but we don’t know where to start. We look up the corresponding Wikipedia entry and read the first paragraph:

“In online computer systems terminology, a tag is a non-hierarchical keyword or term assigned to a piece of information (such as an internet bookmark, digital image, or computer file). This kind of metadata helps describe an item and allows it to be found again by browsing or searching. Tags are generally chosen informally and personally by the item’s creator or by its viewer, depending on the system.”

We underline the relevant words, group them into categories, and highlight the relationships between them. Where we find synonyms, we highlight the best match and gray out the alternatives. Where there are well-known, long-established names in the domain (Author, User, Content or String), we choose these synonyms over the others:

Verbs              Nouns
assign, choose     tag, keyword, term, metadata, (string)
find, search       bookmark, image, file, piece of information, item, (content)
find, browse       bookmark, image, file, piece of information, item, (content)

It is absurdly early to do this, but if we are asked to sketch a draft API at this point, it may have methods like:

   void assignTag(Content content, String tag);
   Content[] searchByTag(String tag);

Far be it from us to claim that this is a good API or that these are the best possible names. We should continue the process of looking for better alternatives. The example just illustrates that it is possible to find good names without thinking of code, and once we have them, they point towards types and methods we need in the API.

When we have the choice between two or more words to name an object or action, the least generic term is the best. Most method names we come across start with “set”, “get”, “is”, “add”, and a handful of other common verbs. This is a real shame, because more expressive verbs exist in many cases. Instead of setting a tag, we can tag something with it. For example:

Typical                                            Better
document.setTag("Best Practices");                 document.tag("Best Practices");
if(document.getTag().equals("Best Practices"))     if(document.isTagged("Best Practices"))

Shorter names are better, but nowadays the length of names is rarely a serious concern. It works best if we set aside the most powerful nouns as the names of the main types and use longer composite names for methods or secondary types. Then when we build a scheduler, we can call the main interface Scheduler and not JobSchedulingService. If name confusion is a concern, we can use namespaces for clarification, a wiser choice than starting every name with “Job”.

Longer composite names are more meaningful than short ones which depend on parameter names or types to further clarify their meaning. Parameter names and types may be visible in method signatures, but not when we call the method from code:

Method signature                  Method call
Document.remove(Tag tag);         currentDocument.remove(bestPractices);
Document.removeTag(Tag tag);      currentDocument.removeTag(bestPractices);

Many experts recommend writing self-documenting APIs, but only a few insist that we support writing self-documenting client code. At first, it may look like the API user should be entirely responsible for this, until we realize that he can only name his own variables, while we (the API designers) are choosing the names needed to call the API.

Conclusion

Naming is a complex topic, the details of which would require far more space than this document allows. With so many different factors influencing naming, it is not easy to give straightforward practical advice, other than to avoid stupid mistakes. The problem domain has the strongest influence, since it is easier to describe good names for a specific API than for APIs in general. This statement is seemingly contradicted by the existence of naming conventions. But aren’t conventions considered helpful advice? In the most generic sense, they may be. Yet when it comes to choosing descriptive, memorable, and intuitive names, the so-called naming conventions are of limited use, primarily addressing consistency concerns. Developers who closely follow naming conventions are designing consistent APIs, which is important, but not sufficient. We separated striving for consistency, with its naming and design conventions (discussed in the previous installment), from the subtle art of choosing memorable names, precisely because so many developers still believe that there is nothing more to good names than following the conventions.

Notes:

(1) This paragraph is not intended as a complete and accurate description of the history of computing. Platforms and languages evolved differently and had varying limitations. For more details, see comments. (Based on reader feedback. Thank you.)

(2) The underscore example is misplaced here because its use is typically a consistency issue. If your platform has an established, consistent naming convention which uses underscores, then by any means, follow it. (Based on reader feedback. Thank you.)

(3) This is not intended as linguistic advice. Languages are constantly evolving and many new words are added to dictionaries every year. “Deleter” appears in some, but not in others. Developer-friendly design acknowledges that a significant number of users are likely not native English speakers, with varying levels of language skills. If they cannot find a term in dictionaries, this may prevent them from thoroughly understanding the precise meaning of a name. (Based on reader feedback. Thank you.)

Creative Commons Licence
This work is licensed under a Creative Commons Attribution-ShareAlike 2.5 Canada License.

Striving for consistency

Being consistent means doing the same thing the same way every time. The human brain is wired to look for patterns and rules because our ability to predict future events (the ripening of fruits, the start of the rainy season, or the migration of animals) has been essential to our survival. Our minds work the same way when developing software. When you see the interface names AssetServices, MetadataServices, and ContentServices, what do you expect the video interface to be called? Isn’t it true that you feel reassured and encouraged when you find the VideoServices interface? Inconsistency doesn’t mean complete chaos and confusion. In an inconsistent world, rules, patterns and conventions are still discernible, but there are numerous unpredictable and inexplicable exceptions.

We call an API consistent when there are no frivolous or unnecessary variations in it. We quickly become familiar with such APIs because they are predictable, easy to learn and remember. Their consistent behavior gives us confidence that we can use them correctly.

Following conventions

Many well-known coding conventions were adopted with the sole purpose of minimizing small but annoying variations in programs. Pascal casing is no better than camel casing; yet we call our method RemoveTag() in .Net and removeTag() in Java, because otherwise we violate established conventions and introduce inconsistencies. We name our interface IPublishable in .Net and Publishable in Java, regardless of what we think of the use of “I” to distinguish interface names from class names. We use Hungarian notation when interacting with low-level Windows API functions from C code, even though we consider Hungarian notation a hopelessly outdated annoyance. This is not only true for large platforms, but for smaller APIs as well. We follow established conventions, sometimes silly ones, whether we agree with them or not.

Some APIs are inconsistent by design, but it is far more common for inconsistencies to creep in with subsequent modifications. Consider the following example:

   public interface Capabilities {
      public boolean canCreate();
      public boolean canUpdate();
      public boolean canDelete();
      public boolean canSearch();
      public boolean canSort();
      …
      public boolean isRankingSupported();
   }

The last method looks dreadfully out of place. It is pointless to argue which of the two naming conventions is better; reverse them and the interface still looks bad. Novice developers are especially prone to engaging in such never-ending, fruitless arguments, not realizing that consistency often trumps other considerations. When adding a new method to an existing interface, simply follow the conventions already in place.

Adopting conventions

De-facto conventions are already in place for many existing APIs. For new APIs, especially large APIs, we need to adopt and document our own conventions. It is almost entirely up to us what conventions we use, provided that they:

  • do not contradict the established conventions of the chosen development platform
  • aim to minimize unnecessary variations
  • do not impose any real restrictions on functionality

For example, a potential for unnecessary variations exists in parameter ordering. We can see this in the C standard I/O library functions, where fgets and fputs have the file descriptor as the last parameter and fscanf and fprintf have it as the first, frustrating millions of developers for more than 30 years. Establishing a convention for parameter ordering eliminates such variations without restricting functionality.

A lot of gratuitous variations can creep into an API concerning the usage of null. Every time a method takes an object parameter, we should know if it accepts null or not. If it doesn’t accept null, we often see unnecessary variations in how the error is handled. If null is accepted, we again see many variations in what this actually means. For methods which return an object reference, we need to know if it ever returns null, and if it does, when and what does it mean? Conventions regarding the usage of null can be helpful in avoiding such uncertainties.
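For example, a convention might read: “a reference parameter never accepts null unless documented; a lookup method may return null, and its documentation must say when.” Applied to an interface, it could look like this sketch (the UserDirectory type, the User type, and the method names are all invented):

class User {}

public interface UserDirectory {

   /**
    * Returns the user with the given name, or null if no such user
    * exists (the documented, conventional meaning of null here).
    * The name parameter must not be null.
    */
   User findByName(String name);

   /**
    * Returns the currently authenticated user; never returns null.
    */
   User getCurrentUser();
}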

We should keep in mind that we are establishing conventions and not strict rules. We may be tempted to enforce rules like “No method should ever return null; it should either return a valid object or throw an exception” because this is not only consistent behavior, it also makes the API safer to use. The problem is that there are justified deviations from this convention. What should a method designed to look up a specific object do when it doesn’t find it? As a rule (yes, this is a rule), we should only throw exceptions under exceptional circumstances. Looking for something and not finding it can be anticipated and shouldn’t cause an exception. While there are other design options, none of them is as simple as returning null. Consistency is about removing unnecessary variations, and there are cases where variations are warranted. “Extreme advice is considered harmful,” warns Jaroslav Tulach in his book Practical API Design.

Using patterns

Patterns can remove further variations from APIs. Unlike the “Gang of Four” design patterns, which are recipes for solving specific design problems, API patterns are used to make large APIs more predictable. In this context, the standard dictionary definition of the term, “elements repeating in a predictable manner”, is used. API patterns are formed using repetition, periodicity, symmetry, mirroring, and selective substitution, as seen in patterns of nature or in decorative arts. We can borrow API patterns from others or make up our own. Since we need predictable APIs, not decorative ones, the simplest patterns are the best.

For example, one of our APIs consists of only two kinds of objects: service objects and data objects. The service objects are named by appending “Services” to the service name (AssetServices, MetadataServices, and so on) and are placed in Java packages that end with “.services”. Every service object is a singleton and can be obtained by calling the static getInstance() method. The data transfer objects have the word “Request” or “Result” appended to their name, as in ExportRequest and ExportResult. When the request has search semantics, the data object is named by appending “Criteria” to the name, for example, RetrieveAssetCriteria. Such patterns are great in large APIs, where simple coding conventions leave plenty of room for other, higher-level discrepancies.
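Condensed into code, the structural pattern might read like this (a sketch of the convention, not the actual product API; the stub types exist only to keep the example self-contained):

package com.example.teams.services;

class ExportRequest {}
class ExportResult {}
class RetrieveAssetCriteria {}
class AssetList {}

public final class AssetServices {

   private static final AssetServices INSTANCE = new AssetServices();

   private AssetServices() {} //singleton: no public constructor

   public static AssetServices getInstance() {
      return INSTANCE;
   }

   //Data objects follow the Request/Result/Criteria naming pattern
   public ExportResult export(ExportRequest request) {
      return null; //implementation elided
   }

   public AssetList retrieve(RetrieveAssetCriteria criteria) {
      return null; //implementation elided
   }
}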

In addition to structural patterns as above, we can establish behavioral patterns. In our API, some methods are optional and, depending on the server configuration, they may work or throw an UnsupportedOperationException. There is a Capabilities interface (shown above), with methods like canSearch(), canSort(), or canUpdate(), which can be called to check whether some functionality is available. Consistent use of structural and behavioral patterns can make even very large APIs easy to use, since what we learn from using one part of the API can easily be transferred to other parts.
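Client code can then probe for optional functionality before relying on it, instead of catching UnsupportedOperationException (a sketch; the getCapabilities() accessor and the data objects are invented names):

Capabilities capabilities = assetServices.getCapabilities();
if (capabilities.canSearch()) {
   RetrieveAssetCriteria criteria = new RetrieveAssetCriteria();
   AssetList results = assetServices.retrieve(criteria);
   //... use the results
} else {
   //... degrade gracefully on servers without search support
}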

Enforcing consistency

Patterns and conventions have to be enforced when working in large teams because inconsistencies are very likely with several people contributing to the design. API design as a whole should remain a team effort, but ideally a single individual should be responsible for its consistency. This person should be authorized to review, accept, or reject API changes, but – and this is very important – only for consistency reasons. This role is a consistency advocate, not a supreme design guru. For example, Brad Abrams and Krzysztof Cwalina became well-known inside and outside Microsoft after they were appointed to ensure the consistency of the .Net platform. Joshua Bloch had a similar – albeit unofficial – role in the core Java API development while at Sun. Having a reviewer to find and correct inconsistencies and an independent arbitrator to stop the team from wasting time on unproductive disputes can be very helpful.

Compromising

Consistency is so important that it is worth compromising in other areas to achieve it. To put it simply, using the same design everywhere is often better than choosing the best solution for each particular case. For example, exceptions are preferable to error codes, but it is a lot easier to work with error codes alone than with a mix of error codes and exceptions. We like collections more than arrays, but we like it even less when they are mixed together. This can happen when we try to “improve” the design as the API evolves. We can’t change the old parts due to backwards compatibility requirements, and if we use a different, “better” design for the new parts, we introduce inconsistencies. Right now, one of our APIs is caught in the middle of just such an ill-advised migration from arrays to collections.

Avoiding misleading consistency

We should be careful not to introduce false or misleading consistency. Misleading consistency is like false advertising or a broken promise. For example, if there is an interface named Driver and a class named AbstractDriver in the API, developers will expect that AbstractDriver implements Driver and they can inherit from it to create their own implementations. If this is not the case, it is better to name either the class or the interface something else.
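The expectation, in code (connect() is a made-up method for illustration):

public interface Driver {
	void connect(String url);
}

//what the name promises: a partial Driver implementation to inherit from
public abstract class AbstractDriver implements Driver {
	//shared plumbing for concrete drivers; subclasses still implement connect()
}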

Also, we should reserve the standard JavaBeans getter and setter method names for methods accessing local fields. There is nothing more frustrating than calling a seemingly harmless getAssociations() method, watching it block for 25 seconds, then seeing it throw a RemoteException. A different name, like retrieveAssociations(), would signal the real behavior much better.
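The difference, as two hypothetical signatures:

//JavaBeans-style name: reads like a cheap local field access
public List<Association> getAssociations();

//a verb that signals work: may call the server, block, and fail
public List<Association> retrieveAssociations() throws RemoteException;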

We create false expectations of consistency when our design is consistent only in certain aspects and inconsistent in others. For example, we follow consistent naming conventions, but have no consistent type structure, parameter ordering, error handling or behavior. New team members are the most likely to commit this mistake, because naming conventions and structural patterns are significantly easier to follow than consistent behavior.

Conclusion

The benefits of consistent APIs are obvious, and consistent APIs don’t take more time or effort to design than inconsistent ones. We only need to adopt and follow certain patterns and conventions. APIs can be reviewed and inconsistencies corrected even late in the design process. The only essential requirement for consistent API design is discipline. This makes “strive for consistency” the easiest API design guideline to follow.


Keeping it simple

By writing client code first we can avoid the most embarrassing and bothersome API design mistakes. This approach also leads to simpler APIs. Nevertheless, APIs grow in size and complexity as the number of use cases grows. Many developers aren’t prepared to spend extra effort to keep APIs simple because they believe that API size and complexity are inextricably linked. But this cannot be entirely true: both the core Java API and the .Net API are quite large, yet we don’t find them particularly hard to use. While size and complexity are certainly related, they are not the same. There are techniques to make large APIs simple and easy to use.

We propose an easy-to-use measure of API complexity: count all named API constructs used in a scenario (all types, methods, enumerated values, constants, and exceptions) and subtract this number from 21. The higher the result, the simpler the scenario. If the result turns negative, it is a sign to start thinking about simplifying the API. Let’s apply this method to a sample of Java code written using our (internal) AuthenticationProvider interface, which prints a list of names and phone numbers from the corporate directory:

try {
   AuthenticationProvider/*20*/ provider =
      new LocalAuthenticationProvider/*19*/();
   SearchCriteria/*18*/ criteria = new SearchCriteria/*17*/(EntityName/*16*/.USER/*15*/);
   criteria.addPropertyToFetch/*14*/(PropertyName/*13*/.COMMON_NAME/*12*/);
   criteria.addPropertyToFetch(PropertyName.PHONE/*11*/);
   criteria.addPropertyToMatch/*10*/(PropertyName.DEPARTMENT/*9*/, "R&D");
   criteria.addPropertyToMatch(PropertyName.LOCATION/*8*/, "Waterloo");
   criteria.setSortProperty/*7*/(PropertyName.COMMON_NAME);
   ProfileIterator/*6*/ iterator = provider.search/*5*/(criteria);
   while(iterator.hasNext()/*4*/){
      Profile/*3*/ profile = iterator.next()/*2*/;
      Property/*1*/ commonName =
         profile.getProperty/*0*/(PropertyName.COMMON_NAME);
      Property phone = profile.getProperty(PropertyName.PHONE);
      System.out.println(commonName.getValue()/*-1*/ + "  " + phone.getValue());
   }
}
catch(AuthenticationProviderException/*-2*/  e) {
}

We count each concept only once and we do not count the standard language and library features. The result of -2 shows that the API could use some improvements.

Accidental complexity

Let’s see what we can do. If we replace our custom ProfileIterator with the standard Java Iterator<Profile>, the “simplicity score” increases from -2 to 1. If we use the standard NullPointerException, IllegalArgumentException, IllegalStateException, and RemoteException instead of our own AuthenticationProviderException, the “simplicity score” becomes 2. If we add a direct method like

public String Profile.getValue(PropertyName);

we eliminate one type (Property) and one method call (getValue), raising the “simplicity score” to 4. With only a few simple design changes we managed to reduce the complexity of the scenario to an acceptable level.
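Putting these changes together, the scenario might now read as follows (a sketch assuming the three changes above: the standard Iterator, standard exceptions, and the direct Profile.getValue() accessor):

try {
   AuthenticationProvider provider = new LocalAuthenticationProvider();
   SearchCriteria criteria = new SearchCriteria(EntityName.USER);
   criteria.addPropertyToFetch(PropertyName.COMMON_NAME);
   criteria.addPropertyToFetch(PropertyName.PHONE);
   criteria.addPropertyToMatch(PropertyName.DEPARTMENT, "R&D");
   criteria.addPropertyToMatch(PropertyName.LOCATION, "Waterloo");
   criteria.setSortProperty(PropertyName.COMMON_NAME);
   Iterator<Profile> iterator = provider.search(criteria);  //standard Iterator
   while (iterator.hasNext()) {
      Profile profile = iterator.next();
      //the direct accessor eliminates the intermediate Property type
      System.out.println(profile.getValue(PropertyName.COMMON_NAME)
            + "  " + profile.getValue(PropertyName.PHONE));
   }
}
catch (RemoteException e) {
   //standard exceptions replace AuthenticationProviderException
}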

We can use this measure of complexity to explain certain API design rules and best practices. For example, asking callers to extend classes or implement interfaces is generally discouraged. Why? Because the caller may need to implement or override several methods for a single scenario, lowering the “simplicity score”. Similarly, if we measure the complexity of the design patterns from the “Gang of Four” book, we get low scores, which is one reason why these patterns aren’t recommended in APIs.

Providing alternate implementations for existing interfaces is an obvious, yet effective, technique for adding functionality to APIs without increasing their complexity. The Java Collections Framework is a good example: instead of providing different types for mutable, immutable, thread-safe, and non-thread-safe collections, it provides alternate implementations. We can turn a regular Set implementation into a synchronized, thread-safe Set implementation by calling

public static <T> Set<T> Collections.synchronizedSet(Set<T> s)

Instead of an entire new type, this design only requires an additional method in the API.
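The same approach composes with any existing Set implementation (an illustrative fragment using the standard java.util classes):

Set<String> names = new HashSet<>();
Set<String> threadSafe = Collections.synchronizedSet(names);  //same type, added locking
Set<String> readOnly = Collections.unmodifiableSet(names);    //same type, no mutation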

When we apply design techniques like the ones above, we minimize accidental complexity. To put it simply, accidental complexity occurs when usage scenarios are more complex than necessary. This happens either because we make incorrect assumptions about what features we need or because we accept feature requests too easily.

Feature requests are often a combination of actual requirements and specific API design suggestions. While we must consider the requirements, we shouldn’t feel compelled to accept the design suggestions. API users are selfish: they ask for the simplest solution for themselves, but do not necessarily have the interests of other users at heart. One user’s favorite feature becomes another user’s nightmare if we are not careful. The best way to handle such requests is to accept the requirements, express them as use cases, and then find a design which supports them without adding much complexity to unrelated scenarios. We should follow Joshua Bloch’s advice: “You can’t please everyone so aim to displease everyone equally”.

Essential complexity

Accidental complexity is the easy part of the problem. As we add more and more use cases, the complexity of APIs grows, and no design technique can completely prevent this. The complexity that remains after all accidental complexity is eliminated is called essential complexity.

The easiest way to reduce essential complexity is by leaving functionality out. As Joshua Bloch says, “They [extreme programming proponents] do advocate leaving out the bells, whistles, and features you don’t need and add them later, if a real need is demonstrated. And that’s incredibly important, because you can always add a feature, but you can never take it out. Once a feature is there, you can’t say, sorry, we screwed up, we want to take it out because other code now depends on it. People will scream. So, when in doubt, leave it out.” When we designed the authentication service used in the example above, we decided that it would not provide related services like authorization, session management, or storage for user accounts. We received some complaints about this decision over the years, but it also enabled us to keep our API reasonably simple.

We must accept that giving up something valuable is the only way to make a use case simpler in the presence of essential complexity. In the previous example we gave up functionality in exchange for simplicity. When this is not possible, we may try giving up some flexibility by deliberately limiting the supported usage scenarios. If we know how the API will be used, we can work with sensible defaults instead of keeping every option open.

For example, an XML processing library has many options for formatting white space in XML output: whether to insert line feeds and where, whether to indent nested XML elements and by how much, whether to use spaces or tabs, and so on. These options exist because the designer of the XML library didn’t know exactly how the library would be used. But if we are using XML to store configuration information, we know that the files are small, there are no deeply nested XML elements, and the administrator views and edits the file using a simple text editor. Thus, we can choose the XML formatting options ourselves and avoid exposing them through the API.
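A sketch of this idea using the standard JAXP transformer (the ConfigStore wrapper and its error handling are made up for illustration; only the output properties come from the standard API):

import java.io.StringWriter;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;

public final class ConfigStore {

	//the formatting policy is decided here, once, and never exposed to callers
	public static String toXml(Document config) throws Exception {  //simplified error handling
		Transformer t = TransformerFactory.newInstance().newTransformer();
		t.setOutputProperty(OutputKeys.INDENT, "yes");      //sensible default for small files
		t.setOutputProperty(OutputKeys.ENCODING, "UTF-8");  //sensible default
		StringWriter out = new StringWriter();
		t.transform(new DOMSource(config), new StreamResult(out));
		return out.toString();
	}
}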

Another possibility is to trade some control for simplicity. Coarse-grained APIs have more functionality per method call and are simpler to use, but offer less control to the caller. Finer levels of granularity give more control at the expense of many more method calls. When APIs are getting complex, we can give up some of this control and increase the granularity of APIs. For example, Data Transfer Object arguments let methods do more work because they carry a lot of information. On the other hand, Data Transfer Objects themselves are simple data structures, having no methods of their own.
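The contrast, sketched with hypothetical asset-export calls:

//fine-grained: more control, more calls, more to learn
Exporter exporter = session.createExporter();
exporter.setFormat(Format.JPEG);
exporter.setResolution(300);
exporter.export(assetId, "/exports");

//coarse-grained: one call carrying a Data Transfer Object
ExportResult result = AssetServices.getInstance()
      .export(new ExportRequest(assetId, Format.JPEG, 300, "/exports"));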

Divide and conquer

If, despite our best efforts, uncomfortable levels of complexity remain in our APIs, we shouldn’t get entirely discouraged. People have been dealing with complex problems for a very long time and have devised practical methods of coping with them. All these methods are applications of the same old “divide and conquer” principle.

We can help our users cope with complexity by organizing our APIs into smaller, more manageable parts. For example, a complex multimedia asset management API can be divided into several functional areas: basic asset management, metadata management, search, video processing, and so on. With only 24 methods, the core AssetServices interface handles all essential operations like asset checkout, retrieval, renaming, and deletion. MetadataServices needs only 7 methods for saving and retrieving all extensible descriptive asset metadata. The 16 methods of the AssetSearchServices interface handle all search functionality. These interfaces are in separate namespaces, each with a single entry point highlighted by the use of a consistent naming pattern. The functional areas are reasonably self-contained: common scenarios can be realized without referencing more than two of them. It is also easy to understand how the various parts are tied together by the use of common asset identifiers.

Dividing APIs into functional areas is just one way of organizing them. We can also separate core features from advanced functionality, or higher-level calls from lower-level, more detail-oriented ones. No matter how we do it, we are applying the principles of high cohesion and low coupling from modular software design. Refactoring APIs this way does not remove any functionality; it just organizes it into smaller, more manageable units.

We can also remove complexity from common use cases by designing extension hooks into APIs. Common use cases are covered by default, built-in behavior, while fringe cases are handled by plugging in custom logic. We keep the common use cases simple at the expense of making less common ones more complex, a reasonable tradeoff in many situations. For example, a batch processing system may define an Agent to handle compound jobs and a Distributor to load balance jobs within a cluster of servers. The default Agent and Distributor implementations are designed to work for a well-defined set of common use cases. For more advanced scenarios, callers can replace the default behavior by registering their own custom Agent or Distributor implementation. While writing custom Agents and Distributors is a complex task, it is rarely needed.
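A sketch of such a hook (the registration method and RoundRobinDistributor are made up for illustration; Agent and Distributor are the extension points described above):

//extension point: decides where each job runs within the cluster
public interface Distributor {
	Server assign(Job job, List<Server> cluster);
}

//the default implementation covers the common use cases out of the box;
//advanced callers plug in custom logic only when they need it:
batchSystem.setDistributor(new RoundRobinDistributor());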

Conclusion

Keeping APIs simple requires effort:

  • eliminate accidental complexity by choosing the best available design options
  • limit essential complexity by tightly controlling scope, preventing feature creep, and giving up some flexibility or control
  • make complexity manageable by organizing APIs into units of high cohesion and low coupling
  • consider extension hooks to support advanced scenarios without impacting common ones

It is effort well spent, as simplicity is highly desirable in APIs.

 


Considering the perspective of the caller

Regular software design produces mediocre APIs because its focus on implementation hurts APIs in many different ways. We will illustrate how with an internal Java API, to avoid singling out any well-known public APIs. We immediately notice the complexity. The class ContentInstance, for example, sits 5 levels deep in the class hierarchy and implements 4 additional interfaces (not counting the standard Serializable interface):

com.company.service.javabean
Class ContentInstance

java.lang.Object
  extended by com.company.service.common.DataObject
      extended by com.company.service.javabean.ManagedObject
          extended by com.company.service.javabean.ExtensibleObject
              extended by com.company.service.javabean.ContentItem
                  extended by com.company.service.javabean.ContentInstance

All Implemented Interfaces:
    IAttributedObject, IChannelAssociate, IPersistable, IRelatedAttribute, java.io.Serializable

ContentInstance itself defines (or overrides) 33 methods, which is reasonable, but it also inherits a whopping 79 methods from ManagedObject, 8 methods from ExtensibleObject and 7 methods from DataObject for a grand total of over one hundred methods. That’s a lot of methods in a single class! We agree that complex problems require complex solutions and we should have no problem seeing similar complexity in a (hidden) implementation. In APIs, however, we greatly value simplicity. We don’t like spending time browsing through dozens of classes and hundreds of methods. We like it when ContentInstance is always ContentInstance and not ManagedObject or IChannelAssociate depending on the context, which tends to happen a lot when using such complex inheritance hierarchies. We are glad that someone else did the implementation work for us, but we don’t feel the need to understand how they did it. We focus on what we are implementing when using APIs, and frankly, we don’t have much time for anything else. The better the API manages to hide implementation details from us, the more we appreciate it.

Excessive abstraction is another problem which arises from implementation-focused API design. DataObject, ManagedObject, ExtensibleObject, ContentItem and ContentInstance are all pure design abstractions with no corresponding real-world objects or concepts we could immediately relate to. What’s the difference between DataObject and ManagedObject or between ContentItem and ContentInstance? We need to understand the whole API before we can understand its parts, a daunting task with a large API. We are happy to acknowledge that no complex problem can be solved without powerful abstractions. On the other hand, we need to confess that we find it difficult to understand someone else’s abstract concepts, because for this we must think like that other person. We wish the other person thought more like us instead, in familiar concepts like Document, Folder, Project or User.

Complex and counter-intuitive usage patterns are a third annoyance of implementation-focused design.  Read a programmer’s recollection of trying to figure out how to create a new instance of the ContentInstance class: “At first, I didn’t expect that ContentInstance can be instantiated because I found no constructor and no factory method in the class definition. Only after further investigation did I discover the newInstance() factory method ContentInstance inherits from the abstract super class ManagedObject. I was confused by the abstract base class declaring a factory method while the concrete class did not. Eventually, I learned from the documentation that using the ContentInstance and ContentType classes is similar to using Object and Class in the Java reflection API. The correct way of instantiating a ContentInstance was by calling ContentType.newInstance(), the inherited static newInstance() method proving to be a bit of a red herring. While the analogy with the Java Reflection API certainly helped, I started wondering if writing a program using this API would be just as awkward as writing an entire Java program using the reflection API…”

There are many other signs of implementation-focused design in the API. An out-of-process call is evident when IManagedObjectRef.getManagedObject() method throws a java.rmi.RemoteException. The object-relational mapping layer (Castor) is revealed when the class AttributeData inherits from org.exolab.castor.jdo.TimeStampable. Internal caching is obvious from methods like ContentType.clearCache(), while the underlying database schema is visible and accessible through methods like AttributeDefinitionData.getColumn(). With so many implementation details spilling out, we are left to wonder which part of our code will break when we upgrade to the next version of the API.

Designed for use versus designed to implement

The above issues may seem hard to avoid, but in practice they are not. While doing API usability tests at Microsoft, Jeffrey Stylos and his colleagues discovered a surprisingly easy way to avoid them, almost by accident: on a few occasions, they asked developers to solve simple programming tasks without giving them any specific APIs to use; instead, they let the developers make up the APIs they wanted to use. They were surprised by what they saw: when asked to send a simple text message using an unspecified messaging interface, all the developers wrote:

TextMessage msg = new TextMessage();

and not a single one of them wrote

MessageFactory factory = (MessageFactory) DirContext.lookup("MessageFactory");
TextMessage msg = factory.createTextMessage();

Given the opportunity to design their own graphics API, none of the developers wanted to write

Image.draw(false);

instead, they wanted to use two distinct methods, similar to these:

Image.overlay();  //draw over previous image
Image.draw();     //erase previous image before drawing

Boolean method parameters hardly ever figured in any of the APIs the developers were asking for. None of the developers thought they would need to perform complex initialization steps; instead, they assumed that the API would work right out of the box. Few of them expected that they would be required to extend classes, implement interfaces, or catch and handle exceptions. None of them wrote more than a few lines of code for the basic scenarios they were asked to implement. They simply assumed that the API would take care of the details and hide the complexity from them.

The APIs the developers designed for themselves to use and someone else to implement were thus very different from the APIs they would design for themselves to implement and someone else to use. The conclusion is that the APIs we design are heavily influenced by our point of view. Let’s call these the caller’s point of view and the implementer’s point of view. Since the API is implemented only once but used many times, it should be clear that the caller’s point of view is dominant in API design.

Write client code first

So how do we design APIs from the caller’s point of view? By doing exactly what the developers were asked to do in the above experiment: writing the client code first. Not just once, but separately for every core usage scenario we want the API to cover. As we do this, repeating API usage patterns emerge, as well as the types and methods we need to provide. It helps when the developer writing these usage scenarios is not the one implementing the API, to prevent any accidental “implementation bias”. It also helps if more than one person contributes code scenarios, so that personal preferences and programming style won’t have an undue influence on the API.
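For instance, before any implementation exists, we might write down the code we wish we could write for one core scenario (everything below is made up for illustration; in a real project this snippet would be kept, maintained, and eventually run as a sample):

//wished-for client code for the "share a document" scenario
Document doc = Workspace.open("specs/api-guidelines.doc");
doc.shareWith("reviewers@company.com", Permission.READ_ONLY);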

It is very important to point out that we are advocating writing real code for solving real problems as use cases, not pretend or throw-away code. The written code should be constantly updated and maintained as the API evolves and it should work correctly when the API is finally implemented. We don’t consider this wasted time, as this code can be reused for samples in the API documentation and as part of the API test suite. We should be skeptical about any API for which code samples are not readily available.

If an application has a graphical user interface, it is very common practice to model the API after the GUI, with method calls corresponding roughly to user actions, method parameters to user input, and results to what is displayed on the screen. This correctly reflects the user’s perspective and has nothing to do with the implementation, right? Well, the caller (a programmer) and the user are not the same; they have drastically different needs. Issues with such APIs include: insistence on logging in with a user name and password even when writing code for unsupervised batch processes, excessive dependence on exception handling (the result of reusing existing input validation and error reporting logic), an over-abundance of basic data types in method signatures (especially the string data type), data structures that have the same fields as the forms displayed on the user’s screen, and so on. Such APIs are perhaps useful for developing alternative GUIs, but are less suitable for other scenarios.

The true test of our commitment to the “consider the caller perspective” guideline comes when we need to provide programmatic access to existing functionality. The implementation already exists. What are we going to do? Start coding scenarios and design an API from scratch? Or do we succumb to temptation, document the implementation we already have and call it an API? After all, any other API will require bridging, will introduce new bugs and may cause some performance problems. Why waste time on it?

If so many embarrassing (for the designer) and irritating (for the user) API design mistakes can be easily avoided by considering the viewpoint of the caller and writing the client code first, why is this not common practice? Honestly, we don’t know the answer. The time and effort required certainly play a role. Ultimately, just like User Experience Design, API design is either part of an organization’s culture or it isn’t. One thing is for sure: considering the viewpoint of the caller is an essential part of any disciplined API design process.

Creative Commons Licence
This work is licensed under a Creative Commons Attribution-ShareAlike 2.5 Canada License.