Java API Design Checklist

There are many different rules and trade-offs to consider during Java API design. Like any complex task, it tests the limits of our attention and memory. Much like a pilot’s pre-flight checklist, this list helps software designers remember obvious and not-so-obvious rules while designing Java APIs. It complements, and is intended to be used together with, the API Design Guidelines.

We also have some before-and-after code examples to show how this list can help you remember overlooked design requirements, spot mistakes, identify less-than-optimal design choices, and find opportunities for improvement.

Click the [explain] link next to a checklist item (where available) for details about the rationale, examples, design trade-offs, or other limitations of applicability.

This list uses the following conventions:

   (Do) verb...  - Indicates the required design
    Favor...     - Indicates the best of several design alternatives
    Consider...  - Indicates a possible design improvement
    Avoid...     - Indicates a design weakness
    Do not...    - Indicates a design mistake

1. Package Design Checklist

1.1. General

  • 1.1.1. Favor placing API and implementation into separate packages [explain]
  • 1.1.2. Favor placing APIs into high-level packages and implementation into lower-level packages [explain]
  • 1.1.3. Consider breaking up large APIs into several packages [explain]
  • 1.1.4. Consider putting API and implementation packages into separate Java archives [explain]
  • 1.1.5. Avoid (minimize) internal dependencies on implementation classes in APIs [explain]
  • 1.1.6. Avoid unnecessary API fragmentation [explain]
  • 1.1.7. Do not place public implementation classes in the API package [explain]
  • 1.1.8. Do not create dependencies between callers and implementation classes [explain]
  • 1.1.9. Do not place unrelated APIs into the same package [explain]
  • 1.1.10. Do not place API and SPI into the same package [explain]
  • 1.1.11. Do not move or rename the package of an already released public API [explain]

1.2. Naming

  • 1.2.1. Start package names with the company’s official root namespace [explain]
  • 1.2.2. Use a stable product or product family name at the second level of the package name [explain]
  • 1.2.3. Use the name of the API as the final part of the package name [explain]
  • 1.2.4. Consider marking implementation-only packages by including “internal” in the package name [explain]
  • 1.2.5. Avoid composite names [explain]
  • 1.2.6. Avoid using the same name for both package and class inside the package [explain]
  • 1.2.7. Avoid using “api” in package names [explain]
  • 1.2.8. Do not use marketing, project, organizational unit or geographic location names [explain]
  • 1.2.9. Do not use uppercase characters in package names [explain]

1.3. Documentation

  • 1.3.1. Provide a package overview (package.html) for each package [explain]
  • 1.3.2. Follow standard Javadoc conventions [explain]
  • 1.3.3. Begin with a short, one sentence summary of the API [explain]
  • 1.3.4. Provide enough detail to help decide whether and how to use the API [explain]
  • 1.3.5. Indicate the entry points (main classes or methods) of the API [explain]
  • 1.3.6. Include sample code for the main, most fundamental use case [explain]
  • 1.3.7. Include a link to the Developer Guide [explain]
  • 1.3.8. Include a link to the Cookbook [explain]
  • 1.3.9. Indicate related APIs
  • 1.3.10. Include the API version number [explain]
  • 1.3.11. Indicate deprecated API versions with the @deprecated tag
  • 1.3.12. Consider including a copyright notice [explain]
  • 1.3.13. Avoid lengthy package overviews
  • 1.3.14. Do not include implementation packages into published Javadoc

2. Type Design Checklist

2.1. General

  • 2.1.1. Ensure each type has a single, well-defined purpose
  • 2.1.2. Ensure types represent domain concepts, not design abstractions
  • 2.1.3. Limit the number of types [explain]
  • 2.1.4. Limit the size of types
  • 2.1.5. Follow consistent design patterns when designing related types
  • 2.1.6. Favor multiple (private) implementations over multiple public types
  • 2.1.7. Favor interfaces over class inheritance for expressing simple commonality in behavior [explain]
  • 2.1.8. Favor abstract classes over interfaces for decoupling API from implementation [explain]
  • 2.1.9. Favor enumeration types over constants
  • 2.1.10. Consider generic types [explain]
  • 2.1.11. Consider placing constraints on the generic type parameter
  • 2.1.12. Consider using interfaces to achieve similar effect to multiple inheritance
  • 2.1.13. Avoid designing for client extension
  • 2.1.14. Avoid deep inheritance hierarchies
  • 2.1.15. Do not use public nested types
  • 2.1.16. Do not declare public or protected fields
  • 2.1.17. Do not expose implementation inheritance to the client

2.2. Naming

  • 2.2.1. Use a noun or a noun phrase
  • 2.2.2. Use PascalCasing
  • 2.2.3. Capitalize only the first letter of acronyms [explain]
  • 2.2.4. Use accurate names for purpose of the type [explain]
  • 2.2.5. Reserve the shortest, most memorable name for the most frequently used type
  • 2.2.6. End the name of all exceptions with the word “Exception” [explain]
  • 2.2.7. Use singular nouns (Color, not Colors) for naming enumerated types [explain]
  • 2.2.8. Consider longer names [explain]
  • 2.2.9. Consider ending the name of derived class with the name of the base class
  • 2.2.10. Consider starting the name of an abstract class with the word “Abstract” [explain]
  • 2.2.11. Avoid abbreviations
  • 2.2.12. Avoid generic nouns
  • 2.2.13. Avoid synonyms
  • 2.2.14. Avoid type names used in related APIs
  • 2.2.15. Do not use names which differ in case alone
  • 2.2.16. Do not use prefixes
  • 2.2.17. Do not prefix interface names with “I”
  • 2.2.18. Do not use type names used in Java core packages [explain]

2.3. Classes

  • 2.3.1. Minimize implementation dependencies
  • 2.3.2. List public methods first [explain]
  • 2.3.3. Declare implementation methods private
  • 2.3.4. Define at least one public concrete class which extends a public abstract class [explain]
  • 2.3.5. Provide adequate defaults for the basic usage scenarios
  • 2.3.6. Design classes with strong invariants
  • 2.3.7. Group stateless, accessor and mutator methods together
  • 2.3.8. Keep the number of mutator methods at a minimum
  • 2.3.9. Consider providing a default no-parameter constructor [explain]
  • 2.3.10. Consider overriding equals(), hashCode() and toString() [explain]
  • 2.3.11. Consider implementing Comparable [explain]
  • 2.3.12. Consider implementing Serializable [explain]
  • 2.3.13. Consider making classes re-entrant
  • 2.3.14. Consider declaring the class as final [explain]
  • 2.3.15. Consider preventing class instantiation by not providing a public constructor [explain]
  • 2.3.16. Consider using custom types to enforce strong preconditions as class invariants
  • 2.3.17. Consider designing immutable classes [explain]
  • 2.3.18. Avoid static classes
  • 2.3.19. Avoid using Cloneable
  • 2.3.20. Do not add instance members to static classes
  • 2.3.21. Do not define public constructors for public abstract classes clients should not extend [explain]
  • 2.3.22. Do not require extensive initialization

2.4. Interfaces

  • 2.4.1. Provide at least one implementing class for every public interface
  • 2.4.2. Provide at least one consuming method for every public interface
  • 2.4.3. Do not add methods to a released public Java interface
  • 2.4.4. Do not use marker interfaces
  • 2.4.5. Do not use public interfaces as a container for constant fields

2.5. Enumerations

  • 2.5.1. Consider specifying a zero-value (“None” or “Unspecified”, etc) for enumeration types
  • 2.5.2. Avoid enumeration types with only one value
  • 2.5.3. Do not use enumeration types for open-ended sets of values
  • 2.5.4. Do not reserve enumeration values for future use
  • 2.5.5. Do not add new values to a released enumeration

2.6. Exceptions

  • 2.6.1. Ensure that custom exceptions are serialized correctly
  • 2.6.2. Consider defining a different exception class for each error type
  • 2.6.3. Consider providing extra information for programmatic access
  • 2.6.4. Avoid deep exception hierarchies
  • 2.6.5. Do not derive custom exceptions from other than Exception and RuntimeException
  • 2.6.6. Do not derive custom exceptions directly from Throwable
  • 2.6.7. Do not include sensitive information in error messages

2.7. Documentation

  • 2.7.1. Provide type overview for each type
  • 2.7.2. Follow standard Javadoc conventions
  • 2.7.3. Begin with a short, one sentence summary of the type
  • 2.7.4. Provide enough detail to help decide whether and how to use the type
  • 2.7.5. Explain how to instantiate the type
  • 2.7.6. Provide code sample to illustrate the main use case for the type
  • 2.7.7. Include links to relevant sections in the Developer Guide
  • 2.7.8. Include links to relevant sections in the Cookbook
  • 2.7.9. Indicate related types
  • 2.7.10. Indicate deprecated types using the @deprecated tag
  • 2.7.11. Document class invariants
  • 2.7.12. Avoid lengthy type overviews
  • 2.7.13. Do not generate Javadoc for private fields and methods

3. Method Design Checklist

3.1. General

  • 3.1.1. Make sure each method does only one thing
  • 3.1.2. Ensure related methods are at the same level of granularity
  • 3.1.3. Ensure no boilerplate code is needed to combine method calls
  • 3.1.4. Make all method calls atomic
  • 3.1.5. Design protected methods with the same care as public methods
  • 3.1.6. Limit the number of mutator methods
  • 3.1.7. Design mutators with strong invariants
  • 3.1.8. Favor generic methods over a set of overloaded methods
  • 3.1.9. Consider generic methods
  • 3.1.10. Consider method pairs, where the effect of one is reversed by the other
  • 3.1.11. Avoid “helper” methods
  • 3.1.12. Avoid long-running methods
  • 3.1.13. Avoid forcing callers to write loops for basic scenarios
  • 3.1.14. Avoid “option” parameters to modify behavior
  • 3.1.15. Avoid non-reentrant methods
  • 3.1.16. Do not remove a released method
  • 3.1.17. Do not deprecate a released method without providing a replacement
  • 3.1.18. Do not change the signature of a released method
  • 3.1.19. Do not change the observable behavior of a released method
  • 3.1.20. Do not strengthen the precondition of an already released API method
  • 3.1.21. Do not weaken the postcondition of an already released API method
  • 3.1.22. Do not add new methods to released interfaces
  • 3.1.23. Do not add a new overload to a released API

3.2. Naming

  • 3.2.1. Begin names with powerful, expressive verbs
  • 3.2.2. Use camelCasing
  • 3.2.3. Reserve “get”, “set” and “is” for JavaBeans methods accessing local fields
  • 3.2.4. Use words familiar to callers
  • 3.2.5. Stay close to spoken English
  • 3.2.6. Avoid abbreviations
  • 3.2.7. Avoid generic verbs
  • 3.2.8. Avoid synonyms
  • 3.2.9. Do not use underscores
  • 3.2.10. Do not rely on parameter names or types to clarify the meaning of the method

3.3. Parameters

  • 3.3.1. Choose the most precise type for parameters
  • 3.3.2. Keep the meaning of the null parameter value consistent across related method calls
  • 3.3.3. Use consistent parameter names, types and ordering in related methods
  • 3.3.4. Place output parameters after the input parameters in the parameter list
  • 3.3.5. Provide overloaded methods with shorter parameter lists for frequently used default parameter values
  • 3.3.6. Use overloaded methods for operations with the same semantics on unrelated types
  • 3.3.7. Favor interfaces over concrete classes as parameters
  • 3.3.8. Favor collections over arrays as parameters and return values
  • 3.3.9. Favor generic collections over raw (untyped) collections
  • 3.3.10. Favor enumeration types over Boolean or integer types
  • 3.3.11. Favor putting single object parameters ahead of collection or array parameters
  • 3.3.12. Favor putting custom type parameters ahead of standard Java type parameters
  • 3.3.13. Favor putting object parameters ahead of value parameters
  • 3.3.14. Favor interfaces over concrete classes as return types
  • 3.3.15. Favor empty collections over null return values
  • 3.3.16. Favor returning values which are valid input for related methods
  • 3.3.17. Consider making defensive copies of mutable parameters
  • 3.3.18. Consider storing weak object references internally
  • 3.3.19. Avoid variable length parameter lists
  • 3.3.20. Avoid long parameter lists (more than 3)
  • 3.3.21. Avoid putting parameters of the same type next to each other
  • 3.3.22. Avoid out or in-out method parameters
  • 3.3.23. Avoid method overloading
  • 3.3.24. Avoid parameter types exposing implementation details
  • 3.3.25. Avoid Boolean parameters
  • 3.3.26. Avoid returning null
  • 3.3.27. Avoid return types defined in unrelated APIs, except core Java APIs
  • 3.3.28. Avoid returning references to mutable internal objects
  • 3.3.29. Do not use integer parameters for passing predefined constant values
  • 3.3.30. Do not reserve parameters for future use
  • 3.3.31. Do not change the parameter naming or ordering in overloaded methods

3.4. Error handling

  • 3.4.1. Throw exception only for exceptional circumstances
  • 3.4.2. Throw checked exceptions only for recoverable errors
  • 3.4.3. Throw runtime exceptions to signal API usage mistakes
  • 3.4.4. Throw exceptions at the appropriate level of abstraction
  • 3.4.5. Perform runtime precondition checks
  • 3.4.6. Throw NullPointerException to indicate a prohibited null parameter value
  • 3.4.7. Throw IllegalArgumentException to indicate an incorrect parameter value other than null
  • 3.4.8. Throw IllegalStateException to indicate a method call made in the wrong context
  • 3.4.9. Indicate in the error message which parameter violated which precondition
  • 3.4.10. Ensure failed method calls have no side effects
  • 3.4.11. Provide runtime checks for prohibited API calls made inside callback methods
  • 3.4.12. Favor standard Java exceptions over custom exceptions
  • 3.4.13. Favor query methods over exceptions for predictable error conditions

3.5. Overriding

  • 3.5.1. Use the @Override annotation
  • 3.5.2. Preserve or weaken preconditions
  • 3.5.3. Preserve or strengthen postconditions
  • 3.5.4. Preserve or strengthen the invariant
  • 3.5.5. Do not throw new types of runtime exceptions
  • 3.5.6. Do not change the type (stateless, accessor or mutator) of the method

3.6. Constructors

  • 3.6.1. Minimize the work done in constructors
  • 3.6.2. Set the value of all properties to reasonable defaults
  • 3.6.3. Use constructor parameters only as a shortcut for setting properties
  • 3.6.4. Validate constructor parameters
  • 3.6.5. Name constructor parameters the same as corresponding properties
  • 3.6.6. Follow the guidelines for method overloading when providing multiple constructors
  • 3.6.7. Favor constructors over static factory methods
  • 3.6.8. Consider a no parameter default constructor
  • 3.6.9. Consider a static factory method if you don’t always need a new instance
  • 3.6.10. Consider a static factory method if you need to decide the precise type of object at runtime
  • 3.6.11. Consider a static factory method if you need to access external resources
  • 3.6.12. Consider a builder when faced with many constructor parameters
  • 3.6.13. Consider private constructors to prevent direct class instantiation
  • 3.6.14. Avoid creating unnecessary objects
  • 3.6.15. Avoid finalizers
  • 3.6.16. Do not throw exceptions from default (no-parameter) constructors
  • 3.6.17. Do not add a constructor with parameters to a class released without explicit constructors

3.7. Setters and getters

  • 3.7.1. Start the name of methods returning non-Boolean properties with “get”
  • 3.7.2. Start the name of methods returning Boolean properties with “is”, “can” or similar
  • 3.7.3. Start the name of methods updating local properties with “set”
  • 3.7.4. Validate the parameter of setter methods
  • 3.7.5. Minimize work done in getters and setters
  • 3.7.6. Consider returning immutable collections from a getter
  • 3.7.7. Consider implementing a collection interface instead of a public property of a collection type
  • 3.7.8. Consider read-only properties
  • 3.7.9. Consider making a defensive copy when setting properties of mutable types
  • 3.7.10. Consider making a defensive copy when returning properties of mutable type
  • 3.7.11. Avoid returning arrays from getters
  • 3.7.12. Avoid validations which cannot be done with local knowledge
  • 3.7.13. Do not throw exceptions from a getter
  • 3.7.14. Do not design set-only properties (with a public setter but no public getter)
  • 3.7.15. Do not rely on the order properties are set

3.8. Callbacks

  • 3.8.1. Design with the strongest possible precondition
  • 3.8.2. Design with the weakest possible postcondition
  • 3.8.3. Consider passing a reference to the object initiating the callback as the first parameter of the callback method
  • 3.8.4. Avoid callbacks with return values

3.9. Documentation

  • 3.9.1. Provide Javadoc comments for each method
  • 3.9.2. Follow standard Javadoc conventions
  • 3.9.3. Begin with a short, one sentence summary of the method
  • 3.9.4. Indicate related methods
  • 3.9.5. Indicate deprecated methods using the @deprecated tag
  • 3.9.6. Indicate a replacement for any deprecated methods
  • 3.9.7. Avoid lengthy comments
  • 3.9.8. Document common behavioral patterns
  • 3.9.9. Document the precise meaning of a null parameter value (if permitted)
  • 3.9.10. Document the type of the method (stateless, accessor or mutator)
  • 3.9.11. Document method preconditions
  • 3.9.12. Document the performance characteristics of the algorithm implemented
  • 3.9.13. Document remote method calls
  • 3.9.14. Document methods accessing out-of-process resources
  • 3.9.15. Document which API calls are permitted inside callback methods
  • 3.9.16. Consider unit tests for illustrating the behavior of the method

Creative Commons Licence
This work is licensed under a Creative Commons Attribution-ShareAlike 2.5 Canada License.

Writing helpful API documentation

Many APIs are surprisingly poorly documented considering they are supposed to be “documented programmatic interfaces”. Developers prefer writing code over documentation, rarely showing the same enthusiasm and thoughtfulness for the latter. Some developers claim they write self-documenting code. Others like to point out that “nobody reads the documentation”. Such excuses create a vicious circle: we dislike documenting because we are not skilled enough at it, and our skills do not improve because we pass up opportunities to practice them. We need to make a conscientious effort to break this trend.

How developers use documentation

We already proved that self-documenting APIs are an unreachable ideal. Let’s refute the “Nobody reads the documentation” claim as well. In Two studies of opportunistic programming, Joel Brandt and his colleagues report that, on average, participants in their studies spend 19% of their time foraging the Internet for information. Web access logs to Adobe Flex documentation show 24,293 programmers making 101,289 queries during the month of July 2008 alone. Are these numbers we expect from documentation nobody reads? Then why is the “Nobody reads the documentation” misconception so widespread? Figure 1 compares the percentage of developers skimming the documentation, focusing only on prominent text and headers (“Skim”) to those systematically reading the pages line-by-line (“Line-by-line”). Another axis compares the number of those starting with the provided PDF overview (“PDF overview”) to those preferring to go straight to the reference manual, expecting it to be self-explanatory (“Self-explanatory”).

Figure 1: How developers use documentation

The conclusion is clear: documentation is referenced, not read. If reading the documentation cover-to-cover line-by-line is the only way to find information, developers will not find it, creating the false impression that they don’t read the documentation. Let’s take a real-life example from stackoverflow.com as illustration:

Question

With java.sql.ResultSet is there a way to get a column’s name as a String by using the column’s index? I had a look through the API doc but I can’t find anything.

Answer

See ResultSetMetaData:

ResultSet rs = stmt.executeQuery("SELECT a, b, c FROM TABLE2");
ResultSetMetaData rsmd = rs.getMetaData();
String name = rsmd.getColumnName(1);

This developer skimmed the 139 methods of the java.sql.ResultSet class, looking for getColumnName() or something similar, but skipped getMetaData() because it looked irrelevant. He preferred asking for help on the Internet to closely inspecting all 139 methods, which illustrates our main point: developers don’t want documentation; they want assistance with the task at hand.

In Documentation Usability: A Few Things I’ve Learned from Watching Users, Tom Johnson writes:

Invariably when I ask people how they prefer to learn new software, they respond, ‘I like someone to show me,’ or ‘I like to play around in the system and then ask a colleague if I get stuck.’ I’ve yet to hear the response, ‘I like long software manuals with lots of text in small print.’ Usually people that prefer this also like to slam their fingers in car doors and chew on tin foil.

Answering questions like a friend

We should act like a friend assisting the programmer when writing API documentation. Someone working with the Java Message Service (JMS) asks this question on stackoverflow.com: “What is the purpose of a JMS session? Why isn’t a connection alone sufficient to exchange JMS messages between senders and receivers?” Sounds like a legitimate question, right? How many developers would ask a friend, “Please enumerate the methods of the class javax.jms.Session in alphabetical order”? None? Yet often this is the only question the Javadoc answers after you click a class name!

If you document like a friend, you provide package and class overviews, answering useful questions like: “What is the purpose of this package?” “What can I use this class for?” “Are there any limitations?” “This is not quite what I need, what are some related classes?” Would you make a friend read dozens of pages of API minutiae just to infer the answers to such simple questions?

How do we know what questions to answer? In “Specifying behavior”, we explained that three-quarters of API questions are about behavior. Consequently, documenting preconditions, postconditions and invariants alone completes three-quarters of API documentation! We can also invite people unfamiliar with the API to review it and record the questions and answers in a FAQ. FAQs are easy to create and extend. API documentation is never quite complete and FAQs capture missing facts, clarify ambiguities, or document known issues. Most FAQs are temporary, kept only until the other parts of the documentation are updated. Long collections of FAQs are less useful.
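
For example, here is a minimal sketch of what such behavioral documentation can look like in a Javadoc comment; the Account class and its transferTo() method are hypothetical:

/**
 * Transfers the given amount from this account into the destination account.
 *
 * Precondition: amount is greater than zero and does not exceed getBalance().
 * Postcondition: getBalance() decreases by amount and destination.getBalance()
 * increases by the same amount.
 * Invariant: the combined balance of the two accounts does not change.
 *
 * @throws IllegalArgumentException if amount violates the precondition
 */
public void transferTo(Account destination, long amount) { ... }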

Although we might think that callers don’t care, concepts and abstractions used in the design of the API, as well as the intent behind choosing them, need to be explained. As one developer eloquently says:

When you’re building a framework, there’s an intent … if you can understand what the intent was, you can often code efficiently, without much friction. If you don’t know what the intent is, you fight the system.

Supporting just-in-time learning

Research shows that developers learn APIs incrementally, interleaving short periods of studying documentation with writing code. Helpful API documentation matches this just-in-time learning pattern, consisting of small, self-contained, heavily cross-referenced sections. Developers spend no more than ten minutes with documentation before returning to code, the studies show. This sets the maximum size of an undivided documentation section at about half a page.

Because developers skim the documentation, each section needs to focus on a single subject and highlight what the subject is. Imagine that you work in customer support. You want to link to a section of the documentation as the answer to a specific customer question. When a section contains primarily irrelevant information, you are wasting the customer’s time. If the first sentence does not read like you are answering the question, you’d be too embarrassed to link to it.

Navigation and search are essential for finding the correct documentation section. Navigation produced by tools like Javadoc has some limitations, evident from the generated alphabetical index. Packages, classes and methods are listed there, but adding non-structural entries like “Performance” or “Thread safety” to the index is not directly supported by the tools. Many developers simply type their questions into an Internet search engine and expect it to find the correct answer. API documentation not available on the Internet or not optimized for this type of searching is less useful.

Illustrating use cases

Answers to “How do I?” questions, the kind of use-case-driven questions tens of thousands of developers ask daily on Internet forums, tend to be more helpful than answers to “What is this?” questions:

The problem is always, when I feel I can’t make progress … when there’s multiple functions or methods or objects together that, individually they makes sense but sort of the whole picture isn’t always clear, from just the docs.

Code snippets are often the most straightforward answer to “How do I?” questions. The “How do I get the JDBC column name?” question above was answered with just three lines of code. If we follow the “consider the perspective of the caller” guideline, we write use-case-driven code samples right at the beginning of the API design process. We can turn these code samples effortlessly into a cookbook, an increasingly popular form of developer documentation, by briefly describing the use case each code snippet illustrates.

The code examples must be exemplary. They should closely follow all relevant programming best practices. A sloppy example “can become more of a hindrance than a benefit when there’s a mismatch between the tacit purpose of the example and the goal of the example user,” warns Martin P. Robillard.

Tutorials serve a similar purpose, but differ in important aspects. They break up the building of the complete example into smaller, more manageable steps. Tutorials are intended to be completed from start to finish. They intend to teach. They are not reference material. Many programmers started learning Windows programming from the famous Scribble tutorial. New tools for recording video on a computer (screencasting) have made video tutorials very popular. Nevertheless, tutorials can be time-consuming to produce and are rarely needed for simple APIs.

Putting it all together

Developers need API documentation for four main activities:

  1. Remind themselves of details deemed not worth remembering
  2. Clarify and extend their existing knowledge
  3. Engage in just-in-time learning of new skills
  4. Experiment with (sample) code

Reference documentation only supports the first two activities. A separate Programmer’s Guide is needed to support just-in-time learning because research shows that developers waste a great deal of time guessing, inspecting and backtracking when learning directly from reference documentation. Unfortunately, the name Programmer’s Guide still evokes a heavy book, which is not what we are advocating. In addition to a high-level overview, a modern Programmer’s Guide contains only topics that don’t fit nicely into either the Reference Manual or the Cookbook format, such as describing new concepts, conventions, design patterns, and so on. With their just-in-time learning style, programmers make frequent jumps between Programmer’s Guide, Reference Manual and Cookbook, provided these are properly cross-referenced. The table below summarizes the various parts of a complete API documentation.

Documentation Part | Programmer Activities | Contents | Organization

Reference Manual
  • Programmer activities: Remembering details; Extension of knowledge
  • Contents: Package overviews; Class overviews; Method descriptions
  • Organization: Structural top-down; Alphabetical index

Cookbook
  • Programmer activities: Experimenting with code; Extension of knowledge; Just-in-time learning
  • Contents: List of use cases; Description of use cases; Code snippets
  • Organization: By use case

Programmer’s Guide
  • Programmer activities: Just-in-time learning; Extension of knowledge
  • Contents: Introductory overview; Glossary; Concepts; Conventions; Design patterns; other…
  • Organization: By subject; Alphabetical index

Code Example or Tutorial
  • Programmer activities: Experimenting with code
  • Contents: Source files; Build or project files; Other resources; Video (optional)
  • Organization: Development project

FAQ or Knowledge Base
  • Programmer activities: Extension of knowledge
  • Contents: Clarifications; Tips; Traps; Known issues
  • Organization: Questions and answers

Conclusion

It is hard to over-emphasize the importance of good API documentation. Even exceptionally well designed APIs can be frustrating to use if poorly documented. On the other hand, the only way to improve an existing API, without a complete and expensive redesign, is to improve the documentation. API documentation is developer documentation. Since nearly all developer documentation we write is for internal use, we often compromise on quality. It is crucial to understand that public APIs need to be treated differently. The target audience includes external developers or system integrators, who are also customers. The same quality requirements apply as to customer documentation like administration guides or end-user manuals.

Creative Commons Licence
This work is licensed under a Creative Commons Attribution-ShareAlike 2.5 Canada License.

Anticipating evolution

API designers need to resolve an apparent paradox: how to keep APIs virtually unchanged yet respond to ever-changing customer requirements. This is a more intricate skill than simply applying specific API evolution techniques. It can be compared to a chess master’s ability to anticipate several upcoming moves of a game. Just like beginner chess players, we start by learning the specific API evolution techniques, but we become true experts when we are able to plan ahead for at least a couple of API releases. We are more likely to design long-lasting, successful APIs if we master this skill.

Let’s start with the fundamental rule of API evolution: existing clients must work with a new product release without any changes, not even a recompilation. While breaking changes can be tolerated in internal code, they are prohibited in public APIs. We must either limit ourselves to binary compatible changes or keep the old API unchanged while introducing a new API in parallel, a method called API versioning.

Maintaining backwards compatibility

Backwards compatible changes are preferable because clients can upgrade smoothly and without any human intervention, taking advantage of new features at their convenience. Conversely, API versioning demands an explicit decision to upgrade because code changes are required. Clients frequently choose to defer upgrades, requiring a long period of support for multiple API versions. We should plan to evolve APIs primarily through backwards compatible changes. We should avoid API versioning if possible.

Anticipating evolution means choosing designs which allow the largest number of backwards compatible changes. For example, C++ developers know that adding a field to a C++ class changes its size and breaks binary compatibility with client code into which the size was hard-coded by the compiler. Similarly, adding a virtual method modifies the virtual method table, causing clients to call the wrong virtual functions (see Listing 1). Because the need for new fields and methods is likely to arise, smart designers move all fields and virtual methods into a hidden implementation class (see Listing 3), leaving only public methods and a single private pointer in the public class (see Listing 2):

Listing 1: Original API class design is hard to evolve

#include <vector>  //exposed direct dependency on STL
#include "Node.h"  //exposed implementation class Node
class OriginalClass {

public:
	int PublicMethod(...);

protected:
	std::vector<Node> children; 

	// Adding a field modifies the size, breaks compatibility
	int count; 

	// Adding a method modifies the vtable, breaks compatibility
	virtual void ProtectedMethod(...);
};

Listing 2: New API class design using the Façade pattern

class ImplementationClass; //declares unknown implementation class

class FacadeClass {

public:
	int PublicMethod(...); 

private:
	ImplementationClass *implementation; //size of a pointer
};

Listing 3: The implementation details are never exposed to the client

#include <vector>  //OK, client code never includes it
#include "Node.h"  //OK, client code never includes it

class ImplementationClass {

public:
	int PublicMethod(...);

protected:
	std::vector<Node> children; 

	//OK, client never instantiates directly
	int count; 

	//OK, the client has no direct access to the vtable
	virtual void ProtectedMethod(...);
};

Binary compatible changes are different depending on platform. Adding a private field or a virtual method is a breaking change in C++, but a backwards compatible change in Java. As one of our teams recently discovered, extending SOAP Web Services by adding an optional field is a compatible change in JAX-WS (Java) but a breaking change in .Net. Providing lists of compatible changes for each platform is outside the scope of this document; this information can be found on the Internet. For example, the Java Language Specification states the binary compatibility requirements and Eclipse.org gives practical advice on maintaining binary compatibility in Java. The KDE TechBase is a good starting point for developers interested in C++ binary compatibility.

While we are comparing platforms, we should mention that standard C is preferable to C++ for API development. Unlike C, C++ does not have a standard Application Binary Interface (ABI). As a result, evolving multi-platform C++ APIs while maintaining binary compatibility can be particularly challenging.

Keeping APIs small and hiding implementation details help maintain backwards compatibility. The less we expose to the clients, the better. Unfortunately, compatibility requirements also extend to implementation details inadvertently leaked into the API. If this happens, we cannot modify the implementation without using API versioning. Carefully hiding implementation details prevents this problem.

We can break backwards compatibility (without modifying method signatures) by changing the behavior. For example, if a method always returned a valid object and it is modified so that it may also return null, we can reasonably expect that some clients will fail. Maintaining the functional compatibility of APIs is a crucial requirement, one that requires even more care and planning than maintaining binary compatibility.

The only backwards compatible behavior changes are weakened preconditions or strengthened postconditions. Think of it as a contractual agreement. Preconditions specify what we ask from the client. We may ask for less, but not more. Postconditions specify what we agreed to provide. We may provide more, but not less. For example, exceptions are preconditions (we expect clients to handle them). It is not allowed to throw new exceptions from existing methods. If a method is an accessor, a part of its postcondition is a guarantee that the method does not change internal state. We cannot convert accessors into mutators without breaking the clients. The invariant is part of the method’s postcondition and should only be strengthened.
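
A minimal Java sketch of these rules in action; the Customer type, its UNKNOWN constant and the index field are hypothetical:

// Release 1.0: null is prohibited (precondition), the result may be null (postcondition)
public Customer find(String name) {
    if (name == null) throw new NullPointerException("name must not be null");
    return index.get(name); // may return null
}

// Release 1.1, backwards compatible: the precondition is weakened (null is now
// tolerated, we ask for less) and the postcondition is strengthened (the result
// is never null, we promise more)
public Customer find(String name) {
    if (name == null) return Customer.UNKNOWN;
    Customer found = index.get(name);
    return (found != null) ? found : Customer.UNKNOWN;
}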

API behavior changes are likely to go undetected since developers working with implementation code often do not realize the full impact of their modifications. When we talked about specifying behavior, we already noted the importance of explicitly stating the preconditions, postconditions and invariants, as well as providing automated tests for detecting inadvertent modifications. Now we see that those same practices also help maintain functional compatibility as the API evolves.

SPIs (Service Provider Interfaces) evolve quite differently from APIs because responsibilities of the client and the SPI implementation are often reversed. APIs provide functionality to clients, while SPIs define frameworks into which clients integrate. Clients usually call methods defined in APIs, but often implement methods defined in SPIs. We can add a new method to an interface without breaking APIs, but not without breaking SPIs. The way pre- and postconditions can evolve is often reversed in SPIs. We can strengthen preconditions (this is what the SPI guarantees) and weaken postconditions (this is what we ask from the client to provide) without breaking clients. The differences between APIs and SPIs are not always clear. Adding simple callback interfaces will not turn APIs into SPIs, but callbacks evolve like SPI interfaces.

Surprisingly, we need to worry less about source compatibility, which requires that clients compile without code changes. While binary and source compatibility do not fully overlap, all but a few binary compatible changes are also source compatible. Examples of exceptions are adding a class to a package or a method to a class in Java. These are binary compatible changes, but if the client imports the whole package and also references a class with the same name from another package, compilation fails due to name collision. If a derived class declares a method with the same name as a method added to the base class, we have a similar problem. Source incompatibility issues are rare with binary-compatible APIs and require few changes in client code.

If we focus too much on source compatibility, we increase the risk of breaking binary compatibility, since not all source compatible changes are binary compatible. For example, if we change a parameter type from HashMap (derived type) to Map (base type), the client code still compiles. However, when attempting to run an old client, the Java runtime looks for the old method signature and cannot find it. The risk of breaking binary compatibility is real because during their day-to-day work, developers are more concerned about breaking the build than about maintaining binary compatibility.
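
A small Java illustration of this trap; the export() method is made up:

// Released signature, hard-coded into compiled client binaries as
// export(Ljava/util/HashMap;)V
public void export(HashMap<String, String> properties) { ... }

// "Harmless" change to the more general type: existing client source still
// compiles, but an already compiled client fails at runtime with
// NoSuchMethodError, because the original signature no longer exists
public void export(Map<String, String> properties) { ... }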

Versioning

API versioning cannot be completely avoided. Some unanticipated requirements are impossible to implement using backwards compatible changes. Software technologies we depend on do not always evolve in a backwards compatible fashion (just ask any Visual Basic developer). API quality may also degrade over time if our design choices are restricted to backwards compatible changes. From time to time, we need to make major changes in order to upgrade, restructure, or improve APIs. Versioning is a legitimate method of evolving APIs, but it needs to be used sparingly since it demands more work from both clients and API developers.

Anticipating evolution in the case of explicit versioning means ensuring that an incompatible API version is also a major API version. We should deliberately plan for it to avoid being forced by unexpected compatibility issues. The upgrade effort must be made worthwhile for clients by including valuable new functionality. We should also use this opportunity to make all breaking changes needed to ensure smooth backwards compatible evolution over the several following releases.

API versions must coexist at runtime. How we accomplish this is platform-dependent. Where available, we should use the built-in versioning capabilities; .Net assemblies have them and so does OSGi in Java, although OSGi is not officially part of the Java platform. If there is no built-in versioning support, the two API versions should reside in different namespaces, to permit the same type and method names in both versions. The old version keeps the original namespace, while the new version gets a namespace with an added version identifier. The API versions should also be packaged into separate dynamic link libraries, assemblies, or archives. Since C does not support namespaces, separate DLLs are needed to keep the same method names. We should make sure we change the service end point (URL) when versioning Web Services APIs, since all traffic goes through the same HTTP port. We should also change the XML namespace used in the WSDL. This ensures that client stubs generated from different WSDL versions can coexist, each in its own namespace.
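
For instance, a Java API versioned this way might look like the sketch below, shown as two separate source files in two separate archives; the package, type and archive names are made up:

// File 1 – old version: keeps its original package, ships in reports-api-1.x.jar
package com.example.reports;

public interface ReportService {
    Report run(String reportName);
}

// File 2 – new, incompatible version: the package carries a version identifier
// and the code ships in a separate archive, reports-api-2.x.jar
package com.example.reports.v2;

public interface ReportService {
    Report execute(ReportRequest request);
}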

It is often advantageous to re-implement the old API version using the new one. Keeping two distinct implementations means code bloat and increased maintenance effort for years. If the new API version is functionally equivalent to the old one, implementing a thin adaptor layer should not require much coding and testing. As an added benefit, the old API can take advantage of some of the improvements in the new code, such as bug fixes and performance optimizations.
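
A sketch of such a thin adaptor layer, continuing the hypothetical ReportService example from above (Report.fromV2() and ReportRequest are likewise made up):

package com.example.reports;

// The old interface is kept, but its implementation simply delegates to the new API
public class ReportServiceAdapter implements ReportService {

    private final com.example.reports.v2.ReportService delegate;

    public ReportServiceAdapter(com.example.reports.v2.ReportService delegate) {
        this.delegate = delegate;
    }

    @Override
    public Report run(String reportName) {
        // translate the old-style call into the equivalent new-style call
        return Report.fromV2(delegate.execute(new ReportRequest(reportName)));
    }
}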

Conclusion

Designing for evolution can be challenging and time consuming. It adds additional constraints to API design which frequently conflict with other design requirements. It is essentially a “pay now versus pay later” alternative. We can spend some effort up front designing easy-to-evolve APIs or we can spend more effort later when we need to evolve the API. Nobody can reasonably predict how an API is likely to evolve; hence nobody can claim with authority that one approach is better than the other. It is thought provoking, however, that nobody has yet come forward saying they regretted making APIs easier to evolve.

Creative Commons Licence
This work is licensed under a Creative Commons Attribution-ShareAlike 2.5 Canada License.

Making it safe

Being safe means avoiding the risk of pain, injury, or material loss. A safety feature is a design element added to prevent inadvertent misuse of dangerous equipment. For example, one pin of the North American electric plug is intentionally wider to prevent incorrect insertion into a socket. But it was Toyota who first generalized the principle of poka-yoke (“mistake avoidance”), making it an essential part of its world-renowned manufacturing process. When similar principles of preventing, avoiding, or correcting human errors are applied to API design, the number of software defects is reduced and programmer productivity improves. Rico Mariani calls this the “pit of success”:

The Pit of Success: in stark contrast to a summit, a peak, or a journey across a desert to find victory through many trials and surprises, we want our customers to simply fall into winning practices by using our platform and frameworks. To the extent that we make it easy to get into trouble we fail.

Preventing unsafe use

Engineers place all dangerous equipment and materials – high voltage, extreme temperature, or poisonous chemicals – safely behind locked doors or inside sealed casings. Programming languages offer controlled access to classes and methods, but time and again we forget to utilize it. We leave public implementation classes in the API package. We forget to declare methods users shouldn’t call as private. We rarely disallow class construction, and seldom declare classes we don’t want callers to extend as final. We declare public interfaces even when we cannot safely accept implementations other than our own. These oversights are the equivalent of leaving the boiler room unlocked. When inadvertent access to implementation details is possible, accidents are likely to happen.

Our next line of defense is type checking. In a nutshell, type checking attempts to catch programming mistakes at the language level, either at compile time in statically typed languages or at run time in dynamically typed languages. If you are interested in the details of what type checking can or cannot do for you in various languages, read Chris Smith’s excellent “What to know before debating type systems”. For various theoretical and practical reasons, type checking cannot catch all usage errors. It would be ideal if every statically typed API call that compiles also executed safely, but present-day compilers are simply not sophisticated enough to make this a reality. However, this does not mean that we should not take advantage of type checks where we can. We may be stating the obvious, yet we often see APIs which are not as type safe as they could be. The ObjectOutputStream class from the Java I/O library declares the

public final void writeObject(Object obj) throws IOException

method which throws an exception if the argument is not Serializable. The alternative method signature

public final void writeObject(Serializable obj) throws IOException

could turn this runtime verification into a compile time check.

Every time a method only works for a small subset of all possible parameter values, we can make it safer by introducing a more restrictive (read: safer) parameter type. String, integer, and map parameter types especially deserve close examination, because we often use these versatile types unsafely in programming. We take advantage of the fact that practically every other type can be converted into a string or represented as a map, and that integers can be many more things than just numbers. This may be reasonable or even necessary in implementation code, where we often need to call low-level library functions and where we control both caller and callee. APIs are, yet again, special. API safety is very important and we need to consider design trade-offs accordingly.

When evaluating design trade-offs, it helps to understand that we are advocating replacing method preconditions with type invariants. This moves all safety-related program logic into a single location, the new type implementation, and relies on automatic type checking to ensure API safety everywhere else. If it removes strong and complex preconditions from multiple methods, it is more likely to be worth the effort and additional complexity. For example, we recommend passing URLs as URL objects instead of strings. Many programming languages offer a built-in URL type precisely because the rules governing which strings are valid URLs are complicated. The obvious trade-off is that callers need to construct a URL object when the URL is available as a string.

Weighing type safety against complexity is a lot like comparing apples and oranges: we must rely on our intuition, use common sense, and get lots of user feedback. It is worth remembering that API complexity is measured from the perspective of the caller. It is difficult to tell how much the introduction of a custom type increases complexity without writing code for the use cases. Some use cases may become more complex while others may stay the same or even become simpler. In the case of the URL object, handling string URLs is more complex, but returning to a previously visited URL is roughly the same if we keep URL objects in the history list. Using URL objects results in simpler use cases for clients that build URLs from fragments or validate URLs independently from accessing the resource they refer to.
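
A hedged sketch of the trade-off, using the standard java.net.URI type; the openResource() method and Resource type are made up:

// String-based signature: every implementation must re-validate the URL at runtime
public Resource openResource(String url) { ... }

// Type-safe signature: well-formedness is an invariant of java.net.URI,
// so validation happens once, when the caller constructs the URI
public Resource openResource(java.net.URI url) { ... }

// A caller holding a plain string pays a small, explicit conversion cost:
Resource report = service.openResource(java.net.URI.create("https://example.com/reports/42"));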

As a third and final line of defense – since type checking alone cannot always guarantee safe execution – all remaining preconditions need to be verified at run time. Very rarely, performance considerations may dictate that we forgo such runtime checks in low-level APIs, but such cases are the exception. In most cases, returning incorrect results, failing with obscure internal errors, or corrupting persisted data is unacceptable API behavior. Errors resulting from incorrect usage (violated preconditions) should be clearly differentiated from those caused by internal problems and should contain messages clearly describing the mistake made by the caller. A message saying only that the call caused an internal SQL error is not helpful.
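
For illustration, a runtime precondition check with a caller-friendly message might look like this; the schedule() method is hypothetical:

public void schedule(Job job, int priority) {
    if (job == null) {
        throw new NullPointerException("job must not be null");
    }
    if (priority < 1 || priority > 10) {
        throw new IllegalArgumentException(
            "priority was " + priority + ", but must be between 1 and 10 (inclusive)");
    }
    // ... preconditions verified; safe to proceed with scheduling
}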

We should be particularly careful when providing classes for extension because inheritance breaks encapsulation. What does this mean? Protected methods are not a problem. Their safety can be ensured the same way as for public methods. Much bigger issues arise when we allow derived classes to override methods. Overriding is risky because the overriding method may observe inconsistent internal state (known as the “fragile base class problem”) or may make inconsistent updates (known as the “broken contract problem”). In other words, calling otherwise safe public or protected methods from within overridden methods may be unsafe. There is no language mechanism to prevent access to public and protected methods from within overridden methods, so we often need to add additional runtime checks as illustrated below:

public class Job {

    private boolean cancelling = false;

    public void cancel() {
        ...
        cancelling = true;
        onCancel();
        cancelling = false;
        ...
    }

    //Override this to provide custom cleanup when cancelling
    protected void onCancel() {
    }

    public void execute() {
        if (cancelling) throw new IllegalStateException(
            "Forbidden call to execute() from onCancel()");
        ...
    }
}

It is generally safer to avoid designing for class extension whenever possible. Unfortunately, simple callbacks may expose similar safety issues, even though only public methods are accessible from callbacks. In the example above, the runtime check is still needed after we make onCancel() a callback, since execute() is a public method.

Preventing data corruption

A method can only be considered safe if it preserves the invariant and prevents the caller from making inconsistent changes to internal data. The importance of preserving invariants cannot be overstated. Not long ago, a customer who used the LDAP interface to update their ADS directory reported an issue with one of our products. Occasionally the application became sluggish and consumed a lot of CPU cycles for no apparent reason. After lengthy investigations, we discovered that the customer had inadvertently corrupted the directory by making an ADS group a child of itself. We fixed the issue by adding specific runtime checks to our application, but wouldn’t it be safer if the LDAP API didn’t allow you to corrupt the directory in the first place? The Windows administration tools don’t allow this, but since the LDAP interface does, applications still need to watch out for infinite recursions in the group hierarchy.
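
A sketch of the kind of precondition check such an API could perform; the Group type and its methods are hypothetical, and real directory groups may have more than one parent:

public void addChildGroup(Group parent, Group child) {
    // refuse any change that would make a group its own ancestor
    for (Group ancestor = parent; ancestor != null; ancestor = ancestor.getParent()) {
        if (ancestor.equals(child)) {
            throw new IllegalArgumentException("Adding " + child.getName()
                + " under " + parent.getName()
                + " would create a cycle in the group hierarchy");
        }
    }
    parent.addChild(child);
}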

The invariant must be preserved even when methods fail. In the absence of explicit transaction support, all API calls are assumed atomic. When a call fails, no noticeable side effects are expected.

Special care must be taken when storing references to client-side objects internally, as well as when returning internal object references to the client. The client code can unexpectedly modify these objects at any time, creating an invisible and particularly unsafe dependency between the client code (which we know nothing about) and the internal API implementation (which the client knows nothing about). On the other hand, it is safe to store and return references to immutable objects.

If the object is mutable, it is a great deal safer to make defensive copies before storing or returning it rather than relying on the caller to do it for us. The submit() method in the example below makes defensive copies of jobs before placing them into its asynchronous execution queue, which makes it hard to misuse:

JobManager    jobManager  = ...; //initializing
Job           job = jobManager.createJob(new QueryJob());      

//adding parameters to the job
job.addParameter("query.sql", "select * from users");
job.addParameter("query.dal.connection", "hr_db");      

jobManager.submit(job); //submitting a COPY of the job to the queue      

job.addParameter("query.sql", "select * from locations"); //it is safe!
jobManager.submit(job); //submitting a SECOND job!

For the same reason, we should also avoid methods with “out” or “in-out” parameters in APIs, since they directly modify objects declared in client code. Such parameters frequently force the caller to make defensive copies of the objects prior to the method call. The .Net Socket.Select() method usage pattern shown below made Michi Henning frustrated enough to complain about it in his “API Design Matters”:

ArrayList readList  = ...;  // Creating sockets to monitor for reading
ArrayList writeList = ...;  // Creating sockets to monitor for writing
ArrayList errorList = ...;  // Creating sockets to monitor for errors

while(!done) {

    ArrayList readReady  = (ArrayList)readList.Clone();  //making defensive copy
    ArrayList writeReady = (ArrayList)writeList.Clone(); //making defensive copy
    ArrayList errorReady = (ArrayList)errorList.Clone(); //making defensive copy

    Socket.Select(readReady, writeReady, errorReady, 10000);
         // readReady, writeReady, errorReady were modified!
    ...
}

Finally, APIs should be safe to use in multi-threaded code. Sidestepping the issue with a “this API is not thread safe” comment is no longer acceptable. APIs should be either fully re-entrant (all public methods are safe to call from multiple threads), or each thread should be able to construct its own instances to call. Making all methods thread safe may not be the best option if the API maintains state, because deadlocks and race conditions are often difficult to avoid. In addition, performance may suffer while threads wait for access to shared data. A combination of re-entrant methods and individual object instances may be needed for larger APIs, as exemplified by the Java Message Service (JMS) API, where ConnectionFactory and Connection support concurrent access, while Session does not.
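
In client code, the JMS pattern mentioned above looks roughly like the sketch below, using the standard javax.jms API; the connection factory lookup is assumed, and the surrounding method is assumed to declare throws JMSException:

ConnectionFactory factory = lookupConnectionFactory(); // assumed helper, e.g. a JNDI lookup
Connection connection = factory.createConnection();    // safe to share between threads
connection.start();

// A Session is not thread safe: each worker thread creates and uses its own instance
Runnable worker = () -> {
    try {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("jobs"));
        producer.send(session.createTextMessage("hello"));
        session.close();
    } catch (JMSException e) {
        // handle or log the failure
    }
};
new Thread(worker).start();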

Conclusion

Safety has long been neglected in programming in favor of expressive power and performance. Programmers were considered professionals, expected to be competent enough to avoid traps and smart enough to figure out the causes of obscure failures. Programming languages like C or C++ are inherently unsafe because they permit direct memory access. Any C API call – no matter how carefully designed – may fail if memory is corrupted. However, the popularity and wide-scale adoption of Java and .Net clearly signal a change. It appears that developers are demanding safer programming environments. Let’s join this emerging trend by making our APIs safer to use!

Creative Commons Licence
This work is licensed under a Creative Commons Attribution-ShareAlike 2.5 Canada License.

Specifying behavior

In the paper “Six Learning Barriers in End-User Programming Systems”, Andrew J. Ko and his colleagues show that programmers make numerous assumptions when working with unfamiliar APIs, over three-quarters of them about API behavior. While programmers can directly examine type definitions and method signatures, they need to infer behavior from method and parameter names. It is not entirely surprising that many such assumptions turn out to be incorrect. Ko’s paper documents a total of 130 cases when programmers failed to complete the assigned task. In 36 of those cases, the programmers did not succeed in making the API call at all. In a further 38 cases, they were unable to understand why the call behaved differently than expected and what to do to correct it. In another 25 cases, they were unable to successfully combine two or more method calls to solve the problem.

Why self-documenting APIs are rare

Under-specified behavior causes serious usability issues in numerous APIs. Many developers honestly believe in self-documenting APIs, but as we will show, fully self-documenting APIs are an ideal towards which we should aim, rather than a result we can realistically expect to achieve. Despite our very best efforts, subtle and unintuitive behavior is present in most APIs.

Even in seemingly clear-cut cases, figuring out the precise behavior without additional help can be unexpectedly daunting. Take the TeamsIdentifier class shown below as an example:

//Uniquely identifies an entity.
class TeamsIdentifier {

   //Constructs an identifier from a string.
   TeamsIdentifier(String id) {...}

   //Returns the id as a String.
   java.lang.String asString() {...}

   //Convenience method to return this id as an array.
   TeamsIdentifier[] asTeamsIdArray() {...}

   // Returns a copy of the object.
   java.lang.Object clone() {...}

   //Checks if two ids are equal.
   boolean equalsId(TeamsIdentifier id) {...}

   // Intended for hibernate use only.
   java.lang.String getTeamsId() {...}

   boolean equals(java.lang.Object o) {...}
   int hashCode() {...}
   void setTeamsId(java.lang.String id) {...}

   //Returns a string representation of the id.
   java.lang.String toString() {...}
}

It looks straightforward enough, you say. Let’s see if you can answer, in total confidence, the following questions:

Expression                                            True or False?

TeamsIdentifier id1 = new TeamsIdentifier("name");
TeamsIdentifier id2 = new TeamsIdentifier("Name");

id1.equals(id2)                                       ?
id1.equalsId(id2)                                     ?
id1.toString().equals("name")                         ?
id1.getTeamsId().equals("name")                       ?

TeamsIdentifier id = new TeamsIdentifier("a.b.c");
id.asTeamsIdArray().length == 3                       ?

TeamsIdentifier id = new TeamsIdentifier("a:b:c");
id.asTeamsIdArray().length == 3                       ?

Knowing that AssetIdentifier and UserIdentifier both extend TeamsIdentifier, can you answer, again in total confidence, the questions below?

Expression                                            True or False?

AssetIdentifier assetId = new AssetIdentifier("Donald");
UserIdentifier userId = new UserIdentifier("Donald");

assetId.equals(userId)                                ?
assetId.equalsId(userId)                              ?
assetId.toString().equals(userId.getTeamsId())        ?

Of course, we can make sensible assumptions about what the correct behavior should be, but we have to honestly admit that we don’t really know. To find out, we either need to test the API or look at the implementation. Looking at the implementation is rarely a practical option. Learning by trial and error is time-consuming, and it doesn’t tell us which observed behavior is by design as opposed to merely accidental. For example, if we get the same AssetIdentifier object back every time, we might incorrectly assume that we can write id1 == id2 instead of id1.equals(id2). Our program works correctly only until the next version of the API comes out.

We provide a huge service to our users when we remove guesswork from API usage by properly documenting behavior.

Using code for specifying behavior

Code is more concise and precise than words. It is difficult to think of a good reason not to use code for specifying API behavior. We are documenting for developers, who should welcome, and have no problem understanding, code. The above tables document the behavior of TeamsIdentifier and its derived classes when we enter the appropriate True or False values into the second column. You probably noticed that the code in the first column is similar to what we would write for unit tests. In the case of APIs, unit tests are twice as useful because they also document the expected behavior. Some developers call these code snippets assertions, while those familiar with the work of Professor Bertrand Meyer call this particular method of specifying behavior Design by Contract. Starting with version 4.0, the .Net Framework natively supports design by contract, while third-party tools exist for many other programming languages.
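
To make this concrete, here is a hedged sketch of the earlier table expressed as a JUnit test. The expected outcomes are assumptions chosen for illustration, not the documented behavior of the real class:

   import static org.junit.Assert.assertEquals;
   import static org.junit.Assert.assertFalse;

   import org.junit.Test;

   // A behavioral specification for TeamsIdentifier written as unit tests.
   // The expected values below are assumptions for illustration only; the
   // real class may define case sensitivity and parsing rules differently.
   public class TeamsIdentifierBehaviorTest {

      @Test
      public void equalityIsCaseSensitive() {
         TeamsIdentifier id1 = new TeamsIdentifier("name");
         TeamsIdentifier id2 = new TeamsIdentifier("Name");
         assertFalse(id1.equals(id2));      // assumed: comparison is case sensitive
         assertFalse(id1.equalsId(id2));    // assumed: same rule as equals()
      }

      @Test
      public void toStringReturnsThePlainId() {
         TeamsIdentifier id = new TeamsIdentifier("name");
         assertEquals("name", id.toString());   // assumed: no decoration added
      }

      @Test
      public void dotSeparatesIdSegments() {
         TeamsIdentifier id = new TeamsIdentifier("a.b.c");
         assertEquals(3, id.asTeamsIdArray().length);   // assumed: '.' is the separator
      }
   }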

No matter what we call it or what tool we use, we should precisely specify API behavior using code.

Indicating stateless, accessor and mutator methods

The existence of observable internal state is a primary cause of unintuitive behavior, since it allows a method call to modify the result of the next (seemingly unrelated) call. Consider, for example, the stateful algorithm that controls access rights in a multi-user system. Is it possible to discover, from studying the API alone, how moving a document into a different folder affects its access rights? Isn’t it true that this depends not only on the security settings assigned to the document itself and those of the destination folder, but also on the security settings of its parent folder and recursively up to the root folder? Doesn’t it also depend on the user’s assigned roles, group memberships and perhaps on the security model currently in use? All these settings may be accessible via the API, but they alone won’t tell us how the access control algorithm actually works.

Realizing that state prevents us from designing self-documenting APIs, we could be tempted to stick to stateless APIs. While this isn’t always possible, it is an excellent idea to isolate the impact of internal state to the smallest possible part of APIs. We should have as many stateless methods as possible, since their behavior only depends on the parameter values. In object-oriented environments we should also favor immutable objects, which have state that cannot be changed once the objects are created. Fixed state is obviously less predictable than no state, but more predictable than evolving state.

Where we cannot avoid modifiable state, we should group the affected methods into two distinct categories: accessors, which can only read the state, and mutators, which can also change it. Accessors are like gauges on a control panel, and mutators are like switches and buttons. The accessors produce the same result when called a second or third time in a row, while mutators may produce a different result every time. Inserting a call to an accessor into the middle of an existing program is safe, while inserting a mutator may change the behavior of the subsequent API calls, breaking the program’s logic.

We must explicitly tell callers if a method is stateless, an accessor, or a mutator to help them use it correctly. We cannot rely on them guessing correctly or on naming conventions alone. We won’t be able to start all accessor names with “get” or “is” – show() or print() are accessors, as are many other, less obviously named methods. Because mutators are the most challenging, it is a good idea to keep their number to an absolute minimum and pay careful attention to their design.
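
One possible way to make the category explicit is a small set of documented marker annotations. The sketch below is an assumption for illustration, not an existing library, and the service and method names are invented:

   import java.lang.annotation.Documented;

   // Hypothetical marker annotations for making the method category explicit.
   @Documented @interface Stateless {}
   @Documented @interface Accessor {}
   @Documented @interface Mutator {}

   interface AccessRights {}   // placeholder type for the example

   // Illustrative interface using the markers (all names are assumptions).
   interface DocumentSecurityServices {

      /** Depends only on its arguments; no observable state is involved. */
      @Stateless
      String normalizePath(String path);

      /** Reads the current access rights; repeated calls return the same result. */
      @Accessor
      AccessRights effectiveRightsOf(String documentId);

      /** Moves the document; may change the result of subsequent calls. */
      @Mutator
      void moveDocument(String documentId, String targetFolderId);
   }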

Using strong invariants

Not all mutators are equally problematic. The stronger the invariant, the more predictable and intuitive the behavior becomes. The invariant is a set of statements (assertions) about behavior which always hold true, regardless of state. It is essentially guaranteed, predictable behavior. We will illustrate this with an API that helps us cover a geometrical shape with a triangular mesh, as shown in the figure below:

[Figure: a geometric shape covered with a triangular mesh]

Depending on our design, some or all of the following statements may be true after each API call:

  1. The whole geometric area is fully covered with the mesh
  2. All triangles in the mesh are regular (the triangle area is not zero, no two nodes overlap each other, the three nodes don’t lie on the same straight line, etc.)
  3. There are no unconnected nodes
  4. No two triangles overlap each other
  5. Every node lies either inside or on the boundary of the geometric shape
  6. Every edge lies either inside or on the boundary of the geometric shape

The simplest API we can imagine, which requires us to insert and connect nodes directly, cannot guarantee any of this and would be rather difficult to use (remember, you cannot see the mesh when programming with an API!). We intuitively know that an API, which could guarantee all of the above invariants, would be much easier to use, but is such an API feasible? While it is not easy to figure them out, such mutators exist, and they are known as the Delaunay mesh refinement operators. Here are four of them:

Triangle split – splits a triangle into three smaller ones by adding a new node in the middle

Edge split – replaces two adjacent triangles with four smaller ones by splitting the common edge into two halves

Edge flip – changes the shape of two adjacent triangles by flipping the shared edge to the other diagonal of the bounding rectangle

Node nudge – changes the shape of the connected triangles by repositioning a node inside the polygon defined by the neighboring nodes

[Figure: the four Delaunay mesh refinement operators]

Notice how simple it is to describe what each method does? To see the big difference this design makes, try to describe how to correctly refine a mesh by inserting and (re)connecting nodes, and then do it again using the Delaunay operators. Which is easier?
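
A hedged sketch of what an API built around these operators could look like (all type and method names are assumptions for illustration):

   // Sketch only: the four Delaunay-style operators are the only mutators,
   // so every call leaves the invariants listed above intact.
   public interface TriangularMesh {

      // Accessors: read-only views of the current mesh.
      int nodeCount();
      int triangleCount();

      // Mutators: each call keeps the shape fully covered with regular,
      // non-overlapping triangles and leaves no unconnected nodes.
      void splitTriangle(int triangleId);                 // adds a node in the middle
      void splitEdge(int edgeId);                         // two adjacent triangles become four
      void flipEdge(int edgeId);                          // flips the shared edge of two triangles
      void nudgeNode(int nodeId, double dx, double dy);   // repositions a node within its polygon
   }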

Great APIs have strong invariants, but as we just saw, this doesn’t happen by itself, it requires careful design.

Using weak preconditions

Weak preconditions help callers just like strong invariants. If invariants are constraints on the API designer, preconditions are constraints on the caller: conditions which must be met for the call to succeed. From the caller’s perspective, the invariants should be strong and the preconditions weak. In an ideal world, all API calls would succeed and produce correct results for all possible arguments. In the real world, this is either impossible or it conflicts with other design requirements. The trick is to stay as close to the ideal solution as possible.

For example, one of our APIs limits the length of string method parameters to less than 255 characters for efficient database storage and better performance. On the other hand, it would be easier to use without these limitations. Web Services APIs, in general, are infamous for taking complex data structures as arguments, yet they only work when these data structures are appropriately constructed. The documentation rarely states the preconditions explicitly, leading to backbreaking trial-and-error style programming.
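
For illustration, here is a sketch of how such a limit could be documented and enforced as an explicit precondition; the class and field names are assumptions, only the 255-character limit comes from the example above:

   public class AssetMetadata {

      private String description;

      /**
       * Sets the description of the asset.
       *
       * @param description must be non-null and shorter than 255 characters
       *        (a precondition imposed by the underlying database storage)
       * @throws IllegalArgumentException if the precondition is violated
       */
      public void setDescription(String description) {
         if (description == null || description.length() >= 255) {
            throw new IllegalArgumentException(
                  "description must be non-null and shorter than 255 characters");
         }
         this.description = description;
      }

      public String getDescription() {
         return description;
      }
   }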

To sum it up, weak preconditions (or no preconditions) are better than strong ones, and documented preconditions are far preferable to undocumented ones.

Conclusion

Observable state is just one of the many reasons why self-documenting APIs are a largely unreachable ideal. Reentrancy, performance characteristics, extensibility via inheritance, the use of callbacks, caching, clustering and distributed state can all lead to complex, unintuitive behavior. While careful design using strong invariants and weak preconditions can make API behavior more predictable, behavior still needs to be explicitly specified. The recommended way of specifying behavior is with code in the form of unit tests, assertions or contracts.


Choosing memorable names

Choosing good names is half art, half science; part of it is learned from books, the rest comes from experience. After working with APIs for a while, we develop a taste and appreciation for good names. It is comparable to wine tasting: at the beginning, all wines seem to taste the same, but after a while we develop the capacity to detect subtle flavors and to tell great vintages apart from so-so wines. But a sophisticated wine connoisseur doesn’t necessarily know how to make a good wine; for that he needs to learn the technique of winemaking. It is this combination of art and science, intuitive thinking and logical reasoning, which makes naming difficult.

Avoiding naming mistakes

Bad habits are the cause of many common naming blunders. In the early days of computing(1), strict technical limitations forced programmers to write almost indecipherable code. When identifiers were limited to 8 characters and punch cards were only 80 characters wide, abbreviated names – like strcpy or fscanf – were unavoidable. It used to be standard practice to prefix C function names to prevent name conflicts at link time. Underscores(2) and other special characters in names made sense when computer terminals had no separate uppercase and lowercase characters. Hungarian notation is useful for differentiating integers representing genuine numbers (nXXX) from integers representing handles, the equivalent of pointers to complex data structures (hwndXXX – handle to a Window) in languages with fixed type systems and lacking true pointers, such as BASIC or FORTRAN. The name stuck, because developers found it just as incomprehensible as a foreign language (Charles Simonyi, its inventor, was born in Hungary). Today, unlimited identifier lengths, full namespace support, object-oriented programming, and powerful IDEs make these practices unnecessary. We should start our quest for better names by ditching these antiquated and hard-to-read naming conventions.

The next step is to use correct English spelling, grammar, and vocabulary. It is hard enough to memorize APIs; let’s not make users also remember spelling errors, made-up words or other creative uses of language. Automated spell checking turned spelling errors into the least forgivable API design mistakes. US English is the de facto dialect of programming: no matter where we live, we should spell “color” and not “colour” in code. While Printer or Parser are valid English words, appending “-er” to turn a verb into a noun doesn’t always work. We can delete, but there is no “Deleter” in the dictionary(3). The same care should be taken when turning verbs into adjectives: saying that something is “deletable” is incorrect. Finally, we should be aware of the correct word order in composite names: AbstractNamingService sounds better, and is easier to remember, than NamingServiceAbstract, while getNameCaseIgnore is a hopelessly mangled name.

Names are a precious and limited resource, not to be irresponsibly squandered. It is wasteful to use overly generic words, like Manager, Engine, Agent, Module, Info, Item, Container, or Descriptor in names, because they don’t contribute to the meaning: QueryManager, QueryEngine, QueryAgent, QueryModule, QueryInfo, QueryItem, QueryContainer and QueryDescriptor all sound fantastic, but when we see a QueryManager and a QueryEngine together in an API, we have no clue which does what. While synonyms make prose more entertaining, they should be avoided in APIs: remove, delete, erase, destroy, purge or expunge mean essentially the same thing, and API users won’t be able to tell the difference in behavior based on the name alone. Using completely meaningless words in names should be a criminal offense. You would think this never happens, but see whether you recognize an old product name in TeamsIdentifier, or whether CrusadeJDBCSource rings a bell as the fantasy name of a long-forgotten R&D project. The words we choose should also accurately describe what the API does. This also sounds obvious, yet we have seen a BLOB type which is not an actual Binary Large Object and a Set type which is not a proper Set. Such mistakes can only happen if we don’t slow down to think about the names we are choosing.

Finding names first

Finding meaningful names for some API constructs is so difficult that it should be completely avoided. This is not a joke. We should stay away from naming types, methods and parameters after they are designed. It is almost guaranteed that if we throw unrelated fields together into a type, the best name we will find for this concoction is some sort of Info or Descriptor. Even a marathon brainstorming session fails to find a better name for DocumentWrapperReferenceBuilderFactory, because it is undeniably a factory for producing builders, which can generate references to document wrappers (whatever those are). The method VerifyAndCacheItem both verifies and caches an Item (whatever that is) and an IndexTree is a rather odd data structure indeed. On the other hand, when we know our core concepts before we start thinking about the structure of the API, we can rely on the nouns, verbs, and adjectives to guide us through the process. Similar to the “writing the client code first” guideline, “finding the names first” proposes to revise the traditional sequence of design steps in search of a better outcome.

It may be helpful to speak to non-programmers to find out which words they use to talk about the problem domain. Understandably, the idea of collecting words into a glossary without thinking about how and where they will be used in code sounds somewhat counter-intuitive to us, but people in numerous other professions do this exercise regularly. Let’s pretend that we need to add Web 2.0 style tagging to our API, but we don’t know where to start. We look up the corresponding Wikipedia entry and read the first paragraph:

“In online computer systems terminology, a tag is a non-hierarchical keyword or term assigned to a piece of information (such as an internet bookmark, digital image, or computer file). This kind of metadata helps describe an item and allows it to be found again by browsing or searching. Tags are generally chosen informally and personally by the item’s creator or by its viewer, depending on the system.”

We underline the relevant words, group them into categories, and highlight the relationships between them. Where we find synonyms, we highlight the best match and gray out the alternatives. Where there are well-known, long-established names in the domain (Author, User, Content or String), we choose these synonyms over the others:

Verbs             Nouns
assign, choose    tag, keyword, term, metadata, (string)
find, search      bookmark, image, file, piece of information, item, (content)
find, browse      bookmark, image, file, piece of information, item, (content)

It is absurdly early to do this, but if we are asked to sketch a draft API at this point, it may have methods like:

   void assignTag(Content content, String tag);
   Content[] searchByTag(String tag);

Far be it from us to claim that this is a good API or that these are the best possible names. We should continue the process of looking for better alternatives. The example just illustrates that it is possible to find good names without thinking of code, and once we have them, they point towards types and methods we need in the API.

When we have the choice between two or more words to name an object or action, the least generic term is the best. Most method names we come across start with “set”, “get”, “is”, “add”, and a handful of other common verbs. This is a real shame, because more expressive verbs exist in many cases. Instead of setting a tag, we can tag something with it. For example:

Typical                                            Better
document.setTag("Best Practices");                 document.tag("Best Practices");
if(document.getTag().equals("Best Practices"))     if(document.isTagged("Best Practices"))

Shorter names are better, but nowadays the length of names is rarely a serious concern. It works best if we set aside the most powerful nouns as the names of the main types and use longer composite names for methods or secondary types. Then when we build a scheduler, we can call the main interface Scheduler and not JobSchedulingService. If name confusion is a concern, we can use namespaces for clarification, a wiser choice than starting every name with “Job”.
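
As a sketch of that idea (all names below are assumptions), the package supplies the “job scheduling” context, so the types themselves can stay short:

   package com.example.scheduling;   // hypothetical namespace supplying the context

   public interface Scheduler {
      void schedule(Job job, Trigger trigger);
      void cancel(Job job);
   }

   interface Job {}        // placeholder types for the example
   interface Trigger {}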

Longer composite names are more meaningful than short ones which depend on parameter names or types to further clarify their meaning. Parameter names and types may be visible in method signatures, but not when we call the method from code:

Method signature                   Method call
Document.remove(Tag tag);          currentDocument.remove(bestPractices);
Document.removeTag(Tag tag);       currentDocument.removeTag(bestPractices);

Many experts recommend writing self-documenting APIs, but only a few insist that we support writing self-documenting client code. At first, it may look like the API user should be entirely responsible for this, until we realize that he can only name his own variables, while we (the API designers) are choosing the names needed to call the API.

Conclusion

Naming is a complex topic, and its finer details require far more space than this document can offer. With so many different factors influencing naming, it is not easy to give straightforward practical advice, other than to avoid stupid mistakes. The problem domain has the strongest influence, since it is easier to describe good names for a specific API than for APIs in general. This statement is seemingly contradicted by the existence of naming conventions. But aren’t the conventions considered helpful advice? In the most generic sense, they may be. Yet when it comes to choosing descriptive, memorable, and intuitive names, the so-called naming conventions are of limited use, primarily addressing consistency concerns. Developers who closely follow naming conventions are designing consistent APIs, which is important, but not sufficient. We separated striving for consistency, with its naming and design conventions (discussed in a separate installment), from the subtle art of choosing memorable names, precisely because so many developers still believe that there is nothing more to good names than following the conventions.

Notes:

(1) This paragraph is not intended as a complete and accurate description of the history of computing. Platforms and languages evolved differently and had varying limitations. For more details, see comments. (Based on reader feedback. Thank you.)

(2) The underscore example is misplaced here because its use is typically a consistency issue. If your platform has an established, consistent naming convention which uses underscores, then by all means follow it. (Based on reader feedback. Thank you.)

(3) This is not intended as linguistic advice. Languages are constantly evolving and many new words are added to dictionaries every year. “Deleter” appears in some, but not in others. Developer-friendly design acknowledges that a significant number of users are likely not native English speakers, with varying levels of language skills. If they cannot find some terms in dictionaries, this may prevent them from thoroughly understanding the precise meaning of a name. (Based on reader feedback. Thank you.)

 


Striving for consistency

Being consistent means doing the same thing the same way every time. The human brain is wired to look for patterns and rules because our ability to predict future events (the ripening of fruits, the start of the rainy season, or the migration of animals) has been essential to our survival. Our minds work the same way when developing software. When you see the interface names AssetServices, MetadataServices, and ContentServices, what do you expect the video interface to be called? Isn’t it true that you feel reassured and encouraged when you find the VideoServices interface? Inconsistency doesn’t mean complete chaos and confusion. In an inconsistent world, rules, patterns and conventions are still discernible, but there are numerous unpredictable and inexplicable exceptions.

We call an API consistent when there are no frivolous or unnecessary variations in it. We quickly become familiar with such APIs because they are predictable, easy to learn and remember. Their consistent behavior gives us confidence that we can use them correctly.

Following conventions

Many well-known coding conventions were adopted with the sole purpose of minimizing small, but annoying variations in programs. Pascal casing is no better than camel casing; yet we call our method RemoveTag() in .Net and removeTag() in Java, because otherwise we violate established conventions and introduce inconsistencies. We name our interface IPublishable in .Net and Publishable in Java, regardless of what we think of the use of “I” to distinguish interface names from class names. We use Hungarian notation when interacting with low-level Windows API functions from C code, even though we consider Hungarian notation a hopelessly outdated annoyance. This is not only true for large platforms, but for smaller APIs as well. We follow established conventions, sometimes silly ones, whether we agree with them or not.

Some APIs are inconsistent by design, but it is far more common for inconsistencies to creep in with subsequent modifications. Consider the following example:

   public interface Capabilities {
      public boolean canCreate();
      public boolean canUpdate();
      public boolean canDelete();
      public boolean canSearch();
      public boolean canSort();
      …
      public boolean isRankingSupported();
   }

The last method looks dreadfully out of place. It is pointless to argue which of the two naming conventions is better. Reverse them and the interface still looks bad. Novice developers are especially prone to engaging in such never-ending, fruitless arguments, not realizing that consistency often trumps other considerations. When adding a new method to an existing interface, simply follow the conventions already in place.

Adopting conventions

De-facto conventions are already in place for many existing APIs. For new APIs, especially large APIs, we need to adopt and document our own conventions. It is almost entirely up to us what conventions we use, provided that they:

  • do not contradict the established conventions of the chosen development platform
  • aim to minimize unnecessary variations
  • do not impose any real restrictions on functionality

For example, a potential for unnecessary variations exists in parameter ordering. We can see this in the C standard I/O library functions, where fgets and fputs have the file descriptor as the last parameter and fscanf and fprintf have it as the first, frustrating millions of developers for more than 30 years. Establishing a convention for parameter ordering eliminates such variations without restricting functionality.
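
In Java, a similar ordering convention could look like the sketch below, where the object being acted upon always comes first (the convention and all names are assumptions for illustration):

   // Sketch only: the document always comes first and the tag second, so
   // callers never have to remember which order a particular method uses.
   interface TagServices {
      void addTag(Document document, Tag tag);
      void removeTag(Document document, Tag tag);
      boolean hasTag(Document document, Tag tag);
   }

   class Document {}   // placeholder types for the example
   class Tag {}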

A lot of gratuitous variations can creep into an API concerning the usage of null. Every time a method takes an object parameter, we should know whether it accepts null or not. If it doesn’t accept null, we often see unnecessary variations in how the error is handled. If null is accepted, we again see many variations in what this actually means. For methods which return an object reference, we need to know whether they ever return null, and if they do, when and what it means. Conventions regarding the usage of null can be helpful in avoiding such uncertainties.

We should keep in mind that we are establishing conventions and not strict rules. We may be tempted to enforce rules like “No method should ever return null; it should either return a valid object or throw an exception” because it is not only consistent behavior, it also makes the API safer to use. The problem is that there are justified deviations from this convention. What should a method designed to look up a specific object do when it doesn’t find it? As a rule (yes, this is a rule), we should only throw exceptions under exceptional circumstances. Looking for something and not finding it can be anticipated and it shouldn’t cause an exception. While there are certain other design options, none of them are as simple as returning null. Consistency is about removing unnecessary variations and there are cases where variations are warranted. “Extreme advice is considered harmful” warns Jaroslav Tulach in his book Practical API Design.
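
A hedged sketch of how such a lookup method might document and follow the convention just described (the repository and its types are assumptions for illustration): parameters never accept null, and “find” methods return null rather than throwing when nothing matches.

   import java.util.HashMap;
   import java.util.Map;

   // Sketch only: parameters never accept null; "find" methods return null
   // (instead of throwing) when nothing matches.
   class DocumentRepository {

      private final Map<String, StoredDocument> documentsById = new HashMap<>();

      /**
       * Finds the document with the given id.
       *
       * @param id the document id; must not be null
       * @return the matching document, or null if no document has this id
       */
      StoredDocument findDocument(String id) {
         if (id == null) {
            throw new IllegalArgumentException("id must not be null");
         }
         return documentsById.get(id);
      }
   }

   class StoredDocument {}   // placeholder type for the example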

Using patterns

Patterns can remove further variations from APIs. Unlike the “Gang of Four” design patterns, which are recipes for solving specific design problems, API patterns are used to make large APIs more predictable. In this context we use the standard dictionary definition of the term: “elements repeating in a predictable manner”. API patterns are formed using repetition, periodicity, symmetry, mirroring, and selective substitution, as seen in patterns of nature or in decorative arts. We can borrow API patterns from others or make up our own. Since we need predictable APIs, not decorative ones, the simplest patterns are the best.

For example, one of our APIs consists of only two kinds of objects: service objects and data objects. The service objects are named by appending “Services” to the service name (AssetServices, MetadataServices, and so on) and are placed in Java packages that end with “.services”. Every service object is a singleton and is obtained by calling the static getInstance() method. The data transfer objects have the words “Request” or “Result” appended to their name, as in ExportRequest and ExportResult. When the request has search semantics, the data object is named by appending “Criteria” to the name, for example, RetrieveAssetCriteria. Such patterns are great in large APIs, where simple coding conventions leave plenty of room for other, higher-level discrepancies.
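
For illustration, a hedged sketch of these structural conventions; every name below is an assumption, not the real API:

   package com.example.teams.services;   // hypothetical ".services" package

   // Sketch only: service objects end in "Services" and are singletons
   // obtained via getInstance(); data objects end in "Request" / "Result".
   public final class AssetServices {

      private static final AssetServices INSTANCE = new AssetServices();

      private AssetServices() {}

      public static AssetServices getInstance() {
         return INSTANCE;
      }

      public ExportResult export(ExportRequest request) {
         return new ExportResult();   // real work omitted in this sketch
      }
   }

   class ExportRequest {}   // placeholder data transfer objects
   class ExportResult {}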

In addition to structural patterns as above, we can establish behavioral patterns. In our API some methods are optional and, depending on the server configuration, they may work or throw an UnsupportedOperationException. There is a Capabilities interface (shown above), with methods like canSearch(), canSort(), or canUpdate(), which can be called to check if some functionality is available or not. Consistent use of structural and behavioral patterns can make even very large APIs easy to use, since what we learn from using one part of the API can be easily transferred to other parts.
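
Callers can then follow the same behavioral pattern everywhere. A sketch using the Capabilities interface shown above (the client class itself is an assumption for illustration):

   // Sketch only: optional functionality is discovered up front through
   // Capabilities instead of by catching UnsupportedOperationException.
   class SearchClient {

      private final Capabilities capabilities;

      SearchClient(Capabilities capabilities) {
         this.capabilities = capabilities;
      }

      boolean canRunSortedSearches() {
         return capabilities.canSearch() && capabilities.canSort();
      }
   }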

Enforcing consistency

Patterns and conventions have to be enforced when working in large teams because inconsistencies are very likely with several people contributing to the design. API design as a whole should remain a team effort, but ideally a single individual should be responsible for its consistency. This person should be authorized to review, accept, or reject API changes, but – and this is very important – only for consistency reasons. This role is a consistency advocate, not a supreme design guru. For example, Brad Abrams and Krzysztof Cwalina became well-known inside and outside Microsoft after they were appointed to ensure the consistency of the .Net platform. Joshua Bloch had a similar – albeit unofficial – role in the core Java API development while at Sun. Having a reviewer to find and correct inconsistencies and an independent arbitrator to stop the team from wasting time on unproductive disputes can be very helpful.

Compromising

Consistency is so important that it is worth compromising in other areas to achieve it. To put it simply, using the same design everywhere is often better than choosing the best solution for each particular case. For example, exceptions are preferable to error codes, but it is a lot easier to work with error codes than with a mix of error codes and exceptions. We like collections more than arrays, but we like it even less when they are mixed together. This can happen when we try to “improve” the design as the API evolves. We can’t change the old parts due to backwards compatibility requirements, and if we use a different, “better” design for the new parts, we introduce inconsistencies. One of our APIs is currently caught in the middle of just such an ill-advised migration from arrays to collections.

Avoiding misleading consistency

We should be careful not to introduce false or misleading consistency. Misleading consistency is like false advertising or a broken promise. For example, if there is an interface named Driver and a class named AbstractDriver in the API, developers will expect that AbstractDriver implements Driver and they can inherit from it to create their own implementations. If this is not the case, it is better to name either the class or the interface something else.
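
A hedged sketch of the relationship developers expect in that situation; the type names follow the example in the text, the method names are assumptions:

   // (Driver.java)
   public interface Driver {
      void connect(String url);
   }

   // (AbstractDriver.java)
   // AbstractDriver implements Driver, so inheriting from it really does give
   // callers a partial Driver implementation, as the names promise.
   public abstract class AbstractDriver implements Driver {

      @Override
      public void connect(String url) {
         if (url == null || url.isEmpty()) {
            throw new IllegalArgumentException("url must not be empty");
         }
         doConnect(url);
      }

      // Subclasses supply the transport-specific part.
      protected abstract void doConnect(String url);
   }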

Also, we should reserve the standard JavaBeans getter and setter method names for methods accessing local fields. There is nothing more frustrating than to call a seemingly harmless getAssociations() method, watch it block for 25 seconds then see it throw a RemoteException. A different name, like retrieveAssociations() would signal the real behavior much better.

We create false expectations of consistency when our design is consistent only in certain aspects and inconsistent in others. For example, we follow consistent naming conventions, but have no consistent type structure, parameter ordering, error handling or behavior. New team members are the most likely to commit this mistake, because naming conventions and structural patterns are significantly easier to follow than consistent behavior.

Conclusion

The benefits of consistent APIs are obvious and consistent APIs don’t take more time or effort to design than inconsistent ones. We only need to adopt and follow certain patterns and conventions. APIs can be reviewed and inconsistencies corrected even late in the design process. The only essential requirement for consistent API design is discipline. This makes the “strive for consistency” API design guideline the easiest to follow.

Creative Commons Licence
This work is licensed under a Creative Commons Attribution-ShareAlike 2.5 Canada License.
