The History of APIs
The history of APIs (application programming interfaces) is both simple and complex. While it is easy to identify when the first modern APIs became popular, earlier forms of system integration existed long before APIs dominated the web.
Early computers and applications were stand-alone. Each had its own way of processing, which resulted in unique forms of data. However, as computers evolved from their academic and military beginnings to enter the consumer market, systems increasingly needed to talk to one another.
Let us look at the early days of software integration and follow the thread to the future of API integration. Knowing the history of APIs allows us to understand them better.
The Beginning of Software Integrations
During the early development of computing machines, distributed systems were created to decentralize processing. Separating systems in this way required a stable method of communication.
One of the early integration technologies was Electronic Data Interchange (EDI). Like many technologies, it was inspired by developments in military logistics. Its first concepts can be traced back to the 1948 Berlin airlift and the need to transmit large volumes of data over baud teletype modems. However, it was not until the early 1970s that the first integrated system using EDI was developed.
The London Airport Cargo EDP (electronic data processing) Scheme (LACES) at Heathrow Airport in London, UK allowed forwarding agents to send information directly into the customs processing system. EDI defined very specific technical requirements, covering data transmission, message flow, document format, and the software that would interpret the documents. From the start, it was designed to be independent of other software and communication technologies.
Remote procedure call (RPC) was another big step toward modern integration, developed during the 1970s. The concept can be traced back to early ARPANET documents, but it was not until 1982 that the earliest RPC implementation was created by Brian Randell and his team for their Newcastle Connection, which linked UNIX machines.
Soon after, Xerox PARC’s Andrew Birrell and Bruce Nelson created “Lupine” in their Cedar environment, which provided type-safe bindings and an efficient communication protocol for their systems. Lupine’s ideas eventually fed into Sun’s RPC, which served as the basis for the Network File System (NFS).
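The core RPC idea, a client invoking a remote function through a local proxy that marshals the call over the network, survives in many later systems. As a rough sketch of the concept (using Python's standard-library XML-RPC modules, a much later descendant of these early designs, with an invented add() function):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# The remote procedure the server exposes.
def add(a, b):
    return a + b

# Start a server on an OS-assigned port and register the function.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client calls add() as if it were local; the proxy performs
# the network call behind the scenes -- the "type-safe binding" idea.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)
print(result)  # 5

server.shutdown()
```

The client code never mentions sockets or wire formats; the proxy object hides them, which is exactly the abstraction early RPC systems pioneered.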
CORBA and Enterprise Application Integration
The 1990s saw a surge of business-level systems and can be considered the gateway decade for our technology-based society. The World Wide Web was also taking its first giant steps, which led to even greater demand for distributed systems and client-server topologies.
It was during the early part of the decade that object-oriented programming helped facilitate integration between systems through the Common Object Request Broker Architecture (CORBA). Its first version, which included a mapping to the C programming language, was released in October 1991. A version with a C++ mapping (1.1) followed in February 1992.
CORBA allows communication between systems and software written in different languages and running on different machines. Its implementation is independent of operating systems, programming languages, and hardware platforms. It relies on an interface definition language (IDL) to describe the interfaces that objects expose over the network, and it specifies mappings from IDL to specific languages such as C and C++. Standard mappings were later created for Ada, COBOL, Java, Lisp, Object Pascal, Python, Ruby, and Smalltalk, to name a few, and unofficial mappings exist for C#, Erlang, Perl, and Visual Basic.
CORBA is one of the earliest forms of modern API, allowing communication between software with different implementations. Just like the APIs we know today, it bridges independent systems without forcing changes to their implementation or structure.
Various proprietary enterprise application integration (EAI) products were developed in the mid-90s. Essentially, these technologies bundled adapter frameworks, messaging and communication systems, content transformation modules, and various methods for tying all these services together. EAIs served as a “translator” between systems (most of them proprietary), enabling them to use each other’s services to build better technologies.
Enterprise Service Buses (ESB) and Service-Oriented Architecture (SOA)
As the 1990s progressed, more and more technologies became available on the World Wide Web, offering better integration and communication between systems, especially at the transport layer. The decade ended with the development of service-oriented architecture (SOA), a more specific form of the general client-server architecture that APIs use today.
The creation of SOA marked a significant change of direction from where CORBA and related services had been heading. Once the World Wide Web Consortium (W3C) released its XML specification, the future of systems integration became much more focused on web services.
SOA is a style of software design in which functionality is divided into services that communicate over a network using well-defined protocols. Its basic principles are independent of products, vendors, technologies, and frameworks. Services describe, through metadata, how they transmit and interpret data as well as what functions they offer. Essentially, SOA aims to let clients piece together functionality from existing services to create applications. Services present a simple interface to clients that abstracts the underlying mechanisms, simplifying integration.
Alongside SOA, the enterprise service bus (ESB) provides a more seamless method of developing applications on top of various services. An ESB is a communication system between the services in an SOA. The idea is similar to the bus concept in computer hardware architecture: it aims to provide a standard, structured, general-purpose method of integrating loosely coupled components (services), which can be independently deployed and disparate within a network.
Together, ESB and SOA (along with other related technologies) created a larger stack combining an application layer, business activity monitoring, and a messaging layer.
After ESB evolved from Roy W. Schulte’s and David Chappell’s ideas in 2002, various private and public projects and organizations began adopting the technology. By the mid-2000s, IBM, Microsoft, Oracle, and their competitors had released their own ESB products, while Apache and Red Hat led the release of open-source versions.
The Development of Microservices Architecture
As SOA became more prevalent across industries, one variant of the architecture became the go-to choice for many developers: the microservices architecture. In a microservices architecture, services are fine-grained, which improves modularity. They are independently deployable and organized around business functions. In this model, each microservice can be implemented using the programming languages, databases, hardware, and software platforms that best fit its function.
These microservices are then combined and integrated to create applications. Each service is a self-contained piece of business functionality with its own interface and components, and may even implement its own layered architecture.
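The pattern can be sketched with nothing but the standard library: a tiny, self-contained service exposes one business function over HTTP, and any other service or client consumes it through that interface alone. (The service name and data below are invented for illustration.)

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PriceService(BaseHTTPRequestHandler):
    """A minimal "microservice": one business function behind an HTTP interface."""

    def do_GET(self):
        body = json.dumps({"item": "widget", "price": 9.99}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the example's output quiet

# Deploy the service independently, on an OS-assigned port.
server = HTTPServer(("127.0.0.1", 0), PriceService)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service (or client) consumes it purely through its interface.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/price") as resp:
    data = json.loads(resp.read())
print(data)  # {'item': 'widget', 'price': 9.99}

server.shutdown()
```

The consumer knows nothing about how the service is implemented, only its HTTP interface, which is what makes each microservice independently replaceable and deployable.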
Microservices naturally enforce a modular structure in application development, as noted by Martin Fowler, a British software developer and author. The emerging cloud technologies of the mid-2000s and early 2010s favored microservices due to their lightweight nature.
Early ideas were presented by Dr. Peter Rodgers at the Web Services Edge conference in 2005, and Juval Löwy described a similar precursor idea as the next evolution of Microsoft architecture. One of the first uses of the term “microservice” itself came during a workshop of software architects held near Venice, Italy in May 2011.
After the 2010s, microservices architecture became common in the development of web services and applications. APIs became not only relevant but the dominant method of integrating these services. The shift from desktop applications to web applications in recent years has meant greater demand for more efficient methods of integration.
The Future of API Integrations
The era of the Web API, which began in 2005, is now making way for the era of the public API, which started in 2010. Rapid changes across industries have demanded web-based computing, driven mainly by e-commerce, social media, mobile computing, and the cloud.
Products in these industries require faster and more accurate development models, along with more efficient integration architectures. Services now process far larger data sets than anticipated in past decades.
GraphQL is the new kid on the block, and it is gaining acceptance rapidly. Developed by Facebook for its internal systems in 2012, it was made public in 2015. In November 2018, the GraphQL project moved from Facebook to the GraphQL Foundation, hosted by the Linux Foundation.
GraphQL provides a powerful yet flexible approach to developing web APIs and has been compared with REST and other web service architectures. Through GraphQL, clients define the structure of the data they require, and the server returns exactly that structure. This prevents excessively large amounts of data from being returned. GraphQL is also much better suited to reading, writing, and subscribing to real-time data. Additionally, it includes its own query language, execution semantics, static validation, and more, which essentially make it a system in itself.
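The field-selection idea at GraphQL's core can be illustrated with a toy sketch. This is plain Python, not a real GraphQL implementation, and the record and field names are invented; it only shows the shape of the exchange: the client names the fields it wants, and the server returns those fields and nothing more.

```python
# What the server holds for a user -- more than any one client needs.
user_record = {
    "id": 42,
    "name": "Ada",
    "email": "ada@example.com",
    "posts": ["Hello", "World"],
}

def resolve(record, requested_fields):
    """Return only the fields the client asked for (no over-fetching)."""
    return {field: record[field] for field in requested_fields if field in record}

# The client asks for just two fields, mirroring a GraphQL query such as:
#   query { user { id name } }
result = resolve(user_record, ["id", "name"])
print(result)  # {'id': 42, 'name': 'Ada'}
```

A real GraphQL server adds a typed schema, validation, and nested resolvers on top of this idea, but the contract is the same: the response mirrors the query's shape.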
The creator of GraphQL, Lee Byron, has set out to make it omnipresent across web platforms. Since its creation in 2012, the technology seems to be rapidly fulfilling that goal.