
The Evolution of Software Development Part One


Episode 99

In the previous episode, we explored the history of cloud computing. Today we continue with the historical theme and take a closer look at how software development has changed over time. The art of software development is well over 70 years old now. Compared with classic engineering disciplines, such as building bridges or roads, one could say it is still in its infancy. The pace and extent of the changes it has undergone are astonishing, though. Numerous sources focus on the history of computers, programming languages, and software architectures, but in this series of articles we will concentrate on how the craft of creating software itself evolved, and analyze a number of ideas and breakthroughs that have left the most significant imprints on it.


Image by Hongqi Zhang

The evolution of software development is, of course, interwoven with the evolution of computers, programming languages, and architectures, but it is much more than that. Numerous tools, techniques, movements, processes, and practices accompany it, all for the sake of efficiency and of delivering value faster and more reliably amid the ever-growing complexity of modern technology stacks. The notion of value itself has evolved along with the growing possibilities.

Early Theory and Practice

Any tale of the history of software would be incomplete without mentioning Ada Lovelace, widely recognized as the first programmer, well before any computers were physically built. She is famous for her theoretical work in the 1840s on general-purpose calculations on Charles Babbage's Analytical Engine, a machine that was not actually constructed until modern times. For a long time, well into the twentieth century, the term “computer” referred to people who performed calculations manually. The mathematical foundations of modern computer science were laid by Kurt Gödel in 1931 with his incompleteness theorems. The idea of the algorithm was formalized by Alan Turing and Alonzo Church in 1936, and in the same year the Turing machine concept saw the light of day. As for computer architecture, the groundbreaking theoretical milestone was a 1945 paper by John von Neumann describing the computing model named after him, in which a machine consists of an arithmetic unit, a control unit, memory holding both instructions and data, external mass storage, and input and output. This is pretty much how computers are built to this day.

Meanwhile, theory was starting to be put into practice. The first working electromechanical programmable Turing-complete computer was the Z3, finished by Konrad Zuse in 1941. The first general-purpose electronic computer, ENIAC, was constructed in 1945. Its first program was a study of the feasibility of a thermonuclear weapon, although the machine had initially been designed to calculate artillery firing tables. Programming the first digital computers in the mid-1940s consisted of physically wiring up plugboards. The concepts of subroutines and flowcharts were born around the same time. In the fifties, computers like the IBM 704 switched to punched cards.

Assembler and Software Engineering

Working with pure machine code, the first generation of programming languages, basically meant preparing a list of bytes representing machine instructions. It was not an especially user-friendly endeavor. The first additional layer of abstraction was assembly language, also known as the second generation of programming languages. While still low-level and strongly tied to the machine's instruction set, it offered symbolic names for instructions, directives, macros, labels, and other means of simplifying the early programmer's life. The first assembly languages date back to the theoretical work of Kathleen Booth in 1947. The first working assembler was written for EDSAC in late 1948 and used mnemonics designed by David Wheeler. We could say that the separation of software from hardware started to emerge at this time.


Image by Maria Chiara Gatti

The term software engineering was coined a bit later, in the sixties, and is attributed to Margaret Hamilton, lead developer of the software that would later take us to the Moon aboard the Apollo missions. It was a time when creating software bounced somewhere between an art, a science, and an engineering discipline. It was, however, becoming clear that software is an entity separate from hardware and needs to be treated differently. The widespread adoption of the term followed the 1968 NATO conference on software engineering.

Compilers and Unified Hardware Architecture

The main problem with early computers was that every machine design had a different instruction set, architecture, and assembler, so it was impossible to write a portable piece of code. Programs written in assembly language were also still difficult to comprehend and expensive to maintain. This led to the development of the third generation of programming languages: more abstract, independent of any particular machine's instruction set, and easier for people to understand. Constructs like “if” statements and loops are still the bread and butter of programming today. Grace Hopper was one of the pioneers who worked on early compilers and linkers in the fifties. In 1953 she proposed that data-processing problems should be expressed using English keywords, and in 1955 her team implemented a prototype of the FLOW-MATIC language. In 1957 the first FORTRAN compiler was delivered by a team led by John W. Backus at IBM. The year 1958 brought us ALGOL, and 1959 followed with COBOL, a language that is still widely used in the depths of old business, finance, and administration systems.
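
To give a feel for what this abstraction buys, here is a minimal sketch in Java, a much later third-generation descendant of these ideas; the sales figures and the class name are invented purely for illustration. The loop and the “if” statement below are portable constructs that a compiler translates into machine-specific instructions, so the programmer never has to think about registers or jump addresses.

    public class ThirdGenerationSketch {
        public static void main(String[] args) {
            int[] sales = {120, 75, 310, 42};   // made-up figures
            int total = 0;

            // High-level constructs: a loop and a condition instead of
            // hand-written compare-and-jump machine instructions.
            for (int amount : sales) {
                if (amount > 100) {
                    total += amount;            // only count large sales
                }
            }

            System.out.println("Total of large sales: " + total);
        }
    }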

Moving from software to hardware, the next interesting milestone was the release of the IBM System/360 family of mainframe computers in 1964. IBM made a clear distinction between the computer architecture and its hardware implementation. Almost all machines in the family used the same instruction set, which allowed software components to be reused across models. Businesses could start with a smaller machine and replace it with a larger one without rewriting the whole codebase. The System/360 is considered one of the most influential designs in computer history and contributed immensely to the demand for software developers.

Object-oriented Languages and Design Patterns

The growing complexity of codebases and the search for a good way to map real-world concepts onto computer systems led to the development of object-oriented languages. Simula, released in 1964, is acknowledged as the first one. The paradigm emerged victorious in the early 1990s with the popularity of C++ and eventually led to Java and C#, which dominate the enterprise world today. Objects attempt to model real-world entities and promote encapsulation: hiding an object's internals from outside interaction, which encourages code reuse and reduces the chances of accidentally breaking something.
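
A minimal sketch of encapsulation in Java; the BankAccount class and its methods are invented here purely for illustration. The balance can only be changed through methods that enforce the object's rules, so no outside code can accidentally put it into an invalid state.

    public class BankAccount {
        // Internal state is hidden from the outside world.
        private long balanceInCents;

        public void deposit(long amountInCents) {
            if (amountInCents <= 0) {
                throw new IllegalArgumentException("deposit must be positive");
            }
            balanceInCents += amountInCents;
        }

        // State is observed only through a controlled accessor.
        public long getBalanceInCents() {
            return balanceInCents;
        }
    }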


Image by graffiti-freak

A design pattern is a concept that originated in building architecture in the seventies. It was adapted to software in the eighties by Kent Beck and Ward Cunningham, and the concept stabilized in the nineties with a famous book written by a group informally known as the Gang of Four. Design patterns attempt to solve commonly encountered software problems with a set of well-defined methods, objects, and interaction models. Some would say, ironically, that design patterns had to be developed to solve problems that object-oriented languages introduced in the first place, but that's a common course of events: introducing a new thing to fix problems with the old thing gives rise to new problems that require yet another thing, which in turn introduces its own problems. And the circle continues.
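
As a small taste of the style the Gang of Four catalogued, here is a sketch of the Strategy pattern in Java; the shipping-cost scenario and all the names are made up for illustration. The point is that a family of interchangeable algorithms hides behind one interface, so the surrounding code never needs to change when a new variant appears.

    // Strategy: interchangeable algorithms behind a single interface.
    interface ShippingStrategy {
        double cost(double weightKg);
    }

    class FlatRateShipping implements ShippingStrategy {
        public double cost(double weightKg) { return 5.0; }
    }

    class WeightBasedShipping implements ShippingStrategy {
        public double cost(double weightKg) { return 1.5 * weightKg; }
    }

    public class Checkout {
        private final ShippingStrategy shipping;

        Checkout(ShippingStrategy shipping) { this.shipping = shipping; }

        double total(double itemsPrice, double weightKg) {
            return itemsPrice + shipping.cost(weightKg);
        }

        public static void main(String[] args) {
            // Swapping the algorithm never touches the Checkout class itself.
            System.out.println(new Checkout(new FlatRateShipping()).total(20.0, 12.0));    // 25.0
            System.out.println(new Checkout(new WeightBasedShipping()).total(20.0, 12.0)); // 38.0
        }
    }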

Personal Computer and IDEs

Many consider the Altair 8800, introduced in 1974, to be the first personal computer, although perhaps the most iconic design was the IBM PC from 1981. The invention of the microprocessor led to a drastic decrease in computer prices, and the standardization of hardware component interfaces led to an explosion of vendors and compatible parts. Suddenly it was not an expensive mainframe that only medium and large companies could afford, but a small box that individual users could buy. As a result, lots of people took an interest in programming, and software companies started to emerge in garages. Microsoft, Apple, Oracle: all of them were founded in the mid-seventies.

Moving from physical punch cards to computer terminals was a big leap; however, there was still a long way to go. Progress in hardware computing power allowed the emergence of Integrated Development Environments: software that assists in writing software. Maestro I, developed in 1977, is considered to be the first IDE. Aside from basic functions like syntax highlighting and debugging, modern IDEs offer a plethora of features, tools, and options to shorten the feedback loop between writing code and running it as much as possible, as well as provide code completion, refactoring assistance, detection of potential problems, and much more.

The Internet and Web APIs

It might be considered trivial to say that the Internet was a disruptive factor in software development, as it was for basically any business. Nowadays everything is online, and if you are not online, you don't exist. Companies needed to build customer-facing systems or they were doomed to fail, so the demand for enterprise applications skyrocketed. Another factor was access to information. Even before social media became widespread and content sharing blossomed, being able to access a vast source of information, technical documentation, technology news, and software itself gave the IT industry a significant boost. The appearance of the World Wide Web also ultimately led to software developers branching into, among others, back-end and front-end specialists.


Image by Jonathan Tiong

Later, when companies had amassed a significant amount of valuable data and fulfilled their basic business-model needs, the hunt for new sources of income began. Using data for purposes other than serving individual clients was one opportunity to tap into new revenue; using data that someone else had already stockpiled, with no small effort, to enhance an already functional product was another. If you run a barber reservation website, presenting a map with the location of the barbershop is not required for the business to run, but it's a nice touch, and since someone has already created those online maps, why not use them? Machine-to-machine interactions over the public Internet became increasingly common, IT systems were getting interconnected at a lightning pace, and software development shifted towards empowering this global network.
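
A minimal sketch of such a machine-to-machine call in Java, using the standard java.net.http client shipped since Java 11; the https://maps.example.com endpoint and its query parameter are hypothetical stand-ins for whatever real mapping API the barbershop site would actually integrate.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class GeocodeClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint; a real integration would use the map
            // provider's documented URL, API key, and response format.
            URI uri = URI.create("https://maps.example.com/geocode?address=Main+Street+1");

            HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // The site would parse the returned coordinates (typically JSON)
            // and render them on an embedded map.
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }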

What’s next?

It's not an easy task to arrange the topics we have covered so far in exact chronological order. Many of the ideas are very old but only managed to gain popularity much later, and many are co-dependent or reinforced one another. This will be especially apparent in the next part of our journey, when we discuss the not-so-obvious influences of Gamers and Hackers, followed by a dive into Version Control, Open Source, Common Runtime Environments, Virtual Machines, Agile, DevOps, Continuous Integration, and Automated Tests. Stay tuned.

 

 

 

 