Unix’s development is without doubt one of the most important milestones in the history of computing. The operating system not only introduced some of today’s most basic concepts in information technology, such as the hierarchically structured file system, it has also served as the basis for numerous other systems, like Apple’s macOS and iOS, or the open source Linux. This, in turn, has led to the emergence of numerous derivatives, like Ubuntu, Debian, or mobile Android. But how exactly did Unix become one of the most influential operating systems, and why did its development team initially have to record its ideas exclusively on blackboards and in notebooks?

The Multics joint project laid the foundations

In 1965, a working group presented their idea for a new operating system at the Joint Computer Conference. The group consisted of employees of the Massachusetts Institute of Technology (MIT), General Electric, and Bell Laboratories (Bell Labs), then the research and development arm of AT&T (and part of Nokia since 2016). They named the operating system Multiplexed Information and Computing Service, or Multics for short. They pursued completely new approaches, focusing on time-sharing in particular: Multics was among the first systems to allow multiple users to work simultaneously on one computer by sharing the underlying processor’s computing time.

The Multics working group needed a computer with specific characteristics to get their project off the ground: on the one hand, it had to have a clearly structured instruction set so that the high-level programming language PL/I from IBM, which was intended for development, could be used. On the other hand, it had to support the planned multi-user operation and work asynchronously to minimise performance losses in memory management. For this reason, the GE-635 and later the GE-645 from General Electric were selected. The development itself was carried out on the multi-user system CTSS, which MIT had developed back in the early 1960s and which was already up and running. Delays in the development of the PL/I compiler, financial bottlenecks, internal differences, and growing external pressure eventually led Bell Labs to withdraw from the project in 1969.

Multics becomes Unix

Multics was developed further at MIT and later distributed commercially (until 1986) by Honeywell International Inc. on Honeywell 6180 machines, after Honeywell had acquired General Electric’s computer division. However, the computer scientist Ken Thompson, at the time an employee at Bell Labs, could not let go of the idea of a multi-user system: together with Dennis Ritchie and a small team at AT&T, he began planning his own system based on Multics principles. But the search for a suitable computer initially proved fruitless – and as Bell Labs resisted purchasing a suitable machine, the developers began recording their notes and the progress of a planned file system in notebooks and on blackboards.

Finally, a used PDP-7 minicomputer from Digital Equipment Corporation (DEC) was acquired for the planned project and served as the development platform from then on; this computer system was “only” the size of a wall unit. Software for it was initially prepared with the help of GECOS (General Electric Comprehensive Operating System) on a GE mainframe. Valuable tools like a command line interpreter (sh) and an editor (ed) were quickly developed – initially still in an assembly language (hardware-oriented, but simplified for humans) – and the file system that had so far existed only on paper was implemented. Since the new operating system (unlike Multics) initially only allowed two users to work at the same time, the team named it Unics in a play on its template’s name. Due to limitations on file name lengths in GECOS, the final name Unix was decided upon.

First B, then C: Unix gets its own high-level programming language

After the Bell Labs team had written Unix and some other elementary programs, it was time to replace the assembly language used for this purpose with a less complex alternative. The plan to adapt IBM’s pre-existing language Fortran, however, was rejected after a short time. Instead, work began on a language of their own, named B, which was strongly oriented towards PL/I – the Multics language – and towards BCPL (Basic Combined Programming Language), developed at MIT. Ritchie and his colleagues subsequently rewrote some of the system tools in this language, until they received a new PDP-11 computer in 1970 and were once again forced to rethink their approach: the new system architecture was not word-oriented like the PDP-7 and the programming language B, but byte-oriented instead.

Over the next two years, Bell Labs developed the successor C, whose syntax and other features can be found in numerous modern programming languages like C++, JavaScript, PHP, or Perl. When the language was mature enough in 1973, the development team started rewriting the complete Unix kernel in C, and the result was published by the Unix team in the mid-1970s. Since AT&T, as a state-regulated telecommunications company, was not allowed to sell software at the time, Unix (Version 6) – by then a multi-user system that also allowed several simultaneous processes – was made available to all interested universities free of charge, including a C compiler, which made the system usable on almost all platforms.

Hardware-friendly and open source: Unix conquers the developer scene

With the release of the Unix software to educational institutions, the new operating system’s success quickly became apparent – initially as a plaything in programming circles, while everyday work on the IBM mainframes and PDP machines of the time continued to run on native systems like RSX-11, RT-11 or IST. For developers, though, the source code of the kernel and the individual applications offered more than just a learning effect: the low demands Unix made on hardware and its high usability encouraged experimentation and further development. This was particularly well received at the University of California, Berkeley (Thompson’s former home university) – although the fact that he took up a guest professorship in its newly created computer science faculty in 1976 probably played a significant role.

Bill Joy and Chuck Haley, two graduate students at the time, improved the Pascal system developed by Thompson and, with ex, programmed a completely new text editor – the predecessor of vi, which can still be found in the standard installation of unixoid systems today. In 1977, under Joy’s direction, a modified variant of Unix appeared that contained the improvements and further developments made so far. This Berkeley Software Distribution (BSD) later integrated the TCP/IP network protocol into the Unix universe, was the first to meet the requirements of a free operating system (thanks to its own BSD license), and is considered one of the most important Unix modifications to date.

The 1980s: commercialisation and the Unix wars

In the following years, more and more modifications were developed, including some with commercial motives. Microsoft, for example, acquired a Unix V7 license in 1979 to develop ports for Intel and Motorola processors, among other things. In the following year, the company released Xenix, which was originally planned as a standard operating system for PCs, but whose hardware demands proved too high for the machines of the time. Microsoft finally placed further development in the hands of the software manufacturer SCO (Santa Cruz Operation) in order to concentrate on OS/2 and the further development of MS-DOS.

Bill Joy also jumped on the bandwagon in 1982 with his newly founded company Sun Microsystems and the proprietary BSD-based system SunOS (the predecessor of Solaris), which was specifically designed for use on servers and workstations.

However, the real battle for the favour of Unix fans was fought between AT&T, which by now had received permission for commercial distribution, and the University of California, Berkeley, which could point to valuable innovations thanks to its large number of supporting programmers. AT&T first tried to conquer the market with System III (1981) and later with the optimised System V (1983), both of which were based on Unix V7. The University of Berkeley in turn released 4.3BSD, for which 1,000 licenses were issued within 18 months. This made it much more popular than the paid System V, which lacked both the Fast File System (FFS) and the network capability (thanks to integrated TCP/IP) of Berkeley’s variant.

With System V’s fourth release (1988), AT&T implemented these two and many other BSD features, as well as features from Xenix and SunOS, which led many users to switch to the commercial option.

Thanks, Penguin: Unix becomes a server solution

Whilst the different Unix systems initially competed with each other for sales and loyalty, Apple and Microsoft began their rivalry in the personal computer sector and later in the server field. Whilst Microsoft won the race for home PCs, a system based on Unix concepts suddenly appeared on the scene in 1991 with Linux, and in the following years it won over the server environment. With the freely licensed kernel and the freely available GNU software, the developer Linus Torvalds had fulfilled the desire for a competitive open source operating system and won over the market at the time. To this day, numerous Linux derivatives like Debian, CentOS, Red Hat, or Ubuntu are used as system software for all kinds of servers, and Ubuntu in particular is becoming more and more popular for home PCs.

Linux, which we have covered in a separate article, is by far not the only important Unix successor in today’s software world: since Mac OS X 10.0 and Mac OS X Server 1.0, the Apple operating system has used Darwin, a free BSD variant, as its substructure. Berkeley Unix itself is even represented several times, with numerous other free derivatives like FreeBSD, OpenBSD, or NetBSD. And with iOS (same system base as macOS) and Android (based on the Linux kernel), the two most widely used operating systems for mobile devices also belong to the Unix family.

What is Unix? The most important milestone features of the system

When it was introduced, many of Unix’s distinguishing features were absolute novelties that not only went on to shape the development of unixoid systems and distributions, but were also taken up by competitors Apple and Microsoft in their own operating systems. Especially when you take the following characteristics into consideration, Ritchie, Thompson, and their colleagues on the Unix team were pioneers of modern operating systems:

Hierarchical, universal file system

An elementary part of Unix right from the beginning was the hierarchically organised file system, which allows the user to structure files into folders. Any number of subdirectories can be assigned to the root directory, which is marked with a “/”. Following the basic principle of “Everything is a file,” Unix also maps drives, hard disks, terminals, or other computers as device files in the file system. Some derivatives, including Linux, even represent processes and their properties as files in the procfs virtual file system.
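The “Everything is a file” principle can be observed directly. The following minimal Python sketch (Python is used here purely for illustration, and it assumes a Unix-like system where /dev/null exists) inspects a device file with the very same stat call that works on ordinary files and directories:

```python
import os
import stat

# /dev/null is a character device, yet it is addressed through
# the file system exactly like any ordinary file.
info = os.stat("/dev/null")
print(stat.S_ISCHR(info.st_mode))           # True: a character device file

# Regular directories answer the same call - here the root directory "/".
print(stat.S_ISDIR(os.stat("/").st_mode))   # True: "/" is a directory
```

On Linux, the same call even works on procfs entries such as /proc/self, which represent running processes rather than data on disk.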

Multitasking

Another decisive factor in Unix’s success was the ability to execute several processes or programs simultaneously without them interfering with each other. The operating system was based on the method of pre-emptive multitasking right from the start. With this method, the scheduler (which is part of the operating system kernel) manages the individual processes through a priority system. It was only much later, during the 1990s, that Apple and Microsoft began implementing comparable process management solutions.
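The classic Unix way of creating a second process is the fork system call; once two processes exist, the kernel’s scheduler decides when each of them runs. A small Python sketch of this mechanism (assuming a POSIX system, since os.fork is not available on Windows):

```python
import os
import sys

# fork() duplicates the running process; from this point on the kernel's
# scheduler pre-emptively interleaves parent and child.
pid = os.fork()

if pid == 0:
    # Child process: does its own work, then exits.
    print(f"child  {os.getpid()} running")
    sys.exit(0)
else:
    # Parent process: runs concurrently, then collects the child's status.
    print(f"parent {os.getpid()} running")
    _, status = os.waitpid(pid, 0)
    print("child exit status:", os.WEXITSTATUS(status))
```

The order in which the two “running” lines appear is deliberately not fixed: it is the scheduler, not the program, that decides.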

Multi-user system

A system that would allow several users to work simultaneously had already been Multics’ main goal. To make this possible, an owner is assigned to each program and process. Even if Unix was initially limited to two users, this feature was part of the system software’s portfolio right from the start. The advantage of this kind of multi-user system lay not just in the opportunity to share the performance of a single processor, but also in the associated rights management: administrators can define access rights and available resources for the different users. A prerequisite, however, was that the hardware of the respective computer supported this kind of operation.
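The ownership and rights model can be made visible with a few lines of Python (again only a sketch, assuming a POSIX system): every file carries an owner and a set of permission bits, which an administrator or owner can change with chmod.

```python
import os
import stat
import tempfile

# Create a scratch file; it automatically belongs to the creating user.
fd, path = tempfile.mkstemp()
os.close(fd)

# Restrict access: read/write for the owner, read-only for the group,
# nothing for other users.
os.chmod(path, 0o640)
info = os.stat(path)

print("owner uid:", info.st_uid, "- this process:", os.getuid())
print("mode:", stat.filemode(info.st_mode))   # "-rw-r-----"

os.remove(path)
```

The symbolic mode string printed at the end is the same notation that `ls -l` has used on Unix systems since the beginning.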

Network capability

With 4.2BSD, Berkeley’s Unix became one of the first operating systems to integrate the internet protocol stack in 1983, providing a foundation for the internet along with simple network configuration and the ability to act as a client or server. In the late 1980s, the already mentioned fourth version of System V also gave the commercial AT&T system a kernel with this legendary protocol family. Windows, by contrast, would not support TCP/IP until version 3.11 (1993), and then only via an appropriate extension.
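The client and server roles mentioned above are still programmed today against the socket interface that 4.2BSD introduced. A minimal loopback sketch in Python (an illustration only, exchanging one message between a server socket and a client thread on the same machine):

```python
import socket
import threading

# Server side: bind to the loopback address; port 0 lets the
# system pick any free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def client() -> None:
    # Client side: connect to the server and send one message.
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"hello, unix")

threading.Thread(target=client).start()

conn, _ = server.accept()
print(conn.recv(1024).decode())   # hello, unix
conn.close()
server.close()
```

The same socket(), bind(), listen(), accept() sequence works essentially unchanged in C, where the BSD interface originated.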

Platform independence

Whilst other operating systems and their applications were still tailored to a specific processor type at the time Unix was created, the Bell Labs team pursued the approach of a portable system right from the start. Although the first language used was an assembly language, the project switched to a new, high-level programming language – B, the predecessor of C – as soon as the basic structure of the system software was in place. Even the components later written in C were, despite the included compiler, still strongly bound to the architecture of the PDP machines that Ritchie and his colleagues used as a basis for their work. It was only with the heavily revised Unix V7 (1979) that the operating system rightly earned its reputation as a portable system.

The Unix toolbox principle and the shell

Unix systems combine a multitude of useful tools and commands that are usually designed for just a few special tasks; Linux, for example, uses the GNU tools. The guiding principle for general problem solving is to find answers in a combination of standard tools instead of developing specific new programs. The most important tool has always been the shell (sh), a text-oriented command interpreter that also provides extensive programming options. This classic user interface can be used entirely without a graphical user interface, even if such an interface naturally increases user comfort. For experienced users, the shell offers some significant advantages:

  • Simplified operation thanks to intelligent auto-completion
  • Copy and paste functions
  • Interactive (direct access) and non-interactive (execution of scripts) modes
  • Higher flexibility, since the individual applications (tools, commands) can be combined almost freely
  • Standardised and stable user interface, which is not always guaranteed with a GUI
  • Automatic documentation of work paths in scripts
  • Quick and easy implementation of applications
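The toolbox principle of combining small, single-purpose tools through pipes can be sketched in Python. This example (an illustration only, assuming the standard POSIX utilities printf and tr are on the PATH) builds the equivalent of the shell pipeline `printf 'unix' | tr 'a-z' 'A-Z'`:

```python
import subprocess

# Two small, single-purpose tools chained through a pipe:
# printf produces text, tr transforms it to upper case.
printf = subprocess.Popen(["printf", "unix"], stdout=subprocess.PIPE)
tr = subprocess.Popen(["tr", "a-z", "A-Z"],
                      stdin=printf.stdout, stdout=subprocess.PIPE)
printf.stdout.close()   # so tr sees end-of-file once printf exits

out, _ = tr.communicate()
print(out.decode())     # UNIX
```

In the shell itself, the `|` operator sets up exactly this plumbing in a single line, which is what makes the combination of standard tools so quick.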

Conclusion: if you want to understand how operating systems work, take a look at Unix

The rise of Microsoft and Apple, inseparably linked to their founders Bill Gates and Steve Jobs, is undoubtedly unparalleled. However, the foundation of these two giant success stories was laid by the pioneering work of Dennis Ritchie, Ken Thompson, and the rest of the Unix team between 1969 and 1974. Unix has not just produced its own derivatives, but has also influenced other operating systems with concepts like the hierarchically structured file system, the powerful shell, and high portability. To implement the latter, the most influential programming language in computer history, C, was developed almost in passing.

To appreciate the possibilities of the language and of operating system functionality in general, there is no better object of study than a Unix system. You do not even have to use one of the classic variants: Linux distributions like Gentoo or Ubuntu have adapted to modern demands without giving up basic features like maximum control over the system. You are somewhat more limited in your possibilities with the beginner-friendly macOS, which masters the balancing act between a powerful Unix base and a well-designed graphical user interface with flying colors.
