How a Nonstandard OS Led to Unix Standards

October 11, 2019

From 1988 through 1992, I worked as a system administrator for the Environmental Defense Fund (EDF) in Washington, DC. As a nonprofit, EDF found it too expensive to provide each employee with a PC, and Windows and PC networking were primitive back then, so its offices around the US used Wyse 50 terminals connected to a single computer in the copier room. When I arrived, that computer was a 68020-based Charles River Data Systems Universe 68, running their UNOS operating system.

[Photo: A CRDS Universe 68-35T at EDF's Washington, DC office, April 1988]


Command-Line Utilities on UNOS

UNOS was sort of like Unix System V if you squinted, but it changed many of the details, so porting software to it ran into all kinds of incompatibilities. The command-line utilities were maddeningly deficient and different from the 4.2 and 4.3BSD Unix I had used at college. Apparently CRDS was trying to make Unix more user-friendly or more like some other operating systems: their version of cp was named copy, dd was called debe (a pun on girls' names?), and so on, and the command-line options were multiple characters long. That made them completely useless for running shell scripts written for Unix.

I started looking around for a way to replace them. I found the source code for half-finished versions of some utilities in the GNU source tree at MIT, where I had access as a volunteer thanks to my friendship with Mike Haertel, a former roommate who wrote GNU grep and diff. I finished them and wrote some more missing utilities, and thus started the GNU fileutils, textutils, and shellutils, which were later all rolled up into the coreutils. I also ported GNU Emacs to UNOS, including its tricky “undump” method of restoring a RAM image from a file for faster startup. EDF paid me to spend some of my time on this work, because they wanted more usable computer systems themselves.

So a good deal of GNU/Linux code was written to make up for the deficiencies of UNOS. After a few years, EDF switched to timeshared 486 computers running SCO Xenix, then SCO Unix, and we ran the GNU utilities on them, too, because they were better than the SCO versions. EDF finally went to networked Windows PCs for everyone, but they’re still using GNU/Linux for their web presence, at least, and using descendants of the utilities they funded.

Kermit on UNOS

Long before SSH, Kermit was a popular file transfer protocol that worked over both serial connections, like modems, and network connections, like TCP/IP. It was often paired with terminal emulators. It was implemented for nearly every type of computer made in the 1980s and 1990s, in many programming languages. Yes, it was named after the Muppet, with permission.

EDF needed a good way to transfer files to and from its UNOS systems (besides UUCP), so I ported C-Kermit (the version for systems that use the C programming language) to UNOS. For that, I had to deal with many quirks in UNOS system calls.

Standards Because of UNOS

Mike O’Dell, Internet pioneer, related on an email list that the incompatibilities of UNOS led to the creation of the UniForum association and the POSIX standards, so the US government could buy interoperable Unix products instead of depending on a sole source. Thus, the obscure and quirky OS from CRDS contributed to creating both the official standards for Unix and what is now the most-used implementation of them.


A “wiki” Computer Before it Was a Web Technology

October 11, 2019

Around 1991, I was working as a student programmer and system administrator in the “Hackers Pitt” of the College of Engineering at the University of Maryland, College Park.

This story of how “wiki” entered our vocabularies via computers, 3-4 years before the first Web wiki, was recalled in November 2007 over email by two other student programmers. I edited the emails lightly for clarity.

Chris Ross starts the story:

It was someone illegally using accounts on the Eng systems, and they were logging in from the University of Hawaii. The hostname they logged in from, at least once, maybe even “usually” or “always,” was wikiwiki. This led us to look up what the heck that meant, and in the days before Google and Wikipedia, that got us a Hawaiian dictionary. I think, actually, Robin photocopied that page out of a dictionary in the library, and that ended up in the Pitt. So, we knew a bunch of words that started with “w”.

wiki means fast, and wikiwiki, as you may recall, means very fast.

Years later, when I got a MicroVAX resuscitated as part of my habit of fussing with old hardware, I looked up the Hawaiian word for very slow, which was lolohi (lohi is slow).

So, there was *much* more actual detective work being done to track down the guy in Hawaii (which IIRC ended with the FBI knocking on his door saying “Umm, stop that,” or something like that. I don’t necessarily assume “door” is literal here).

Kurt Lidl elaborates:

The College of Engineering got broken into by some person who was entering from the Univ of Hawaii, from the host “wiki” or “wikiwiki”, I think. Which, as Chris points out, is Hawaiian for “quick” or “fast”. In Hawaii, you use verb doubling for more emphasis on something — wiki-wiki means “very fast”. Robin was working at the undergraduate library at the time, and they had a copy of the Hawaiian-English dictionary, and she did photocopy a single page out of the “w” section of the dictionary for our use — which is why I still know that “wili kope” is Hawaiian for coffee grinder and “wili wili wai” is Hawaiian for lawn sprinkler. But I digress. (I’m not 100% sure on the spelling for lawn sprinkler…)

When Debbie and I got married and went to Hawaii for our honeymoon, we picked up a copy of the Hawaiian-English dictionary. Having a copy of this allowed for further abuse of Hawaiian as a source of machine names in the R&D group later. I know that lolohi was used prior to that, however.

The break-in happened through the Solbourne computer that was new to the College of Electrical Engineering (a 64-bit SPARC-alike processor, running a variant of SunOS 4.1.xx that had support for SMP; this was attractive because it wasn’t Solaris 2.0, which is what Sun was pushing as the “only way to get real SMP”). As I recall, we couldn’t replace the telnet and/or rlogind on that machine with the Kerberized ones (they just didn’t work on that machine), so that single machine was vulnerable, and someone’s password was sniffed through an account on that machine. Once they got in, they got onto a bunch of other machines.

This incident led to our hacking of “top”, “ps” and maybe one or two other things to “not show” processes that were owned by particular UIDs. There was a list of these UIDs placed into /usr/lib/libwiki.a (just an ASCII list, one per line), and the libwiki.a file was carefully set to have a creation/modification time matching that of all the other Sun-supplied libraries, so it would not show up at the top of the list if you did an “ls -ltr” in /usr/lib.

Their ‘wiki’ was indeed a fast machine – a VAX 8600-class system with a bunch of (big at the time) disks on it.

I know that some law-enforcement got involved to have them “stop it”, but I don’t think there was ever anything other than “we know you did this, we have these records, now go away, or else”.
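The process-hiding hack Kurt describes above worked by filtering a listing against a plain-text UID list. The original was a patch to the BSD ps/top sources, which isn't reproduced here; this is a minimal Python sketch of the same idea, with invented function names and data:

```python
# Sketch of the hacked "ps": suppress processes whose owner's UID
# appears in a plain-text list (one numeric UID per line), as the
# /usr/lib/libwiki.a file held. Illustrative only -- the real hack
# was a C patch to the BSD ps/top sources.

def load_hidden_uids(text):
    """Parse a libwiki.a-style list: one numeric UID per line."""
    return {int(line) for line in text.splitlines() if line.strip()}

def filter_processes(processes, hidden_uids):
    """Drop (pid, uid, command) entries owned by a hidden UID."""
    return [p for p in processes if p[1] not in hidden_uids]

hidden = load_hidden_uids("1001\n1002\n")
procs = [(100, 0, "init"), (200, 1001, "sniffer"), (300, 500, "sh")]
visible = filter_processes(procs, hidden)
```

Any process owned by UID 1001 or 1002 simply never appears in the output, which is why a casual “ps” by the intruder looked clean.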

My conclusion:

So, years later when I heard about a new kind of web site called a “wiki”, I already knew the word. I’ve never been to Hawaii, but Hawaiian came to us.


Pubnix Access Systems: A Pioneering Web Hosting Service

October 11, 2019

The Context

It was the early 1990s.

The Internet was beginning its transition from supporting only U.S. government and university research to supporting ordinary businesses. The main applications on the Internet so far were email, Usenet news, and anonymous FTP. The World Wide Web was still a primitive research project. Gopher and WAIS were obscure and of little consequence.

Most homes had nothing faster than a 28.8kbps dial-up modem with which to access the Internet or other online services like bulletin boards or AOL.

Microsoft Windows was still only a fragile shell over MS-DOS. Linux was still an obscure and immature hobby project. Neither was suitable for running a server. Mac OS X was almost a decade away. But BSD Unix, a favorite operating system at universities, had recently become available on the PC platform.

The Beginning

A group of friends who had worked together for the University of Maryland, College Park as student programmers and system administrators were trickling out into the real world. Many of them reunited in Fairfax, Virginia at UUNET Technologies, the first commercial Internet service provider.

When Kurt Lidl left UMCP for UUNET (with a stop along the way at SURAnet), he had an inspiration. Most dialup services of the time, such as AOL, were oriented toward home computers and were limited to their own little world, with no connection to the larger, growing world of the Internet. Kurt wanted to create a public-access Unix system that people could dial into, log in to a shell account, and use to connect to the Internet and send email. He came up with the name “Pubnix” for this public Unix service. Although Pubnix started out as a separate venture, Kurt soon sold its assets to UUNET, and it became a small semi-autonomous division of UUNET with Kurt managing it.

Soon Josh Osborne and Dave MacKenzie joined Kurt at UUNET and helped him start developing Pubnix. They wanted it to be as automated as possible, so they created a system to generate configuration files with Perl programs using an SQL database, University Ingres (the only DBMS at the time that ran on BSD Unix). Hours were spent at whiteboards designing the database schema and provisioning systems for users and servers. Within a few months, the service was up and running on a few DEC Pentium PCs running BSDI BSD/OS in UUNET’s offices at Fairview Park in Fairfax. The BSDI company had started in the UUNET offices, so there was a close connection from the beginning.
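The pattern described above, generating configuration files from rows in an SQL database, was built in Perl against University Ingres. That code isn't preserved here; the following is a minimal Python/SQLite sketch of the same idea, with an invented table and columns, emitting passwd-style account entries:

```python
# Sketch of database-driven provisioning: account records live in SQL,
# and config files are generated from queries rather than edited by
# hand. The table name, columns, and sample accounts are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (login TEXT, uid INTEGER, home TEXT)")
db.executemany("INSERT INTO accounts VALUES (?, ?, ?)",
               [("alice", 1001, "/home/alice"), ("bob", 1002, "/home/bob")])

def generate_passwd(conn):
    """Emit passwd-style lines from the database, one per account."""
    rows = conn.execute("SELECT login, uid, home FROM accounts ORDER BY uid")
    return "\n".join(f"{login}:x:{uid}:{uid}::{home}:/bin/sh"
                     for login, uid, home in rows)

print(generate_passwd(db))
```

The payoff of this design is that adding a customer is a database insert, and every server's configuration can be regenerated consistently from one source of truth.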

There were jokes about creating public terminal rooms and combining them with a microbrewery to create a “brewpubnix.”


As they were designing Pubnix and building the technology to run it, the Internet was changing. The World Wide Web gained enough capabilities to be a useful tool for companies, not just researchers. Kurt realized that they needed to change direction, as shell account systems were becoming obsolete, and that the Pubnix shell account technology could form the foundation of a shared web hosting platform, one of the first in the world. So they adapted their configuration system to provision a version of the NCSA httpd customized to support virtual hosts, and Josh modified the kernel of the BSD/OS operating system to support IP rate limiting by virtual IP address, so each customer could have a certain amount of bandwidth. By late 1994, UUNET’s virtual web hosting product had customers. Unlike GeoCities, which also started in 1994, each customer’s web site had its own domain name. Around that time, Chris Ross came to UUNET from UMCP and joined the group.
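Josh’s actual kernel change isn’t described beyond “IP rate limiting by virtual IP address.” A token bucket is one standard way such a limiter is built, so here is a hedged user-space Python sketch of that general technique, not the BSD/OS code itself:

```python
# Token-bucket rate limiter, one bucket per virtual IP address.
# Illustrative only: the real implementation was a BSD/OS kernel
# modification, and its exact algorithm isn't documented here.

class TokenBucket:
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec   # sustained bandwidth allowance
        self.capacity = burst_bytes      # maximum burst size
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, packet_bytes, now):
        """Refill tokens for elapsed time, then try to send the packet."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

# e.g. a per-customer table: {"192.0.2.10": TokenBucket(128_000, 16_000)}
bucket = TokenBucket(rate_bytes_per_sec=1000, burst_bytes=1500)
```

Keyed by virtual IP, a table of such buckets caps each customer’s sustained bandwidth while still permitting short bursts.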

As customers were added, the Pubnix platform grew in capabilities. University Ingres was replaced by a port of the XDB SQL database, which Kurt commissioned. The NCSA httpd was replaced by the more flexible Apache web server. SSL connections using the Netscape web server and RealMedia hosting on Linux were added.
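Browsers of that era predated the HTTP/1.1 Host header, so virtual hosting typically meant giving each customer’s site its own IP address. The original Pubnix httpd configuration isn’t preserved here; this is an illustrative Apache-style fragment of IP-based virtual hosts, with invented hostnames, addresses, and paths:

```apache
# Illustrative IP-based virtual hosts, one address per customer site.
# Hostnames, IPs, and DocumentRoot paths are invented examples.
<VirtualHost 192.0.2.10>
    ServerName www.customer-one.example
    DocumentRoot /www/customer1/htdocs
</VirtualHost>

<VirtualHost 192.0.2.11>
    ServerName www.customer-two.example
    DocumentRoot /www/customer2/htdocs
</VirtualHost>
```

Because each site had a distinct virtual IP, the same per-IP mechanism could also drive the kernel’s bandwidth limits for that customer.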

Dave MacKenzie had the idea to add virtual FTP hosting, and in a few days he modified the WU FTPD to support virtual hosting, added database and Perl script support, and presented the completed product to UUNET’s marketing group to add to the product price list and start selling. In those days, as a small pioneering company, it was that easy to come up with a new product. This may have been the world’s first virtual FTP hosting service. It gradually took over from UUNET’s original scheme of selling customers a subdirectory on its own FTP server; now they could have their own FTP host name.

UUNET started a Windows NT-based web hosting service as well, which was independent of the Pubnix effort and seemed to have learned none of the lessons from it. Provisioning and monitoring were done manually, as NT was hard to configure programmatically, so the staff-to-customer ratio for the NT service was much higher. Being a semi-autonomous division of UUNET also proved to have a downside, in that the rest of the company began to marginalize and ignore the Pubnix product line, even though it was probably the most profitable product line the company had.


More people joined the Pubnix group, including Andy Crerar, Russell Street, Joey McDonald, Li Glover, and Peter Davis. Eventually, as UUNET grew, Kurt left the Pubnix group to start an R&D team. It was also based on the BSD/OS platform, and became a technology supplier to the Pubnix group in a symbiotic relationship, as the R&D systems were configured by the same SQL database. New technologies were incorporated as they became available, including Kerberos v5, Apache Stronghold for SSL, ProFTPD, and Postgres. The service was running hundreds of web sites on a mixture of dedicated and shared servers.

UUNET was bought by MFS (Metropolitan Fiber Systems), which was soon bought by WorldCom, which then also bought MCI, and Pubnix became a tiny part of a huge conglomerate. Along the way, the former AOL web hosting division was acquired, and the Pubnix staff was moved into its offices in Reston, Virginia. The AOL web hosting ran on customized Solaris servers with little centralized or automated management. With input from Dave MacKenzie, they developed an Oracle-backed system called FARMS to drive the configuration of their systems.

The End

As MCI WorldCom was going bankrupt in a financial scandal, most of the Pubnix and R&D staff left or volunteered to be laid off. At the end of 2002, the product line was sunsetted. Kurt Lidl tried to buy back the BSD/OS hosting business from them, but got stonewalled by MCI WorldCom’s legal department until it was too late.

For more context, see this history of UUNET’s product lines, which mentions Unix-based web hosting.